Friday, December 19, 2008

Review: Antix SMTP Server For Developers

If you are a developer using Windows Vista, you may be a tad annoyed by the lack of a built-in SMTP server. I was too, but after I found Antix SMTP Server for Developers (download mirror), I was actually grateful that Microsoft didn't deliver a built-in option.

The Antix SMTP server has become one of those utilities that I just "can't live without" and it is a whole lot better for developers than any other SMTP server I've ever used.

The reason may surprise you though... it's because the Antix SMTP server can't actually send emails.

Confused? Read on...

The Antix SMTP server is just a simple little .NET application. You launch it manually and it runs as a user process, so it really isn't a "server" in the classic sense of the term.

I like this because when I'm not using it, it isn't sitting there putting my system at risk or using up resources like a real SMTP server would.

The Antix SMTP server cannot actually send emails. It sits there listening for local apps to try to send mail, then fools them into thinking they succeeded. It just grabs the email and dumps it to a file; it can't actually route or deliver the email. This means you can't accidentally send emails from an application you're debugging to your real customers!
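To make the trick concrete, here is a minimal sketch of the capture-only idea in Python (purely illustrative; the real Antix tool is a .NET application and this is not its code): speak just enough SMTP to satisfy a client, then stash the message instead of delivering it.

```python
# Hypothetical sketch of a capture-only SMTP conversation: the "server"
# answers success codes for everything, but messages go into a local
# mailbox list (the real tool writes them to files) and are never routed.

def smtp_capture_session(client_lines, mailbox):
    """Answer one SMTP conversation, capturing instead of delivering."""
    responses = ["220 localhost fake SMTP ready"]
    in_data = False
    body = []
    for line in client_lines:
        if in_data:
            if line == ".":                       # end-of-message marker
                mailbox.append("\n".join(body))   # capture, don't deliver
                body = []
                in_data = False
                responses.append("250 OK: message captured")
            else:
                body.append(line)
        elif line.upper().startswith(("HELO", "EHLO")):
            responses.append("250 localhost")
        elif line.upper().startswith(("MAIL FROM", "RCPT TO")):
            responses.append("250 OK")            # accept any sender/recipient
        elif line.upper() == "DATA":
            in_data = True
            responses.append("354 End data with <CRLF>.<CRLF>")
        elif line.upper() == "QUIT":
            responses.append("221 Bye")
    return responses
```

The client sees nothing but success codes, so it believes the mail went out... yet nothing ever leaves the machine.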

As applications go, it is exceedingly simple. It minimizes to the notification area, and when open it just has a little window that displays the list of emails it has received. You can double-click the emails to open and view them in your email viewer.

And the new version uses Microsoft's ClickOnce so it keeps itself up-to-date if new versions come out... very cool! I wish ALL those tiny little utilities I use did that!

Anyway... while it has no use as an actual email server, for developers the Antix solution is far superior to a real local SMTP server.

Thursday, November 13, 2008

Opera browser - Problem with login using asp.net forms authentication cookies

For years I've been getting my ass kicked by several of my asp.net sites. For reasons that remained unknown to me for several years, whenever I'd try to log in to one of these applications from Opera, the login would simply fail.

There would be no error message. You'd type in your username and password, click the login button, the page would refresh... but you'd simply not be logged in.

Finally, I have discovered the cause and a solution to this annoying problem!

I always suspected the problem with Opera was related to the authentication cookie, but I was never able to figure out why. All I knew was that on the apps that failed, Opera wouldn't have an authentication cookie, while on other apps it would get a cookie just fine.

The particular applications where I'd been having the problem all share some common attributes. They use some kind of URL rewriting (though not always the same mechanism), they all use the built-in SQL Membership provider (or a customized variation of it), and they are very complex applications. Most of them host multiple virtual sites within the same physical asp.net app.

I have other applications that work just fine with Opera though. Some of those were also just as advanced as the failing sites and used similar mechanisms.

So for years I've tried and failed to determine a common factor between the failing apps and the working ones.

In none of these cases, though, have I ever had a problem with any other browser, and I generally test sites with 5 or more.

Google has been no help here either. While I can find similar reports of this kind of problem, you have to dig very deep, and when you do find someone reporting a similar problem there is never a solution offered.

But this week, I finally managed to track down and fix this annoying problem.

It turns out that Opera doesn't correctly handle cookie names that contain spaces... or at least not when issued via the asp.net authentication system.
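As far as I can tell, this matches the HTTP spec: a cookie name is supposed to be an HTTP "token", which rules out spaces and a handful of separator characters. Here is a small illustrative check of that rule (my own sketch, not code from Opera or asp.net):

```python
# Characters RFC 2616 forbids in a "token"; a cookie name must be a token.
SEPARATORS = set('()<>@,;:\\"/[]?={} \t')

def is_valid_cookie_name(name):
    """True if name is a legal HTTP token (printable ASCII, no separators)."""
    return bool(name) and all(
        33 <= ord(ch) <= 126 and ch not in SEPARATORS for ch in name
    )
```

By this rule, the default ".ASPXAUTH" is a legal cookie name, while a name containing spaces is not... Opera was apparently just the only browser strict enough to care.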

In web.config there is a <forms> element within the <authentication> section. This is where you set up the details of how the forms authentication system should work. One of the properties is called "name", and it sets the name of the cookie that will be issued to the browser with the user's authentication ticket. If you omit the name property, asp.net uses a default name of ".ASPXAUTH".

In all of my failing applications, I had manually set a value for the name property, and that name contained spaces. In fact, most of these apps used the same name, because I generally copied this section of configuration from one of the other apps.

Changing the cookie name to one that does not use spaces allows Opera to correctly handle the cookie... login failure solved!
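The fix is a one-line change in web.config. A hypothetical before/after (the cookie name shown here is made up for illustration):

```xml
<!-- Before: a cookie name containing spaces; Opera drops the cookie -->
<authentication mode="Forms">
  <forms name="My App Auth" loginUrl="~/Login.aspx" />
</authentication>

<!-- After: no spaces in the name; Opera accepts the cookie and login works -->
<authentication mode="Forms">
  <forms name="MyAppAuth" loginUrl="~/Login.aspx" />
</authentication>
```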

Three years of frustration all because of a space...  but that's programming for ya!



Tuesday, November 4, 2008

Review: ReliableSite.net and webhost4life for shared web hosting

[UPDATE - 2/7/2011] I have terminated my relationship with ReliableSite.net. I cannot recommend this hosting provider to anyone anymore. Over the years since I wrote this post initially, their service quality has degraded drastically, and the technical support is abysmal.  

[UPDATE - 1/17/2010] after a year with ReliableSite.net, I have posted a newer review of them. You should still read this review, as I've not re-covered the same ground again in the update and what I wrote here still stands true.  

As I posted last week, I am no longer hosting my sites with webhost4life. Once upon a time webhost4life offered a fantastic service at a reasonable price, but over the last few years I've grown increasingly annoyed with them.

Instead, I've moved my hosting over to ReliableSite.net. So, I thought I'd spend a little time describing my experience with both providers for the benefit of anyone else that might be considering either hosting provider.

I first considered a switch to ReliableSite.net last year after hearing about them on a forum somewhere (dunno where). What got my attention was their pricing model; you buy the base service then customize the plan by purchasing additional services and features one-by-one as you need them.

Brilliant!

With other providers, I end up having to buy a lot more than I really need just to get enough of one minor feature that I overuse a tad. With ReliableSite.net, though, I'd be able to pick up and pay for just the features I actually need.

But at the time, ReliableSite.net had only been around a year or so. It had good reviews, but I've been through at least a dozen providers that failed shortly after starting up or were unable to scale their services as they grew. So I'm cautious about jumping onboard with new providers.

Webhost4life was hosting my personal sites, and I was increasingly unhappy with them, but I decided to wait another year at webhost4life to see if they improved and to see if ReliableSite.net would survive long enough to be a viable alternative.

Five years ago, when I first started using webhost4life they were amazing!
 
They were one of the earliest providers to have a decent base hosting package under $20, and they were also the only provider at the time to have a fully comprehensive online management system. And my favorite part was that webhost4life offered early access to new Microsoft platforms while they were still in beta.

But about 2 years ago, webhost4life started sucking.

Stuff that cost webhost4life my business:
  • Starting about 2 years ago there was a noticeable decrease in the performance of my sites, and it has continued to worsen ever since. I have also seen my sites become inaccessible for no reason much too frequently. The worst part of this problem was that I often had trouble maintaining a sustained connection while downloading files from the web site, or when uploading files over FTP. It is really annoying to have to restart a deployment of your web site 15 times because the connection keeps dropping.
          
  • They abandoned support for SQL Express on the claim that it didn't scale well. Of course, the real problem was that they were putting far too many users on their servers without scaling out the hardware, and decided to drop SQL Express so they could squeeze in a few more users.
          
  • They released a new control panel that was more convoluted than their older one, but added no relevant features or conveniences for their customers.
          
  • They had botched two email server upgrades during the time I was with them, and in one case I had to wait over a year to migrate to a newer system because the new system couldn't handle email lists. This would have been fine, except that the old system didn't have any anti-spam protection.
          
  • There was a significant decrease in the quality and response times of their support staff. I used to get a decent reply back in just a few hours, but the last few times I had an issue it took over 24 hours to get a reply, and when the reply came back it was just some form letter that had almost nothing to do with my actual question.
          
  • Worst of all was that they stopped offering early access to new platforms. When .NET 3.5, Windows Server 2008, and SQL Server 2008 were in beta, I was left out in the cold. Even after those platforms went to the public market, it still took webhost4life several more months to bring an upgrade option to their customers... and they still aren't offering SQL 2008 support.
     
    I evaluate new platforms by upgrading my personal sites before the platforms are released. This way I can stay on top of new stuff before I'm asked to use it in my larger professional projects. Not having access to new platforms until months after they release to the public is not acceptable to me.
        
So this year, when I was up for renewal at webhost4life, I decided to switch. ReliableSite.net had survived their second year and were still getting good reviews... though the reviews are rather sparse.

What I like about ReliableSite.net:
  • Management Tools:
     
    ReliableSite.net uses DotNetPanel for their online management. DotNetPanel is a pure joy to use compared to the clunky online management tools I've used at other providers. Not only is it pretty, but it is exceptionally intuitive to use. Managing IIS, web sites, file systems, databases, DNS, and email systems is NOT an easy task, and I'm a certified expert in all of those areas. But most online tools for this kind of management are even harder to deal with.

    But I found that DotNetPanel makes things very simple, while not holding back on any critical options.
     
    DotNetPanel is so good that Microsoft should consider buying out the company and getting their developers to write Microsoft's own native admin tools.
     
    DotNetPanel is a shining example of what administering servers should be like!

    This is the first provider I've seen use this system, but as you can tell I am very impressed. Perhaps the best thing about it is that ReliableSite.net hasn't done much to customize the stock DotNetPanel. This isn't a problem, since it is more than capable of getting the job done. It also means that ReliableSite.net will be better able to upgrade as new versions arrive. Even better, I'm not at the mercy of ReliableSite.net's own developers to maintain and improve a custom tool over time. Instead, they can spend their time and resources making my service reliable and fast, and leave the development to a 3rd party with a direct financial incentive to improve the product.
          
  • Pricing Model:
     
    ReliableSite.net allows you to upgrade nearly everything about your account on a per-feature basis. This allows you to incrementally ramp up your services as you grow without paying for stuff you don't need.
     
    Another thing I like is the option to pay monthly, quarterly, or annually. I chose to take an annual payment option. Even better, when you add an upgrade to your service they pro-rate the charges to align them with your regular billing cycle.
          
  • Performance:

    So far, the site is fast... at least 10x faster than the degrading webhost4life account I had been using. It isn't crazy fast, but it is certainly as fast or faster than I expected. I haven't had the account long enough to say much about reliability, but so far I haven't had any downtime that I'm aware of, and speed seems consistent even at peak usage times.
          
  • Affiliate and Reseller Programs:

    Though I no longer use these features, ReliableSite.net has a nice reseller system going. This is very useful if you are a freelance developer or a small site design company... you can offer your customers "hosting" as part of the deal and still reap part of the recurring profits. And you don't have to deal with all the hard server and network stuff.

What I don't like about ReliableSite.net:
  • Email Options:
     
    The base plan is a little shy on email, only giving you 5 boxes and 5 aliases. They have well-priced add-ons for increasing these, but you can't buy just aliases or just inboxes... you have to buy both together.
     
    The price isn't bad, and even the unlimited option is quite affordable. But I can't help but feel like I'm getting robbed on aliases... aliases are just redirectors and don't really "cost" the provider anything much. I had to buy additional email boxes just to increase the number of aliases.
     
    They use SmarterMail, which is a fantastic and popular system. It is also the same system that webhost4life used. I like the system, but ReliableSite.net didn't enable the built-in admin tools in the SmarterMail web client.
     
    Instead you are stuck using the simpler DotNetPanel tools to add accounts, aliases, and lists. The DotNetPanel allows you to create aliases, but it only allows one target email address per alias.
     
    Had they enabled the built-in SmarterMail tools for "aliases" I could have had multiple destination addresses for a single email alias.
     
    Due to this odd limitation of the DotNetPanel alias feature, I had to create a full mailbox for these kinds of addresses. Fortunately, I was able to set up multi-target forwards on the inboxes via the SmarterMail personal account settings, but it sure seems like a waste to have to dedicate an entire inbox just to forward mail on to multiple destinations.
     
    None of these problems are deal-breakers, just minor annoyances... but they still seem like artificial and unnecessary limitations.
          
  • There are a few differences in password requirements between services. For example, the password policy for database user accounts is stricter than the requirements for the billing system, FTP accounts, and online control panel. This is REALLY annoying, because I like to keep the same username and password for all services related to my hosting provider. While I was able to create the same user, my password didn't quite meet the policy requirements for their SQL server, so I had to go back and change all the other passwords to adhere to the stricter policy.
     
    On a similar note, there are too many user accounts and passwords. I have a billing account, a site management account, an FTP account, a SQL user account, and an account for the online statistics feature. Too many accounts. Sure, I understand that each of these is a different system internally, but it would be nice if the system attempted to create the illusion of a unified user and password... at least for the primary account owner.
          
Stuff that is just strange about ReliableSite.net:
  • Some of the base package seems extraordinarily generous, while other parts seem overly restricted. In the base package you get unlimited DNS domains and web sites, but you can only set up 1 sub-domain. This seems odd, because sub-domains are just DNS tricks while web sites actually use resources.
     
    You also get unlimited FTP accounts, but only 5 email aliases (and 5 email inboxes).

    Not a problem, just an odd choice. I would have thought that paying for additional domains, web sites, and FTP user accounts would make sense, but unlimited sub-domains and email aliases would be thrown in for free.
          
  • When you buy packages, you get to choose a billing cycle (monthly, annually, biannually, etc.). When you buy add-ons, you only get to choose a monthly rate. When you buy the add-on, it charges the monthly rate to your card. Then a few hours later another charge appears for a pro-rated amount covering the remaining billing term of the base package.

    I don't mind this at all... I'd rather they pro-rate add-ons and sync the billing to the same cycle as the base package. But I did find it odd that the checkout process did not indicate that this would happen. From the buyer's point of view, it appears that you are going to be purchasing monthly; there is no mention that you will also be billed a pro-rated amount. Not a problem for me, but if you were on a tight budget and expecting to be billed for just one month, this could be a major problem.
          
  • I had to set up static machine keys in my configuration files. When I didn't, my sessions would just abruptly end and logins would not persist. I assume this must be a web farm setup, but nothing in the documentation or marketing mentioned that.
     
    Not a problem, but had I known this was a web farm environment it could have impacted my decision to host here. Fortunately my apps are all adaptable to web farms, but I've had sites in the past that were not.
          
Overall I am happy with my initial experience with ReliableSite.net. The problems are very minor compared to any other provider I've used, and the advantages are significant. Hopefully, I'll remain as happy over time.
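For anyone who hits the same web-farm symptom mentioned above, a static machineKey in web.config looks something like this (the key values are placeholders, not real keys; generate your own, and the exact validation/decryption algorithms depend on your framework version):

```xml
<system.web>
  <!-- Static keys let every server in the farm validate and decrypt the
       same forms-auth tickets and view state. PLACEHOLDER values only. -->
  <machineKey
    validationKey="0123456789ABCDEF...PLACEHOLDER"
    decryptionKey="FEDCBA9876543210...PLACEHOLDER"
    validation="SHA1"
    decryption="AES" />
</system.web>
```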

Wednesday, October 29, 2008

Reddnet has a new hosting provider

[UPDATE - 2/7/2011] I have terminated my relationship with ReliableSite.net. I cannot recommend this hosting provider to anyone anymore. Over the years since I initially wrote this post, their service quality has degraded drastically, and the technical support is abysmal.

For 4 years, reddnet was hosted by webhost4life, but over the last two years I've been increasingly unhappy with them.

So I've moved on to a new provider, ReliableSite.net.

I'll write up a full review after I've had a while to really get to know the new provider...


Saturday, October 11, 2008

Developing a custom web application with an eye towards a potential retail market too

I frequently take on custom software projects... you know, someone needs a web application to do X, Y, and Z and there just isn't anything already on the market that does the trick.

But the truth about software development is that it takes a lot of time and is very expensive. So it isn't unusual that I'm asked to design the app in such a way that it can be re-sold to other people who might have similar needs.

In other words, one client will bankroll the costs of the custom software, but they'd like the option to recoup those up-front costs by re-selling later.

If you get asked to do software like this, here is my advice...

You have two choices.
  1. You can design for the potential mass-market first, then customize an instance of the application for the specific needs of your paying client when you are done.
      
  2. Write for your paying client's needs first, then extend and expand the initial design to meet the likely requirements of other clients later.

Forget Option 1.

Look... if you are very experienced with reusable platform development, have a lot of time for design before you start writing code, and have a large team, then you might actually consider option 1. Otherwise, you are about to embark on a disaster.

You only have one known customer, and you are only guessing that there could be a wider market out there for the product... a product that doesn't even exist yet. You can make some guesses about your phantom market's needs, but stop kidding yourself...

It is hard enough to get decent requirements out of known customers. Expecting to find usable requirements from a phantom market is just moronic.

I know, your paying client really believes there is a market for the product. They might even be right. But just nod your head and say you'll write something they can re-sell. Then forget about any customer other than the one who has commissioned your project.

Take Option 2!

Assume that the customer bankrolling your project is the only customer that will ever buy the product. Not only is this probably true, you also have to keep in mind that most custom software projects fail even when you only have one customer to contend with.

Your primary goal is to make a successful product that gets the job done. If you can't produce software that does the job for your paying customer, then you aren't going to have a retail market to worry about anyway.

No matter how carefully you gather good requirements, create a fantastic design, and code your ass off... you will not know if your software can do the job until you deliver it to a customer and see it succeed in a production environment under regular usage.

If you are successful, and your software meets your paying customer's needs... then you can start thinking about a potential retail market.

The good news is that by the time you have a workable product for your paying customer, you've probably already built an application that is likely to meet the majority of your potential customers' needs too!

Things to keep in mind for version 1.0: 
  • Use an agile process. You don't have to use a formal methodology, but stick to the agile ideals. Code the bare minimum you can get away with, deploy it to your paying customer for testing, get their feedback, then go back and code a little more.

    Repeat this process until you get all of the 1.0 requirements implemented and deployed. I also suggest that you price your deliverables this way. Embrace your customer's tendency toward feature creep and shift the requirements as you progress towards the 1.0 release. Be willing and eager to change the design along the way.

    If you are not directly employed by the paying customer, you do need to make sure you are getting paid immediately after each delivery, and that you keep the customer informed of the exact price of the changes they request at each iteration.
      
  • Brutally cut everything you can from the customer's requirements for early iterations. Keep it as simple as possible and implement only the absolute bare essentials.

    Your client will insist that some fluffy features are "essential". Ignore them and cut those features anyway. You don't have to tell the client you are cutting the features, just tell them you have them scheduled for a later iteration... even if you don't.

    As you deploy iterations towards 1.0, your client will forget about a lot of those fancy features, and you will start to get an idea as to which things are "really" important to them and which ones aren't... all without having wasted time writing the stuff you don't need.
      
  • Avoid over-architecting your application. It is tempting to layer and componentize everything and follow all those best practices and academic OO techniques. You may also be tempted to split the app into a dozen different assemblies.

    For version 1.0 you want to deliver the minimum necessary features that get the job done, and you want to deliver it as fast and as cheap as possible. Where OO techniques and best practices will reduce the effort it takes to write your 1.0 code, go for it... but if you can't say for sure exactly how your architecture is getting you to the finish line then scale it back.
        
  • Avoid 3rd party code as much as you can. This includes 3rd party frameworks as well as 3rd party UI components and code libraries. If you cannot produce an essential user facing feature with a built-in component or the stock framework libraries AND you cannot write the code for that user facing feature fairly easily... then you can consider using a 3rd party component.

    If you do use 3rd party code, be sure you have a cheap or free license to redistribute it. While you aren't yet worried about a retail market, you don't want to trap yourself by using stuff you can't legally redistribute.
        
  • Don't worry about making everything configurable via online admin tools.  Sure, it is nice to allow the client to use an admin tool to change page titles and text without having to change code... but remember that each and every configurable feature adds complexity to the code and is a potential bug.

    Also remember that admin tools are also features that you have to design, test, and debug... but they don't add much value to your application's core functionality directly. These tools can wait until a later version after the core application is deployed and proven successful.
      
  • Stick to the simplest data access mechanism that meets your needs and don't worry about supporting multiple database platforms. Pick a database platform that your paying client can live with, and stick to it. In your code though, you also need to make sure that it can adapt to changes in your database's design without you having to spend too much time fixing up existing code.
      
  • If there is one area you have to totally kick-ass with in version 1.0, it is reporting. If your application has any reporting needs, and most do, you need to be sure you start off with reports that dazzle your customer from day 1.

    Be sure the reports are pretty, interactive (sorting, filtering, paging), and most of all printable.

    Reporting is the part that your paying customer's management will use the most, and they are the ones that hold the purse strings. You HAVE to knock them dead with killer reports!

    Later, if you get a retail market for your app, it will be the reporting features that make or break the sale.   
After version 1.0: getting to the retail market:

I can't tell you how to find and convince other people to buy the application.

What I can tell you is that if you have a 1.0 product that makes your initial paying customer happy, then the odds are good that you also have an application that other similar companies or individuals will be pretty comfortable with too. 

You can probably make a few minor adjustments and take 1.0 to the market as is; and I highly recommend that you get it to market as soon as you can with as few changes as you can. 

Why the rush?
    • You want to recoup the sunk costs of the initial development as soon as possible.
        
    • You need some real clients so you can gauge whether there are additional requirements to address in the next version that you don't already know about.
        
    • You can learn from potential clients that decline to buy your application too, just politely ask them why. Is it too expensive? Do they have some competing software already in place? Does your app lack essential features?
So... what about 2.0?

If 1.0 was successful with your paying customer, you will likely be heading towards 2.0 even if you don't have a 3rd party market... but by now you should actually know if you have a 3rd party market or not.

If you don't have a retail market then you should consider abandoning the idea of having one later. This will keep the costs low for your paying customer and reduce the scope of your 2.0 project. You can concentrate on enhancing the application for your paying customer's more advanced and ambitious needs as well as shoring up any weaknesses in your 1.0 application's architecture without complications from other customers.

If you do have a 3rd party market, you will want to trim out as many new features as you can and stick to just the ones that are essential to your retail market and paying customer's needs... and I mean essential here... skip the nice-to-have stuff still. 

If there is a retail market buying your product, though, you should keep in mind that you have two kinds of customer. Users and management are one customer, but don't neglect the developers employed by your customers who may need to customize or extend your application. Make sure 2.0 has good developer documentation, a well-documented and consistent API, and consider exposing APIs over web services too.

Either way, 2.0 should be about fixing up sloppy code from 1.0 and improving the underlying framework's design with an eye towards supporting all those new features on your clients' wish-lists. The goal is to re-design, re-architect, and re-code now to support easier and faster development going forward.

In 1.0 you will have identified some fundamental architectural and design mistakes. Fix these in 2.0 to get them out of the way. If you don't, you'll just make it a lot harder to expand your application to meet new requirements later.

And that's it... once you get to 2.0 you will know far more about what you need to do in 3.0 than I do. 3.0 should be sitting on a core application that does a good job, and has been re-designed to be easy to code against. So 3.0 is where you can unleash new features that impress users and improve their lives.

Monday, September 15, 2008

StackOverflow is open to the public!


StackOverflow is a community driven developer Q&A site... the general idea is to be an Experts Exchange type community that doesn't charge for sharing information and doesn't use dirty tactics to link-whore search rank. If you are interested you can read my previous rant about ExpertSexChange.

The StackOverflow site is a joint venture between the famous Joel Spolsky of Fog Creek Software and Joel on Software, and Jeff Atwood of the also-famous Coding Horror blog, though a lot of the development effort for the site involved other people I don't know much about.

So far, I like the general feel of the site. I honestly can't say yet how useful it will be, since there aren't enough questions or users yet... but if anyone can pull together a definitive software development wiki-forum kind of thing, it would be these two industry heavyweights.

My only complaint so far is that it uses OpenID... and I absolutely HATE OpenID... whatever... a minor issue.

Friday, September 12, 2008

Google Chrome: Under the hood!

Google's new Chrome browser, which I reviewed earlier this week, is planned as a platform on which Google will build out more ambitious web applications...

My own review covered Chrome mostly from the user's perspective, but I didn't get too deep into the internal mechanics and future possibilities that Chrome offers... mostly because I'm still playing catch-up myself.

For a reasonably in-depth overview of Chrome's technical design, e-week has posted a fantastic overview. This article hits the technical highlights of the new design. If you want to dig deeper, Google has a decent start on developer resources... but hopefully we'll see more coherent and comprehensive developer documentation in the near future.

It should be interesting to see where Chrome goes in the next couple of years.

I also wanted to point out that, while Chrome is getting a lot of press related to the technical design and the future plans Google has for making Chrome a full application platform, there is a lot of very similar stuff going on with Internet Explorer 8 too... it just isn't getting the same level of press coverage. If anyone is falling behind in this area, it is Firefox and Opera... though Firefox has a very good development team and a reputation for rapid development, so I'm sure they'll be able to keep pace. With Opera, I'm not so sure.


Thursday, September 11, 2008

Browser Reviews: Internet Explorer 8

Part 3 in my roundup of the new breed of web browsers. In this installment I'll discuss the beta version of Internet Explorer 8...

Browser Roundup Series:

Part 1: Firefox 3
Part 2: Google Chrome
Part 3: Internet Explorer 8

Ever since IE 5, Microsoft has been letting me down with each release of Internet Explorer... but I think with IE 8, Microsoft may have redeemed themselves.

IE 8 could, possibly, restore Microsoft to legitimate technical dominance, instead of just having the inherited market-share dominance that allowed previous versions of IE to skate by for so many years.

First off, IE 8 has finally dropped automatic backward compatibility with pages that make use of poor HTML techniques that run counter to W3C recommendations.

This has been the biggest problem that IE has faced over the years. IE is the only survivor of the original browser wars, and so it carried a lot of baggage with it. There was a time before there was a W3C to "decide" what was going to be "standard", and back then browsers were making their own rules.

IE has always had to maintain a certain level of backwards compatibility for those non-compliant pages simply because there were so many popular sites using them. Making things worse was the fact that some of those techniques made more sense and worked better than the official W3C way; so a lot of lazy developers continued to use non-standard IE-specific techniques long after they had become obsolete.

I'm guilty of this myself.

Then making it even worse... newer browsers entering the market also had to support some of those non-compliant mechanisms too... which gave lazy developers even more room to continue using the IE specific techniques.

The result... 10 years later, there are still a LOT of crappy sites out there.

Microsoft has always felt compelled to tread carefully when adopting newer W3C recommendations where adoption would break backwards compatibility. They didn't want to "break" half the internet when IE users upgraded to a newer version.

But finally, IE 8 embraces the W3C recommendations head-on with the new "super-standards mode". Further, this will be the default mode for IE 8.

For those sites that still suck, there is a button in the address bar that reverts to the IE 7 style of rendering.

IE 7 was a good step in fixing the security and privacy issues that plagued IE 5 and 6. But IE 8 has taken this to a whole new level. If you want a more detailed summary check out this post at the IE blog.

There are major improvements in every area of security, but my favorite part is in how IE 8 keeps the user aware of privacy and security conditions as they browse around. The security and privacy settings are also much friendlier this time around too.

Like Google's Chrome, IE 8 has a special super-privacy mode. IE calls it "InPrivate" while Chrome calls it "Incognito", but they are essentially the same feature. It's handy for those times when you don't want to leave a trail of history, cookies, or saved passwords behind you as you browse. Useful when you check your bank accounts on a public computer, but we all know that the REAL reason this will be popular is for surfing for car-bumper porn without anyone else finding out about your "special interests".

While it remains to be seen how secure the underlying browser actually is, the user features around security and privacy are much improved compared to previous versions and in most ways are better than those of rival browsers.

IE 8 has a mixed story with add-on support. IE has always had decent extensibility and support for add-ons, but security issues have been a bit of a problem in the past. IE 7 didn't really try to do much with add-ons except lock them down against abuse. This gave the competition, especially Firefox, a lot of time to gain ground with much newer and more modern add-on architectures and management features.

IE 8 still has the classic add-on mechanisms they've always had, though much improved under the hood. Management of add-ons is quite a lot better in IE 8 though. Compared to Firefox though, the add-on system still kinda sucks overall.

The good news is that "most" of the popular toolbars and media plug-ins for IE 7 will still work in IE 8 too.

Instead of a major overhaul with add-ons, IE 8 has added some features that are totally new in IE, and are a bit different from what you find in most other browsers.

"Accelerators" are a new type of add-on. They allow you to select (highlight) something on a page; a semi-transparent button then appears that lets you pick an accelerator. The specific accelerators shown depend on what exactly you selected on the page; it is pretty intelligent about not showing options that don't make sense for the selected text. I'm particularly fond of the "Define with Wikipedia" accelerator.

The other new type of add-on is the Web Slice. Web Slices sit on the toolbar, and when clicked they pop up little mini-windows that show content pulled from a web service somewhere on the internet. A classic example is the "Facebook status" Web Slice, which just pulls recent status updates from your account. Web sites that support Web Slices can expose those slices very similarly to how RSS feeds are exposed, so that when you browse a site with an available Web Slice, a button appears in the address bar that allows you to install that slice.

You can get slices, accelerators, toolbars, and add-ons from an online add-on gallery too, and this is very similar to Firefox's add-on system. IE also has a centralized add-on manager that resembles Firefox's equivalent. Firefox's add-on system still remains better overall, but IE 8 is taking a pretty good step in that direction.

Probably the most important change in IE 8 for me is the increased performance and much improved visual quality of the rendering engine.

IE has always been a tad on the slow side, and the ugly rendering has been a source of constant frustration. But pages in IE 8 look almost as good as those rendered in Firefox, and are very comparable to Google's Chrome. The speed is amazing, much faster than Firefox 3 and very comparable to Chrome.

The majority of the UI features remain the same, or are very similar to those in IE 7. It is clean and professional. The only down-side is that it doesn't feel very "new" when you first upgrade from IE 7... so IE 8 has a little less "wow!" factor for the users.

A highly marketed feature is the color coding of tabs. Tabs are color coded when opened from the same source tab. This is kinda neat at first, but overall I haven't found it very useful after several weeks of use. I do, however, find that the color coding detracts from the overall visual appeal of the browser, making the tabs area seem noisy and out-of-sync with the clean and crisp appearance of the rest of the user interface.

IE 8 also improves the pop-up prompts that you see when you type in the address bar. Pretty much everyone has improved this feature, but I think IE 8 has done the best job organizing items in the pop-up suggestion box. Unlike Chrome though, IE 8 doesn't include suggestions from online searches in the address bar's pop-up... Instead it still has the separate search box. Oddly, the search box has its own pop-up suggestions that do show suggestions from an online search provider, as well as suggestions from history, favorites, etc. that are pretty much the same as the address bar's pop-up.

While I find that the suggestion pop-ups are incredibly well done in IE, much better than those in the other browsers, I also think they should combine the search box with the address bar like Chrome does... It seems crazy to have two different suggestion boxes that look almost the same, but behave differently. It's even crazier since it was IE that actually invented the "search from the address bar" feature in the first place. It wasn't until IE 7 that there was a separate search box.

One step forwards, two steps back I guess.

The beta of IE 8 still has some rough edges, but it has narrowed the gap with Firefox for the majority of users. Microsoft is clearly taking the renewed competition in the browser space seriously. It has plenty of advanced features, is very fast, renders pages much better, has an intuitive UI, and still manages to keep a clean and professional design despite the highly advanced and complex feature set.

IE 8 will compete very well with Google's Chrome simply because the two share so much in common, but IE has features that remain absent from Chrome (for now).

Power users and developers may still prefer Firefox for the add-ons and customization advantages though.


Tuesday, September 9, 2008

Browser Reviews: Google Chrome

Part 2 in my roundup of the new breed of web browsers. In this installment I'll discuss the beta version of Google Chrome.

Browser Roundup Series:

Part 1: Firefox 3
Part 2: Google Chrome
Part 3: Internet Explorer 8


Google is the new kid on the block with the beta release of Chrome. This is a very interesting release in many ways, not the least of which was that there was not much in the way of a public announcement before the beta was delivered.

Overall, Chrome is made up of a variety of 3rd party open-source components that Google has cobbled together into a workable browser.

The result is a very clean, elegant, and sparse user interface on top of a very capable rendering engine.

Chrome may look simplistic and spartan, but there is plenty of elegant power and subtle complexity behind each and every one of those features. This is the mark of true software mastery... features so well designed that you aren't even consciously aware of how complex or advanced they really are.

Even Apple will have to be quite impressed by the slick nature of the Chrome UI -- perhaps a little jealous since part of Chrome uses the open source WebKit, which also powers Safari. I haven't really used Safari on the Mac platform, but Chrome sure beats the shit out of Apple's Safari for Windows (Safari for Windows sucks so bad, I'm not even going to review it).

One thing about Chrome is the speed. It is really snappy. Quick to open, quick to respond to mouse clicks, and quick to load pages. I'm impatient and I open new pages and tabs like a fiend so this really appeals to me.

Google has re-thought some of the more basic assertions about the browser UI, but without straying so far from the familiar that it would alienate experienced users.

The most obvious example of this re-thinking is in how Chrome handles tabs. Instead of a browser that contains tabs, Chrome has tabs that contain browsers. The tabs are at the top.  Within tabs you have your toolbar (if you enable it), the address bar, and whatever web page you might have opened. This is a subtle distinction, but if you take a little time to examine how this affects the UI's behavior you can see that a lot of thought went into the idea.

Another obvious thing is that the address bar doubles as a search bar... this is not new (IE has had this feature for about 10 years), but it works much more smoothly than in other browsers and is much more intuitive. Typing pops up the expected auto-complete window, which is quite similar to that found in any other modern browser, but Chrome manages to put a lot into this pop-up with less noise and a lot more appeal... plus it includes search results, bookmarks, history, feeds, etc. right in the same pop-up. Again, nothing really new, just a slicker and more streamlined take on an old favorite.

Chrome is also exceptionally well animated, giving subtle but very important visual cues to the user about what is going on, where, and why. Instead of just "popping into existence", new tabs slide into position. Drag and drop operations are smooth and intuitive, but more importantly, dropping things is very smart... Chrome just seems to magically "know" what you want it to do when you drag/drop something.

The Chrome rendering engine is fast, smooth, and most of the pages that I've viewed have worked perfectly and look really good. While it isn't quite as awesome as Firefox's rendering engine, it is so very close that I usually can't tell a difference in quality at all.

Options and settings in Chrome are also sparse, but it has some of the friendliest settings editors that I have ever seen. Even rookie end-users will find it very easy to change settings without being overwhelmed with techno-babble.  Power users might wish for a few more options, but all the really important stuff is there.

File downloads are handled elegantly using a status-bar-like display at the bottom that shows progress and details about your downloads (it is almost exactly the same as the very popular "Downloads Progress Bar" add-on for Firefox). I do have a major complaint here though... the download bar is embedded within the tab where the download was initiated. When you change tabs, you can no longer see the status of downloads taking place in other tabs. I would much prefer the download bar be part of the overall application instance so I could monitor all my downloads no matter which tab I'm in.

The major thing that is missing is a comprehensive system for managing 3rd party add-ons, but as far as I can see there aren't any add-ons yet anyway. It does have support for plug-ins for Flash, Acrobat, etc., but nothing quite like the add-on gallery for Firefox or the new accelerators and Web Slices in IE 8... but I'm positive that we'll see this side of Chrome very soon.

Another missing feature seems to be support for RSS feeds. If it is there, it sucks... but as far as I can see it doesn't do anything with RSS at all... I sure hope that's at the top of their "to-do" list.

Overall Chrome is a welcome entrance into the newly competitive browser landscape. It lacks many of the advanced features you might be used to from other browsers, and end-user customization is very limited in the beta.  But what has been delivered out-of-the-beta-box is still amazingly well done.

Add-ons will keep power users on Firefox for a while, but Google's biggest competition will be from the new, and much improved, Internet Explorer 8... which has so many of the same features as Chrome that it is almost creepy! I wonder who is copying who here?

Still, I've only had this thing a week or so, but I find that I am using it more than any of my other browsers, and I am really enjoying the speed and simplicity quite a bit.

Monday, September 8, 2008

Browser Reviews: Firefox 3.x

With the recent releases of so many new web browsers, I thought it might be time to take my bearings again and review the new landscape. I'll tackle Firefox 3 first.

Browser Roundup Series:

Part 1: Firefox 3
Part 2: Google Chrome
Part 3: Internet Explorer 8

For most people, the web browser is the most important piece of software they will ever use. As a programmer specializing in web applications, browsers are even more important to me. Applications I build can be viewed in a variety of browsers, and so I have always had several installed at any one time.

I make a habit of switching my default browser from time to time, so that I can get a good feel for the "end user" experience, and I also do a lot of testing of my applications in various different versions.

I have a lot to say about each of the current contenders, so I'll split this up into several posts. First, we'll tackle what has been my preferred default browser for the last 2 years.

Mozilla's Firefox 3:


Since this one has been out a while I'll not spend too much text on describing the specific features in much detail, but I do have a lot of overall opinions to venture.

Firefox was born out of a desire to take Mozilla's impressive rendering engine and embed it into a stripped-down browser UI without the commercial constraints and bloat that had complicated Mozilla's first browser suite (which had been bankrolled by Netscape).

Firefox got started by being simply a great browser! It re-thought the UI which gave it an edge over stale old IE, and the rendering engine was light-years ahead of the competition. The most successful feature was tabbed browsing.

Firefox also evolved a fantastic add-on architecture that would allow users to pick up advanced features on an as-desired basis without those features hampering development of the core browser... and most importantly it was easy for users to find add-ons, install them, and manage them.

Firefox 1 and 2 were all about perfecting the initial design, and so changes between versions had been fairly incremental, but Firefox 3, the current incarnation, was a major overhaul.

Overall I'm disappointed!

There are tons of new features in the new version, most of them quite good even, but I generally find that Firefox 3 is not as enjoyable as the previous versions were. It has a lot of power, but it has also added a lot of complexity. 

The UI is loud, crowded, and noisy. The default theme has a cheese-ball cartoony feel that just isn't any fun, and finding a 3rd party theme that doesn't look like it was designed by a middle-schooler on crack is neither easy nor fun.

Another major problem is that Firefox 3 takes much longer to launch, opening new tabs is slower, and the interface is sometimes sluggish to respond to commands (and I see this behavior even on my top-of-the-line XPS M1730 laptop).

Firefox 3 still has the best rendering engine out there. And it remains the king of flexibility and customization via an amazing add-on infrastructure. But the new version has trended very far away from the regular user's needs. It just feels bloated and complex... which is ironic considering the history that spawned Firefox in the first place.

Firefox's popularity with most users had stemmed from the tabbed UI and fantastic rendering engine. It offered enough gravy via add-ons and advanced features to tip many users away from IE.

But now Firefox is facing a really big problem...

The new entrants in the browser competition have finally gotten their own rendering engines caught up to the W3C recommendations (including IE 8, which has made massive strides in that area). Even though Firefox had enjoyed several years of leadership in technical compliance, this isn't much of a big deal anymore. The recommendations (the so-called "standards") don't change that fast anymore, which means that Firefox can't leverage much expertise here to differentiate its browser offering.

The other side of Firefox's success was always in the convenience features, slick UI, and tabbed browsing... but the competition has also caught up in this area, and they have many more years of experience in design and usability... plus the financial incentive to really do a good job with their new designs.

So Firefox is fast finding that it only has significant market appeal for power users and developers... but catering to that market segment has caused it to drift away from the needs of the vast majority of end-users who just want to browse the web without complications.

I'm still a big fan, but even as a power user and developer myself, I generally find that I don't use that many of the add-ons and advanced features either. Sure they are nice, and I played with them for a while, but after the "wow, neat!" factor wears itself out, I find that all I really want is to open a browser and browse... not muck about configuring things and keeping up with a dozen wonky 3rd party add-ons.

I do love the integrated spell checker though... I wish that was standard in all browsers.


 

Friday, August 22, 2008

ADO.NET Entity Framework: Impressive! Powerful! Useless!

The new Microsoft Entity Framework is the latest in a long line of very impressive, yet tragic failures in Microsoft's data access strategy...

The basics of ADO.NET are great. SqlCommand, SqlConnection and their relatives for other platforms... awesome. But almost every single version of ADO.NET has failed when it comes to useful higher level abstractions. To be sure, they each demo well and they each have uses with simple applications (like those you'd be shown a demo of).

But whenever it comes to complex applications, the abstractions tend to become cumbersome, restrictive, and inflexible. The result is that most serious applications end up using 3rd party frameworks or custom abstraction layers instead, and under the hood they tend to stick to the basic ADO.NET SqlConnections and SqlCommands to do the dirty work.

In .NET 1.x it was DataSets and SqlAdapters. In .NET 2.x it was DataTables and TableAdapters. And now we have the ADO.NET Entity Framework (EF).

I had high hopes for EF. MS had clearly recognized that a radical new approach would be needed if they were to achieve a useful abstraction without having to re-invent the same wheel over and over again every few years. They had also set some pretty good goals in terms of making EF useful in support of other data stores outside the classic relational database.

Sadly, what has been delivered in the 1.x version of EF is hopelessly crippled by the deliberate lack of implicit lazy loading.

Here is an example of what this means. Assume we have two logical EF entities that map more-or-less directly to physical tables in a database. One is the Order object and the other the Customer. These entities have a navigation relationship between each other (which is analogous to the physical database's foreign key).

If you get a reference to an Order object it will have a property called Customer. This property is how you'd navigate the relationship between the entities.

So you'd expect that if you look at MyOrder.Customer you'd get back a reference to an instance of the Customer entity... But you would be fucking wrong!

The Customer property on the instance of Order may not have been fetched from the database automatically when you obtained your reference to the Order...

Instead of implicit lazy loading, EF has "Deferred Loading"... or if you prefer you can call it "Explicit Lazy Loading". The idea is that you can check to see if the Customer has been loaded for an Order, and if not then you can explicitly load it when and if you need it. But it will not automatically load the data for these properties and related entities unless you explicitly tell the framework to do so (which is unlike most ORM frameworks, LINQ to SQL, etc.).

What happens in real applications is that you never know what has and has not been loaded. So your code is chock-full of bullshit like this:

if (!someOrder.CustomerReference.IsLoaded)  // has the related Customer been fetched yet?
{
    someOrder.CustomerReference.Load();     // no? make an explicit round-trip to load it
}
string customerLastName = someOrder.Customer.LastName;

This allows you to do what you need to be doing in your code, but the price is that you HAVE to do this all over the fucking place... every time you want to access a property that traverses a relationship. You end up with more checks for data than you do data in the first place.
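One way I've found to tame the repetition, sketched below, is to hang an extension method off of EntityReference (the EF v1 base class that provides IsLoaded and Load) so the check-then-load dance lives in one place. To be clear: EnsureLoaded is a name I made up for this sketch, not part of the Entity Framework API.

```csharp
using System.Data.Objects.DataClasses;

// Hypothetical helper -- EnsureLoaded is NOT an EF API, just my own name.
public static class EntityReferenceExtensions
{
    public static void EnsureLoaded(this EntityReference reference)
    {
        // Only make the database round-trip if the related
        // entity hasn't already been fetched.
        if (!reference.IsLoaded)
        {
            reference.Load();
        }
    }
}
```

With that in place the earlier snippet collapses to a single call: someOrder.CustomerReference.EnsureLoaded(); followed by the property access. It doesn't fix the underlying design problem -- you still have to remember to call it everywhere -- but at least it's one line instead of four.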

Even worse, when you code something against your EF model, you now have to somehow magically "know" which properties on your entities traverse a relationship and which don't.

If you know in advance that you are going to need the related data later on, then when you fetch your entity you can use a technique known as "Eager Loading" to tell EF to go ahead and load up the related data in advance.

It looks like this:

var x = entities.Orders.Include("Customer"); // eager-load each Order's Customer up front

Again, this allows you to do what needs doing, but if you are making the fetched order entity available to other classes (such as a return value from a public method)... then the caller isn't going to know whether you pre-loaded the relationships or not... so they'll still have to do the whole "IsLoaded" anti-tard checking shit anyway.
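For what it's worth, Include calls can at least be chained, and the string accepts dotted paths for nested relationships. A sketch (every navigation property name below other than "Customer" is hypothetical, for illustration only):

```csharp
// Eager-load several relationships in one query.
// "OrderDetails" and "Customer.Address" are made-up navigation
// properties -- yours will match whatever is in your EF model.
var orders = entities.Orders
    .Include("Customer")           // Order -> Customer
    .Include("OrderDetails")       // Order -> OrderDetails collection
    .Include("Customer.Address");  // nested path: Customer -> Address
```

Of course, every one of those strings goes unchecked until runtime, which only reinforces the problem.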

In his response to a nasty online petition called the "Vote of No Confidence", Tim Mallalieu defended the lack of implicit lazy loading with this statement:

"We took a fairly conservative approach in v1.0, because we wanted developers to be aware of when they were asking the framework to make a roundtrip to the database... our take on 'boundaries are explicit'."

That has to be the most depressing statement that I've ever read regarding ANY data access technology ever!

Hey guys!

The entire point of an abstraction layer is so that developers using that layer DON'T have to be aware of the damned internal workings under the abstraction layer!!!!

But most offensive to me is the overall fact that I cannot "trust" the EF model. For example... if I have a Customer entity then I have no way to know if the Orders property contains an empty collection because there aren't any orders, or if it is empty because the framework hasn't loaded the data. Instead of being able to trust the entity model to be accurate, I have to babysit it and constantly ask "are you sure you loaded data for this already?".

Fuck that!

Then we get into the other side effect. All the mechanisms needed to do this paranoid checking-up after EF use some counter-intuitive techniques. Before I check the Customer property on an Order entity, I have to first check with a CustomerReference property on my Order to get information about the state of the contents of the actual Customer property? Huh?

Yeah.... that's really slick there!

The eager load technique pisses me off even more!

So I can tell the EF to go ahead and load relationships... but to load them I have to use a method that takes a fucking string as an argument?!

So now I also have to be an expert on exactly what each navigation property in my EF model is named... and without strong type checking or IntelliSense? Sure... I can do that, but it slows me down and is just begging for a runtime bug (a typoed name or messed-up capitalization). It means I am constantly having to refer to the damned diagram all the time, which slows me down and annoys the shit out of me at the same time!

Using EF without lazy loading is a good way to drive yourself into becoming so paranoid you'll need to remember to take your anti-psychotic meds before you even open Visual Studio!

There are a few 3rd party attempts out there to get implicit lazy loading with the current version of EF. These are clever hacks, and I even tested one of them. Overall, the hacks give you a much better experience than using the stock Entity Framework as-is, but this is also code that will be hard to update for any future releases of EF, and these hacks impose other limitations on your code. I suppose though that if you HAD to use EF, you'd still be smart to use one of these 3rd party techniques to get implicit lazy loading anyway.

LINQ to SQL may lack the ability to provide a true logical abstraction for your physical data model, or do fancy inheritance, or even handle some of the more unusual data mappings...  but at least you can TRUST that properties in a LINQ to SQL Entity will actually contain data that reflects what is in the real database. Plus the overall usage pattern of LINQ to SQL is much clearer and simpler.
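To illustrate the contrast, here's roughly what the same Order/Customer scenario looks like in LINQ to SQL. This is a sketch: OrdersDataContext and OrderId are hypothetical names standing in for whatever the designer generates from your database.

```csharp
using (var db = new OrdersDataContext()) // hypothetical generated DataContext
{
    var someOrder = db.Orders.First(o => o.OrderId == 42);

    // Navigation properties lazy-load implicitly on first access --
    // no IsLoaded checks, no explicit Load() calls required.
    string customerLastName = someOrder.Customer.LastName;
}
```

The Customer query happens behind the scenes the moment you touch the property, which is exactly the behavior EF v1 refuses to give you.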

Until EF gets built-in implicit lazy loading, screw it... I'll just use LINQ to SQL.


Monday, August 18, 2008

Should I hate Orson Scott Card?

Card opposes gay marriage... should I avoid reading his brilliant fiction books? WTF!?!?

In a recent entry over at "GeekDad", one of Wired's so-called blogs, Matt Blum wrote a piece contrasting his love of the book Ender's Game with his hatred for the book's author, Orson Scott Card. The source of the problem seems to be that Card has, especially in more recent years, been a very vocal opponent of gay marriage. We can find Orson's viewpoint very clearly stated in a piece that was published on the Mormon Times web site.

On one side of the argument we have a brilliant writer that makes a convincing and logical argument and on the other we have a hyper-reactionary hate monger that simply cannot deal with opinions that differ from his own.

Sadly, in this case the hate monger is Matt Blum.

For the most part, Matt is struggling with a problem that faces a lot of people in a celebrity obsessed internet culture. Any person of any minor fame will have their entire personal life splattered all over the place... and what we often find is that the people behind great works are not always good people. The knowledge sometimes impacts our ability to appreciate the work itself.

But in this case, what I find truly annoying is how Matt and many similar people treat Orson Scott Card, especially over his opposition to gay marriage.

Matt seems to have bought into the left-wing activist's propaganda machine which preaches that it is wrong to even question the whole "gay rights" thing, especially in regards to gay marriage. He has other views that I can pick on in this piece too.

...about anti-semitism!

Contrary to the opinions of the political correctness crowd, including Matt, many people have legitimate complaints about Jews, Jewish culture, the Nation of Israel, etc.

Just because the Jewish people were treated badly in Germany half a century ago doesn't give them a free ride to be forgiven for being ass-hats today. There is a lot to dislike and disapprove of in Jewish politics and culture, just as there is in any culture.

Pointing out those flaws doesn't make you an evil bastard. But here in the home of free speech Jews appear to have earned a permanent “get out of jail free” card.

Anyone that says anything even mildly disapproving about anything that can be related to the Jewish is instantly flagged an anti-Semite... which is apparently something REALLY bad despite the fact that no one seems to actually know what exactly a semite might be... but everyone here sure does knows that you are a truly evil fucker if you aren't down with the semites!


He is quick to label anyone that disagrees with anything Jewish in nature as an anti-Semite...  with the expected references to Nazi Germany of course.

By the end of his post, Matt resorted to just picking on Sean Connery for a statement he made in 1987 (where he offered the opinion that there were some cases where it was OK to slap a woman with limited force)... a viewpoint I disagree with in general, but that did make sense in the original context of Sean's interview... it certainly wasn't as offensive as it was made out to be in the resulting media spin.

There are at least a few legitimate arguments to be made opposing gay marriage, and Orson Scott Card successfully makes one of them. Card even manages to make his point without bringing in the religious angle, which is admirable considering his audience was Mormon and it would have been much simpler for him to just play the God angle.

But Card didn't do that, and that is part of the reason why Card is such a fantastic writer. Instead he draws on history, politics, law, and science to make a rational argument in support of his opinion (something you don't often see from the religious right these days).

I'm sure the underlying reason that Card is so passionate about this topic is motivated by his own religious views, but unlike most religiously motivated bull-shit out there, Card's argument holds some water when it is held up to non-religious analysis.

I disagree with Card's overall viewpoint of course. I personally don't see the merit in ANY form of state recognition of marriage, gay or straight. Card does talk intelligently about why there is a legal idea of marriage though:

 "The laws concerning marriage did not create marriage, they merely attempted to solve problems in such areas as inheritance, property, paternity, divorce, adoption and so on."

Card is a smart guy, but in my opinion he missed a vital point... the areas of "inheritance, property, paternity, divorce, adoption and so on" do not need to be solved via legal recognition of marriage. It could just as easily be solved via standard contract and tort law.

Ironically there is already precedent for this in the legal system. Prenuptial agreements are an example of how contract law is used to extend and/or modify the standard rules of legal marriage. Divorce agreements are another example.

As it stands though, the existing legal institution of marriage is extremely discriminatory and unjust towards a sizeable group of citizens. It is as repressive to these people as slavery was towards black Americans. I would argue that legal marriage law is also highly discriminatory towards heterosexual people that just aren't married or don't want to be. Certainly the tax system punishes single members of our society very harshly indeed.

But I still respect Card's argument. It is well thought out, logical, and well presented. Which brings us back to Matt's problem... what do you do when a creator of great work holds personal opinions that you strongly disagree with?

Well, first, it probably doesn't help to come off as a total ass-hole like Matt did. I mean, by the end of his post he devolved into plain and simple name-calling.

How very mature Mr. geekdad, what a role model for your kids!

The geekdad "blog" over at Wired is generally aimed at parents. In my opinion most of the writers over there seem to have some really silly ideas on parenting. These are the kind of parents that are shoved so far up their kids' asses that the kids will turn out to be worthless adults who live in their geekdad's basement until they're 40... but since the blog does aim at parents, it brings up the question:

Should you let your kids read a work if the person who created it also teaches ideas you don't agree with, or even find outright hostile, immoral, etc.?

Well... you can be a narrow-minded ass-hole and just steer your kids clear of such works... protecting them and making sure they grow up to believe only what you want them to. Or you might choose to teach them how to fucking think for themselves so they can make up their own minds when it comes to contentious political issues...

So I recommend you burn all of Card's books and add his name to your fucking net-nanny firewall or whatever...


Friday, July 11, 2008

Vista - Downgrading the Masses!

The regular non-technical folk around here HATE Windows Vista. So I set out to catalog their complaints...

I hang out at low-class food establishments on a regular basis. While that has certain disadvantages, it does give me an otherwise rare opportunity to hang out with "regular" folk... you know, people that don't spend their whole life in front of a computer.

Since I frequently have my laptop with me, this generally sparks a large number of those "so, do you know anything about computers?" conversations. Mostly these are just plain painful, but I do learn about common trends in the mass market this way.

For example, 2 or 3 years ago people would often notice the laptop and ask about it. They didn't know much about laptops and so the questions were basic questions: "Can you go online with those?" or "Does it run Office?". Generally people were interested to know if laptops were actually real PCs since they hadn't had any firsthand experience with them.

These days, the questions have changed. Everyone is either buying a laptop, or has already bought one. Very few people are mentioning the classic desktop PC anymore. These people aren't mobile and don't "need" a laptop as opposed to a desktop, but they are buying laptops anyway. I suppose this is a combination of the falling prices of laptops, and the growing desire for a less "invasive" home PC... something that fits on a desk, doesn't require assembly and wiring, etc.

This trend toward laptops hasn't made most of these people more intelligent or tech savvy of course. The questions are still as painful as always.

One of the more depressing trends over the last year or so has been the nearly universal hatred for Windows Vista.

This isn't the typical mass-media fueled hype, nor the "hate the man" stuff you used to hear... you know, the common wisdom that says Bill Gates is a rich monopolistic ass-hole and Windows is crap... go Linux!

Not this time. This is a pragmatic hatred fueled by firsthand experience.

I've been using Vista regularly for almost 2 years now (since the late beta days). While I don't "like" it really, I personally would not go back to Windows XP either. I consider it a decent upgrade, but it certainly had a rocky start. I did finally turn off User Account Control though. I tried to get used to it, and to like it, but as a developer too much of what I do requires admin access and I grew tired of the constant black-screen prompting.

The number of people that report hating Vista or having actually downgraded to Windows XP was staggering to me so I started keeping track of these conversations formally back in mid-April. I created a simple text file to track the opinions as they were offered.

Today that list looks roughly like this:

Had Vista and Downgraded = 61
Has Vista wants to Downgrade = 23
Has XP wants to Upgrade = 5
Has XP refuses to Upgrade = 56
Has Vista and will stick with it = 4
Has Vista and likes it = 0
Bought or will buy a Mac = 5

That's 154 people and not even ONE of them claims to like Vista. Now, I doubt all of the people refusing to upgrade will actually refuse in the long run and I doubt all of the people wanting to downgrade will actually do so. But the trend is still crystal clear.

The largest group actually DID downgrade back to XP, which is a fucking insane number of people. Do you have any idea how difficult it is for an average non-technical user to perform an OS install? Of course most of these had a tech-savvy relative (or child as was often the case) do the downgrade for them. The fact that so many people actually went to the effort to downgrade is a very strong indicator that this is a very REAL dissatisfaction and not just the idle complaint that has always been there since the PC was invented.

Only 5 of the people I've talked to have bought a Mac instead of a PC, and none mentioned any serious intentions towards moving to Linux, though some of the more computer literate did ask me if I had an opinion about Linux (I never recommend Linux to non-technical users though).

What this tells me is that Windows Vista is a colossal failure in the market. Even Windows ME was better received than Vista (and ME was truly a pile of shit).

While people around here aren't running to the competition yet, if Microsoft doesn't get a new and better OS out the door fast, they will be in serious trouble.... if it isn't already too late.


Wednesday, July 9, 2008

Pack'n less

My XD .40 cal Sub-Compact just wasn't working out... so I've adopted the Walther PPS 9mm as my new choice firearm for concealed carry.

In the fall of last year I bought an XD .40 sub-compact handgun made by Springfield Armory. I chose the XD because I really liked the feel of the weapon, had fired a 9mm version and been very pleased, and the .40 offered substantial firepower in a small form factor. I also loved the features of the XD line.

My intent was to carry it on a semi-regular basis.

That was a little more than 6 months ago.

Overall, the XD is a fantastic weapon, and I would recommend the XD line without reservation to anyone that asks. There are a LOT of good guns on the market, but the XD line is truly at the top of the heap.

Unfortunately, I have discovered some problems that have forced me to the conclusion that the XD sub-compact .40 is just not the right weapon for me personally.

After a couple thousand rounds of regular target practice, I just wasn't consistent in how well I shot. Sometimes I'd do well, and other times I'd hardly put one round in nine on the paper at 5 yards. I'm not naturally a "good" shot, but after that much practice I should at least be consistently bad. The gun itself was fine... other people pick it up and do fantastically well with it.

The other problem was the size of the weapon. I'm 6' 4" tall and weigh in at around 145lbs -- tall and skinny. The XD is a sub-compact with a 3" barrel and an overall length of about 6.5". It isn't a large weapon in those dimensions, but the slide is still a 2" wide chunk of steel. No matter how loosely I try to dress, the XD sticks out like a brick, making wearing it concealed nearly impossible.

But it was the lack of consistent accuracy that forced me to admit that, while I loved the XD, it was not the right weapon for me. Not knowing if you can reliably defend yourself with your side-arm is far worse than just being unarmed in the first place. It's safer for both you, and anyone else that happens to be around when the shit hits the fan.

So reluctantly, I decided to give up my XD and start looking for something that fit me better.

The search was long, and there were a lot of weapons to research and consider. In the end though it was the Walther PPS 9mm that ended up in my hands.


I didn't get to shoot one of these before I bought it. It is a new weapon, and they are very hard to find. Fortunately, one local shop had one in stock, and after seeing it myself I decided it was worth the gamble... well, that and the fact that the shop's owner gave me a reasonable trade-in price on my XD that put the Walther within range of my limited budget (the PPS retails for $650 - $750, when you can find one at all).

The Walther PPS is thin and light. It weighs in at about 20oz loaded, is just over 1" wide at the thickest point, and measures just over 6" long. It is significantly smaller than double-stack sub-compacts like the XD, but still feels more substantial than pocket guns such as the Kahr PM9 or those made by Kel-Tec. The thin design is what makes it conceal well, though it is still a tad largish if you want to use a pocket holster. As thin as I am, the pockets on my Dockers slacks are deep enough to hide it well enough.

What impresses me most about the PPS is the recoil. I've fired a lot of 9mm handguns, and this is the softest-shooting one I've yet to handle. While it is very thin, the backstrap is just big enough to spread the recoil evenly over your palm, and the double recoil spring design and the shape of the weapon itself do the rest.

The slide action is smooth, and the trigger action is great, though a bit stiff at the break point until I put a few hundred rounds through it.

Some people have problems getting used to the magazine release, which is integrated into the bottom of the trigger guard. I personally got used to the release mechanism very quickly, but if you have developed a habit of using a standard thumb button release, it might take you longer to get used to.

One of the features I like most is the choice of 3 magazine sizes. It ships with a 6-round and a 7-round magazine. The 6-round is flush with the bottom of the grip, leaving your pinky exposed, while the 7-round adds enough extra grip to give most people's pinky a resting point. There is an 8-round magazine as well, which gives even people with large hands pinky support. It also ships with two backstraps... a small one for people with girl hands, and a large one for people with hands designed to break rocks.

The only real complaint I have is the "QuickSafe" backstrap mechanism. Removing the backstrap decocks the weapon and renders it un-firable. This also assists with takedown. If you remove the backstrap you can field strip the weapon by just pulling down on the take-down levers and pulling the slide forward. You do not have to pull the trigger or pull the slide back in any way. This makes it very simple to disassemble. Personally though, I would rather have a grip safety instead of the QuickSafe mechanism. It's a minor point though.

One other minor complaint for some people is the somewhat unpredictable direction the ejector will send your spent casings. Mine tends to throw them up and slightly to the right. If you are a left-handed shooter though, it might rain casings down on your head. I don't have a problem with it, as the casings generally fly over my right shoulder, but a couple have bounced off the top of my head anyway.

I've only put a few hundred rounds through it so far, but I've found that I can shoot much more consistently than I did with my XD. I'm still not a good shot, but I can see improvement on each trip to the range now.

The biggest disadvantages to owning the PPS actually have nothing to do with the weapon itself. It is such a new weapon that it is very hard to find accessories for it. There are a few decent holsters on the market now, but not as many as for other more popular weapons. Magazines are the biggest problem, as the cheapest I've been able to locate are in the $50 (each) price range. And if you want lasers or custom sights, you'll find those even harder to come by. This makes the PPS an expensive weapon to own.

But the weapon is gaining popularity fast, and so more and more accessories are coming out for it every month.

If you are looking for a CCW weapon, I highly recommend you consider the Walther PPS.


Thursday, June 26, 2008

I'm not advertising for you...

The South Carolina DMV expects to use my car's license plate to advertise for the state... we'll see about that!

Every few years the state of South Carolina sends me a new license plate. For some reason, they just can't pick a color and design and stick with it. But this time... they have really managed to piss me off.

Here is the new license plate that arrived in the mail yesterday.


First of all, I don't give a rat's ass about the plate being pretty. I was fine with the plain white & blue plates with the high-visibility red font they used to use in the 80's. The purpose of the plate is to give police a way to identify the car. That's all it has to do. But if they want to spend my tax dollars making it all pretty then that's OK with me.

What gripes my ass though is that "TRAVEL2SC.COM" written at the bottom of the plate.  Never mind that it's in all upper-case... it's the idea of having it there at all that pisses me off.

I don't mind branding to some extent. If you make some physical thing that I buy and you want to have your name on it then that's fine as long as your branding isn't over-the-top annoying and doesn't negatively impact the usefulness of the thing you brand. Take my car for example... it has the Toyota logo on front and back, and on back the word "Toyota" and the model of the car. That's fine with me. If someone looking at my car likes it, they might find it useful to know what kind of car it is so they can see about getting one themselves. I always like to be able to look at other people's car and know what kind it is too.

Same with most things and branding... as long as you keep it kinda subtle, tactful, and out of the way.

Advertising is a different matter though. I fucking refuse to advertise for you, especially not for free.

Take my car as an example again. I went to several car lots before I bought my car. One of them had a car that I liked, but it had a big-ass steel logo for the car dealer bolted onto the back. The dealer logo was actually larger than all the Toyota logos combined. I asked them if they could order me the car without the logo or remove the logo and "repair" the hole that the bolts would have left.

They looked at me funny and said they couldn't do that. So I told them the only way I'd buy their car was if they gave me $100/month for each month I owned it, or knocked $7,200 off the price ($100/month times the 6 years I'd financed the car for). The salesman laughed, thinking I was joking... so I got up and walked out of the sales office.

I ended up buying the same car from a dealership that had a simple "sticker" logo that I could remove. I made sure they knew that the only reason I bought their car instead of the other guy was because I could remove the logo.

It's just a thing with me. I will allow you to brand my stuff if you can do it tactfully... but I'm not running around throwing someone else's name in everyone's face. That's why I don't buy a lot of name-brand clothing lines. If your logo is the prominent feature of the shirt, then fuck-off! You aren't charging me $20 to $40 for a shirt AND getting me to do your advertising for you.

But in this case with the license plates, it is even more offensive to me. The state requires that I have a license plate, and there is no alternative vendor I can go to for one. I could buy a "custom" license plate from the state that doesn't have advertising on it... but why should I have to pay $50 more for the "privilege" of dodging the government's dirty advertising?

In the end... duct tape came to the rescue:


I'm serious... I will not advertise for you. I don't give a damn if you are the government or my mom. If you want me to advertise for you.... pay me!




Thursday, June 5, 2008

ASP.NET: web site vs. web application project - Part 2

My previous article about web sites vs. web applications seems to be a popular one, generating about half of the total traffic for this site. Most of that traffic comes from searches. Unfortunately, I doubt the old post really contains what people are actually looking for, so I'd like to spend a little text describing these two project types and how they compare.

A little background:

Visual Studio 2005 introduced the ASP.NET "web site". Not only was this the only project type for ASP.NET when VS 2005 shipped, it was also a major change from the old VS 2003 web project. In web sites the code is compiled on the fly by ASP.NET and there are no Visual Studio specific project files or auto-generated classes involved. This makes web sites simple and easy to deploy (just copy the source to a web server and browse).
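To make that "copy and browse" model concrete, here is a minimal single-file page sketch (the file name and contents are illustrative, not from any real project). The server-side script block is compiled by ASP.NET on first request; no build step and no project file are involved:

```aspx
<%-- Default.aspx: a self-contained web site page.
     ASP.NET compiles the script block on the first request. --%>
<%@ Page Language="C#" %>

<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        // No pre-built assembly needed; edit this file on the
        // server and the next browser refresh picks up the change.
        TimeLabel.Text = DateTime.Now.ToString();
    }
</script>

<html>
<body>
    <form id="MainForm" runat="server">
        Server time: <asp:Label ID="TimeLabel" runat="server" />
    </form>
</body>
</html>
```

Drop a file like this into any ASP.NET-enabled virtual directory and browse to it; that is the entire deployment story for a web site project.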

Not long after VS 2005 shipped, MS released the Web Application Project. This was an add-on initially, but has since been folded into VS 2005 with SP1 and shipped with VS 2008 out of the box.

The Web application project is an updated version of the old VS.NET 2003 project type. It organizes the project using the familiar VS project files and such. It requires you to compile the application before you can run it, but you gain more control over how the application is compiled.

I'll omit a detailed technical description of the differences between web sites and web applications. This territory has been better covered elsewhere on the web, and the MSDN documentation that ships with VS 2008 covers it in detail too.

What most people want to know is, which is better?

The answer does depend a little on personal preference and what kind of application you are building.

I write and maintain several web applications. Some are very small personal sites with mostly static content, while others are huge data entry applications. My largest solution includes about 22 different class library and database projects that support a single web site project.

The web site project has always disappointed me, even with my smaller applications. The Web Application project type has become my preferred approach for all new projects, and I've since converted most of my older web sites to web applications as well.

Web site projects:

Web sites are a little simpler if you are doing inline code instead of code behind. Web sites also reflect changes in code files without needing to be manually compiled. That means you can edit a file and just refresh the browser.

If you need to explicitly "build", so you can ensure your code doesn't have errors for example, you can still do so. However, the "build" command doesn't really compile the project... it just verifies it using the dynamic compiler. While 99% of the time this is fine, I have come across a couple of minor cases where the verification compiler didn't find an error, but attempting to run the site for real did.

Major advantages of web sites:
  • Everything in the project's folder is part of the project. This makes it easy to use other editors or tools with web sites. If you add files outside Visual Studio, they will still be part of the project. If you edit a file outside VS it will still be compiled and the changes visible when the site is viewed in a browser.
     
  • You can deploy without having to compile... just XCOPY and go. Web sites do support pre-compilation if you choose to use it.
     
  • Files don't have to be written in the same language. VS will support having a mix of VB and C# code on a file-by-file basis. Sounds good, but I've never found this useful personally. Maintaining a site is much easier if you stick with one language.
     
  • The "Add Item" dialogs in Visual Studio are more intuitive for web sites. I'm not sure why both project types don't use the same dialogs, but they certainly don't.
     
  • Profile design-time compilation is automatic. The ProfileCommon class is created dynamically, making it easy to work with the profile provider in a strongly typed way.
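For the pre-compilation option mentioned above, the framework ships with a command-line compiler. Something along these lines (the paths are illustrative, and the exact framework folder depends on your installed version):

```bat
rem Pre-compile the web site rooted at C:\Projects\MySite,
rem writing deployable output to C:\Deploy\MySite
%WINDIR%\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe -v /MySite -p C:\Projects\MySite C:\Deploy\MySite
```

Leaving off the target directory gives you a compile-in-place pass, which is roughly the same verification the "build" command in VS performs.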
The biggest annoyances for me with web sites are:
  • No way to really "exclude" a file without renaming it. Refactoring tools and the "compiler" have to crawl through every file in your application. This can get slow if you have a lot of files. For example, I often use FCKEditor, which has a dump-truck load of files. Most of them are not asp.net files. But just having to scan through them when I build or refactor can really slow things down. This has gotten a little better in VS 2008, but not fast enough for my tastes.
     
  • No control over your namespaces. Sure, you can manually add namespaces to pretty much anything, but visual studio will fight you every step of the way. With generated code such as ADO.NET DataSets and such, this gets very hard to control. Eventually you will give up and just let VS put everything in the default namespace. In large applications this gets very annoying, especially if you like a well structured application.
     
  • It is hard (read, nearly impossible) to reference pages, user controls, etc from custom classes in the app_code folder. This produces some interesting problems if you are doing anything fancy like dynamically loading pages or controls and such.
     
  • The application compiles to the Temporary ASP.NET Files folder, a drop location for all the dynamically compiled code the ASP.NET compiler produces. This is a fine mechanism until it breaks. When it breaks you can get really weird compiler errors that make no obvious sense, and they are easy to cause by accident. For example, if you tell VS to "build" and refresh a browser pointed at the site at the same time, the two compiles often conflict in some bizarre manner, corrupting the temp ASP.NET files. When this happens, assuming you figure out that this is the cause of the problem, you have to shut down VS and the web server, manually remove the files from the temp folder, then restart everything.
     
  • No ability to produce XML comment output files. I use the crap out of XML comments, so this is the big deal-breaker for me.
     
  • Not much control over build outputs. In most projects you can set whether a file is compiled, copied to the output directory, and such. But not with web sites. If a file is in the project's folder structure, it is part of the project.
     
  • Team Build hates web sites. Since there is no project file, you can use the Web Deployment Project add-on to help out, but even then I've found that trying to automate a build for any significantly complex web site is a disaster and a time-sink.

  • Disconnected Source Control. VS supports working disconnected from source control these days, but I find that it often has problems keeping web sites in sync when you reconnect. This is a sporadic problem, and hard to reproduce, but seems to be more common with delete, rename, and add operations.
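To illustrate the namespace and app_code annoyances above: you can declare namespaces on your App_Code classes by hand, but those classes still can't reference your pages or user controls in a strongly typed way, because pages compile later into separate dynamic assemblies. A hypothetical sketch (all names are made up):

```csharp
// App_Code/PageLoader.cs -- compiled into the App_Code assembly.
// You CAN declare a namespace here, though VS won't do it for you.
namespace MySite.Infrastructure
{
    using System.Web.UI;

    public static class PageLoader
    {
        public static Control LoadEditor(TemplateControl host)
        {
            // You can't write 'new Controls_Editor()' here: the user
            // control's class lives in a dynamically compiled assembly
            // that App_Code never references. Loosely typed loading by
            // virtual path is the usual workaround.
            return host.LoadControl("~/Controls/Editor.ascx");
        }
    }
}
```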
The web application project:

The web application project is a little more formal than web sites. You get an actual project file by which Visual Studio tracks the files that are in your project. Web applications do generate "designer" files for your pages that link the code-behind to the controls you've put in the markup, but unlike old VS 2003 projects these are much simpler and leverage partial classes and such.
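If you've never peeked inside one, a designer file is just the other half of a partial class: one protected field per runat="server" control, kept in sync with the markup by Visual Studio. The contents look something like this (the class and control names are illustrative):

```csharp
// Default.aspx.designer.cs -- auto-generated; don't edit by hand.
// Visual Studio regenerates this whenever the markup changes.
public partial class _Default
{
    // One protected field per runat="server" control in Default.aspx,
    // so the code-behind can reference controls in a strongly typed way.
    protected global::System.Web.UI.WebControls.Label TimeLabel;
    protected global::System.Web.UI.WebControls.Button SaveButton;
}
```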

The drawbacks are:
  • The site has to be compiled/built before it will run.
     
  • Your project is specific to only one language.
     
  • No automatic support for a Profile class. You have to use a separate tool to generate ProfileCommon or write one manually.
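Writing that ProfileCommon stand-in manually is tedious rather than hard. A minimal sketch, assuming web.config's &lt;profile&gt; element uses the inherits attribute to point at this class and defines a matching DisplayName property (both are hypothetical names here):

```csharp
// A hand-rolled strongly typed wrapper over ProfileBase, standing in
// for the ProfileCommon class that web site projects generate for free.
using System.Web.Profile;

public class ProfileCommon : ProfileBase
{
    // Each property must match an entry under <profile><properties>
    // in web.config; GetPropertyValue does the loosely typed lookup.
    public string DisplayName
    {
        get { return (string)GetPropertyValue("DisplayName"); }
        set { SetPropertyValue("DisplayName", value); }
    }

    public static ProfileCommon GetCurrent()
    {
        // Valid only when web.config declares
        // <profile inherits="ProfileCommon" ...>
        return (ProfileCommon)System.Web.HttpContext.Current.Profile;
    }
}
```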
The major advantages are:
  • Compiling and refactoring are much faster since VS tracks what is in the project and doesn't have to scan everything in every folder. Also, you can have stuff in the folders that isn't part of the project (I find this useful sometimes).

  • You can control namespaces, assembly names, and build behavior for various files in the project. Namespaces are also automatically managed by VS based on the application's folder structure. This includes a real "project properties" editor too with all those familiar things like build options, references, settings, etc.
         
  • You can generate XML comment output files.
         
  • You can exclude files from the project without having to rename them.
         
  • MSBuild and Team Build work much smoother with web application projects.
         
  • Custom code files don't have to be in a specific folder, you can put them anywhere and organize them however you see fit.
         
  • Classes can reference pages and controls.
         
  • You can split the site into multiple projects.
         
  • You can add pre- and post-build steps to compilation.
         
  • Disconnected source control seems to work more consistently with web application projects.
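Most of these knobs are just properties in the project file. An illustrative MSBuild excerpt (names and paths are examples, not from any real project):

```xml
<!-- Excerpt from a hypothetical web application .csproj -->
<PropertyGroup>
  <!-- VS derives namespaces for new files from this root -->
  <RootNamespace>MySite.Web</RootNamespace>
  <AssemblyName>MySite.Web</AssemblyName>
  <!-- The XML comment output file that web site projects can't produce -->
  <DocumentationFile>bin\MySite.Web.xml</DocumentationFile>
</PropertyGroup>
<PropertyGroup>
  <!-- Commands that run around each compile -->
  <PreBuildEvent>echo Starting build of $(ProjectName)</PreBuildEvent>
  <PostBuildEvent>xcopy /y "$(TargetDir)$(TargetFileName)" "$(SolutionDir)Deploy\"</PostBuildEvent>
</PropertyGroup>
```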
The bottom line:

Web applications scale better and are just plain smoother than web sites, assuming you plan to do most of your development directly in Visual Studio. The only major difference is that you have to build manually... so get used to CTRL + SHIFT + B. At least it's pretty fast in VS 2008, and it won't blow up the Temporary ASP.NET Files folder like web site projects can.

I can't say that web sites are inferior to web applications overall. There are cases where web sites do work very well, especially with smaller and simpler projects. I just personally don't find that having on-the-fly compilation is really much extra value, while more control over the application's compilation is always a good thing.