So you’ve rebooted the server, rebuilt the desktop, refreshed the user’s profile, fully patched everything, and the third-party support team tells you it’s not their problem; you’ve even read articles on the second page of Google results and the application still doesn’t work!
The next step is usually a gathering of technology experts who, after careful discussion and consultation, blame the desktop build. “It’ll be conflicting versions of the Java runtime environment,” you’ll hear, when a web app isn’t performing.
In more enlightened organisations the experts will look at the recorded symptoms, create a hypothesis, and test it. This all sounds very scientific; we used the word hypothesis, after all. Unfortunately, more often than not, the symptoms come from call logs: “The application is a bit slow at some point most mornings. I think some of my colleagues get it as well.” A hypothesis created from bad information is hardly scientific. Darwin collected data for five years on the Beagle before he came up with anything good.
It’s tempting to blame those poor people on the helpdesk for not recording better quality information, but the reality is that complex issues need the kind of high-quality, accurate information that helpdesks and end users simply cannot provide.
This is a blog about solving complex IT issues using data provided by a range of diagnostic tools. I’ll be detailing some of my favourite tools and illustrating their use with real-world examples from my working life.
Before we go any further I have to admit to two key influences: Paul Offord’s book “Rapid Problem Resolution”, which remains the most sensible thing I’ve read about troubleshooting, and Mark Russinovich’s blog “Case of the Unexplained”.
The posts fall into one of two categories: “All tooled up…”, which are overviews of my favourite tools and techniques for using them, and “The trouble with…”, which are real-world examples of using those tools to solve problems.
I hope you find it interesting, and good troubleshooting!