Tuesday, April 24, 2012

Technical Documentation as Reference Material

In my line of work, I have been asked many times over the years to produce test scripts so that the client can "test" and make sure that the system is working as it's supposed to. Another way to put this is to document a complex, technical process so that someone with no experience in the field can do it. For some reason, I thought the testing field was uniquely privileged with these requests but that's not true. Similarly purposed documents are commonplace across disciplines, whether it's requirements gathering or system deployment.

While the actual function may vary, the fallacy in these documents is uniform. If the purpose of the document were simply to provide information to skilled people, there'd be no problem. However, the specific, implicit purpose of these documents is more often than not to enable unskilled people. A document is no replacement for skill or training because a document is limited whereas the realm of possibilities is infinite. Without skill, you'll be swallowed by infinity faster than the fish swallowed Jonah.

So why do we keep doing this?

Because it's an industry practice?
This isn't elementary school where we do things just because it's popular. We should foster practices that produce the best results.

Because the client asked for it?
We're not just code monkeys pumping out a website, we're consultants. It's our duty to the client to educate them on why technical processes should be executed by people with skills in those processes. The truth of the matter is that by doing so, we help insulate them from disaster.

Because the client offered to pay us a lot of money for it?
Do I even need to say that this would be unethical? I hope this is never a factor. Yes, the client is paying us, but with that payment comes an obligation to better the client, not just give them whatever they ask for.

Skilled Work for Skilled People
The most important thing that we need to take to heart is, as I said before, that technical processes should be executed by people with skills in those processes; otherwise, the exposure to risk becomes extraordinary.

  • Why risk having a site that isn't designed to do what it needs to? (BA)
  • Why risk having a site that can't do what it's designed to do? (Development)
  • Why risk having a site that doesn't do what it's designed to do? (Testing)
  • Why risk having a site that isn't up when it needs to be? (Deployment)

References, Not Instructions
With that in mind, I think we ought to shift our paradigm about how we think about technical documents. They're not instructions, they're references. They guide a familiar process for someone who lacks specific knowledge for a given situation. When necessary, training should be offered to fully realize the necessary knowledge transfer. The training, though, is not to impart a skill, such as coding in PHP, but to apply an existing skill to a situation, such as coding a widget within the existing framework and dependencies.

Friday, April 20, 2012

Thoughts on Effective Project Management

Effective project management and execution has many components. To name a few that come to mind:
  • Planning
  • Communication
  • Cooperation
Projects don't always go smoothly though and so we compensate with:
  • Process
  • Documentation
  • Incentives (or bribery)
Don't get me wrong, process, documentation, and incentives are not bad things but they're not guarantees for great projects. Just because you have process doesn't mean your project will be executed well. Just because you have documentation doesn't mean the project will be universally perfectly understood. Just because you have incentives does not mean your team will be motivated.

When the project struggles, don't panic and don't think that there's going to be a miracle cure. So what should you do? Breathe. Relax. Rather than jumping to a change in the model, figure out first if the model is being effectively executed. Sometimes all it takes is some individual coaching or team training.

Think for a second about treating a medical ailment:
Symptoms: Excruciating pain, being unable to walk or stand
Solution: Prescribe heavy medications, such as Oxycodone. If you can't feel the pain, then what does it matter, right? But if you only treat the symptoms and don't fix the problem, the symptoms will recur as soon as the drugs wear off.

Problem: The leg is broken
Solution: Set the leg and/or perform surgery to bolt the bone back together. YES! We are out of pain and we're mobile again. But if you only fix the problem and don't eliminate the cause, then someone is sure to come along and break their leg--OR WORSE!

Cause: The rug is bunched up
Solution: Smooth out the rug. If the rug is prone to bunching, tack it down or figure out what causes it to get bunched up. Perhaps it's too close to a door that opens frequently. If tacking is not an option or it can't be moved further from the door, replace the rug with one that has a non-slip back or that is shorter.

I contend that most issues in project management and execution are like this. Simple causes that, when ignored, lead to expensive problems and exhibit symptoms disconnected from the cause itself. Fixing the rug is a simple fix. It's not a change to the model, it's just taking the existing model and working the kinks out.

Thursday, April 19, 2012

[Not] Everything Is Urgent!

When projects get down to the wire, sometimes certain people (you know who they are) become prone to throwing out the rules for determining defect priority. As I previously wrote in Prioritizing Defect Reports, there are four factors that I consider when setting priority: Severity, Exposure, Business Need, and Timeframe. Unfortunately, Timeframe becomes a stumbling block when they fall prey to the terminal thought, "The deadline is right around the corner and all of these issues need to be done, therefore they are all Urgent!"

I call this a "terminal" thought because it leads to a disastrous method of project management: panic. Panic management occurs when organization goes out the window; and that's exactly what happens when all issues are prioritized the same. The priority indicator loses its value. Even when time is running short and all of the issues are crucial to launch, issues have varying levels of priority and some need to be done before others. And what happens when nothing has any priority? Developers decide for themselves which issues to do and when.

When we get to crunch time, I think it's appropriate not only to redefine the spans of time for the Timeframe factor but to redefine the factor itself. Instead of thinking about a Timeframe, think about a Sequence:
  • Urgent: Drop everything and work on this
  • High: Complete before working on any lower priority issues
  • Normal and Low: After confirming with the project manager that there is nothing more important to be done, concentrate on the Normal priority issues first, incorporating Low priority issues when there's an efficiency advantage. Low priority issues are completed last.
By maintaining your system of priorities, you'll help keep your team focused and everyone will have a clearer vision of the outstanding risk in meeting the deadline.
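To make the Sequence concrete, here's a minimal sketch in Python of how a crunch-time defect queue might be ordered under these rules. The issue IDs and data are hypothetical illustrations, not from any real tracker:

```python
# Sketch: ordering a defect queue by the Sequence priorities above.
# Issue IDs and priorities are made-up examples.
PRIORITY_ORDER = {"Urgent": 0, "High": 1, "Normal": 2, "Low": 3}

issues = [
    {"id": "BUG-101", "priority": "Low"},
    {"id": "BUG-102", "priority": "Urgent"},
    {"id": "BUG-103", "priority": "Normal"},
    {"id": "BUG-104", "priority": "High"},
]

# Sort so Urgent comes first, then High, then Normal, then Low.
# Python's sort is stable, so issues with equal priority keep their
# original (e.g. filing) order.
queue = sorted(issues, key=lambda issue: PRIORITY_ORDER[issue["priority"]])
print([issue["id"] for issue in queue])
```

The point isn't the sorting itself but that the mapping stays meaningful: if every issue were marked Urgent, the sort would change nothing and the queue would be back to developers picking for themselves.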

Friday, April 13, 2012

The Twitter Gun

While I've had a Twitter account for a couple of years, I have only recently begun to utilize it. And while I'm definitely not an expert in Twitter strategy, I am a user and therefore I know what practices annoy the bajeebers out of me.

Twitter is like a mini blog and should generally follow the same guidelines as a regular blog. So in a regular blog, at least every one that I've ever read, each entry is unique. Yes, there may be recurring themes and topics but no one ever reposts a single entry multiple times. By and large, Twitter should be the same. However, some companies I've noticed like to tweet the same thing over and over. Perhaps they have one important message that they're trying to share like, "We're hiring" but they also like to share other useful things like, "Hey check out this link". And so after each time they post a "Hey check out this link" tweet, they repost the "We're hiring" tweet.

To me this practice is Twitter SPAM. It's unprofessional. It's rude to your followers. It encourages your followers not to pay attention to your tweets - which is the absolute last thing you want to happen. And if you happen to be a company that claims to be a social media expert, it can be bad for business.

Consider the similarities and contrasts between two types of guns: the flare gun and the handgun. Both are used to launch projectiles. Both are used to convey a message. A flare gun, however, will only shoot off one round at a time while a handgun can fire many rounds in short succession. You run towards a flare gun but away from a handgun.

Twitter, used effectively, is like a flare gun because you want people to come to you. Therefore, you carefully plan your tweets and each contains a unique message. On the other hand, Twitter, used as I described above, is like a handgun. There's no strategy; you're just blasting away and people are going to run because it's the same thing over and over.

Of course, I think there's a third type of Twitter gun too: the machine gun. These are the companies that just don't stop sending out tweets. Seriously? Yes, you're a news service, but do you really need to send out a tweet every three minutes when you post a new article? UN-SUB-SCRIBE! Maybe some people don't mind that but to me it just clogs up the feed and makes the useful information impossible to glean.

So if you run a company with an active Twitter strategy, think about how you're affecting your followers. Plan your tweets and don't inundate your followers with repetitive information.

Wednesday, April 11, 2012

The Quality Assurance Misnomer

Over the years, the industry standard title for a software tester has become “Quality Assurance Analyst/Engineer”. I don’t know the history of this but I do know that it’s not without controversy. When most people in the software industry hear the title, they think of the person’s role as being someone who assures the quality of the software product.  That’s a big problem though because, as a Quality Assurance Engineer myself, that’s not what my job is, nor is it generally the job of any software tester that I know.

Here’s the root of the problem: Quality is a business decision that evaluates whether a product or part thereof is GOOD or BAD, and that is a decision that lies in the hands of the product owner – the person who is or represents the product purchaser/user. The reason for this is that they are the person who best knows whether the purchaser/user will be satisfied. Testers don't know the user, so it would be presumptuous at best to put that responsibility on them.

What I do is I poke, stress, and exercise the software in every way that I can think of and make observations. I compare those observations to documentation and common heuristics that help identify when intended functionality deviates from actual functionality. And then my job is actually to help inform that business decision by reporting all of my observations through conversation, defect reports, and progress reports to the product owner. You see, I can report RIGHT or WRONG based on my comparison to documentation and heuristics but that’s different than GOOD or BAD.

Determining GOOD or BAD is a murky process that is above the pay grade of a lowly software tester and that has to factor in many components including:
  • Test coverage
  • How stale the testing is (i.e. when was the last time part x was tested?)
  • How many failed test cases are outstanding
  • The severity of the outstanding defects
  • The amount of time allowed for planning, developing, and testing
  • The gut factor (i.e. does it feel like it’s ready?)

Then, taking all of this information into account, determining whether the product is GOOD or BAD requires a sense of whether the customer will be satisfied, as I mentioned above. The only way to develop this sense is to have active, direct interaction with the customer from the beginning of the project or to just leave it directly up to the customer. Though I don’t recommend the latter because, without somehow quantifying an acceptance level, they may never agree that the product is good.
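As an illustration only, here's a toy sketch in Python of how those readiness factors might be rolled up into a report for the product owner. Every field name, threshold, and number here is a hypothetical assumption; the point is that the tester reports observations (RIGHT or WRONG) while the GOOD-or-BAD verdict stays with the product owner:

```python
# Sketch: collecting the readiness factors above into a report for the
# product owner. All fields and values are hypothetical illustrations;
# this produces observations, not a GOOD/BAD verdict.
from dataclasses import dataclass

@dataclass
class ReadinessReport:
    coverage_pct: float        # test coverage
    days_since_last_test: int  # how stale the testing is
    open_failed_cases: int     # failed test cases still outstanding
    max_open_severity: int     # worst open defect: 1 = cosmetic ... 5 = critical

    def summary(self) -> str:
        # The tester reports the facts; the product owner decides.
        return (
            f"coverage={self.coverage_pct}%, "
            f"stale={self.days_since_last_test}d, "
            f"open failures={self.open_failed_cases}, "
            f"worst severity={self.max_open_severity}"
        )

report = ReadinessReport(82.5, 3, 4, 2)
print(report.summary())
```

Notice what's deliberately missing: there's no `is_good()` method, because that call belongs to the person who knows the customer.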

All that said, I don’t think that the title “Quality Assurance Analyst/Engineer” is BAD, just misunderstood. While I don’t assure quality through analysis and/or engineering, I do analyze and engineer so that quality can be assured.