This last Sunday afternoon, I went to spend a few hours with some friends at the beach. Interestingly, in Long Beach (a city with "beach" in the name), going to the beach is entirely a land activity because the water is so polluted, but that's beside the point. Anyway, it was a beautiful, sunny day, and I knew I needed to protect myself from the deadly rays of the sun.
I applied sunblock.
I put it on my face. I put it on the edges of my ears. I put it on my neck. I put it on my hands and arms. I put it on the tops of my feet.
Fast forward a couple of hours... I came home with a sunburn.
But how and where, you wonder.
When I was applying my sunblock, I was sitting on the couch in my living room, watching TV. My experience from times when I've gone out without sunblock told me to get the edges of my ears and the tops of my feet and of course my face, neck, and arms. So I have a burn on my forehead and the insides of my knees.
Evidently, when I was applying the protectant to my face, I thought I went all the way up to my hairline, but it must have been my idealized hairline and not my actual hairline because there is a very distinct demarcation between healthy and burnt skin. If I had been in front of a mirror, I likely would not have made this mistake. And as for my knees, that was poor planning. I imagined myself as being up and about rather than sitting.
I actually have a point with this.
When we testers are doing our job, we like to think that we're covering everything, or at least all of the essential elements. But how many times does a product (or, in my case, a website) launch and leave you burned? Glaring red spots that seem obvious in hindsight. And of course they hurt and peel and are an embarrassment until they heal.
Testing is something that we do without being able to see the entire picture. There's no magical tool (mirror) that we can look into and see where we haven't applied our skills. Testing coverage is mostly verified by feel. We can draw on our previous experiences. We can understand the landscape by looking at other similar projects. We can be well versed in the documentation relating to the project. But no matter what, there's always going to be some spot that you can't see.
And sometimes there are spots that you didn't plan for. You plan for shorts day at the beach and then when you get there, you're handed a Speedo and told to change. We take calculated risks because of limited time and money... and then the unexpected happens.
Hopefully you don't get burnt but when you do (and you will), hopefully you learn from it. I know I'll definitely be more careful about applying sunblock to my face down the road.
NOTE: There are no pictures of the burn described in this post. Don't ask.
Because software testing without using your brain is as good an idea as sticking your head in the sand to hide
Thursday, November 15, 2012
The Dog Age Paradigm
It's a well-known fact that I am the proud parent of the world's handsomest dog. Diego and I have our birthdays a mere 10 days apart, which inevitably leads to the "dog years" discussion. While I'm chronologically ten times older than he is, in "dog years" I'm only about 50% older -- that is, if you subscribe to the notion of "dog years".
The average lifespan of dogs varies from breed to breed but it's somewhere in the neighborhood of 12 years. Since humans live on average about 80 years, we can say that dogs age about seven times faster than humans. Thus, while Diego has now completed three trips around the sun, he's actually 21 in dog years.
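Stated as a formula, the conventional model is nothing more than a straight multiplication (this is just the rule described above written out, not an endorsement of it):

$$\text{age in dog years} = 7 \times \text{age in calendar years}, \qquad 7 \times 3 = 21$$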
I think this model is silly and here are just a few examples of why:
- Dogs can walk within weeks of birth (not even three months in dog years)
- They reach adolescence well before a year (not even seven in dog years)
- They're full grown well before being two (not even fourteen in dog years)
So what does any of this have to do with testing? A few questions to ask yourself:
- Are there things in your project and/or testing that you just blindly accept as true even though they make no sense if you actually think about them?
- One can prove anything with numbers.
- Do you have processes that are overly simplified [or complicated] such that the intent gets lost?
I'm not trying to suggest in any of those scenarios that the opinion was formed haphazardly. Rather, I'm suggesting that you never stop questioning anything, including your own conclusions. Things change as time goes on and context changes. I suppose the dog years "formula" came about as a way of explaining to children why pets don't live as long as humans. What might be acceptable in explaining a concept to a 5-year-old doesn't work for a 30-year-old.
Alternatively, the death of a family pet could be the "right time" to start teaching kids algebra. If we have to have a formula, I think this one works much better:
x equals the age in dog years and y equals the age in human years.
Wednesday, July 25, 2012
Your Website Is a Rubik's Cube
My coworker, @jakedowns, has a Rubik's Cube sitting on his desk. It seems to serve as a physical manifestation of the gears turning in his brain as he's working on solving a development problem. He's actually pretty good at solving them and can usually do it in less than a minute.
On occasion, I've happened past his desk and noticed that it has been arranged in a checkerboard pattern. Last Friday I decided that he needed a challenge so I designed a pattern and told him to replicate it.
And he did! However, the pattern I provided was for only one face and his solution was for only one face. At the time, I made a joke that "the requirements were met but the purpose was not fulfilled" because I wanted the pattern to be displayed on all six faces.
At this, I set to work plotting out all six faces. It took me hours, literally, to figure out a valid solution extending the original pattern to all six sides. It was a great exercise for me. I learned that the pattern does not work when using the colors of opposing faces (e.g. blue and green). I also learned that not only do I have to account for all eight corners, I also have to make sure the three colors on each corner are in the correct position relative to one another. I also exercised my spatial reasoning skills quite a bit.
In the midst of all of this, I was thinking about how websites are like Rubik's Cubes. Designing and building a website isn't as simple as solving a Rubik's Cube where each side is a solid color. Rather, every website is unique, perhaps similar to others, but still has its own individual requirements, much the same way that I created a new requirement for what it meant for Jake to solve the puzzle.
Upon doing a little internet research, it seems that there are some 43,252,003,274,489,856,000 (that's forty-three quintillion) valid combinations for a Rubik's Cube. When solving one for the traditional pattern, there is a well-established method to do so. However, when trying to solve for a custom design, most, if not all of that goes out the window. You still twist and turn but the algorithms have to be all new.
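If you're curious where that forty-three quintillion comes from, it falls out of counting the arrangements of the eight corner pieces and twelve edge pieces, then throwing away the arrangements that legal twists can never reach. A rough sketch of the arithmetic:

$$8! \times 3^{7} \times \frac{12!}{2} \times 2^{11} = 43{,}252{,}003{,}274{,}489{,}856{,}000$$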
The thing about websites is that you don't just twist 'em and turn 'em until you feel like stopping and saying "Solved!" You have a goal in mind. Sometimes it takes a lot of work to figure out exactly what that goal is. Then you twist and turn until what you have matches that goal. It's a complex process because no two solutions are the same.
Here are some practical takeaways to keep in mind:
- Your customer, no matter what they say, has a very specific result in mind for you building their website.
- It is worth every minute to take the time up front to do design and business analysis.
- Changes made once development (twisting and turning) has begun are going to affect the deadline and will likely mean that some things have to be done over.
- Testers are happy to look at your design before development begins and check whether the corners match up the way they should.
Monday, July 23, 2012
Improvising on a tune called "Exploratory Testing"
Paul Manz is someone whom I would consider to be among the greatest North American organists of the twentieth century. Had you ever attended one of his hymn festivals, you would have been treated to a number of improvisations - that is, simultaneous composition and performance. And I guarantee you, you would never have thought, "This isn't music, this is just noise!" That's because his improvisations had all of the structure of music as we know it: tonality, meter, tempo, dynamics, melody, harmony, etc. And he was doing it ON THE SPOT; nothing was written down.
Exploratory testing has many parallels with improvised music, and yet it doesn't get the same respect even when executed by the "Paul Manz-es" of the testing world like James Bach and Anne-Marie Charrett.
I improvise regularly when I sit at the piano. I wasn't always very good, but my skill has improved little by little over time, particularly over the last two years, during which I have had to create my own accompaniments to support congregational singing when the music editor at Oregon Catholic Press fails to understand the needs of the untrained singer. But I digress. My point, however, is that no one would question the legitimacy of my playing even though I didn't sound like Paul Manz and the only thing in writing was the melody and a harmonic suggestion.
E.T. is structured just like improvisations are. There's usually some sort of suggested charter or mission. Testers utilize various techniques to expose and isolate defects. Yet because it isn't written down in meticulous detail, E.T. is considered inferior. Meanwhile, some of the worst music of all time has been written down, performed over and over, and made the performer filthy rich.
While I think there's a close parallel between improvising and E.T., it doesn't hold up for written music and scripted testing. Music and testing are both art and science but I think they use them in opposite ways. Music looks for an artistic result achieved through a scientific process whereas testing looks for a scientific result achieved through an artistic process. When you script a test, you strip out the art - the intent, the intuition, the sapience, the wonder.
Any good musician should be able to read and perform music because it demonstrates technical ability while being a means by which we can learn, ultimately, to express our own artistic thoughts. A tester has no need to know how to write scripts or execute them. One learns to test by testing, talking with a mentor, reading, writing, and so on. By gaining an understanding of the philosophy of testing, we learn ultimately to achieve the scientific results.
It's not fair that exploratory testing doesn't always get the credit it deserves, but then again, life isn't fair. Sojourn on, testers: continually strive to better yourselves and serve as a positive example of just how effective exploratory testing is.
Wednesday, June 6, 2012
Two Keys for Effective Defect Reports
When I was in high school, a class that I was in was given a writing assignment. This was a type of class that ordinarily wouldn't have writing assignments and so the students were a little unsure as to the requirements. One of my classmates asked, "How long should the paper be?" And the perverted old man replied, "The paper should be like a woman's skirt: long enough to cover the subject but short enough to still be interesting."
Today I want to expound upon effective defect reports. I am an absolute stickler for detailed defect reports. I have no problem sending issues back that don't have sufficient information. Defect reports are adult writing assignments and we need to make sure that we are including all of the necessary information. I could run down my list of criteria that I expect in every bug report, but you can find those types of lists lots of places. Besides, the list, like everything in testing, is context-oriented.
As inappropriate as my teacher was, he did imply an important point: it's not about a measurement, it's about accomplishing a purpose. Skirts, and clothing in general, maintain a certain level of modesty, protect us from the elements, and evoke intrigue. Defect reports must fulfill two very important functions, otherwise they are a failure.
An effective defect report makes reproduction and troubleshooting as simple as possible.
More often than not, the person who discovers the defect is not the person who resolves the defect. This is a simple efficiency issue. If you have a PM or Scrum Master reviewing issues, they need to be able to assess the priority and assign the issue to the most appropriate developer. The developer should be able to understand what's going on without having to guess or ask additional questions so they can spend less time identifying the issue and more time fixing.
An effective defect report archives the defect.
The purpose of the archive is to support later testing. Oftentimes, the person who discovers the defect is not the person who verifies that it has been fixed. The tester needs to know exactly what was wrong so that when they test the resolution, they can determine beyond a doubt whether or not the defect is fixed. If the tester doesn't know precisely what was wrong, it's impossible to know if it's fixed.
If your bug reports fulfill those two functions, then you're well on your way to getting an A+ on your writing skills. Remember, context is everything - sometimes bug reproduction is elusive and sometimes a conversation can convey important information that is hard to express in text. If you keep the purpose in mind, the details will fall into place.
Thursday, May 10, 2012
One For and Two Against Test Scripts
The following is a follow-up to my own Technical Documentation as Reference Material as well as David Greenlees' "Idiot Scripts".
Several thoughts come to my mind when I think about test scripts written for others to use.
I have at times, even recently, asked other people to write test scripts. Not because I want to use them or have them ready to distribute to someone else, but because I wanted to use them as a tool to give me insight into their approach to testing. It probably isn't the most efficient method, but it seemed to be the best solution for the circumstances.
To me the intent of scripts for use by business users or during UAT is basically the same: happy path/positive testing that shows that the system works as expected.
The problem I have with writing scripts for business users is that I expect them to know how the system works, and test scripts are a horribly inefficient form of user documentation. Besides, they leave the intent of each step in obscurity. It makes more sense to me to teach the business user how to use the system and then let them use it, whether that's through a full-blown manual, a help file, a training seminar, or a phone call. If the system is complicated enough that it isn't readily apparent to the average user how to use it, then you're going to need some sort of training program regardless, so why duplicate effort by writing test scripts?
The problem I have with writing scripts for UAT is the same as the one above, but it goes deeper. Some, perhaps most people, might not agree with me. When I think about writing UAT scripts, it gives my heart ethical palpitations! UAT isn't just verifying functionality, it's verifying that the functionality is acceptable. Determining whether the software/website is acceptable is a judgement call that only the client can make. Granted, acceptance criteria can be written out and negotiated between the client and the agency, but it's still a subjective evaluation when it comes down to it. The specific problem I have with UAT scripts, then, is that I, as the script writer, am determining whether the deliverable is or is not acceptable. If the client wants to write an objective set of steps that define acceptability, they can do that, but that's on them. And if they then want to go through some sort of approval process, it just becomes a dog and pony show.
Wednesday, May 9, 2012
The Test Pilot
Just because you...
...designed the airplane...
...built the airplane...
...managed the design, construction, and testing of the airplane...
...flew on an airplane once...
...played an airplane video game...
...have a cousin that flies airplanes...
...read a book about flying airplanes...
...doesn’t mean you know how to fly the airplane!
I cannot emphasize this enough: testing is a skilled trade - whether it's testing an airplane or testing a website. The project manager for the design and construction of a new airplane would never dream of trying to fly the plane himself. Instead, he delegates that responsibility to someone who knows how to fly the plane.
But just because you...
...know how to fly the plane...
...doesn't mean you know how to test the airplane!
Testing goes well beyond normal use. Test pilots need to be able to conceptualize the abnormal, the extreme, the emergency, even the absurd situations and have the skill to execute on them. Yes, every pilot should be trained to handle emergency situations but being forced into an emergency is far different from intentionally stepping into one.
Testing websites and software shouldn't be any different. There are certainly valid scenarios for the project manager or the client to do some testing but having them do the heavy lifting will lead to poor results.
Now this isn't to say that testing is an exclusive club by any means. You learn testing by doing. You really can't go to school to get a degree in testing. Attending a three-day seminar won't make you a good tester. Getting an expensive certification from ISTQB won't make you a good tester. You become a good tester with practice, by having an open mind, by challenging the status quo, by thinking, by being independent.
Tuesday, May 8, 2012
Fast, Good, Cheap, and _______
The Triple Constraint or Project Management Triangle is a device that describes opposing variables in project management. The variation that I'm most familiar with has the three sides of the triangle representing Fast, Good, and Cheap. The principle is that, when applied to a project, benefiting any two of the variables comes at the expense of the third. For example, you can have a project run fast and deliver a good product, but it won't be cheap.
Who says you have to sacrifice though? It's really not a "who" but a "what", which I recognized thanks to a recent post by Catherine Powell. (Our models differ, but it's because of her that my brain got to thinking about this.) There's a fourth project component missing from the model, and that's Scope. The fourth variable necessitates compromise, but before talking about how Scope comes into play, let's review how the Project Management Triangle works. There is a variable for each side of the triangle (A, B, and C) and a fourth variable for the perimeter of the triangle (D). The perimeter is fixed so that no matter what, A + B + C = D. If A increases, B and/or C must decrease. If A and B increase, C must decrease.
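A tiny numeric illustration of that constraint (the numbers are arbitrary, chosen only to show the trade-off): if the perimeter is fixed at D = 12 and we start out balanced, then growing one side forces the other two to shrink.

$$4 + 4 + 4 = 12 \quad\longrightarrow\quad 6 + B + C = 12 \;\Rightarrow\; B + C = 6$$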
Why does the perimeter have to be fixed? Let's face it, most projects, if not all, have one thing that is 100% non-negotiable. In the Fast, Good, Cheap model, this is Scope. Without answering "what is this project about" you really don't have a project. Even if a project has more than one "non-negotiable," one will still trump the other.
As you may have recognized, the three sides of the triangle are not permanently designated Fast (Timeline), Good (Quality), and Cheap (Cost). Rather, the three sides are whichever variables are not the paramount, non-negotiable one assigned to the perimeter.
For Example
Timeline: The client needs a website to go live at the same time as a huge marketing campaign
Quality: The website must adhere to government regulations
Scope: The functionality of the website has to include everything specified
Cost: The expense of the project cannot go one penny over
Let's now apply these variables to the sides of the triangle. When a side of the triangle is benefited, it gets longer but it gets shorter when penalized.
| Variable | Benefited | Penalized |
|---|---|---|
| Timeline | Decreases | Increases |
| Quality | Increases | Decreases |
| Scope | Increases | Decreases |
| Cost | Decreases | Increases |
Using the Fast, Good, Cheap model for the three sides, let's say Scope is our fixed perimeter with a non-negotiable set of requirements. Let's say the company wants a bigger profit margin, which is in essence cutting the cost. If we want it to stay on time, the quality will suffer because of reduced testing. If we want it to maintain quality, the timeline will suffer because of less-skilled (i.e. cheaper) labor.
Under ideal circumstances, we'll have an equilateral triangle. This requires doing your homework up front. Figure out what your non-negotiable variable is and then set the remaining variables based on it. Doing this should minimize the need to make compromises, allowing you to avoid changing anything. If change is necessary and compromises are undesirable, the only way to change the perimeter is to change the contract, either through a change order or a completely new contract.
Thursday, April 19, 2012
[Not] Everything Is Urgent!
When projects get down to the wire, sometimes certain people (you know who they are) become prone to throwing out the rules for determining defect priority. As I previously wrote in Prioritizing Defect Reports, there are four factors that I consider when setting priority: Severity, Exposure, Business Need, and Timeframe. Unfortunately, Timeframe becomes a stumbling block when they fall prey to the terminal thought, "The deadline is right around the corner and all of these issues need to be done, therefore they are all Urgent!"
I call this a "terminal" thought because it leads to a disastrous method of project management: panic. Panic management occurs when organization goes out the window, and that's exactly what happens when all issues are prioritized the same. The priority indicator loses its value. Even when time is running short and all of the issues are crucial to launch, issues have varying levels of priority and some need to be done before others. And what happens when nothing has any priority? Developers decide for themselves which issues to do and when.
When we get to crunch time, I think it's appropriate to not only redefine the spans of time for the Timeframe factor but redefine the factor. Instead of thinking about a Timeframe, think about a Sequence:
- Urgent: Drop everything and work on this
- High: Complete before working on any lower priority issues
- Normal and Low: After confirming with the project manager that there is nothing more important to be done, concentrate on the Normal priority issues first, incorporating Low priority issues where there's an efficiency advantage. Low priority issues are completed last.
Wednesday, April 11, 2012
The Quality Assurance Misnomer
Over the years, the industry standard title for a software tester has become “Quality Assurance Analyst/Engineer”. I don’t know the history of this but I do know that it’s not without controversy. When most people in the software industry hear the title, they think of the person’s role as being someone who assures the quality of the software product. That’s a big problem though because, as a Quality Assurance Engineer myself, that’s not what my job is, nor is it generally the job of any software tester that I know.
Here’s the root of the problem: Quality is a business decision that evaluates whether a product or part thereof is GOOD or BAD, and that decision lies in the hands of the product owner – the person who is or represents the product purchaser/user. The reason is that they are the person who best knows whether the purchaser/user will be satisfied. Testers don't know the user, so it would be presumptuous at best to put that responsibility on them.
What I do is I poke, stress, and exercise the software in every way that I can think of and make observations. I compare those observations to documentation and common heuristics that help identify when intended functionality deviates from actual functionality. And then my job is actually to help inform that business decision by reporting all of my observations through conversation, defect reports, and progress reports to the product owner. You see, I can report RIGHT or WRONG based on my comparison to documentation and heuristics but that’s different than GOOD or BAD.
Determining GOOD or BAD is a murky process that is above the pay grade of a lowly software tester and that has to factor in many components including:
- Test coverage
- How stale is the testing (i.e. when was the last time part x was tested?)
- How many failed test cases are outstanding
- What the severity is of the outstanding defects
- Amount of time allowed for planning, developing, and testing
- The gut factor (i.e. does it feel like it’s ready?)
Then, taking all of this information into account, determining whether the product is GOOD or BAD requires a sense of whether the customer will be satisfied, as I mentioned above. The only way to develop this sense is to have active, direct interaction with the customer from the beginning of the project, or to just leave the decision directly up to the customer. I don't recommend the latter, though, because without somehow quantifying an acceptance level, they may never agree that the product is good.
All that said, I don’t think that the title “Quality Assurance Analyst/Engineer” is BAD, just misunderstood. While I don’t assure quality through analysis and/or engineering, I do analyze and engineer so that quality can be assured.
Friday, March 16, 2012
Gems of Wisdom from Fellow Software Testers
This past week, I've been feeling very cerebral, and the timing couldn't have been better because I've been able to give my blog some much-needed attention and get almost caught up on the backlog of blogs that I follow. In my reading, I have come across a number of gems of wisdom that I think are well worth sharing:
James Bach in Why Scripted Testing is Not for Novices
- [A] scripted tester, to do well, must apprehend the intent of the one who wrote the script. Moreover, the scripted tester must go beyond the stated intent and honor the tacit intent, as well– otherwise it’s just shallow, bad testing. - TW: This problem is a direct result of divorcing test design and execution. And for a novice tester, they simply don't have the skills yet to "read between the lines" of the script to see the intent.
Michael Bolton in Why Checking Is Not Enough
- But even when we’re working on the best imaginable teams in the best-managed projects, as soon as we begin to test, we begin immediately to discover things that no one—neither testers, designers, programmers, nor product owner—had anticipated or considered before testing revealed them.
- It’s important not to confuse checks with oracles. An oracle is a principle or mechanism by which we recognize a problem. A check is a mechanism, an observation linked to a decision rule.
- Testing is not governed by rules; it is governed by heuristics that, to be applied appropriately, require sapient awareness and judgement.
- A passing check doesn’t tell us that the product is acceptable. At best, a check that doesn’t pass suggests that there is a problem in the product that might make it unacceptable.
- Yet not even testing is about telling people that the product is acceptable. - TW: I've been trying to promote this concept for at least a year in my own evolutionary understanding of my craft. You can expect my own blogging on this topic soon.
- Testing is about investigating the product to reveal knowledge that informs the acceptability decision.
Michael Bolton in What Exploratory Testing Is Not (Part 3): Tool-Free Testing
- People often make a distinction between “automated” and “exploratory” testing. - TW: This is the first sentence and BAM! did it cause a paradigm shift for me!
- That traditional view of test automation focuses on performing checks, but that’s not the only way in which automation can help testing. In the Rapid Software Testing class, James Bach and I suggest a more expansive view of test automation: any use of tools to support testing.
Anne-Marie Charrett in Please don't hire me
- If you want me to break your code - TW: This set my brain going for at least two hours thinking about the implications of this. It's a great point though, testers don't break code. Look forward to more on this in the future.
I hope you find some wisdom in this too.
Tuesday, March 13, 2012
Predator II: Just Die Already!
What do they do in Hollywood if a movie does well at the box office? Make a sequel and use the success of the first movie to milk consumers for more money. Predator did well, earning almost $100M, so naturally, Predator II came along three years later. Sequels usually have a common relationship with the original: in the first movie, the hero, after great struggle, finally kills the villain, and then the sequel comes along and somehow the villain is back to life causing even more havoc. I see that and get to thinking, "Gosh darn! Why won't that thing JUST DIE ALREADY!?" I often ask that question about software defects because defects have sequels too (and so should blogs)!
Despite my best efforts - thorough coverage, testing every test case I can think of, making sure everything WORKS - out of nowhere pops up some bug that I thought was squashed weeks ago! Sometimes it doesn't matter how big your muscles are, how many bullets you have in your gun, or how many other people have their guns pointed at the same monster, they still come back to life.
This can be caused by any number of things. It could be laziness on the part of the tester (or the developer) but I don't think that's usually the case. Here are some other causes:
- Unclear requirements
- Inexperience
- Lack of time
- Introduction of new/altered code
- Using [testing] the software in a way that it wasn't previously used
When defects do reoccur, use it as a learning catalyst. Was there an obvious breakdown in the development or testing methodology? Is this a "new" method of execution that should be noted for the future and applied to other scenarios? Is there a different testing technique or tool that might have exposed that the defect was not fully resolved earlier?
Old defects will continue to pop up. Don't be discouraged but remain vigilant. Keep sharpening your skills and be ready for the enemy to return, because he always does, sometimes with Aliens.
Please study for my interview
I recently volunteered myself into the position of reviewing resumes, identifying candidates, and conducting first-round interviews for a QA position at my company. This has been a real learning experience as I've garnered a better understanding of the hiring process; but more than that, it has been an eye-opening one. The great majority of applicants in the field are ROBOTS, figuratively speaking. Requirements get turned into scripts and that's how you test. As long as you have the right process, the right charts, the right formula, that's all you need.
I disagree. Don't get me wrong, I want someone that can transform requirements into test scripts, but what I really want is someone that can THINK. Someone that can test what's not in the requirements document. To quote Anne-Marie Charrett, I want to hire someone that says, "Don't hire me if you want perfect software. Don't hire me if you're looking for a tester to only "check" if things look ok."
For my interviews, I developed a standard list of questions to ask. Each question was carefully thought out and has a specific purpose for being asked. Here's my list:
- How did you get into software testing?
- What do you like about testing?
- What are your frustrations with testing and how do you deal with them?
- For example: How do you deal with being the resident naysayer?
- For example: How do you deal with defects that just never seem to get fixed?
- What comes to mind when you hear the term ‘Quality Assurance’?
- Compare and contrast automated and manual testing.
- Compare and contrast scripted and exploratory testing.
- How confident are you in your ability to deliver defect free software?
- What essential information should be included in a defect report?
- How do you determine if something is working correctly when you have no documentation?
- What’s your experience with Agile?
- How do you sharpen your testing skills?
Monday, March 12, 2012
Using XQuery in MS SQL for Novices Part 2
How to use the value() function
Just the other day I started the conversation about how to use XQuery functions in SQL to pull specific XML data out of a table. It's time now for Part 2.
In the first discussion, I explained how to use the query() function. It's a very nice function because it returns both the XML nodes and the data within them. However, if all you want is the specific data in the node, having the nodes display in each result can make it more difficult to read the results. For this purpose we can use the value() function.
The value() function is ever-eager to throw the error "'value()' requires a singleton (or empty sequence)". It's also eager to throw the error "The value function requires 2 argument(s)." Obviously, these errors occur when we don't provide all of the necessary information.
The two arguments required by the value() function are:
- The reference to the node
- A data type specification in which to host the returned information
This is how it looks:
SELECT XMLColumn.value('(//XML-Node-Name)[1]', 'DataType(#)') FROM Table
Don't forget to add a comma between the arguments!
We can further tidy up the results by aliasing the column (assigning it a custom name of our choice) and using the exists() function.
SELECT XMLColumn.value('(//XML-Node-Name)[1]', 'DataType(#)') 'Alias Column Name' FROM Table WHERE XMLColumn.exists('//XML-Node-Name') = 1
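To make that a little more concrete, here's a hypothetical version of the same query. The table, column, and node names (Orders, OrderXML, CustomerEmail) and the varchar(100) data type are made up for illustration, so swap in your own:
-- hypothetical names: Orders table, OrderXML column (XML data type), CustomerEmail node
SELECT OrderXML.value('(//CustomerEmail)[1]', 'varchar(100)') 'Customer Email'
FROM Orders
WHERE OrderXML.exists('//CustomerEmail') = 1
The [1] is what satisfies the singleton requirement mentioned above, and varchar(100) is the required second argument.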
Sunday, March 11, 2012
What should be the easiest archaeological dig in the world
When I entered the field of software testing, the last thing on my mind was archaeology. Software testing is about the future whereas archaeology is about the past. Technology is clean and non-physical but archaeology is dirty and labor-intensive. However, the two fields aren't all that different. A good software tester is going to thrive on artifacts, but not the kind buried in the ruins of ancient civilizations.
In software projects, an artifact is any piece of documentation that relates to the project:
- RFP
- RFP Response
- SOW
- Project Plan
- Requirements
- Design comps
- Story boards
- User personas
- Test scripts
- Defect reports
- Meeting summaries
- etc.
The artifacts mentioned above help software testers the same way bones help paleontologists. Both are vital in the acquisition of knowledge. Testers need to have a thirst for project knowledge. How can one assert that the product works as intended without an authoritative source? Regardless of approach, the more the tester knows about how the software is supposed to work, the more thoroughly they can test through broader coverage and more intricate scenarios.
When projects don't have a decent amount of documentation, you're asking for scope creep and unsatisfied client expectations. Testers, get dirty and dig into the documentation. Insist on having access to it and demand it early. Study it thoroughly. Use it to help design your tests. If the documentation doesn't exist, consult the project experts (well, consult them anyway) and then write it down. Once it's written down, it can't be disputed - it might be changed later but that can happen to any document. Moral of the story: use everything at your disposal to learn about the project.
Friday, March 9, 2012
Using XQuery in MS SQL for Novices Part 1
A recent project at work presented me with a new challenge. Generally in a database, at least the ones that I've had to work with, each different type of data is stored in its own column. Well, the project currently under test downloads information from a third-party system in the form of an XML file. Even though we don't use every bit of data provided by the third party, we still want to preserve it in its original format. Therefore we store the entire XML string in the database.
When XML data is stored in a database, there are at least a couple different data types that can be used.
- It could be stored as just regular old text using the varchar data type. In that case whatever program is using the data has to convert it back into XML in order to use it. The Umbraco content management system likes to do it this way.
- On the other hand, it could be stored natively using the XML data type, which is what our system does. Storing the data AS XML presents some interesting challenges as well as many opportunities when it comes to digging for information within the XML string itself -- most of which I'm still figuring out. For today, I'd like to start with just the basics.
Doing It Old School
If all you're trying to do is find the rows where a specific series of characters occurs, you can search just like you always do... almost.
SELECT * FROM Table WHERE XMLColumn LIKE '%key-text%'
This traditional query will throw an error faster than a lazy developer's code. Instead, you need to change data types on the fly:
SELECT * FROM Table
WHERE CAST(XMLColumn AS varchar(max)) LIKE '%key-text%'
So to reiterate, this query uses the CAST command to convert the XML column into a text data type.
Being All Fancy About It
Like I said, the old school works fine if you're just trying to identify entire records in the table that match your criteria. If you're just looking for specific nodes in the XML data, the approach above is NOT very efficient. You can harness the power of the XML data type to do things much more quickly! Since v.2005, MS SQL Server has supported XQuery. I'm not a programmer so I don't yet understand the full power that this yields. I've found, however, that the existing help on the internet was not written for people like me so here's what I've learned so far:
STEP 1: Using query()
Assuming that we just want to find the data stored in a specific XML node, we can use the query() function.
SELECT XMLColumn.query('//XML-Node-Name') FROM Table
A couple of notes about what's going on there:
- query() is a function. Inside of the function are one or more arguments (RAWR! J/K), which are written inside of the parentheses. Each argument must be enclosed in single quotes and multiple arguments are comma separated. The query() function accepts only ONE argument.
- The double slashes // at the start of the argument tell the system that you're looking for a node and for our purposes, it doesn't matter whether it has child or parent nodes.
- You need to put the goods of the function inside of the single quotes.
- The node name is CASE SENSITIVE
- Don't capitalize the function: e.g. type "query" not "Query"
When this query runs, unless you add a WHERE clause, it's going to return a result for every single row in the table, including the rows where the specified node does not exist. If the node does not exist for that row, there will be an empty cell in the results table. If the node does exist, the data returned will look like this:
<xml-node-name>XML data stored in node</xml-node-name>
STEP 2: Using exists()
Because using query() alone can be a little messy, returning empty rows, we can use another function that checks YES or NO whether something exists in the XML data.
SELECT XMLColumn.query('//XML-Node-Name') FROM Table WHERE XMLColumn.exists('//XML-Node-Name') = 1
This query will only return the rows where the specified node exists, without any empty rows. A couple of notes about what's going on here:
- I used the same argument for both functions because if the node doesn't exist that I'm looking for, that row doesn't need to be returned.
- The = 1 part tells it to evaluate to TRUE (e.g. yes, it exists), whereas if I set it to = 0 it would be the opposite (e.g. no, it doesn't exist). Setting it to 0 would return only the blank rows in this example.
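Putting both functions together with some made-up names (an Orders table with an XML column called OrderXML and a ShippingAddress node, all purely hypothetical, so adjust to your own schema), the whole thing might look like this:
-- return the ShippingAddress node only for the rows that actually contain one
SELECT OrderXML.query('//ShippingAddress')
FROM Orders
WHERE OrderXML.exists('//ShippingAddress') = 1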
Please let me know in the comments if you've found this useful. Look for more posts to come as I learn to harness more power of the XQuery!
Helpful but technical links (also probably the first ones you came across when googling "XQuery SQL Server"):