Thursday, November 15, 2012

The Dog Age Paradigm

It's a well-known fact that I am the proud parent of the world's handsomest dog. Diego and I have our birthdays a mere 10 days apart, which inevitably leads to the "dog years" discussion. While I'm chronologically ten times older than he is, in "dog years" I'm only about half older -- that is, if you subscribe to the notion of "dog years".

The average lifespan of dogs varies from breed to breed but it's somewhere in the neighborhood of 12 years. Since humans live on average about 80 years, we can say that dogs age roughly seven times faster than humans (80 ÷ 12 ≈ 6.7). Thus, while Diego has now completed three trips around the sun, he's actually 21 in dog years.

I think this model is silly and here are just a few examples of why:
  1. Dogs can walk within weeks of birth (not even three months in dog years)
  2. They reach adolescence well before a year (not even seven in dog years)
  3. They're full grown well before being two (not even fourteen in dog years)
I do have a point in all of this, actually several:
  1. Are there things in your project and/or testing that you just blindly accept as true even though they make no sense if you actually think about them?
  2. One can prove anything with numbers.
  3. Do you have processes that are overly simplified [or complicated] such that the intent gets lost?
I'd like to emphasize point one a little bit more. It's very easy for us to get stuck in a certain line of thinking about anything. Perhaps there's a developer who delivered a series of bad code and now you think he's a terrible developer. Maybe some requirements weren't very clear and now the client wants something different than what was constructed, but you think your solution is the best. Context-driven testing is God's gift to software testers. Test automation is the bane of mankind.

I'm not trying to suggest in any of those scenarios that the opinion was formed haphazardly. Rather, I'm suggesting that you never stop questioning anything, including your own conclusions. Things change as time goes on and context changes. I suppose the dog years "formula" came about as a way of explaining to children why pets don't live as long as humans. What might be acceptable in explaining a concept to a 5-year-old doesn't work for a 30-year-old.

Alternatively, the death of a family pet could be the "right time" to start teaching kids algebra. If we have to have a formula, I think this one works much better:
[Formula image: x equals the age in dog years and y equals the age in human years.]
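
The formula image from the original post doesn't survive in this text. For illustration only, here's one widely cited veterinary rule of thumb (a stand-in example of a nonlinear model, not necessarily the original formula) that matches the fast-early, slow-later maturation described in the list above:

$$
h(a) \approx
\begin{cases}
15a & 0 \le a \le 1 \\
15 + 9(a - 1) & 1 < a \le 2 \\
24 + 5(a - 2) & a > 2
\end{cases}
$$

where a is the dog's chronological age in years and h is the human-equivalent age. By this reckoning, three-year-old Diego is about 29, not 21.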

Wednesday, August 22, 2012

Who's Number One: Revenue or Customers?

There's a new kid on the block, App.net, which, from what I understand, is a Twitter-esque service supported by user and developer membership fees, not paid advertisements. The company puts it this way: "We're building a real-time social service where users and developers come first, not advertisers." Call me a cynic, but I don't think App.net puts users and developers first any more than Twitter and Facebook put advertisers first. And thanks to my cynicism, this sparked a great Twitter debate, or "twibate," mainly with my boss, @jeffturner (read from the bottom up):


As you can see from the conversation, I think that revenue comes before anything else - users, developers, or even advertisers. For a company to say that it puts users and developers first is to claim the very definition of altruism, that is, selflessness. If the company is sincere about users first, then I ask, why not be structured as a non-profit organization? There will be a part two of this coming soon to discuss the merits of both for-profit and non-profit business organizational structures. For the duration of this post, I simply want to assert my case for why I think revenue comes first, because Twitter simply doesn't lend itself to the lengthy explanation this requires.

I can't think of a single business that doesn't say something along the lines of "the customer comes first" or "the customer is always right". But that's a lie. We all know it's a lie, but it's not offensive because we're charmed by the implication that customers are at least very important. Customers aren't always right. If a person attempts to return a Dell laptop to an Apple store, the geniuses are going to laugh him out of the building. If someone fat-fingers a contract so it says a website will be built with HTML 6, the customer is still going to get an HTML 5 product.

The customer isn't always wrong either, though. In fact, customers are extremely important - so important that I think it's fair to say that customers and revenue go hand-in-hand. If a business doesn't have a customer base to fill the bank account, it will cease to exist faster than Solyndra. Because of that, it's imperative that the company cater to the customers, but the business itself still comes first. A company can't operate in a deficit to please customers. A company can't operate illegally to please customers. A company can't operate unethically to please customers.

In the end, whichever group is paying is going to get the most attention, but don't mistake attention for importance. When ordering by importance, revenue is what keeps the business operating and therefore revenue must be the greater of two equals. One may start a business with the earnest intent of providing a great service and not with the intent of becoming a bazillionaire. App.net was started to provide some user-oriented features that Twitter simply can't offer, and they're able to do that because they're using a different model for generating revenue. But they still wouldn't even be in business if they didn't have some sort of funding.

Acknowledging that revenue is first isn't a bad thing. Money isn't a bad thing. However, when revenue becomes disproportionately large over the customers then the business/customer relationship begins to break down. There's an implicit trust that customers understand the business needs to make money but that the business will not gouge the customers. If the business can't make money, it ceases to exist. If the business takes advantage of the customers, they'll eventually catch on and patronize someone else who will treat them fairly. Balance is key but revenue is #1.



On a side note, I'd like to mention that I haven't actually tried App.net's service, but I believe that is irrelevant to the opinions mentioned above. This isn't a review of the service. Rather, App.net serves as an excellent case study for my broader world view on business economics, and this post should not be construed as being for or against either Twitter or App.net.

Side note #2: Thanks to fellow blogger Stephanie Quilao (@skinnyjeans) and her post, App.Net: Why This Consumer Would Pay A Subscription for a Twitter-like Social Network, which helped set all the gears in my head turning. I always love to read something that makes me think!

Tuesday, August 14, 2012

Channeling Emily Post: Meeting Etiquette II

A few weeks ago, I wrote about some tips on how to be courteous when scheduling a meeting or receiving an invitation to a meeting. Today I present the companion: being courteous during the meeting.

Meeting courtesy begins long before the meeting actually starts.
  • Do your homework: If you've been asked to review a document or perform some other preparation, do it at least a few hours in advance so that you can be thorough and have enough time to let it sink in.
  • Prepare your handouts: If you're providing handouts, print them early because the printer will always jam and take longer than "normal".
  • Use the restroom before the meeting.
  • Arrive early enough that you can be ready immediately at the scheduled start time - no waiting to find a clean sheet of paper, pour your coffee, or boot your laptop.

During the meeting:
  • Be sure to bring a notepad, tablet, or laptop so you can take notes. Don't count on remembering everything you need, or on getting a handout or a post-meeting summary.
  • Put your phone on silent and check your messages after the meeting. If your presence at the meeting is so unimportant that you can spend it checking your messages, then why are you at the meeting? If an emergency does come up, then the meeting should be postponed so that everyone can participate.
  • Be an active participant - ask questions and offer personal perspective when appropriate.
  • Remain focused on the topic; don't start unrelated conversations with other people in the room.
  • End the meeting on time.

After the meeting:
  • If you uploaded files to the meeting room computer, remove them and turn off all equipment.
  • Clean up after yourself - if you have garbage (e.g. an empty coffee cup), throw it out; if you provided refreshments, don't leave the remnants in a mess.
  • If you've been assigned a post-meeting action item, complete it promptly.

The universal key to etiquette is to be considerate of other people. Treat other people, their time, and their property how you would like to be treated. Keeping this in mind should make meeting etiquette a breeze.

Friday, August 10, 2012

UPDATE: Dear Web Designers

At the end of May, I wrote a heart-felt plea to web designers for them to please throw out their 960 pixel design mold. Today I have a brief update on the numbers from StatCounter Global Stats.

By way of review, here's an excerpt of what I previously wrote:
For the three months February to April 2012, screen resolutions of 1280+ totaled 66.35% of the market, 1360+ were 42.07%, and 1600+ were 14.05%.
We now have three complete months of new data. The trends continue. For the three months May to July 2012, screen resolutions of 1280+ totaled 67.50% of the market, 1360+ were 44.30%, and 1600+ were 14.78%.

I do wish to take a moment to talk about that old foe 1024x768. It appears that the rate of decline has slowed down quite a bit. In April, that resolution had a share of 18.02% and in July it was just down to 17.47%. It definitely has a significant market share, but I think it's important to remember that iPads, "the world's most amazing tablet", run that resolution (or double). When talking about iPads though, it's equally important to note that they scale pages quite handsomely.

Dear Web Designers, please use a bigger mold. The numbers justify it and I think the masses would appreciate it.

Wednesday, July 25, 2012

Your Website Is a Rubik's Cube

My coworker, @jakedowns, has a Rubik's Cube sitting on his desk. It seems to serve as a physical manifestation of the gears turning in his brain as he's working on solving a development problem. He's actually pretty good at solving them and can usually do it in less than a minute.

On occasion, I've happened past his desk and noticed that it has been arranged in a checkerboard pattern. Last Friday I decided that he needed a challenge so I designed a pattern and told him to replicate it.

And he did! However, the pattern I provided was for only one face and his solution was for only one face. At the time, I made a joke that "the requirements were met but the purpose was not fulfilled" because I wanted the pattern to be displayed on all six faces.

At this, I set to work plotting out all six faces. It took me hours, literally, to figure out a valid solution extending the original pattern to all six sides. It was a great exercise for me. I learned that the pattern does not work when using the colors of opposing faces for the pattern (e.g. blue and green). I also learned that not only do I have to account for all eight corners, I also need to ensure that the three colors on each corner are in the correct position relative to the other two. I also exercised my spatial reasoning skills quite a bit.

In the midst of all of this, I was thinking about how websites are like Rubik's Cubes. Designing and building a website isn't as simple as solving a Rubik's Cube where each side is a solid color. Rather, every website is unique, perhaps similar to others, but still has its own individual requirements, much the same way that I created a new requirement for what it meant for Jake to solve the puzzle.
Upon doing a little internet research, it seems that there are some 43,252,003,274,489,856,000 (that's forty-three quintillion) valid combinations for a Rubik's Cube. When solving one for the traditional pattern, there is a well-established method to do so. However, when trying to solve for a custom design, most, if not all of that goes out the window. You still twist and turn but the algorithms have to be all new.
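
If you want to sanity-check that forty-three quintillion figure yourself, the standard counting argument multiplies the ways to arrange the corner and edge pieces and throws out the unreachable orientations and parities. A quick sketch in Python (my arithmetic, not anything official from Rubik's):

    import math

    # Corners: 8 pieces can be arranged 8! ways; 7 of the 8 twists are
    # free, and the last orientation is forced by the others.
    corners = math.factorial(8) * 3**7

    # Edges: 12 pieces, but only half the permutations are reachable
    # (permutation parity must match the corners); 11 of 12 flips are free.
    edges = (math.factorial(12) // 2) * 2**11

    print(corners * edges)  # 43252003274489856000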

The thing about websites is that you don't just twist 'em and turn 'em until you feel like stopping and saying "Solved!" You have a goal in mind. Sometimes it takes a lot of work to figure out exactly what that goal is. Then you twist and turn until what you have matches that goal. It's a complex process because no two solutions are the same.

Here are some practical takeaways to keep in mind:
  • Your customer, no matter what they say, has a very specific result in mind for you building their website.
  • It is worth every minute to take the time up front to do design and business analysis.
  • Changes made once development (twisting and turning) has begun are going to affect the deadline and are likely going to mean that some things will have to be done over.
  • Testers are happy to examine your design before development begins to see if the corners match up the way they should.
The next time someone says to you the words "simple website" or "simple change", hand them two Rubik's Cubes that have been thoroughly discombobulated and tell them to change one of them to match the other.

Monday, July 23, 2012

Improvising on a tune called "Exploratory Testing"

Paul Manz is someone whom I would consider to be among the greatest North American organists of the twentieth century. Had you ever attended one of his hymn festivals, you would have been treated to a number of improvisations - that is, simultaneous composition and performance. And I guarantee you, you would never have thought, "This isn't music, this is just noise!" That's because his improvisations had all of the structure of music as we know it: tonality, meter, tempo, dynamics, melody, harmony, etc. And he was doing it ON THE SPOT; nothing was written down.

Exploratory testing has many parallels with improvised music, and yet it doesn't get the same respect even when executed by the "Paul Manz-es" of the testing world like James Bach and Anne-Marie Charrett.

I improvise regularly when I sit at the piano. I wasn't always very good, but my skill has improved little by little over time, particularly in the last two years, during which I have had to create my own accompaniments to support congregational singing when the music editor at Oregon Catholic Press fails to understand the needs of the untrained singer. But I digress. My point, however, is that no one would question the legitimacy of my playing even though I didn't sound like Paul Manz and the only thing in writing was the melody and a harmonic suggestion.

E.T. is structured just like improvisations are structured. There's usually some sort of suggested charter or mission. Testers utilize various techniques to expose and isolate defects. Yet because it isn't written down in meticulous detail, E.T. is considered inferior. Meanwhile, some of the worst music of all time has been written down, performed over and over, and made the performer filthy rich.

While I think there's a close parallel between improvising and E.T., it doesn't hold up for written music and scripted testing. Music and testing are both art and science, but I think they combine the two in opposite ways. Music looks for an artistic result achieved through a scientific process whereas testing looks for a scientific result achieved through an artistic process. When you script a test, you strip out the art - the intent, the intuition, the sapience, the wonder.

Any good musician should be able to read and perform music because it demonstrates technical ability while being a means by which we learn ultimately to express our own artistic thoughts. A tester, though, has no need to know how to write scripts or execute them. One learns to test by testing, by talking with a mentor, by reading, and by writing. By gaining understanding of the philosophy of testing we learn ultimately to achieve the scientific results.

It's not fair that exploratory testing doesn't always get the credit it deserves, but then again, life isn't fair. Soldier on, testers: continually strive to better yourselves and serve as a positive example of just how effective exploratory testing is.

Splitting Definitions

There are two words that we use interchangeably, and even the dictionary considers them synonyms, but I'd like to challenge us to be more judicious about when we use the word "normal" and when we use the word "average." With both words, when we say that x is normal or average, we're trying to convey a baseline against which to judge y. However, normal and average imply completely different, I might even argue opposite, methods of establishing the baseline.

Normal establishes the baseline through a rule. It's objective. Anything that doesn't follow the rule is abnormal.

Average establishes the baseline based on the collection of results. It's subjective. The baseline changes as the results change. Result x could be above or below average but it becomes part of the average when evaluating result y. If we wish to exclude x from the average then we're establishing a rule.

When it comes to software, functionality is generally normal and usage is average. In most cases we define [set rules] about how the software is supposed to work but we never know exactly who will be using it or how. It is quite impossible to set a rule as to who will be using the system. Instead we observe and look for trends and patterns. Be careful though because there is no such thing as an average user and when we create rules based on some composite, mythical person, we are sure to disenfranchise real people.
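
To put the split in software terms, here's a minimal sketch in Python (the rule and the numbers are made up purely for illustration): "normal" is a fixed rule we define, while "average" is recomputed from whatever results we've collected.

    # Normal: a rule set in advance. Objective.
    MAX_RESPONSE_SECONDS = 2.0

    def is_normal(response_time):
        return response_time <= MAX_RESPONSE_SECONDS

    # Average: derived from collected results. Subjective - the baseline
    # moves as new observations arrive.
    observations = [1.2, 1.9, 0.8, 2.4]

    def is_above_average(response_time):
        baseline = sum(observations) / len(observations)
        return response_time > baseline

    print(is_above_average(1.7))  # True; the baseline is 1.575
    observations.append(1.7)      # ...and now 1.7 is part of the baseline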

Normal = Objective. Average = Subjective.

Monday, July 9, 2012

Channeling Emily Post: Meeting Etiquette

When your team grows beyond a handful and staff members have obligations to multiple projects, scheduling collaborative meeting time inevitably becomes tricky. And then "tricky" turns into "frustrating" when common courtesy becomes extinct.

You can help save your coworkers some frustration by keeping in mind a few simple guidelines when it comes to scheduling meetings:
  1. If you receive an appointment request, please respond promptly so that the requester can reschedule if necessary.
  2. Please make sure all unavailable time is recorded on your calendar – out of office, personal appointments, vacation, etc. Even if you're working from home, it's nice for that to be indicated on your calendar because some meetings just can't happen by conference call.
  3. For meetings outside of the office, please make sure the duration of the unavailability includes travel time.
  4. If reserving a conference room, make sure it is included on the appointment, even if it's a spur-of-the-moment meeting.
  5. If you have previously accepted a meeting request and can no longer attend, please decline promptly. Your presence at the meeting may be essential which could require the meeting to be rescheduled.
  6. If you scheduled a meeting and can no longer attend, please cancel promptly so that attendees can remain focused on their current tasks.
Remember, meetings occupy the time of multiple people. Failing to apply a small amount of consideration can cause a huge waste of time. The wasted time isn't just when staff is sitting around waiting for someone to show up but also in the time required to shift focus back to their other work. Courtesy also helps to keep meeting productivity maximized by preventing attendees from stewing over other attendees showing up late or not at all.

Update: Read the sequel about etiquette relating to the meeting itself.

Tuesday, July 3, 2012

Random Ponderings on Various Pay Structures

Recently, I was thinking about the pros and cons of the various pay structures that we use: hourly, salary, and commission. Being an ardent subscriber to the context-driven testing school, I question everything -- even when I'm not testing.

Thought 1: Link to Productivity
Hourly payment is often applied where productivity can be closely tied to time.
Examples:
Manufacturing - parts per hour
Retail - number of associates needed to handle customer load
Custodial - how much can be cleaned in a given shift

Salary payment is often applied where productivity is linked to accomplishing a task that can require varying time.
Examples:
Software testing - make sure the system works
Teaching - in addition to classroom time, prep for lessons and evaluate student work
Business executive - run the company

Commission payment is often applied where productivity is connected with sales.
Examples:
Car salesman - sell more cars, earn more money
Real Estate Agent - sell more homes, earn more money
Mortgage broker - close more loans, earn more money

Thought 2: Productivity ≠ Working Hard
For any given task with a set amount of time, Person A could exude a great deal of personal effort and not complete the task whereas Person B could exude very little personal effort and finish the task with time to spare.

Thought 3: Time = Money
Hourly: The more hours you work, the more you're paid
Salary: The fewer hours you work, the higher your effective hourly rate
Commission: The more sales you make, which in theory means, the more time you use pursuing sales leads, the more you're paid
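
To put some made-up numbers on Thought 3 (a sketch with hypothetical figures, not anyone's actual pay):

    hourly_rate = 25.00
    weekly_salary = 1000.00  # the same $25/hr if the week is exactly 40 hours

    print(45 * hourly_rate)    # hourly: 1125.0 - more hours, more pay
    print(weekly_salary / 45)  # salary: ~22.22/hr effective on a long week
    print(weekly_salary / 35)  # salary: ~28.57/hr effective on a short week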

Thought 4: Salary is a bit of an odd duck
Unlike hourly and commission, the salary payment structure does not have a built-in mechanism for actually getting paid more than your base rate. A salaried employee gets paid the same no matter how many hours he works or how many "tasks" are completed. Salary has other perks though. Generally it has more flexible hours, and sometimes you may even get to leave work a little early or take off a few hours without using paid time off. And in comparison to commission, you're still guaranteed to make money; if a salesperson can't close any sales, he won't get paid. I don't have any data to support this, but all things being equal, I think salaried employees generally have a higher base pay than hourly ones.

Thought 5: Salary systems require integrity to work
In salary situations it is generally stipulated that hours worked in excess of 40 are not paid overtime. Moreover, employers generally require an accounting of time spent, which should add up to about 40 hours. But in order for all of this to work, employees have to actually put in their 40 hours and employers have to keep overtime as the exception. In other words, employees shouldn't try to cheat the system and employers shouldn't try to take advantage of their workers.

Thought 6: So why would anyone pick one over the other?
Really, that depends on a variety of circumstances. Generally the "choice" is in the choosing of the career. You don't say, "I want to do FOO and be paid with structure BAR." Personality has a lot to do with it too. In the past, I've worked a number of hourly jobs. I liked being able to say, "My shift is over, time to go home." I think the pressure of a commission setting would weigh heavily on me and I wouldn't enjoy it. But now I've been on salary for my entire professional career and it's worked out really well because of the cerebral work that I do. Some days I finish a little early and then I don't have to wait for the clock to strike a certain time before heading out. Other days, I get pulled into a thread of work and don't even notice that the time is well beyond the average end of the day.

Wednesday, June 6, 2012

Two Keys for Effective Defect Reports

When I was in high school, a class that I was in was given a writing assignment. This was a type of class that ordinarily wouldn't have writing assignments and so the students were a little unsure as to the requirements. One of my classmates asked, "How long should the paper be?" And the perverted old man replied, "The paper should be like a woman's skirt: long enough to cover the subject but short enough to still be interesting."

Today I want to expound upon effective defect reports. I am an absolute stickler for detailed defect reports. I have no problem sending issues back that don't have sufficient information. Defect reports are adult writing assignments and we need to make sure that we are including all of the necessary information. I could run down my list of criteria that I expect in every bug report, but you can find those types of lists in lots of places. Besides, the list, like everything in testing, is context-dependent.

As inappropriate as my teacher was, he did imply an important point: it's not about a measurement, it's about accomplishing a purpose. Skirts, and clothing in general, maintain a certain level of modesty, protect us from the elements, and evoke intrigue. Defect reports must be able to fulfill two very important functions, otherwise they are failures.

An effective defect report makes reproduction and troubleshooting as simple as possible.
More often than not, the person who discovers the defect is not the person who resolves the defect. This is a simple efficiency issue. If you have a PM or Scrum Master reviewing issues, they need to be able to assess the priority and assign the issue to the most appropriate developer. The developer should be able to understand what's going on without having to guess or ask additional questions so they can spend less time identifying the issue and more time fixing it.

An effective defect report archives the defect.
This one serves the testing effort. Oftentimes, the person who discovers the defect is not the person who verifies that it has been fixed. The tester needs to know exactly what was wrong so that when they test the resolution, they can determine beyond a doubt whether or not the defect is fixed. If the tester doesn't know precisely what was wrong, it's impossible to know if it's fixed.

If your bug reports fulfill those functions then you're well on your way to getting an A+ on your writing skills. Remember, context is everything - sometimes bug reproduction is elusive and sometimes a conversation can convey important information that is hard to express in text. If you keep the purpose in mind, then the details will fall into place.

Wednesday, May 30, 2012

Squeeze or Slice?

When it comes to planning the scope for an iteration, I generally think of two different approaches: squeezing and slicing. So which one are you? Are you a squeezer or a slicer?

The Squeezer

  • Tries to get as much out as possible in each squeeze
  • Some squeezes go "Splllppppppt!" and hardly anything comes out
  • More oozes out after you stop squeezing
  • You don't know how much the bottle holds
  • Even if you know the bottle is 8 oz, you never know for sure how much is left
  • You have to pound the end and squeeze repetitively to get the last little bit out

The Slicer

  • Each slice is more or less the same size
  • You can see exactly how big the loaf is and how many slices are left
  • Only the supernatural keeps the loaf from running out
  • There are just a few crumbs to brush up after the last slice

As you may have guessed, I'm an advocate of the slicing system. The characteristics mentioned above are maybe a little bit idealized because in reality, software projects are not factory-baked, pre-sliced bread. They're homemade and each slice is cut individually. When you first pull the bread out of the oven, you might ideally want the bread to be sliced into 10 pieces. However, after the first slice or two, you might find that the bread is heavier or lighter than you thought, prompting you to make thinner or thicker slices going forward. If you slice off more than you can chew, it's easy to cut that piece in half and save the rest for later. You don't always know how many loaves there will be before you're done baking and slicing but you always have an easily manageable unit.

I like slicing because planning is "baked in". Each iteration isn't like a spin of the roulette wheel. You can respond to experience and adjust accordingly. And you always have a clear vision of what the end is and when it will come.

Friday, May 25, 2012

Dear Web Designers

If you don't care about responsive web design whatsoever, you may stop reading here.

One of the designers where I work recently shared a link to an article called Responsive Web Design: What It Is and What It Isn’t. It's a worthwhile read where the main idea is that responsive design isn't just a method for making your website work on desktop and mobile. It's about making your website scalable so it fits whatever size screen the user has. Rather than "either/or" it's "everything/and."

Reading between the lines and homing in on my own pet peeves: why do so many web designs scale to small screens but not big screens? I find myself growing increasingly frustrated by seeing beautiful web designs with gobs of potential that only fill 1024 of my available 1920 horizontal pixels! That leaves nearly 47% of my screen unused.

This sparked a very insightful (for me) conversation with the designer. The process of scaling a website is generally conceptualized as a one-way process: start big and get smaller. There's no "start big and get bigger." It actually makes a lot of sense to me as a much more manageable process to only go in one direction.

Herein lies my plea: Web designers, please throw out your 960 pixel starting mold. It's outdated. It's time to get a bigger mold.

StatCounter Global Stats has plenty of data to show us why:
[Charts] Source: StatCounter Global Stats - Screen Resolution Market Share

1024-wide resolutions had a huge market share three years ago and it's economically-savvy to design for the market. That market share has dropped consistently and significantly. Meanwhile 1366-wide resolutions have gone from non-existence to being the new standard.

I've done some analysis of the numbers. If you've shopped for a computer lately, you've probably noticed that all monitors are widescreen. Most Macbooks use an 8:5 aspect ratio whereas PC manufacturers have overwhelmingly adopted the 16:9 aspect ratio. Because PCs account for 90% of the market, I've chosen to focus exclusively on the 16:9 ratio. For the three years May 2009 through April 2012, 1024x768 has dropped by an average of 0.64 points per month. Meanwhile 1366x768 has increased by 0.53, 1600x900 by 0.1 and 1920x1080 by 0.14. The numbers are very consistent, aside from a completely bizarre November 2011.
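
For the curious, those per-month figures are just the total change in market share divided by the number of months in the window. A sketch in Python, with placeholder endpoints chosen only to reproduce the 0.64 figure quoted above (they are not StatCounter's actual numbers):

    def avg_monthly_change(start_share, end_share, months):
        # Total change in percentage points, spread evenly over the window.
        return (end_share - start_share) / months

    print(avg_monthly_change(41.0, 18.0, 36))  # about -0.64 points per month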

So what size mold should be used now? It depends upon the context, like everything. If responsive design isn't in scope, then leave the mold where it is. If it is, here are some more numbers for you: For the three months February to April 2012, screen resolutions of 1280+ totaled 66.35% of the market, 1360+ were 42.07%, and 1600+ were 14.05%. Judging by the data mentioned in the last paragraph, I say, set the mold at 1600 because that market share has shown steady growth and I think 14% is a pretty big share. After that set the steps at 1280 and 1024.

Update: See how the trends continue for the months May through July 2012.

Thursday, May 10, 2012

One For and Two Against Test Scripts

The following is a follow-up to my own Technical Documentation as Reference Material as well as David Greenlees' "Idiot Scripts".

Several thoughts come to my mind when I think about test scripts written for others to use.

I have at times, even recently, asked other people to write test scripts. Not because I want to use them or have them ready to distribute to someone else, but because I wanted to use them as a tool to give me insight into their approach to testing. It probably isn't the most efficient method but it seemed to be the best solution for the circumstances.

To me the intent of scripts for use by business users or during UAT is basically the same: happy path/positive testing that shows that the system works as expected.

The problem I have with writing scripts for business users is that I expect them to know how the system works, and test scripts are a horribly inefficient form of user documentation. Besides, they leave the intent of each step in obscurity. It makes more sense to me to teach the business user how to use the system and then let them use it, whether it's taught through a full-blown manual, a help file, a training seminar, or a phone call. If the system is complicated enough that it isn't readily apparent to the average user how to use it, then you're going to need some sort of training program regardless, so why duplicate effort by writing test scripts?

The problem I have with writing scripts for UAT is the same as I mentioned above, but it goes deeper. Some, perhaps most people, might not agree with me. When I think about writing UAT scripts, it gives my heart ethical palpitations! UAT isn't just verifying functionality, it's verifying that it's acceptable. Determining whether the software/website is acceptable is a judgement call that only the client can make. Granted, acceptance criteria can be written out and those criteria can be negotiated between the client and the agency, but it's still a subjective evaluation when it comes down to it. The specific problem then that I have with UAT scripts is that I, as the script writer, am determining whether the deliverable is or is not acceptable. If the client wants to write an objective set of steps that define acceptability, they can do that, but that's on them. And if they want to go through some sort of approval process then it just becomes a dog and pony show.

Wednesday, May 9, 2012

What Makes a Leader?

Being a leader is not defined by:
  • Having the highest salary
  • Having a big title
  • Having the most tenure
  • Having an expensive degree or certification
  • Having the most skill and knowledge in a discipline
  • Having the most opinions
A leader is someone who:
  • Motivates others without threats or coercion
  • Can enforce the rules without pissing people off
  • Realizes that you can't always make everyone happy
  • Can discern between best practices and the best for the situation
  • Can see and analyze conflicts from all sides
  • Places the interests of the team above her own
  • Draws out the quiet voices and speaks up for those who won't/can't
  • Is willing to be unpopular but doesn't wear it as a badge of honor
  • Owns his mistakes
In short, a leader is not defined by what you HAVE but who you ARE.

Funny thing is, I can't think of one single word that appropriately describes a good leader other than "leader." On the other hand, if you're a bad leader, you could be described as a jerk or a jackass, and the list goes on. Even if you're the boss, that doesn't mean you're a leader. Being bossy is just an adult synonym for being a bully. A good leader shines with Honesty, Integrity, and Respect. Those qualities reveal themselves only through how a person approaches and reacts to a situation and the way they treat others.

The Test Pilot

Just because you...
    ...designed the airplane...
    ...built the airplane...
    ...managed the design, construction, and testing of the airplane...
    ...flew on an airplane once...
    ...played an airplane video game...
    ...have a cousin that flies airplanes...
    ...read a book about flying airplanes...
...doesn’t mean you know how to fly the airplane!

I cannot emphasize this enough: testing is a skilled trade - whether it's testing an airplane or testing a website. The project manager for the design and construction of a new airplane wouldn't dream of trying to fly the plane himself. Instead, he delegates that responsibility to someone who knows how to fly the plane.

But just because you...
    ...know how to fly the plane...
...doesn't mean you know how to test the airplane!

Testing goes well beyond normal use. Test pilots need to be able to conceptualize the abnormal, the extreme, the emergency, even the absurd situations and have the skill to execute on them. Yes, every pilot should be trained to handle emergency situations but being forced into an emergency is far different from intentionally stepping into one.

Testing websites and software shouldn't be any different. There are certainly valid scenarios for the project manager or the client to do some testing but having them do the heavy lifting will lead to poor results.

Now this isn't to say that testing is an exclusive club by any means. You learn testing by doing. You really can't go to school to get a degree in testing. Attending a three-day seminar won't make you a good tester. Getting an expensive certification from ISTQB won't make you a good tester. You become a good tester with practice, by having an open mind, by challenging the status quo, by thinking, by being independent.

Tuesday, May 8, 2012

Fast, Good, Cheap, and _______

The Triple Constraint or Project Management Triangle is a device that describes opposing variables in project management. The variation that I'm most familiar with has the three sides of the triangle representing Fast, Good, and Cheap. The principle is that, when applied to a project, you can favor any two variables only at the expense of the third. For example, you can have a project run fast and deliver a good product, but it won't be cheap.

Who says you have to sacrifice though? It's really not a "who" but a "what", which I recognize thanks to a recent post by Catherine Powell. (Our models differ but it's because of her that my brain got to thinking about this.) There's a fourth project component missing from the model and that's Scope. The fourth variable necessitates compromise, but before talking about how Scope comes into play, let's review how the Project Management Triangle works. There is a variable for each side of the triangle (A, B, and C) and a fourth variable for the perimeter of the triangle (D). The perimeter is fixed so that no matter what, A + B + C = D. If A increases, B and/or C must decrease. If A and B increase, C must decrease.
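
In symbols (just restating the model above, nothing new): with the perimeter D fixed,

$$
A + B + C = D \quad\Rightarrow\quad \Delta A + \Delta B + \Delta C = 0
$$

so any gain for one side has to be paid for by the other two. If A grows by 4, then B and C together must shrink by 4.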

Why does the perimeter have to be fixed? Let's face it, most projects, if not all, have one thing that is 100% non-negotiable. In the Fast, Good, Cheap model, this is Scope. Without answering "what is this project about?" you really don't have a project. Even if a project has more than one "non-negotiable," one will still trump the others.


As you may have recognized, the three sides of the triangle are not permanently designated Fast (Timeline), Good (Quality), and Cheap (Cost). Rather, the three sides are whichever variables are not the paramount, non-negotiable perimeter.

For example:
Timeline: The client needs a website to go live at the same time as a huge marketing campaign
Quality: The website must adhere to government regulations
Scope: The functionality of the website has to include everything specified
Cost: The expense of the project cannot go one penny over

Let's now apply these variables to the sides of the triangle. When a side of the triangle is benefited, it gets longer; when penalized, it gets shorter.

Variable   Benefited   Penalized
Timeline   Decreases   Increases
Quality    Increases   Decreases
Scope      Increases   Decreases
Cost       Decreases   Increases

Using the Fast, Good, Cheap model for the three sides, let's say Scope is our fixed perimeter with a non-negotiable set of requirements. Let's say the company wants a bigger profit margin, which is in essence cutting the cost. If we want it to stay on time, the quality will suffer because of reduced testing. If we want it to maintain quality, the timeline will suffer because of less-skilled (i.e. cheaper) labor.

Under ideal circumstances, we'll have an equilateral triangle. This requires doing your homework up front. Figure out what your non-negotiable variable is and then set the remaining variables based on that. Doing this should minimize the need to make compromises later. If change is necessary and compromises are undesirable, the only way to change the perimeter is to change the contract, either through a change order or a completely new contract.

Tuesday, April 24, 2012

Technical Documentation as Reference Material

In my line of work, I have been asked many times over the years to produce test scripts so that the client can "test" and make sure that the system is working as it's supposed to. Another way to put this is to document a complex, technical process so that someone with no experience in the field can do it. For some reason, I thought the testing field was uniquely privileged with these requests but that's not true. Similarly purposed documents are commonplace across disciplines whether it's requirements gathering or system deployment.

While the actual function may vary, the fallacy in these documents is uniform. If the purpose of the document were simply to provide information to skilled people, there'd be no problem. However, the specific, implicit purpose of these documents is more often than not to equip unskilled people. A document is no replacement for skill or training because a document is limited whereas the realm of possibilities is infinite. Without skill, you'll be swallowed by infinity faster than the fish swallowed Jonah.

So why do we keep doing this?

Because it's an industry practice?
This isn't elementary school where we do things just because it's popular. We should foster practices that produce the best results.

Because the client asked for it?
We're not just code monkeys pumping out a website, we're consultants. It's our duty to the client to educate them on why technical processes should be executed by people with skills in those processes. The truth of the matter is that by doing so, we help insulate them from disaster.

Because the client offered to pay us a lot of money for it?
Do I even need to say that this would be unethical? I hope this is never a factor. Yes, the client is paying us, but with that payment comes an obligation to better the client, not just give them whatever they ask for.

Skilled Work for Skilled People
The most important thing that we need to take to heart is, as I said before, technical processes should be executed by people with skills in those processes otherwise the exposure to risk becomes extraordinary.

  • Why risk having a site that isn't designed to do what it needs to? (BA)
  • Why risk having a site that can't do what it's designed to do? (Development)
  • Why risk having a site that doesn't do what it's designed to do? (Testing)
  • Why risk having a site that isn't up when it needs to be? (Deployment)

References, Not Instructions
With that in mind, I think we ought to shift our paradigm about how we think about technical documents. They're not instructions, they're references. They guide a familiar process for someone that lacks specific knowledge for a given situation. When necessary, training should be offered to fully realize the necessary knowledge transfer. The training though is not to impart a skill, such as coding in PHP, but to apply an existing skill to a situation, such as coding a widget within the existing framework and dependencies.

Friday, April 20, 2012

Thoughts on Effective Project Management

Effective project management and execution has many components. To name a few that come to mind:
  • Planning
  • Communication
  • Cooperation
Projects don't always go smoothly though and so we compensate with:
  • Process
  • Documentation
  • Incentives (or bribery)
Don't get me wrong, process, documentation, and incentives are not bad things but they're not guarantees for great projects. Just because you have process doesn't mean your project will be executed well. Just because you have documentation doesn't mean the project will be universally perfectly understood. Just because you have incentives does not mean your team will be motivated.

When the project struggles, don't panic and don't think that there's going to be a miracle cure. So what should you do? Breathe. Relax. Rather than jumping to a change in the model, figure out first if the model is being effectively executed. Sometimes all it takes is some individual coaching or team training.

Think for a second about treating a medical ailment:
Symptoms: Excruciating pain, being unable to walk or stand
Solution: Prescribe heavy medications, such as Oxycodone. If you can't feel the pain, then what does it matter, right? But if you only treat the symptoms and don't fix the problem, the symptoms will recur as soon as the drugs wear off.

Problem: The leg is broken
Solution: Set the leg and/or perform surgery to bolt the bone back together. YES! We are out of pain and we're mobile again. But if you only fix the problem and don't eliminate the cause, then someone is sure to come along and break their leg--OR WORSE!

Cause: The rug is bunched up
Solution: Smooth out the rug. If the rug is prone to bunching, tack it down or figure out what causes it to get bunched up. Perhaps it's too close to a door that opens frequently. If tacking is not an option or it can't be moved further from the door, replace the rug with one that has a non-slip back or that is shorter.

I contend that most issues in project management and execution are like this: simple causes that, when ignored, create expensive problems and exhibit symptoms disconnected from the cause itself. Fixing the rug is a simple fix. It's not a change to the model, it's just taking the existing model and working the kinks out.

Thursday, April 19, 2012

[Not] Everything Is Urgent!

When projects get down to the wire, sometimes certain people (you know who they are) become prone to throwing out the rules for determining defect priority. As I previously wrote in Prioritizing Defect Reports, there are four factors that I consider when setting priority: Severity, Exposure, Business Need, and Timeframe. Unfortunately, Timeframe becomes a stumbling block when they fall prey to the terminal thought, "The deadline is right around the corner and all of these issues need to be done, therefore they are all Urgent!"

I call this a "terminal" thought because it leads to a disastrous method of project management: panic. Panic management occurs when organization goes out the window; and that's exactly what happens when all issues are prioritized the same. The priority indicator loses its value. Even when time is running short and all of the issues are crucial to launch, issues varying levels of priority and some need to be done before others. And what happens when nothing has any priority? Developers decided themselves which issues to do and when.

When we get to crunch time, I think it's appropriate to not only redefine the spans of time for the Timeframe factor but redefine the factor. Instead of thinking about a Timeframe, think about a Sequence:
  • Urgent: Drop everything and work on this
  • High: Complete before working on any lower priority issues
  • Normal and Low: After confirming with the project manager that there is nothing more important that needs to be done, concentrate on the Normal priority issues first but may incorporate Low priority issues if there's an efficiency advantage. Low priority issues are completed last.
By maintaining your system of priorities, you'll help keep your team focused and everyone will have a clearer vision of the outstanding risk in meeting the deadline.

Friday, April 13, 2012

The Twitter Gun

While I've had a Twitter account for a couple of years, I have only recently begun to utilize it. And while I'm definitely not an expert in Twitter strategy, I am a user and therefore I know what practices annoy the bajeebers out of me.

Twitter is like a mini blog and should generally follow the same guidelines as a regular blog. So in a regular blog, at least every one that I've ever read, each entry is unique. Yes, there may be recurring themes and topics but no one ever reposts a single entry multiple times. By and large, Twitter should be the same. However, some companies I've noticed like to tweet the same thing over and over. Perhaps they have one important message that they're trying to share like, "We're hiring" but they also like to share other useful things like, "Hey check out this link". And so after each time they post a "Hey check out this link" tweet, they repost the "We're hiring" tweet.

To me this practice is Twitter SPAM. It's unprofessional. It's rude to your followers. It encourages your followers not to pay attention to your tweets - which is the absolute last thing you want to happen. And if you happen to be a company that claims to be a social media expert, it can be bad for business.

Consider the similarities and contrasts between two types of guns: the flare gun and the handgun. Both are used to launch projectiles. Both are used to convey a message. A flare gun, however, will only shoot off one round at a time while a handgun can fire many rounds in short succession. You run towards a flare gun but away from a handgun.

Twitter, used effectively, is like a flare gun because you want people to come to you. Therefore, you carefully plan your tweets and each contains a unique message. On the other hand, Twitter, used as I described above, is like a handgun. There's no strategy; you're just blasting away and people are going to run because it's the same thing over and over.

Of course, I think there's a third type of Twitter gun too: the machine gun. These are the companies that just don't stop sending out tweets. Seriously? Yes, you're a news service, but are you really going to send out a tweet every three minutes when you post a new article? UN-SUB-SCRIBE! Maybe some people don't mind that but to me it just clogs up the feed and makes the useful information impossible to glean.

So if you run a company with an active Twitter strategy, think about how you're affecting your followers. Plan your tweets and don't inundate your followers with repetitive information.

Wednesday, April 11, 2012

The Quality Assurance Misnomer

Over the years, the industry standard title for a software tester has become “Quality Assurance Analyst/Engineer”. I don’t know the history of this but I do know that it’s not without controversy. When most people in the software industry hear the title, they think of the person’s role as being someone who assures the quality of the software product.  That’s a big problem though because, as a Quality Assurance Engineer myself, that’s not what my job is, nor is it generally the job of any software tester that I know.

Here’s the root of the problem: Quality is a business decision that evaluates whether a product or part thereof is GOOD or BAD and that is a decision that lies in the hands of the product owner – that person which is or represents the product purchaser/user. The reason for this is because they are the person that best knows whether the purchaser/user will be satisfied. Testers don't know the user so it would be presumptuous at best to put that responsibility on them

What I do is I poke, stress, and exercise the software in every way that I can think of and make observations. I compare those observations to documentation and common heuristics that help identify when intended functionality deviates from actual functionality. And then my job is actually to help inform that business decision by reporting all of my observations through conversation, defect reports, and progress reports to the product owner. You see, I can report RIGHT or WRONG based on my comparison to documentation and heuristics but that’s different than GOOD or BAD.

Determining GOOD or BAD is a murky process that is above the pay grade of a lowly software tester and that has to factor in many components including:
  • Test coverage
  • How stale the testing is (i.e. when was the last time part x was tested?)
  • How many failed test cases are outstanding
  • The severity of the outstanding defects
  • Amount of time allowed for planning, developing, and testing
  • The gut factor (i.e. does it feel like it’s ready?)

Then taking all of this information, to determine whether the product is GOOD or BAD, you need to have a sense of whether the customer will be satisfied as I mentioned above. The only way to develop this sense is to have an active, direct interaction with the customer from the beginning of the project or to just leave it directly up to the customer. Though, I don’t recommend the latter because without somehow quantifying an acceptance level, they may never agree that the product is good.

All that said, I don’t think that the title “Quality Assurance Analyst/Engineer” is BAD, just misunderstood. While I don’t assure quality through analysis and/or engineering, I do analyze and engineer so that quality can be assured.

Friday, March 23, 2012

WordPress: PHP Is Not Installed

So I'm starting to work on a little project at home involving a website built in WordPress. This is all new to me so naturally, there are going to be bumps in the road. Anyway, I managed to install Apache, MySQL, and PHP. However, when attempting to install WordPress, I was continually presented with the error:
Error: PHP is not running
WordPress requires that your web server is running PHP. Your server does not have PHP installed, or PHP is turned off.
WHAT??? The PHP test file worked perfectly fine! I googled and googled some more trying to find a solution to this, and while the question appeared frequently, there was never a good answer. I finally figured it out myself though!

The problem was in how I was accessing install.php. Through Windows Explorer, I had navigated to my WordPress directory and double clicked on readme.html. The file opened in Firefox. Then I clicked the hyperlink in Step 2 "wp-admin/install.php". This opened the file with the error, and here's the problem: look at the address bar:
file:///C:/apache/htdocs/wordpress/wp-admin/install.php
In order to make it work, you need to hit it from LOCALHOST so the request actually goes through Apache. (Opened via file://, the browser reads the file straight off the disk and PHP never executes.)
http://localhost/wordpress/wp-admin/install.php

BAM! Done! 

Environment:
OS: Windows XP SP3
Webserver: Apache 2.2
MySQL: 5.5.21
PHP: 5.2.17
WordPress: 3.3.1

Friday, March 16, 2012

Gems of Wisdom from Fellow Software Testers

This past week, I've been feeling very cerebral and the timing couldn't have been better because I've been able to give my blog some much needed attention and I've also been able to get almost caught up on the backlog of blogs that I follow. In my reading, I have come across a number of gems of wisdom that I think are well worth sharing:

James Bach in Why Scripted Testing is Not for Novices
  • [A] scripted tester, to do well, must apprehend the intent of the one who wrote the script. Moreover, the scripted tester must go beyond the stated intent and honor the tacit intent, as well– otherwise it’s just shallow, bad testing. - TW: This problem is a direct result of divorcing test design and execution. And for a novice tester, they simply don't have the skills yet to "read between the lines" of the script to see the intent.

Michael Bolton in Why Checking Is Not Enough
  • But even when we’re working on the best imaginable teams in the best-managed projects, as soon as we begin to test, we begin immediately to discover things that no one—neither testers, designers, programmers, nor product owner—had anticipated or considered before testing revealed them.
  • It’s important not to confuse checks with oracles. An oracle is a principle or mechanism by which we recognize a problem. A check is a mechanism, an observation linked to a decision rule.
  • Testing is not governed by rules; it is governed by heuristics that, to be applied appropriately, require sapient awareness and judgement.
  • A passing check doesn’t tell us that the product is acceptable. At best, a check that doesn’t pass suggests that there is a problem in the product that might make it unacceptable.
  • Yet not even testing is about telling people that the product is acceptable. - TW: I've been trying to promote this concept for at least a year in my own evolutionary understanding of my craft. You can expect my own blogging on this topic soon.
  • Testing is about investigating the product to reveal knowledge that informs the acceptability decision.

Michael Bolton in What Exploratory Testing Is Not (Part 3):  Tool-Free Testing
  • People often make a distinction between “automated” and “exploratory” testing. - TW: This is the first sentence and BAM! did it cause a paradigm shift for me!
  • That traditional view of test automation focuses on performing checks, but that’s not the only way in which automation can help testing. In the Rapid Software Testing class, James Bach and I suggest a more expansive view of test automation: any use of tools to support testing.

Anne-Marie Charrett in Please don't hire me
  • If you want me to break your code - TW: This set my brain going for at least two hours thinking about the implications of this. It's a great point though: testers don't break code. Look forward to more on this in the future.

I hope you find some wisdom in this too.

Thursday, March 15, 2012

Counting My M&M's

Like millions of other Americans, I keep a stash of munchies in my desk drawer at work. I find that a little treat in the afternoon is a highly effective way to keep me focused on my work. One of my snacks, interestingly (or oddly), has evolved into a bit of a ritual.

It all started back in 2010. I was at my local Dominick's before work picking up a donut, something for lunch, and stock for the stash. I happened down the candy aisle and saw that they had the jumbo bags of M&M's on sale. I'm always a sucker for the lowest price per unit so I couldn't resist. Fast forward to afternoon snack time, and a funny little thought popped into my head: "Are there the same number of each color of M&M's in the package?" I decided to find out, just for the heck of it.


The jumbo sack of M&M's is 42 ounces so there are two things to keep in mind. 1. I do not eat the entire thing in one helping. 2. I was not about to dump out the whole sack and count them right then and there. Instead, I started a spreadsheet. Whenever I decided to have a handful of M&M's, I would first sort them by color, count them, and then record the stats in my spreadsheet. It didn't take too long for multiple passersby to notice and remark upon my method for eating M&M's. Suddenly I had a reputation that needed to be maintained! Never again could I reach into a sack of M&M's and NOT count each color.

Two years later, I'm now on my fourth sack of M&M's. I don't eat them every day and I don't always have them in my stash. But when I do, I keep statistics. Today, I am pleased to announce the public sharing of this information on my M&M Counter page!

The M&M Counter page has several fun features:
  • The graph at the top shows the combined stats for all of the M&M's that I have consumed over the years.
  • Then, I provide a graph and chart, updated in real time, showing the stats for the sack that I'm currently in the process of devouring.
  • Finally, for what I think is the most exciting feature of all, I have provided a form to allow YOU to join in the fun. You are welcome and encouraged to contribute your own counts. Currently, the form only supports the regularly colored candies. Pick the closest option when selecting the size of the package. Soon, I will include a graph or two to display the user-submitted data.
So yeah, it's a little weird. There's nothing scientific about this. There's no ploy to identify certain colors as subjects of discrimination. It's just for fun. Enjoy!

Paper Airplane Science Experiment

When I was in elementary school, we would have a science fair every other year and all of the students would do some sort of science project. In fifth grade I simulated acid rain and in seventh grade I made recycled paper. I didn't find the acid rain experiment to be fun at all; it had just been the only thing I could think of. The paper experiment was more fun because I got to make a big mess but it still didn't engage me, even though it was something I really wanted to do. So for you parents out there trying to help your kid find something different to do for a science experiment, here's an idea: PAPER AIRPLANES.


Lots of little kids like making paper airplanes and if you think about it, it's super-simple to turn them into a science experiment, and there are a number of different approaches that you can take. The idea when doing experiments is to identify all of the different variables and then change just ONE. In any approach, you'll form hypotheses around the same questions:
  1. Which airplane flies the farthest?
  2. Which airplane flies the straightest?
Experiment Method 1: Airplane design
Go online (or to the library) and find instructions on how to create several different designs of paper airplanes. Then construct them using the SAME size and type of paper. If you want, you could measure the wing dimensions and area and use those numbers to form your hypotheses.

Experiment Method 2: Airplane material
Pick a single airplane design and change up the type of paper used. Make sure the size of the paper doesn't change though. You could use construction paper, aluminum foil, printer paper, wrapping paper, cardstock, or whatever else you can think of. Here you'll probably want to weigh each airplane and use that for forming hypotheses.

Experiment Method 3: Airplane size
Again, pick a single airplane design but this time change up the size of the paper while using the same type of paper. You may need to cut down large pieces of paper to make smaller planes. I say, find the biggest cut of paper you can find. Most schools have large rolls of paper for making big posters or bulletin boards. For each variation, make sure that the length-to-width proportion of the paper remains constant.

Experiment Method 4: Airplane proportions
Pick a single airplane design and type of paper. This time, vary the dimensions of the paper, while maintaining the same number of square inches. For example: (12x10) and (15x8) both have 120 square inches of area. You could also do (12x10) and (10x12) - rotating the axis of the plane. In doing this type of experiment, you should take lots of measurements in forming your hypotheses: wing area, plane length, wing width, plane height.

Conducting the experiment
Reduce the potential for errors:
  • I would make three copies of each plane to try to account for any variations there might be in the construction. Also, practice making the plane first and then make new ones for the experiment.
  • Throw and measure each plane at least five times. The more times you throw it, the more accurate your data will be, unless, of course, the plane gets really beat up from doing a nosedive into the floor. You can always disregard measurements (so you take 10 but only use the first 5 because you see that after that the data started to skew from bent noses).
Location selection:
  • You should try to find a wide open place to throw the airplanes. The school gymnasium would be a great place because it's wide open and there won't be any wind.
Location preparation:
  • Place a mark on the floor from where you will launch the planes each time (Use gaffers tape in the gymnasium so you don't gunk up the floors!). Mark out a straight line from that point. You might want to put a target on the wall or something that is in line with the starting point and straight line. When you throw the planes, aim for the target and that will help ensure that you throw consistently.
Throwing:
  • Again, aim for the target and make sure that your throwing arm is lined up with your target.
Measuring:
  • Measure the distance from the throwing point to where the plane LANDED. Have someone there to help watch in case the planes want to slide on the floor.
  • Measure the angle between the center line and where the plane lands.
One Last Idea:
You could also experiment with airplane surface friction. I don't know how you would MEASURE friction though so I'm just floating this out there for you. Using glossy inkjet photo paper, the same size paper, dimensions, and airplane design, make one plane with the glossy side out and one plane with the back side out.

Share Your Results
If you decide to try this experiment in one form or another, please share your results. I would love to hear what you discover!

Wednesday, March 14, 2012

Editor Fail at Yahoo! News

While browsing the news at Yahoo! over my lunch break on my way to read Dear Abby, I noticed this rather prominent error. Don't rely on spell checker!

Tuesday, March 13, 2012

Predator II: Just Die Already!

What do they do in Hollywood if a movie does well at the box office? Make a sequel and use the success of the first movie to milk consumers for more money. Predator did well, earning almost $100M so naturally, Predator II came along three years later. Sequels usually have a common relationship with the original: in the first movie, the hero, after great struggle, finally kills the villain; then the sequel comes along and somehow the villain is back to life causing even more havoc. I see that and get to thinking, "Gosh darn! Why won't that thing JUST DIE ALREADY!?" I often ask that question about software defects because defects have sequels too (and so should blogs)!

Despite my best efforts - thorough coverage, testing every test case I can think of, making sure everything WORKS - out of nowhere pops up some bug that I thought was squashed weeks ago! Sometimes it doesn't matter how big your muscles are, how many bullets you have in your gun, or how many other people have their guns pointed at the same monster, they still come back to life.

This can be caused by any number of things. It could be laziness on the part of the tester (or the developer) but I don't think that's usually the case. Here are some other causes:
  1. Unclear requirements
  2. Inexperience
  3. Lack of time
  4. Introduction of new/altered code
  5. Using [testing] the software in a way that it wasn't previously used
That last one is a big ball of mud - "WHY DIDN'T YOU TEST IT THAT WAY BEFORE???" See causes 1-4. The truth is, software is extremely complex and it's impossible to account for every scenario. As a tester, you must know how to prioritize and how to look for the scenarios that are most likely to occur and those which have the gravest consequences because there just isn't time/budget/people-power to do it all.

When defects do reoccur, use it as a learning catalyst. Was there an obvious breakdown in the development or testing methodology? Is this a "new" method of execution that should be noted for the future and applied to other scenarios? Is there a different testing technique or tool that might have been able to expose that the defect was not fully resolved earlier?

Old defects will continue to pop up. Don't be discouraged but remain vigilant. Keep sharpening your skills and be ready for the enemy to return, because he always does, sometimes with Aliens.

Please study for my interview

I recently volunteered myself into the position of reviewing resumes, identifying candidates, and executing first-round interviews for a QA position at my company. This has been a real learning experience for me as I've garnered a better understanding of the hiring process; but more so, it has been an eye-opening experience. The great majority of applicants in the field are ROBOTS, figuratively speaking. Requirements get turned into scripts and that's how you test. As long as you have the right process, the right charts, the right formula, that's all you need.

I disagree. Don't get me wrong, I want someone that can transform requirements into test scripts but what I really want is someone that can THINK. Someone that can test what's not in the requirements document. To quote Anne-Marie Charrett, I want to hire someone that says, "Don't hire me if you want perfect software. Don't hire me if you're looking for a tester to only "check" if things look ok."

For my interviews, I developed a standard list of questions to ask. Each question was carefully thought out and has a specific purpose for being asked. Here's my list:
  1. How did you get into software testing?
  2. What do you like about testing?
  3. What are your frustrations with testing and how do you deal with them?
    1. For example: How do you deal with being the resident naysayer?
    2. For example: How do you deal with defects that just never seem to get fixed?
  4. What comes to mind when you hear the term ‘Quality Assurance’?
  5. Compare and contrast automated and manual testing.
  6. Compare and contrast scripted and exploratory testing.
  7. How confident are you in your ability to deliver defect free software?
  8. What essential information should be included in a defect report?
  9. How do you determine if something is working correctly when you have no documentation?
  10. What’s your experience with Agile?
  11. How do you sharpen your testing skills?
So there you go. If you're ever on the other side of the table being interviewed by me, nothing I ask should catch you by surprise. In fact, I encourage you to study for my interview. However, for now, I'm not going to explain any of my rationale; you'll have to come back for that later. For those of you that conduct interviews yourselves, is there anything you think I should add? Please share in the comments.

Monday, March 12, 2012

Using XQuery in MS SQL for Novices Part 2

How to use the value() function

Just the other day I started the conversation about how to use XQuery functions in SQL to pull specific XML data out of a table. It's time now for Part 2.

In the first discussion, I explained how to use the query() function. It's a very nice function because it returns both the XML nodes and the data within them. However, if all you want is the specific data in the node, having the nodes display in each result can make it more difficult to read the results. For this purpose we can use the value() function.

The value() function is ever-eager to throw the error "'value()' requires a singleton (or empty sequence)". It's also eager to throw the error "The value function requires 2 argument(s)." Obviously, these errors occur when we don't provide all of the necessary information.

The two arguments required by the value() function are:
  1. The reference to the node
  2. A data type specification in which to host the returned information
The requirement of a singleton is, admittedly, not something I fully understand, but the gist seems to be that a path like //XML-Node-Name could match more than one node in a document, and value() needs exactly one. It is solved by appending a number one in brackets - [1] - to the end of the first argument, which selects the first matching node. Just put everything inside of the single quotes and the argument itself inside of parentheses.

This is how it looks:
SELECT XMLColumn.value('(//XML-Node-Name)[1]', 'DataType(#)') FROM Table
Don't forget to add a comma between the arguments!
 
We can further tidy up the results by aliasing the column (assigning it a custom name of our choice) and using the exists() function.
SELECT XMLColumn.value('(//XML-Node-Name)[1]', 'DataType(#)') 'Alias Column Name' FROM Table WHERE XMLColumn.exists('//XML-Node-Name') = 1
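
To put all of that together, here's a minimal sketch using made-up names (the Books table, the BookXML column, and the <Title> node are all hypothetical -- swap in your own table, column, and node names):
-- Hypothetical example: pull the first <Title> value out of each book record
SELECT BookXML.value('(//Title)[1]', 'varchar(100)') 'Book Title' FROM Books WHERE BookXML.exists('//Title') = 1
The results come back as plain text in a column named 'Book Title', one row for each record that actually contains a <Title> node.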

Sunday, March 11, 2012

What should be the easiest archaeological dig in the world

When I entered the field of software testing, the last thing on my mind was archaeology. Software testing is about the future whereas archaeology is about the past. Technology is clean and non-physical but archaeology is dirty and labor-intensive. However, the two fields aren't all that different. A good software tester is going to thrive on artifacts, but not the kind buried in the ruins of ancient civilizations.

In software projects, an artifact is any piece of documentation that relates to the project:

  • RFP
  • RFP Response
  • SOW
  • Project Plan
  • Requirements
  • Design comps
  • Story boards
  • User personas
  • Test scripts
  • Defect reports
  • Meeting summaries
  • etc.
The artifacts mentioned above help software testers the same way excavated relics help archaeologists. Both are vital in the acquisition of knowledge. Testers need to have a thirst for project knowledge. How can one assert whether the product works as intended without having an authoritative source? Regardless of approach, the more the tester knows about how the software is supposed to work, the more thoroughly they can test through broader coverage and more intricate scenarios.

When projects don't have a decent amount of documentation, you're asking for scope creep and unsatisfied client expectations. Testers, get dirty and dig into the documentation. Insist on having access to it and demand it early. Study it thoroughly. Use it to help design your tests. If the documentation doesn't exist, consult the project experts (well, consult them anyway) and then write it down. Once it's written down, it can't be disputed - it might be changed later but that can happen to any document. Moral of the story: use everything at your disposal to learn about the project.

Friday, March 9, 2012

Using XQuery in MS SQL for Novices Part 1

A recent project at work presented me with a new challenge. Generally in a database, at least the ones that I've had to work with, each different type of data is stored in its own column. Well, the project currently under test downloads information from a third-party system in the form of an XML file. Even though we don't use every bit of data provided by the third party, we still want to preserve it in its original format. Therefore we store the entire XML string in the database.

When XML data is stored in a database, there are at least a couple different data types that can be used.
  1. It could be stored as just regular old text using the varchar data type. In that case whatever program is using the data has to convert it back into XML in order to use it. The Umbraco content management system likes to do it this way.
  2. On the other hand, it could be stored natively using the XML data type, which is what our system does. Storing the data AS XML presents some interesting challenges as well as many opportunities when it comes to digging for information within the XML string itself -- most of which I'm still figuring out. For today, I'd like to start with just the basics.
Doing It Old School
If all you're trying to do is find the rows where a specific series of characters occurs, you can search just like you always do... almost. 
SELECT * FROM Table WHERE XMLColumn LIKE '%key-text%'
This traditional query will throw an error faster than a lazy developer's code. Instead, you need to change data types on the fly:
SELECT * FROM Table
WHERE CAST(XMLColumn AS varchar(max)) LIKE '%key-text%'
So to reiterate, this query uses the CAST command to convert the XML column into a text data type.
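
To make that concrete, here's a quick sketch with hypothetical names (a Books table whose BookXML column stores the raw XML):
-- Hypothetical example: find every book record whose XML mentions 'Hemingway' anywhere
SELECT * FROM Books WHERE CAST(BookXML AS varchar(max)) LIKE '%Hemingway%'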

Being All Fancy About It
Like I said, the old school works fine if you're just trying to identify entire records in the table that match your criteria. If you're just looking for specific nodes in the XML data, the approach above is NOT very efficient. You can harness the power of the XML data type to do things much more quickly! Since v.2005, MS SQL Server has supported XQuery. I'm not a programmer so I don't yet understand the full power that this yields. I've found, however, that the existing help on the internet was not written for people like me so here's what I've learned so far:

STEP 1: Using query()
Assuming that we just want to find the data stored in a specific XML node, we can use the query() function.
SELECT XMLColumn.query('//XML-Node-Name') FROM Table
A couple of notes about what's going on there:
  1. query() is a function. Inside of the function are one or more arguments (RAWR! J/K), which are written inside of the parentheses. Each argument must be enclosed in single quotes and multiple arguments are comma separated. The query() function accepts exactly ONE argument.
  2. The double slashes // at the start of the argument tell the system that you're looking for a node and for our purposes, it doesn't matter whether it has child or parent nodes.
  3. You need to put the goods of the function inside of the single quotes.
  4. The node name is CASE SENSITIVE
  5. Don't capitalize the function: e.g. type "query" not "Query"
When this query runs, unless you add a WHERE clause, it's going to return a result for every single row in the table, including the rows where the specified node does not exist. If the node does not exist for that row, there will be an empty cell in the results table. If the node does exist, the data returned will look like this:
<xml-node-name>XML data stored in node</xml-node-name>
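For example, with a hypothetical Books table whose BookXML column contains <Title> nodes:
-- Hypothetical example: returns one row per record; rows lacking a <Title> node come back empty
SELECT BookXML.query('//Title') FROM Books
A row that has the node would display something like <Title>My Favorite Book</Title>, while rows without one show an empty cell.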
STEP 2: Using exists()
Because using query() alone can be a little messy returning empty rows, we can use another function that checks YES or NO does something exist in the XML data.
SELECT XMLColumn.query('//XML-Node-Name') FROM Table WHERE XMLColumn.exists('//XML-Node-Name') = 1
 This query will only return the rows where the specified node exists without any empty rows. A couple of notes about what's going on here:

  1. I used the same argument for both functions because if the node doesn't exist that I'm looking for, that row doesn't need to be returned.
  2. The = 1 part is telling it to evaluate to TRUE (e.g. Yes, it exists) whereas, if I set it to = 0 it would be the opposite (e.g. No, it doesn't exist). Setting it to 0 would only return the blank rows in this example.
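To see the flip side of note 2 in action, here's a sketch using the same hypothetical Books table that finds the records missing the node entirely:
-- Hypothetical example: returns only the records where no <Title> node exists
SELECT * FROM Books WHERE BookXML.exists('//Title') = 0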
Please let me know in the comments if you've found this useful. Look for more posts to come as I learn to harness more power of the XQuery!

Helpful but technical links (also probably the first ones you came across when googling "XQuery SQL Server"):