Wednesday, July 25, 2012

Your Website Is a Rubik's Cube

My coworker, @jakedowns, has a Rubik's Cube sitting on his desk. It seems to serve as a physical manifestation of the gears turning in his brain as he's working on solving a development problem. He's actually pretty good at solving them and can usually do it in less than a minute.

On occasion, I've happened past his desk and noticed that it has been arranged in a checkerboard pattern. Last Friday I decided that he needed a challenge so I designed a pattern and told him to replicate it.

And he did! However, the pattern I provided was for only one face and his solution was for only one face. At the time, I made a joke that "the requirements were met but the purpose was not fulfilled" because I wanted the pattern to be displayed on all six faces.

With that, I set to work plotting out all six faces. It took me hours, literally, to figure out a valid solution extending the original pattern to all six sides. It was a great exercise for me. I learned that the pattern does not work when using the colors of opposing faces (e.g. blue and green). I also learned that not only do I have to account for all eight corners, I have to account for the fact that the three colors on each corner are in the correct position relative to the other two. I also exercised my spatial reasoning skills quite a bit.

In the midst of all of this, I was thinking about how websites are like Rubik's Cubes. Designing and building a website isn't as simple as solving a Rubik's Cube where each side is a solid color. Rather, every website is unique, perhaps similar to others, but still has its own individual requirements, much the same way that I created a new requirement for what it meant for Jake to solve the puzzle.
Upon doing a little internet research, it seems that there are some 43,252,003,274,489,856,000 (that's forty-three quintillion) valid combinations for a Rubik's Cube. When solving one for the traditional pattern, there is a well-established method to do so. However, when trying to solve for a custom design, most, if not all, of that goes out the window. You still twist and turn, but the algorithms have to be all new.
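That forty-three quintillion figure follows from a standard counting argument: the 8 corner pieces can be permuted and twisted, the 12 edge pieces can be permuted and flipped, and parity constraints force the last corner twist, the last edge flip, and the relative permutation parity. A quick sketch in Python (mine, not from the post) confirms the arithmetic:

```python
from math import factorial

# Corner permutations (8!) times corner orientations (3^7: the last
# corner's twist is forced by the other seven).
corners = factorial(8) * 3**7

# Edge permutations (12!/2: permutation parity must match the corners)
# times edge orientations (2^11: the last flip is forced).
edges = factorial(12) // 2 * 2**11

print(corners * edges)  # 43252003274489856000
```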

The thing about websites is that you don't just twist 'em and turn 'em until you feel like stopping and saying "Solved!" You have a goal in mind. Sometimes it takes a lot of work to figure out exactly what that goal is. Then you twist and turn until what you have matches that goal. It's a complex process because no two solutions are the same.

Here are some practical takeaways to keep in mind:
  • Your customer, no matter what they say, has a very specific result in mind for you building their website.
  • It is worth every minute to take the time up front to do design and business analysis.
  • Changes made once development (twisting and turning) has begun are going to affect the deadline and will likely mean that some things have to be done over.
  • Testers are happy to look at your design before development begins to see whether the corners match up the way they should.
The next time someone says to you the words "simple website" or "simple change", hand them two Rubik's Cubes that have been thoroughly discombobulated and tell them to change one of them to match the other.

Monday, July 23, 2012

Improvising on a tune called "Exploratory Testing"

Paul Manz is someone whom I would consider to be among the greatest North American organists of the twentieth century. Had you ever attended one of his hymn festivals, you would have been treated to a number of improvisations - that is, simultaneous composition and performance. And I guarantee you, you would never have thought, "This isn't music, this is just noise!" That's because his improvisations had all of the structure of music as we know it: tonality, meter, tempo, dynamics, melody, harmony, etc. And he was doing it ON THE SPOT; nothing was written down.

Exploratory testing has many parallels with improvised music, and yet it doesn't get the same respect even when executed by the "Paul Manzes" of the testing world like James Bach and Anne-Marie Charrett.

I improvise regularly when I sit at the piano. I wasn't always very good, but my skill has improved little by little over time, particularly over the last two years, during which I have had to create my own accompaniments to support congregational singing when the music editor at Oregon Catholic Press fails to understand the needs of the untrained singer. But I digress. My point, however, is that no one would question the legitimacy of my playing even though I didn't sound like Paul Manz and the only thing in writing was the melody and a harmonic suggestion.

E.T. is structured just like improvisations are structured. There's usually some sort of suggested charter or mission. Testers utilize various techniques to expose and isolate defects. Yet because it isn't written down in meticulous detail, E.T. is considered inferior. Meanwhile, some of the worst music of all time has been written down, performed over and over, and made the performer filthy rich.

While I think there's a close parallel between improvising and E.T., it doesn't hold up for written music and scripted testing. Music and testing are both art and science but I think they use them in opposite ways. Music looks for an artistic result achieved through a scientific process whereas testing looks for a scientific result achieved through an artistic process. When you script a test, you strip out the art - the intent, the intuition, the sapience, the wonder.

Any good musician should be able to read and perform written music because it demonstrates technical ability while serving as a means by which we ultimately learn to express our own artistic thoughts. A tester, though, has no need to know how to write scripts or execute them. One learns to test by testing, by talking with a mentor, by reading, and by writing. By gaining an understanding of the philosophy of testing, we ultimately learn to achieve the scientific results.

It's not fair that exploratory testing doesn't always get the credit it deserves, but then again, life isn't fair. Sojourn on, testers: continually strive to better yourselves and serve as a positive example of just how effective exploratory testing is.

Splitting Definitions

There are two words that we use interchangeably, and even the dictionary considers them synonyms, but I'd like to challenge us to be more judicious about when we use the word "normal" and when we use the word "average." With both words, when we say that x is normal or average, we're trying to convey a baseline against which to judge y. However, normal and average imply completely different, I might argue opposite, methods of establishing the baseline.

Normal establishes the baseline through a rule. It's objective. Anything that doesn't follow the rule is abnormal.

Average establishes the baseline based on the collection of results. It's subjective. The baseline changes as the results change. Result x could be above or below average but it becomes part of the average when evaluating result y. If we wish to exclude x from the average then we're establishing a rule.

When it comes to software, functionality is generally normal and usage is average. In most cases we define rules about how the software is supposed to work, but we never know exactly who will be using it or how. It is quite impossible to set a rule as to who will be using the system. Instead we observe and look for trends and patterns. Be careful, though, because there is no such thing as an average user, and when we create rules based on some composite, mythical person, we are sure to disenfranchise real people.

Normal = Objective. Average = Subjective.
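The distinction can be made concrete in a few lines of code. In this hypothetical sketch (the names and numbers are mine, purely illustrative), a fixed rule defines what is normal, while the average shifts every time a new result arrives:

```python
# "Normal" is a fixed rule: objective, independent of observed results.
NORMAL_LIMIT_MS = 500

def is_normal(response_ms):
    return response_ms <= NORMAL_LIMIT_MS

# "Average" is derived from the results themselves: subjective,
# and it moves as data arrives.
def is_above_average(response_ms, observed):
    return response_ms > sum(observed) / len(observed)

observed = [120, 300, 480]
print(is_normal(650))                   # False - the rule never changes
print(is_above_average(650, observed))  # True - but 650 then joins the data
observed.append(650)
print(sum(observed) / len(observed))    # 387.5 - the baseline has moved
```

Note how the outlier 650 becomes part of the baseline used to judge the next result, exactly the behavior the post describes.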

Monday, July 9, 2012

Channeling Emily Post: Meeting Etiquette

When your team grows beyond a handful and staff members have obligations to multiple projects, inevitably scheduling collaborative meeting time can be tricky. And then "tricky" turns into "frustrating" when common courtesy becomes extinct.

You can help save your coworkers some frustration by keeping in mind a few simple guidelines when it comes to scheduling meetings:
  1. If you receive an appointment request, please respond promptly so that the requester can reschedule if necessary.
  2. Please make sure all time unavailable is recorded on your calendar – out of office, personal appointments, vacation, etc. Even if you're working from home, it's nice for that to be indicated on your calendar because some meetings just can't happen by conference call.
  3. For meetings outside of the office, please make sure the duration of the unavailability includes travel time.
  4. If reserving a conference room, make sure it is included on the appointment, even if it's a spur-of-the-moment meeting.
  5. If you have previously accepted a meeting request and can no longer attend, please decline promptly. Your presence at the meeting may be essential which could require the meeting to be rescheduled.
  6. If you scheduled a meeting and can no longer attend, please cancel promptly so that attendees can remain focused on their current tasks.
Remember, meetings occupy the time of multiple people. Failing to apply a small amount of consideration can cause a huge waste of time. The wasted time isn't just staff sitting around waiting for someone to show up, but also the time they require to shift focus back to their other work. Courtesy also helps keep meeting productivity maximized by preventing attendees from stewing over other attendees showing up late or not at all.

Update: Read the sequel about etiquette relating to the meeting itself.

Tuesday, July 3, 2012

Random Ponderings on Various Pay Structures

Recently, I was thinking about the pros and cons of the various pay structures that we use: hourly, salary, and commission. Being an ardent subscriber to the context-driven testing school, I question everything, even when I'm not testing.

Thought 1: Link to Productivity
Hourly payment is often applied where productivity can be closely tied to time.
Examples:
Manufacturing - parts per hour
Retail - number of associates needed to handle customer load
Custodial - how much can be cleaned in a given shift

Salary payment is often applied where productivity is linked to accomplishing a task that can require varying time.
Examples:
Software testing - make sure the system works
Teaching - in addition to classroom time, prep for lessons and evaluate student work
Business executive - run the company

Commission payment is often applied where productivity is connected with sales.
Examples:
Car salesman - sell more cars, earn more money
Real Estate Agent - sell more homes, earn more money
Mortgage broker - close more loans, earn more money

Thought 2: Productivity ≠ Working Hard
For any given task with a set amount of time, Person A could exude a great deal of personal effort and not complete the task whereas Person B could exude very little personal effort and finish the task with time to spare.

Thought 3: Time = Money
Hourly: The more hours you work, the more you're paid
Salary: The fewer hours you work, the higher your effective hourly rate
Commission: The more sales you make, which in theory means, the more time you use pursuing sales leads, the more you're paid
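To make Thought 3 concrete for the salary case, here's a tiny sketch with a hypothetical weekly salary (the figure is invented purely for illustration):

```python
# For a salaried employee, weekly pay is fixed, so the effective hourly
# rate rises as hours fall and is diluted as hours climb.
WEEKLY_SALARY = 1200.0  # hypothetical figure

def effective_hourly_rate(hours_worked):
    return WEEKLY_SALARY / hours_worked

print(effective_hourly_rate(40))  # 30.0
print(effective_hourly_rate(50))  # 24.0 - unpaid overtime dilutes the rate
print(effective_hourly_rate(35))  # about 34.29 - a short week raises it
```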

Thought 4: Salary is a bit of an odd duck
Unlike hourly and commission, the salary payment structure has no built-in mechanism for getting paid more than your base rate. A salaried employee gets paid the same no matter how many hours he works or how many "tasks" are completed. Salary has other perks, though. Generally it comes with more flexible hours, and sometimes you may even get to leave work a little early or take off a few hours without using paid time off. And in comparison to commission, you're still guaranteed to make money: if a salesperson can't close any sales, he won't get paid. I don't have any data to support this, but all things being equal, I think salaried employees generally have higher base pay than hourly ones.

Thought 5: Salary systems require integrity to work
In salary situations it is generally stipulated that hours worked in excess of 40 are not paid overtime. Moreover, employers generally require an accounting of time spent, which should add up to about 40 hours. But in order for all of this to work, employees have to actually put in their 40 hours and employers have to keep overtime the exception. In other words, employees shouldn't try to cheat the system and employers shouldn't try to take advantage of their workers.

Thought 6: So why would anyone pick one over the other?
Really, that depends on a variety of circumstances. Generally the "choice" is made in choosing the career; you don't say, "I want to do FOO and be paid with structure BAR." Personality has a lot to do with it too. In the past, I've worked a number of hourly jobs. I liked being able to say, "My shift is over, time to go home." I think the pressure of a commission setting would weigh heavily on me and I wouldn't enjoy it. But I've been salaried for my entire professional career and it's worked out really well because of the cerebral work that I do. Some days I finish a little early and don't have to wait for the clock to strike a certain time before heading out. Other days, I get so absorbed in a thread of work that I don't even notice the time is well beyond the average end of the day.

Wednesday, June 6, 2012

Two Keys for Effective Defect Reports

When I was in high school, a class that I was in was given a writing assignment. This was a type of class that ordinarily wouldn't have writing assignments and so the students were a little unsure as to the requirements. One of my classmates asked, "How long should the paper be?" And the perverted old man replied, "The paper should be like a woman's skirt: long enough to cover the subject but short enough to still be interesting."

Today I want to expound upon effective defect reports. I am an absolute stickler for detailed defect reports. I have no problem sending issues back that don't have sufficient information. Defect reports are adult writing assignments and we need to make sure that we are including all of the necessary information. I could run down my list of criteria that I expect in every bug report, but you can find those types of lists lots of places. Besides, the list, like everything in testing, is context-oriented.

As inappropriate as my teacher was, he did imply an important point: it's not about a measurement, it's about accomplishing a purpose. Skirts, and clothing in general, maintain a certain level of modesty, protect us from the elements, and evoke intrigue. Defect reports must fulfill two very important functions; otherwise they are a failure.

An effective defect report makes reproduction and troubleshooting as simple as possible.
More often than not, the person who discovers the defect is not the person who resolves it. This is a simple efficiency issue. If you have a PM or Scrum Master reviewing issues, they need to be able to assess the priority and assign the issue to the most appropriate developer. The developer should be able to understand what's going on without having to guess or ask additional questions, so they can spend less time identifying the issue and more time fixing it.

An effective defect report archives the defect.
The purpose of this is for testing. Often times, the person who discovers the defect is not the person who verifies that it has been fixed. The tester needs to be able to know exactly what was wrong so that when they test the resolution, they can determine beyond a doubt whether or not the defect is fixed. If the tester doesn't know precisely what was wrong, it's impossible to know if it's fixed.

If your bug reports fulfill those functions then you're well on your way to getting an A+ on your writing skills. Remember, context is everything: sometimes bug reproduction is elusive, and sometimes a conversation can convey important information that is hard to express in text. If you keep the purpose in mind, the details will fall into place.

Wednesday, May 30, 2012

Squeeze or Slice?

When it comes to planning the scope for an iteration, I generally think of two different approaches: squeezing and slicing. So which one are you? Are you a squeezer or a slicer?

The Squeezer

  • Tries to get as much out as possible in each squeeze
  • Some squeezes go "Splllppppppt!" and hardly anything comes out
  • More oozes out after you stop squeezing
  • You don't know how much the bottle holds
  • Even if you know the bottle is 8oz, you never know for sure how much is left
  • You have to pound the end and squeeze repetitively to get the last little bit out

The Slicer

  • Each slice is more or less the same size
  • You can see exactly how big the loaf is and how many slices are left
  • Only the supernatural keeps the loaf from running out
  • There are just a few crumbs to brush up after the last slice

As you may have guessed, I'm an advocate of the slicing system. The characteristics mentioned above are maybe a little idealized because in reality, software projects are not factory-baked, pre-sliced bread. They're homemade, and each slice is cut individually. When you first pull the bread out of the oven, you might ideally want it sliced into 10 pieces. However, after the first slice or two, you might find that the bread is heavier or lighter than you thought, prompting you to make thinner or thicker slices going forward. If you slice off more than you can chew, it's easy to cut that piece in half and save the rest for later. You don't always know how many loaves there will be before you're done baking and slicing, but you always have an easily manageable unit.

I like slicing because planning is "baked in". Each iteration isn't like a spin of the roulette wheel. You can respond to experience and adjust accordingly. And you always have a clear vision of what the end is and when it will come.

Friday, May 25, 2012

Dear Web Designers

If you don't care about responsive web design whatsoever, you may stop reading here.

One of the designers where I work recently shared a link to an article called Responsive Web Design: What It Is and What It Isn’t. It's a worthwhile read whose main idea is that responsive design isn't just a method for making your website work on desktop and mobile. It's about making your website scalable so it fits whatever size screen the user has. Rather than "either/or," it's "everything/and."

Reading between the lines and homing in on my own pet peeves: why do so many web designs scale to small screens but not big screens? I find myself growing increasingly frustrated by beautiful web designs with gobs of potential that fill only 1024 of my available 1920 horizontal pixels. That leaves nearly 47% of my screen unused!

This sparked a very insightful (for me) conversation with the designer. The process of scaling a website is generally conceptualized as a one-way process: start big and get smaller. There's no "start big and get bigger." It actually makes a lot of sense to me as a much more manageable process to only go in one direction.

Herein lies my plea: Web designers, please throw out your 960 pixel starting mold. It's outdated. It's time to get a bigger mold.

StatCounter Global Stats has plenty of data to show us why:
Source: StatCounter Global Stats - Screen Resolution Market Share


1024-wide resolutions had a huge market share three years ago and it's economically-savvy to design for the market. That market share has dropped consistently and significantly. Meanwhile 1366-wide resolutions have gone from non-existence to being the new standard.

I've done some analysis of the numbers. If you've shopped for a computer lately, you've probably noticed that all monitors are widescreen. Most MacBooks use an 8:5 aspect ratio, whereas PC manufacturers have overwhelmingly adopted 16:9. Because PCs account for 90% of the market, I've chosen to focus exclusively on the 16:9 ratio. For the three years May 2009 through April 2012, 1024x768 dropped by an average of 0.64 points per month. Meanwhile 1366x768 increased by 0.53, 1600x900 by 0.1, and 1920x1080 by 0.14. The numbers are very consistent, aside from a completely bizarre November 2011.

So what size mold should be used now? It depends upon the context, like everything. If responsive design isn't in scope, then leave the mold where it is. If it is, here are some more numbers for you: For the three months February to April 2012, screen resolutions of 1280+ totaled 66.35% of the market, 1360+ were 42.07%, and 1600+ were 14.05%. Judging by the data mentioned in the last paragraph, I say, set the mold at 1600 because that market share has shown steady growth and I think 14% is a pretty big share. After that set the steps at 1280 and 1024.
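The figures above are simple arithmetic and easy to sanity-check. This sketch reproduces the "nearly 47% unused" complaint and the three-year slide implied by the quoted 0.64-points-per-month decline:

```python
# Portion of a 1920px-wide screen left idle by a fixed 1024px-wide layout.
unused_pct = (1920 - 1024) / 1920 * 100
print(round(unused_pct, 1))  # 46.7, i.e. "nearly 47%"

# At the quoted average decline of 0.64 points per month, 1024x768's
# market share sheds this many points over the three-year window.
print(round(0.64 * 36, 2))  # 23.04
```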

Update: See how the trends continue for the months May through July 2012.

Thursday, May 10, 2012

One For and Two Against Test Scripts

The following is a follow-up to my own Technical Documentation as Reference Material as well as David Greenlees' "Idiot Scripts".

Several thoughts come to my mind when I think about test scripts written for others to use.

I have at times, even recently, asked other people to write test scripts. Not because I wanted to use them or have them ready to distribute to someone else, but because I wanted to use them as a tool to give me insight into the writer's approach to testing. It probably isn't the most efficient method, but it seemed the best solution for the circumstances.

To me the intent of scripts for use by business users or during UAT is basically the same: happy path/positive testing that shows that the system works as expected.

The problem I have with writing scripts for business users is that I expect them to know how the system works, and test scripts are a horribly inefficient form of user documentation. Besides, they leave the intent of each step in obscurity. It makes more sense to me to teach the business user how to use the system and then let them use it, whether it's taught through a full-blown manual, a help file, a training seminar, or a phone call. If the system is complicated enough that it isn't readily apparent to the average user how to use it, then you're going to need some sort of training program regardless, so why duplicate effort by writing test scripts?

The problem I have with writing scripts for UAT is the same as above, but it goes deeper. Some, perhaps most, people might not agree with me. When I think about writing UAT scripts, it gives my heart ethical palpitations! UAT isn't just verifying functionality; it's verifying that the system is acceptable. Determining whether the software/website is acceptable is a judgment call that only the client can make. Granted, acceptance criteria can be written out and negotiated between the client and the agency, but it's still a subjective evaluation when it comes down to it. The specific problem I have with UAT scripts, then, is that I, as the script writer, am determining whether the deliverable is or is not acceptable. If the client wants to write an objective set of steps that define acceptability, they can do that, but that's on them. And if they want to go through some sort of approval process anyway, then it just becomes a dog and pony show.

Wednesday, May 9, 2012

What Makes a Leader?

Being a leader is not defined by:
  • Having the highest salary
  • Having a big title
  • Having the most tenure
  • Having an expensive degree or certification
  • Having the most skill and knowledge in a discipline
  • Having the most opinions
A leader is someone who:
  • Motivates others without threats or coercion
  • Can enforce the rules without pissing people off
  • Realizes that you can't always make everyone happy
  • Can discern between best practices and the best for the situation
  • Can see and analyze conflicts from all sides
  • Places the interests of the team above her own
  • Draws out the quiet voices and speaks up for those who won't/can't
  • Is willing to be unpopular but doesn't wear it as a badge of honor
  • Owns his mistakes
In short, a leader is not defined by what you HAVE but who you ARE.

Funny thing is, I can't think of a single word that appropriately describes a good leader other than "leader." On the other hand, if you're a bad leader, you could be described as a jerk or a jackass, and the list goes on. Even if you're the boss, that doesn't mean you're a leader. Being bossy is just an adult synonym for being a bully. A good leader shines with Honesty, Integrity, and Respect, and those qualities reveal themselves only in how a person approaches and reacts to a situation and in the way they treat others.

The Test Pilot

Just because you...
    ...designed the airplane...
    ...built the airplane...
    ...managed the design, construction, and testing of the airplane...
    ...flew on an airplane once...
    ...played an airplane video game...
    ...have a cousin that flies airplanes...
    ...read a book about flying airplanes...
...doesn’t mean you know how to fly the airplane!

I cannot emphasize this enough: testing is a skilled trade, whether it's testing an airplane or testing a website. The project manager for the design and construction of a new airplane would never dream of trying to fly the plane himself. Instead he delegates that responsibility to someone who knows how to fly the plane.

But just because you...
    ...know how to fly the plane...
...doesn't mean you know how to test the airplane!

Testing goes well beyond normal use. Test pilots need to be able to conceptualize the abnormal, the extreme, the emergency, even the absurd situations and have the skill to execute on them. Yes, every pilot should be trained to handle emergency situations but being forced into an emergency is far different from intentionally stepping into one.

Testing websites and software shouldn't be any different. There are certainly valid scenarios for the project manager or the client to do some testing but having them do the heavy lifting will lead to poor results.

Now this isn't to say that testing is an exclusive club by any means. You learn testing by doing. You really can't go to school to get a degree in testing. Attending a three-day seminar won't make you a good tester. Getting an expensive certification from ISTQB won't make you a good tester. You become a good tester with practice, by having an open mind, by challenging the status quo, by thinking, by being independent.

Tuesday, May 8, 2012

Fast, Good, Cheap, and _______

The Triple Constraint or Project Management Triangle is a device that describes opposing variables in project management. The variation I'm most familiar with has the three sides of the triangle representing Fast, Good, and Cheap. The principle is that, on any project, you can favor at most two of the variables, and the third must suffer for it. For example, you can have a project run fast and deliver a good product, but it won't be cheap.

Who says you have to sacrifice though? It's really not a "who" but a "what", which I recognize thanks to a recent post by Catherine Powell. (Our models differ but it's because of her that my brain got to thinking about this.) There's a fourth project component missing from the model and that's Scope. The fourth variable necessitates compromise but before talking about how Scope comes into play, let's review how the Project Management Triangle works. There is a variable for each side of the triangle A, B, and C and a fourth variable for the perimeter of the triangle D. The perimeter is fixed so that no matter what, A + B + C = D. If A increases, B and/or C must decrease. If A and B increase, C must decrease.

Why does the perimeter have to be fixed? Let's face it, most projects, if not all, have one thing that is 100% non-negotiable. In the Fast, Good, Cheap model, this is Scope. Without answering "what is this project about," you really don't have a project. Even if a project has more than one non-negotiable, one will still trump the others.


As you may have recognized, the three sides of the triangle are not permanently designated Fast (Timeline), Good (Quality), and Cheap (Cost). Rather, the three sides are that which is not the paramount non-negotiable perimeter.

For Example
Timeline: The client needs a website to go live at the same time as a huge marketing campaign
Quality: The website must adhere to government regulations
Scope: The functionality of the website has to include everything specified
Cost: The expense of the project cannot go one penny over

Let's now apply these variables to the sides of the triangle. When a side of the triangle is benefited, it gets longer but it gets shorter when penalized.

Variable   Benefited   Penalized
Timeline   Decreases   Increases
Quality    Increases   Decreases
Scope      Increases   Decreases
Cost       Decreases   Increases

Using the Fast, Good, Cheap model for the three sides, let's say Scope is our fixed perimeter with a non-negotiable set of requirements. Let's say the company wants a bigger profit margin, which is in essence cutting the cost. If we want it to stay on time, the quality will suffer because of reduced testing. If we want it to maintain quality, the timeline will suffer because of less-skilled (i.e. cheaper) labor.

Under ideal circumstances, we'll have an equilateral triangle. This requires doing your homework up front: figure out what your non-negotiable variable is and then set the remaining variables based on it. Doing this should minimize the need for compromise later. If change becomes necessary and compromises are undesirable, the only way to change the perimeter is to change the contract, either through a change order or a completely new contract.
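The fixed-perimeter idea can be sketched as a toy model (this is just my illustration of A + B + C = D, not an established PM formula): grow one side and the others must shrink to keep the perimeter constant.

```python
# Fixed-perimeter model: A + B + C = D. Growing one side forces the
# others to give way.
def rebalance(b, c, perimeter, new_a):
    """Set side A to new_a; split what remains of the perimeter between
    B and C in their original proportion."""
    remainder = perimeter - new_a
    scale = remainder / (b + c)
    return new_a, b * scale, c * scale

a, b, c = 10.0, 10.0, 10.0        # equilateral: the ideal case
perimeter = a + b + c             # the non-negotiable, e.g. Scope
new_a, new_b, new_c = rebalance(b, c, perimeter, 14.0)
print(new_a + new_b + new_c)      # 30.0 - the perimeter is preserved
print(new_b, new_c)               # 8.0 8.0 - the other sides gave way
```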

Tuesday, April 24, 2012

Technical Documentation as Reference Material

In my line of work, I have been asked many times over the years to produce test scripts so that the client can "test" and make sure that the system is working as it's supposed to. Another way to put this is to document a complex, technical process so that someone with no experience in the field can do it. For some reason, I thought the testing field was uniquely privileged with these requests but that's not true. Similarly purposed documents are commonplace across disciplines whether it's requirements gathering or system deployment.

While the actual function may vary, the fallacy in these documents is uniform. If the purpose of these documents were simply to provide information to skilled people, there'd be no problem. However, the specific, implicit purpose of these documents is more often than not to enable unskilled people. A document is no replacement for skill or training because a document is limited, whereas the realm of possibilities is infinite. Without skill, you'll be swallowed by infinity faster than the fish swallowed Jonah.

So why do we keep doing this?

Because it's an industry practice?
This isn't elementary school where we do things just because it's popular. We should foster practices that produce the best results.

Because the client asked for it?
We're not just code monkeys pumping out a website; we're consultants. It's our duty to the client to educate them on why technical processes should be executed by people with skills in those processes. The truth of the matter is that by doing so, we help insulate them from disaster.

Because the client offered to pay us a lot of money for it?
Do I even need to say that this would be unethical? I hope this is never a factor. Yes, the client is paying us, but with that comes an obligation to better the client, not just to give them whatever they ask for.

Skilled Work for Skilled People
The most important thing that we need to take to heart is, as I said before, that technical processes should be executed by people with skills in those processes; otherwise, the exposure to risk becomes extraordinary.

  • Why risk having a site that isn't designed to do what it needs to? (BA)
  • Why risk having a site that can't do what it's designed to do? (Development)
  • Why risk having a site that doesn't do what it's designed to do? (Testing)
  • Why risk having a site that isn't up when it needs to be? (Deployment)

References, Not Instructions
With that in mind, I think we ought to shift our paradigm about technical documents. They're not instructions, they're references. They guide a familiar process for someone who lacks specific knowledge of a given situation. When necessary, training should be offered to fully realize the necessary knowledge transfer. The training, though, is not to impart a skill, such as coding in PHP, but to apply an existing skill to a situation, such as coding a widget within the existing framework and dependencies.

Friday, April 20, 2012

Thoughts on Effective Project Management

Effective project management and execution have many components. To name a few that come to mind:
  • Planning
  • Communication
  • Cooperation
Projects don't always go smoothly though and so we compensate with:
  • Process
  • Documentation
  • Incentives (or bribery)
Don't get me wrong, process, documentation, and incentives are not bad things but they're not guarantees for great projects. Just because you have process doesn't mean your project will be executed well. Just because you have documentation doesn't mean the project will be universally perfectly understood. Just because you have incentives does not mean your team will be motivated.

When the project struggles, don't panic and don't think that there's going to be a miracle cure. So what should you do? Breathe. Relax. Rather than jumping to a change in the model, figure out first if the model is being effectively executed. Sometimes all it takes is some individual coaching or team training.

Think for a second about treating a medical ailment:
Symptoms: Excruciating pain, being unable to walk or stand
Solution: Prescribe heavy medications, such as Oxycodone. If you can't feel the pain, then what does it matter, right? But if you only treat the symptoms and don't fix the problem, the symptoms will recur as soon as the drugs wear off.

Problem: The leg is broken
Solution: Set the leg and/or perform surgery to bolt the bone back together. YES! We are out of pain and we're mobile again. But if you only fix the problem and don't eliminate the cause, then someone is sure to come along and break their leg--OR WORSE!

Cause: The rug is bunched up
Solution: Smooth out the rug. If the rug is prone to bunching, tack it down or figure out what causes it to get bunched up. Perhaps it's too close to a door that opens frequently. If tacking is not an option or it can't be moved further from the door, replace the rug with one that has a non-slip back or that is shorter.

I contend that most issues in project management and execution are like this. Simple causes that, when ignored, cause expensive problems and exhibit symptoms disconnected from the cause itself. Fixing the rug is a simple fix. It's not a change to the model, it's just taking the existing model and working the kinks out.

Thursday, April 19, 2012

[Not] Everything Is Urgent!

When projects get down to the wire, sometimes certain people (you know who they are) become prone to throwing out the rules for determining defect priority. As I previously wrote in Prioritizing Defect Reports, there are four factors that I consider when setting priority: Severity, Exposure, Business Need, and Timeframe. Unfortunately, Timeframe becomes a stumbling block when they fall prey to the terminal thought, "The deadline is right around the corner and all of these issues need to be done, therefore they are all Urgent!"

I call this a "terminal" thought because it leads to a disastrous method of project management: panic. Panic management occurs when organization goes out the window; and that's exactly what happens when all issues are prioritized the same. The priority indicator loses its value. Even when time is running short and all of the issues are crucial to launch, issues have varying levels of priority and some need to be done before others. And what happens when nothing has any priority? Developers decide for themselves which issues to do and when.

When we get to crunch time, I think it's appropriate not only to redefine the spans of time for the Timeframe factor but to redefine the factor itself. Instead of thinking about a Timeframe, think about a Sequence:
  • Urgent: Drop everything and work on this
  • High: Complete before working on any lower priority issues
  • Normal and Low: After confirming with the project manager that there is nothing more important to be done, concentrate on Normal priority issues first, incorporating Low priority issues only when there's an efficiency advantage. Low priority issues are completed last.
By maintaining your system of priorities, you'll help keep your team focused and everyone will have a clearer vision of the outstanding risk in meeting the deadline.
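The Sequence idea is easy to sketch in code. Here's a minimal example in Python (the issue data and field names are made up for illustration) showing how a strict priority order turns a pile of issues into an ordered work queue:

```python
# A minimal sketch of the "Sequence" idea: issues are worked strictly in
# priority order instead of everything being flagged Urgent.
# The issues below are invented for illustration.
SEQUENCE = {"Urgent": 0, "High": 1, "Normal": 2, "Low": 3}

issues = [
    {"id": 101, "title": "Checkout button missing", "priority": "Urgent"},
    {"id": 102, "title": "Footer typo", "priority": "Low"},
    {"id": 103, "title": "Search results slow", "priority": "High"},
    {"id": 104, "title": "Logo misaligned", "priority": "Normal"},
]

# Sort the backlog by its place in the sequence, not by when it was filed.
work_queue = sorted(issues, key=lambda issue: SEQUENCE[issue["priority"]])

for issue in work_queue:
    print(issue["priority"], "-", issue["title"])
```

The point isn't the sorting itself, of course; it's that the sort only works because the priorities are distinct. Mark everything Urgent and the queue degenerates back into arbitrary order.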

Friday, April 13, 2012

The Twitter Gun

While I've had a Twitter account for a couple of years, I have only recently begun to utilize it. And while I'm definitely not an expert in Twitter strategy, I am a user and therefore I know what practices annoy the bajeebers out of me.

Twitter is like a mini blog and should generally follow the same guidelines as a regular blog. So in a regular blog, at least every one that I've ever read, each entry is unique. Yes, there may be recurring themes and topics but no one ever reposts a single entry multiple times. By and large, Twitter should be the same. However, some companies I've noticed like to tweet the same thing over and over. Perhaps they have one important message that they're trying to share like, "We're hiring" but they also like to share other useful things like, "Hey check out this link". And so after each time they post a "Hey check out this link" tweet, they repost the "We're hiring" tweet.

To me this practice is Twitter SPAM. It's unprofessional. It's rude to your followers. It encourages your followers not to pay attention to your tweets - which is the absolute last thing you want to happen. And if you happen to be a company that claims to be a social media expert, it can be bad for business.

Consider the similarities and contrasts between two types of guns: the flare gun and the handgun. Both are used to launch projectiles. Both are used to convey a message. A flare gun, however, will only shoot off one round at a time while a handgun can fire many rounds in short succession. You run towards a flare gun but away from a handgun.

Twitter, used effectively, is like a flare gun because you want people to come to you. Therefore, you carefully plan your tweets and each contains a unique message. On the other hand, Twitter, used as I described above, is like a handgun. There's no strategy; you're just blasting away and people are going to run because it's the same thing over and over.

Of course, I think there's a third type of Twitter gun too: the machine gun. These are the companies that just don't stop sending out tweets. Seriously? You're a news service, so you're going to send out a tweet every three minutes when you post a new article? UN-SUB-SCRIBE! Maybe some people don't mind that but to me it just clogs up the feed and makes the useful information impossible to glean.

So if you run a company with an active Twitter strategy, think about how you're affecting your followers. Plan your tweets and don't inundate your followers with repetitive information.

Wednesday, April 11, 2012

The Quality Assurance Misnomer

Over the years, the industry standard title for a software tester has become “Quality Assurance Analyst/Engineer”. I don’t know the history of this but I do know that it’s not without controversy. When most people in the software industry hear the title, they think of the person’s role as being someone who assures the quality of the software product.  That’s a big problem though because, as a Quality Assurance Engineer myself, that’s not what my job is, nor is it generally the job of any software tester that I know.

Here’s the root of the problem: Quality is a business decision that evaluates whether a product or part thereof is GOOD or BAD, and that decision lies in the hands of the product owner – the person who is or represents the product purchaser/user. The reason for this is that they best know whether the purchaser/user will be satisfied. Testers don’t know the user, so it would be presumptuous at best to put that responsibility on them.

What I do is I poke, stress, and exercise the software in every way that I can think of and make observations. I compare those observations to documentation and common heuristics that help identify when intended functionality deviates from actual functionality. And then my job is actually to help inform that business decision by reporting all of my observations through conversation, defect reports, and progress reports to the product owner. You see, I can report RIGHT or WRONG based on my comparison to documentation and heuristics but that’s different than GOOD or BAD.

Determining GOOD or BAD is a murky process that is above the pay grade of a lowly software tester and that has to factor in many components including:
  • Test coverage
  • Staleness of testing (i.e. when was part x last tested?)
  • The number of outstanding failed test cases
  • The severity of the outstanding defects
  • The amount of time allowed for planning, developing, and testing
  • The gut factor (i.e. does it feel like it’s ready?)

Then, taking all of this information, to determine whether the product is GOOD or BAD you need to have a sense of whether the customer will be satisfied, as I mentioned above. The only way to develop this sense is to have active, direct interaction with the customer from the beginning of the project, or to just leave the decision directly up to the customer. I don’t recommend the latter, though, because without somehow quantifying an acceptance level, they may never agree that the product is good.

All that said, I don’t think that the title “Quality Assurance Analyst/Engineer” is BAD, just misunderstood. While I don’t assure quality through analysis and/or engineering, I do analyze and engineer so that quality can be assured.

Friday, March 23, 2012

WordPress: PHP Is Not Installed

So I'm starting to work on a little project at home involving a website built in WordPress. This is all new to me so naturally, there are going to be bumps in the road. Anyway, I managed to install Apache, MySQL, and PHP. However, when attempting to install WordPress, I was continually presented with the error:
Error: PHP is not running
WordPress requires that your web server is running PHP. Your server does not have PHP installed, or PHP is turned off.
WHAT??? The PHP test file worked perfectly fine! I googled and googled some more trying to find a solution, and while the question appeared frequently, there was never a good answer. I finally figured it out myself though!

The problem was in how I was accessing install.php. Through Windows Explorer, I had navigated to my WordPress directory and double clicked on readme.html. The file opened in Firefox. Then I clicked the hyperlink in Step 2 "wp-admin/install.php". This opened the file with the error, and here's the problem: look at the address bar:
file:///C:/apache/htdocs/wordpress/wp-admin/install.php
In order to make it work, you need to hit it from LOCALHOST:
http://localhost/wordpress/wp-admin/install.php

BAM! Done! 
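If you want to sanity-check the same thing from a script, here's a rough sketch in Python (the helper names and the example URL are my own invention, and it assumes Apache is running on localhost). A page served over http:// should come back with its PHP executed, so raw `<?php` tags in the response are a red flag that the server handed you the source instead:

```python
import urllib.request

def looks_executed(body: str) -> bool:
    # If raw "<?php" tags survive in the response, the server returned the
    # source instead of running it (or you loaded a file:// URL directly).
    return "<?php" not in body

def served_as_php(url: str) -> bool:
    # Fetch a page and report whether its PHP appears to have been executed.
    with urllib.request.urlopen(url) as resp:
        return looks_executed(resp.read().decode("utf-8", errors="replace"))

# Hypothetical usage, assuming WordPress lives at /wordpress:
# served_as_php("http://localhost/wordpress/wp-admin/install.php")
```

A file:// URL never goes through Apache at all, which is why the install page couldn't find PHP no matter how correctly it was installed.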

Environment:
OS: Windows XP SP3
Web server: Apache 2.2
MySQL: 5.5.21
PHP: 5.2.17
WordPress: 3.3.1

Friday, March 16, 2012

Gems of Wisdom from Fellow Software Testers

This past week, I've been feeling very cerebral and the timing couldn't have been better because I've been able to give my blog some much needed attention and I've also been able to get almost caught up on the backlog of blogs that I follow. In my reading, I have come across a number of gems of wisdom that I think are well worth sharing:

James Bach in Why Scripted Testing is Not for Novices
  • [A] scripted tester, to do well, must apprehend the intent of the one who wrote the script. Moreover, the scripted tester must go beyond the stated intent and honor the tacit intent as well – otherwise it’s just shallow, bad testing. - TW: This problem is a direct result of divorcing test design and execution. A novice tester simply doesn't have the skills yet to "read between the lines" of the script to see the intent.

Michael Bolton in Why Checking Is Not Enough
  • But even when we’re working on the best imaginable teams in the best-managed projects, as soon as we begin to test, we begin immediately to discover things that no one—neither testers, designers, programmers, nor product owner—had anticipated or considered before testing revealed them.
  • It’s important not to confuse checks with oracles. An oracle is a principle or mechanism by which we recognize a problem. A check is a mechanism, an observation linked to a decision rule.
  • Testing is not governed by rules; it is governed by heuristics that, to be applied appropriately, require sapient awareness and judgement.
  • A passing check doesn’t tell us that the product is acceptable. At best, a check that doesn’t pass suggests that there is a problem in the product that might make it unacceptable.
  • Yet not even testing is about telling people that the product is acceptable. - TW: I've been trying to promote this concept for at least a year in my own evolutionary understanding of my craft. You can expect my own blogging on this topic soon.
  • Testing is about investigating the product to reveal knowledge that informs the acceptability decision.

Michael Bolton in What Exploratory Testing Is Not (Part 3):  Tool-Free Testing
  • People often make a distinction between “automated” and “exploratory” testing. - TW: This is the first sentence and BAM! did it cause a paradigm shift for me!
  • That traditional view of test automation focuses on performing checks, but that’s not the only way in which automation can help testing. In the Rapid Software Testing class, James Bach and I suggest a more expansive view of test automation: any use of tools to support testing.

Anne-Marie Charrett in Please don't hire me
  • If you want me to break your code - TW: This set my brain going for at least two hours thinking about the implications of this. It's a great point though, testers don't break code. Look forward to more on this in the future.

I hope you find some wisdom in this too.

Thursday, March 15, 2012

Counting My M&M's

Like millions of other Americans, I keep a stash of munchies in my desk drawer at work. I find that a little treat in the afternoon is a highly effective way to keep me focused on my work. One of my snacks, interestingly (or oddly), has evolved into a bit of a ritual.

It all started back in 2010. I was at my local Dominick's before work picking up a donut, something for lunch, and stock for the stash. I happened down the candy aisle and saw that they had the jumbo bags of M&M's on sale. I'm always a sucker for the lowest price per unit so I couldn't resist. Fast forward to the afternoon - snack time - and a funny little thought popped into my head: "Are there the same number of each color of M&M's in the package?" I decided to find out, just for the heck of it.


The jumbo sack of M&M's is 42 ounces so there are two things to keep in mind. 1. I do not eat the entire thing in one helping. 2. I was not about to dump out the whole sack and count them right then and there. Instead, I started a spreadsheet. Whenever I decided to have a handful of M&M's, I would first sort them by color, count them, and then record the stats in my spreadsheet. It didn't take too long for multiple passersby to notice and remark upon my method for eating M&M's. Suddenly I had a reputation that needed to be maintained! Never again could I reach into a sack of M&M's and NOT count each color.

Two years later, I'm now on my fourth sack of M&M's. I don't eat them every day and I don't always have them in my stash. But when I do, I keep statistics. Today, I am pleased to announce the public sharing of this information on my M&M Counter page!

The M&M Counter page has several fun features:
  • The graph at the top shows the combined stats for all of the M&M's that I have consumed over the years.
  • Then, I provide a graph and chart that are updated in real time that show the stats for the sack that I'm currently in the process of devouring.
  • Finally, I have provided a form, for what I think is the most exciting feature of all, to allow YOU to join in the fun. You are welcomed and encouraged to contribute your own counts. Currently, the form only supports the regularly colored candies. Pick the closest option when selecting the size of the package. Soon, I will include a graph or two to display the user submitted data.
So yeah, it's a little weird. There's nothing scientific about this. There's no ploy to identify certain colors as subjects of discrimination. It's just for fun. Enjoy!
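For the curious, the bookkeeping behind the spreadsheet amounts to a simple tally. Here's a sketch in Python, using an invented handful of candies (the counts are made up, not from my actual data):

```python
from collections import Counter

# A hypothetical handful of M&M's, recorded by color the way my
# spreadsheet does it. These counts are invented for illustration.
handful = ["red", "blue", "blue", "green", "yellow",
           "brown", "blue", "orange", "red"]

counts = Counter(handful)
total = sum(counts.values())

# Print each color's count and its share of the handful.
for color, n in counts.most_common():
    print(f"{color}: {n} ({n / total:.0%})")
```

Scale the same tally across every handful in a sack and you get the per-color stats that feed the graphs on the M&M Counter page.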