Friday, March 23, 2012

WordPress: PHP Is Not Installed

So I'm starting to work on a little project at home involving a website built in WordPress. This is all new to me so naturally, there are going to be bumps in the road. Anyway, I managed to install Apache, MySQL, and PHP. However, when attempting to install WordPress, I was continually presented with the error:
Error: PHP is not running
WordPress requires that your web server is running PHP. Your server does not have PHP installed, or PHP is turned off.
WHAT??? The PHP test file worked perfectly fine! I googled and googled some more trying to find a solution, and while the question appeared frequently, there was never a good answer. I finally figured it out myself, though!

The problem was in how I was accessing install.php. Through Windows Explorer, I had navigated to my WordPress directory and double-clicked on readme.html. The file opened in Firefox. Then I clicked the hyperlink in Step 2, "wp-admin/install.php". This opened the file with the error, and here's the problem: look at the address bar:
file:///C:/apache/htdocs/wordpress/wp-admin/install.php
In order to make it work, you need to hit it from LOCALHOST:
http://localhost/wordpress/wp-admin/install.php

BAM! Done! 

Environment:
OS: Windows XP SP3
Webserver: Apache 2.2
MySQL: 5.5.21
PHP: 5.2.17
WordPress: 3.3.1

Friday, March 16, 2012

Gems of Wisdom from Fellow Software Testers

This past week, I've been feeling very cerebral and the timing couldn't have been better because I've been able to give my blog some much needed attention and I've also been able to get almost caught up on the backlog of blogs that I follow. In my reading, I have come across a number of gems of wisdom that I think are well worth sharing:

James Bach in Why Scripted Testing is Not for Novices
  • [A] scripted tester, to do well, must apprehend the intent of the one who wrote the script. Moreover, the scripted tester must go beyond the stated intent and honor the tacit intent, as well– otherwise it’s just shallow, bad testing. - TW: This problem is a direct result of divorcing test design from execution. And novice testers simply don't have the skills yet to "read between the lines" of the script to see the intent.

Michael Bolton in Why Checking Is Not Enough
  • But even when we’re working on the best imaginable teams in the best-managed projects, as soon as we begin to test, we begin immediately to discover things that no one—neither testers, designers, programmers, nor product owner—had anticipated or considered before testing revealed them.
  • It’s important not to confuse checks with oracles. An oracle is a principle or mechanism by which we recognize a problem. A check is a mechanism, an observation linked to a decision rule.
  • Testing is not governed by rules; it is governed by heuristics that, to be applied appropriately, require sapient awareness and judgement.
  • A passing check doesn’t tell us that the product is acceptable. At best, a check that doesn’t pass suggests that there is a problem in the product that might make it unacceptable.
  • Yet not even testing is about telling people that the product is acceptable. - TW: I've been trying to promote this concept for at least a year in my own evolutionary understanding of my craft. You can expect my own blogging on this topic soon.
  • Testing is about investigating the product to reveal knowledge that informs the acceptability decision.

Michael Bolton in What Exploratory Testing Is Not (Part 3):  Tool-Free Testing
  • People often make a distinction between “automated” and “exploratory” testing. - TW: This is the first sentence and BAM! did it cause a paradigm shift for me!
  • That traditional view of test automation focuses on performing checks, but that’s not the only way in which automation can help testing. In the Rapid Software Testing class, James Bach and I suggest a more expansive view of test automation: any use of tools to support testing.

Anne-Marie Charrett in Please don't hire me
  • If you want me to break your code - TW: This set my brain going for at least two hours thinking about the implications of this. It's a great point though: testers don't break code. Look forward to more on this in the future.

I hope you find some wisdom in this too.

Thursday, March 15, 2012

Counting My M&M's

Like millions of other Americans, I keep a stash of munchies in my desk drawer at work. I find that a little treat in the afternoon is a highly effective way to keep me focused on my work. One of my snacks, interestingly (or oddly), has evolved into a bit of a ritual.

It all started back in 2010. I was at my local Dominick's before work picking up a donut, something for lunch, and stock for the stash. I happened down the candy aisle and saw that they had the jumbo bags of M&M's on sale. I'm always a sucker for the lowest price per unit so I couldn't resist. Fast forward to snack time that afternoon, and a funny little thought popped into my head: "Are there the same number of each color of M&M's in the package?" I decided to find out, just for the heck of it.


The jumbo sack of M&M's is 42 ounces so there are two things to keep in mind. 1. I do not eat the entire thing in one helping. 2. I was not about to dump out the whole sack and count them right then and there. Instead, I started a spreadsheet. Whenever I decided to have a handful of M&M's, I would first sort them by color, count them, and then record the stats in my spreadsheet. It didn't take too long for multiple passersby to notice and remark upon my method for eating M&M's. Suddenly I had a reputation that needed to be maintained! Never again could I reach into a sack of M&M's and NOT count each color.

Two years later, I'm now on my fourth sack of M&M's. I don't eat them every day and I don't always have them in my stash. But when I do, I keep statistics. Today, I am pleased to announce the public sharing of this information on my M&M Counter page!

The M&M Counter page has several fun features:
  • The graph at the top shows the combined stats for all of the M&M's that I have consumed over the years.
  • Then, I provide a graph and chart, updated in real time, that show the stats for the sack that I'm currently in the process of devouring.
  • Finally, I have provided a form, for what I think is the most exciting feature of all, to allow YOU to join in the fun. You are welcomed and encouraged to contribute your own counts. Currently, the form only supports the regular candy colors. Pick the closest option when selecting the size of the package. Soon, I will include a graph or two to display the user-submitted data.
So yeah, it's a little weird. There's nothing scientific about this. There's no ploy to identify certain colors as subjects of discrimination. It's just for fun. Enjoy!

Paper Airplane Science Experiment

When I was in elementary school, we would have a science fair every other year and all of the students would do some sort of science project. In fifth grade I simulated acid rain and in seventh grade I made recycled paper. I didn't find the acid rain experiment to be fun at all and it had been the only thing I could think of. The paper experiment was more fun because I got to make a big mess but it still didn't engage me, even though it was something I really wanted to do. So for you parents out there trying to help your kid find something different to do for a science experiment, here's an idea: PAPER AIRPLANES.


Lots of little kids like making paper airplanes and if you think about it, it's super-simple to turn it into a science experiment. There are a number of different approaches that you can take. The idea when doing experiments is that you want to identify all of the different variables and then change just ONE. In any approach, you'll form hypotheses around the same questions:
  1. Which airplane flies the farthest?
  2. Which airplane flies the straightest?
Experiment Method 1: Airplane design
Go online (or to the library) and find instructions on how to create several different designs of paper airplanes. Then construct them using the SAME size and type of paper. If you want, you could measure the wing dimensions and area and use those numbers to form your hypotheses.

Experiment Method 2: Airplane material
Pick a single airplane design and change up the type of paper used. Make sure the size of the paper doesn't change though. You could use construction paper, aluminum foil, printer paper, wrapping paper, cardstock, or whatever else you can think of. Here you'll probably want to weigh each airplane and use that for forming hypotheses.

Experiment Method 3: Airplane size
Again, pick a single airplane design but this time change up the size of the paper while using the same type of paper. You may need to cut down large pieces of paper to make smaller planes. I say, find the biggest cut of paper you can find. Most schools have large rolls of paper for making big posters or bulletin boards. For each variation, make sure that the length-to-width proportion of the paper remains constant.

Experiment Method 4: Airplane proportions
Pick a single airplane design and type of paper. This time, vary the dimensions of the paper, while maintaining the same number of square inches. For example: (12x10) and (15x8) both have 120 square inches of area. You could also do (12x10) and (10x12) - rotating the axis of the plane. In doing this type of experiment, you should take lots of measurements in forming your hypotheses: wing area, plane length, wing width, plane height.

Conducting the experiment
Reduce the potential for errors:
  • I would make three copies of each plane to try to account for any variations there might be in the construction. Also, practice making the plane first and then make new ones for the experiment.
  • Throw and measure each plane at least five times. The more times you throw it, the more accurate your data will be, unless, of course, the plane gets really beat up from doing a nosedive into the floor. You can always disregard measurements (so you take 10 but only use the first 5 because you see that after that the data started to skew from bent noses).
Location selection:
  • You should try to find a wide open place to throw the airplanes. The school gymnasium would be a great place because it's wide open and there won't be any wind.
Location preparation:
  • Place a mark on the floor from where you will launch the planes each time (Use gaffers tape in the gymnasium so you don't gunk up the floors!). Mark out a straight line from that point. You might want to put a target on the wall or something that is in line with the starting point and straight line. When you throw the planes, aim for the target and that will help ensure that you throw consistently.
Throwing:
  • Again, aim for the target and make sure that your throwing arm is lined up with your target.
Measuring:
  • Measure the distance from the throwing point to where the plane LANDED. Have someone there to help watch in case the planes want to slide on the floor.
  • Measure the angle between the center line and where the plane lands.
One Last Idea:
You could also experiment with airplane surface friction. I don't know how you would MEASURE friction though so I'm just floating this out there for you. Using glossy inkjet photo paper and keeping the paper size and airplane design the same, make one plane with the glossy side out and one plane with the back side out.

Share Your Results
If you decide to try this experiment in one form or another, please share your results. I would love to hear what you discover!

Wednesday, March 14, 2012

Editor Fail at Yahoo! News

While browsing the news at Yahoo! over my lunch break on my way to read Dear Abby, I noticed this rather prominent error. Don't rely on spell checker!

Tuesday, March 13, 2012

Predator II: Just Die Already!

What do they do in Hollywood if a movie does well at the box office? Make a sequel and use the success of the first movie to milk consumers for more money. Predator did well, earning almost $100M, so naturally, Predator II came along three years later. Sequels usually have a common relationship with the original. In the first movie, the hero, after great struggle, finally kills the villain, and then the sequel comes along and somehow the villain is back to life causing even more havoc. I see that and get to thinking, "Gosh darn! Why won't that thing JUST DIE ALREADY!?" I ask that question about software defects often because oftentimes, defects have sequels too (and so should blogs)!

Despite my best efforts - thorough coverage, testing every test case I can think of, making sure everything WORKS - out of nowhere pops up some bug that I thought was squashed weeks ago! Sometimes it doesn't matter how big your muscles are, how many bullets you have in your gun, or how many other people have their guns pointed at the same monster, they still come back to life.

This can be caused by any number of things. It could be laziness on the part of the tester (or the developer) but I don't think that's usually the case. Here are some other causes:
  1. Unclear requirements
  2. Inexperience
  3. Lack of time
  4. Introduction of new/altered code
  5. Using [testing] the software in a way that it wasn't previously used
That last one is a big ball of mud - "WHY DIDN'T YOU TEST IT THAT WAY BEFORE???" See causes 1-4. The truth is, software is extremely complex and it's impossible to account for every scenario. As a tester, you must know how to prioritize and how to look for the scenarios that are most likely to occur and those which have the gravest consequences because there just isn't time/budget/people-power to do it all.

When defects do reoccur, use them as a learning catalyst. Was there an obvious breakdown in the development or testing methodology? Is this a "new" method of execution that should be noted for the future and applied to other scenarios? Is there a different testing technique or tool that might have been able to expose that the defect was not fully resolved earlier?

Old defects will continue to pop up. Don't be discouraged but remain vigilant. Keep sharpening your skills and be ready for the enemy to return, because he always does, sometimes with Aliens.

Please study for my interview

I recently volunteered myself into the position of reviewing resumes, identifying candidates, and conducting first-round interviews for a QA position at my company. This has been a real learning experience for me as I've garnered a better understanding of the hiring process; but more so, it has been an eye-opening experience. The great majority of applicants in the field are ROBOTS, figuratively speaking. Requirements get turned into scripts and that's how you test. As long as you have the right process, the right charts, the right formula, that's all you need.

I disagree. Don't get me wrong, I want someone that can transform requirements into test scripts, but what I really want is someone that can THINK. Someone that can test what's not in the requirements document. To quote Anne-Marie Charrett, I want to hire someone that says, "Don't hire me if you want perfect software. Don't hire me if you're looking for a tester to only "check" if things look ok."

For my interviews, I developed a standard list of questions to ask. Each question was carefully thought out and has a specific purpose for being asked. Here's my list:
  1. How did you get into software testing?
  2. What do you like about testing?
  3. What are your frustrations with testing and how do you deal with them?
    1. For example: How do you deal with being the resident naysayer?
    2. For example: How do you deal with defects that just never seem to get fixed?
  4. What comes to mind when you hear the term ‘Quality Assurance’?
  5. Compare and contrast automated and manual testing.
  6. Compare and contrast scripted and exploratory testing.
  7. How confident are you in your ability to deliver defect free software?
  8. What essential information should be included in a defect report?
  9. How do you determine if something is working correctly when you have no documentation?
  10. What’s your experience with Agile?
  11. How do you sharpen your testing skills?
So there you go. If you're ever on the other side of the table being interviewed by me, nothing I ask should catch you by surprise. In fact, I encourage you to study for my interview. However, for now, I'm not going to explain any of my rationale; you'll have to come back for that later. For those of you that conduct interviews yourselves, is there anything you think I should add? Please share in the comments.

Monday, March 12, 2012

Using XQuery in MS SQL for Novices Part 2

How to use the value() function

Just the other day I started the conversation about how to use XQuery functions in SQL to pull specific XML data out of a table. It's time now for Part 2.

In the first discussion, I explained how to use the query() function. It's a very nice function because it returns both the XML nodes and the data within them. However, if all you want is the specific data inside a node, having the node tags displayed in every result can make the output more difficult to read. For this purpose we can use the value() function.

The value() function is ever-eager to throw the error "'value()' requires a singleton (or empty sequence)". It's also eager to throw the error "The value function requires 2 argument(s)." Obviously, these errors occur when we don't provide all of the necessary information.

The two arguments required by the value() function are:
  1. The reference to the node
  2. The SQL data type in which to return the extracted information
The requirement of a singleton tripped me up at first, but here's the gist: the path expression could match more than one node in the XML, while value() must return exactly one value. It is solved by appending a number one in brackets - [1] - to the end of the first argument, which selects just the first match. Put everything inside of the single quotes, with the path itself wrapped in parentheses.

This is how it looks:
SELECT XMLColumn.value('(//XML-Node-Name)[1]', 'DataType(#)') FROM Table
Don't forget to add a comma between the arguments!
 
We can further tidy up the results by aliasing the column (assigning it a custom name of our choice) and using the exists() function.
SELECT XMLColumn.value('(//XML-Node-Name)[1]', 'DataType(#)') 'Alias Column Name' FROM Table WHERE XMLColumn.exists('//XML-Node-Name') = 1
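To make that more concrete, here's a minimal sketch against a hypothetical Orders table with an XML column named OrderXml that contains a <CustomerName> node (all of these names are made up just for illustration):
SELECT OrderXml.value('(//CustomerName)[1]', 'varchar(100)') 'Customer Name'
FROM Orders
WHERE OrderXml.exists('//CustomerName') = 1
Each row of the results would then show just the text stored in the node, under a tidy "Customer Name" column, with no XML tags in sight.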

Sunday, March 11, 2012

What should be the easiest archaeological dig in the world

When I entered the field of software testing, the last thing on my mind was archaeology. Software testing is about the future whereas archaeology is about the past. Technology is clean and non-physical but archaeology is dirty and labor-intensive. However, the two fields aren't all that different. A good software tester is going to thrive on artifacts, but not the kind buried in the ruins of ancient civilizations.

In software projects, an artifact is any piece of documentation that relates to the project:

  • RFP
  • RFP Response
  • SOW
  • Project Plan
  • Requirements
  • Design comps
  • Story boards
  • User personas
  • Test scripts
  • Defect reports
  • Meeting summaries
  • etc.
The artifacts mentioned above help software testers the same way bones help paleontologists. Both are vital in the acquisition of knowledge. Testers need to have a thirst for project knowledge. How can one assert if the product works as intended without having an authoritative source? Regardless of approach, the more the tester knows about how the software is supposed to work, the more thoroughly they can test through broader coverage and more intricate scenarios.

When projects don't have a decent amount of documentation, you're asking for scope creep and unsatisfied client expectations. Testers, get dirty and dig into the documentation. Insist on having access to it and demand it early. Study it thoroughly. Use it to help design your tests. If the documentation doesn't exist, consult the project experts (well, consult them anyway) and then write it down. Once it's written down, it can't be disputed - it might be changed later but that can happen to any document. Moral of the story: use everything at your disposal to learn about the project.

Friday, March 9, 2012

Using XQuery in MS SQL for Novices Part 1

A recent project at work presented me with a new challenge. Generally in a database, at least the ones that I've had to work with, each different type of data is stored in its own column. Well, the project currently under test downloads information from a third party system in the form of an XML file. Even though we don't use every bit of data provided by the third party, we still want to preserve it in its original format. Therefore we store the entire XML string in the database.

When XML data is stored in a database, there are at least a couple different data types that can be used.
  1. It could be stored as just regular old text using the varchar data type. In that case whatever program is using the data has to convert it back into XML in order to use it. The Umbraco content management system likes to do it this way.
  2. On the other hand, it could be stored natively using the XML data type, which is what our system does. Storing the data AS XML presents some interesting challenges as well as many opportunities when it comes to digging for information within the XML string itself -- most of which I'm still figuring out. For today, I'd like to start with just the basics.
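For reference, a table with a native XML column might be defined something like this minimal sketch (the Orders table and OrderXml column are hypothetical names used just for illustration):
CREATE TABLE Orders (
    OrderID int IDENTITY(1,1) PRIMARY KEY,
    OrderXml xml NOT NULL
);
The examples below assume a column of that xml data type.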
Doing It Old School
If all you're trying to do is find the rows where a specific series of characters occurs, you can search just like you always do... almost. 
SELECT * FROM Table WHERE XMLColumn LIKE '%key-text%'
This traditional query will throw an error faster than a lazy developer's code. Instead, you need to change data types on the fly:
SELECT * FROM Table
WHERE CAST(XMLColumn AS varchar(max)) LIKE '%key-text%'
So to reiterate, this query uses the CAST command to convert the XML column into a text data type.

Being All Fancy About It
Like I said, the old school works fine if you're just trying to identify entire records in the table that match your criteria. If you're just looking for specific nodes in the XML data, the approach above is NOT very efficient. You can harness the power of the XML data type to do things much more quickly! Since v.2005, MS SQL Server has supported XQuery. I'm not a programmer so I don't yet understand the full power that this yields. I've found, however, that the existing help on the internet was not written for people like me so here's what I've learned so far:

STEP 1: Using query()
Assuming that we just want to find the data stored in a specific XML node, we can use the query() function.
SELECT XMLColumn.query('//XML-Node-Name') FROM Table
A couple of notes about what's going on there:
  1. query() is a function. Inside of the function is one or more arguments (RAWR! J/K), which are written inside of the parentheses. Each argument must be enclosed in single quotes and multiple arguments are comma separated. The query() function will accept ONE argument.
  2. The double slashes // at the start of the argument tell the system to find the node anywhere in the XML, no matter where it sits in the hierarchy of parent and child nodes.
  3. You need to put the goods of the function inside of the single quotes.
  4. The node name is CASE SENSITIVE.
  5. Don't capitalize the function: e.g. type "query" not "Query".
When this query runs, unless you add a WHERE clause, it's going to return a result for every single row in the table, including the rows where the specified node does not exist. If the node does not exist for that row, there will be an empty cell in the results table. If the node does exist, the data returned will look like this:
<xml-node-name>XML data stored in node</xml-node-name>
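As a quick made-up example, using the hypothetical Orders table and OrderXml column from above: if a row's OrderXml value contained <Order><CustomerName>Jane Doe</CustomerName></Order>, then a sketch like
SELECT OrderXml.query('//CustomerName') FROM Orders
would return <CustomerName>Jane Doe</CustomerName> for that row, and an empty cell for any row without a <CustomerName> node.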
STEP 2: Using exists()
Because using query() alone can be a little messy, returning empty rows, we can use another function that checks YES or NO whether something exists in the XML data.
SELECT XMLColumn.query('//XML-Node-Name') FROM Table WHERE XMLColumn.exists('//XML-Node-Name') = 1
This query will only return the rows where the specified node exists, without any empty rows. A couple of notes about what's going on here:

  1. I used the same argument for both functions because if the node that I'm looking for doesn't exist, that row doesn't need to be returned.
  2. The = 1 part is telling it to evaluate to TRUE (e.g. Yes, it exists) whereas, if I set it to = 0 it would be the opposite (e.g. No, it doesn't exist). Setting it to 0 would only return the blank rows in this example.
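Putting it all together with the same hypothetical Orders table and OrderXml column from earlier (again, made-up names for illustration), the cleaned-up sketch looks like this:
SELECT OrderXml.query('//CustomerName')
FROM Orders
WHERE OrderXml.exists('//CustomerName') = 1
Only the rows that actually contain a <CustomerName> node come back, and each result shows the full node along with its data.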
Please let me know in the comments if you've found this useful. Look for more posts to come as I learn to harness more power of the XQuery!

Helpful but technical links (also probably the first ones you came across when googling "XQuery SQL Server"):