Weekly Testing Tips
by Cordell Vail, CSTE


Copyright 2006-2007 By Cordell Vail - All Rights Reserved


This is an attempt by a grass-roots software tester to "GROW THE TESTING PROCESS" where I work, and to do all in my power to influence the entire Software Testing / Quality Assurance / Quality Control profession. I have taken the liberty of hanging a "testing tips" message on the outer wall of my cubicle each week. As other testers, developers, and managers walk past, they stop and read it. It seems to have had a very positive impact on our testing effort. Now I am posting the tips here on this web page to share them with you, in hopes that they will help you improve the testing processes where you work. Together, we can help improve the whole profession!

If there are tips posted here that were copyrighted and proper credit is not given, I apologize. Sometimes things come to me by email from friends and I have no way to know the source. If one of your sayings is posted here without proper credit, please let me know and I will add your copyright notice and contact information or, if you desire, remove it from my web page.







Stopping GOSSIP in the work place!

In the seminar I gave for SQuAD in Denver on 17 Aug 2007, titled "The Art Of Building Relationships: The Key To Having Influence At Work", we were talking about gossip and the destructive effect it can have on your team or anywhere else in the work place. Ann Marie Kjerland raised her hand and made the most profound comment on that subject that I have ever heard. Here is what she suggested:

Mary comes to you and starts spouting off about her co-worker Tom.

Don't respond to Mary's gossip. Take a second and then say, "Tom thinks so highly of you, Mary."

At that point Mary is stunned by the comment, feels like a real louse, and walks away.


Ann Marie, that is the most profound response to gossip I have ever heard. You should be giving seminars. "TOM THINKS SO HIGHLY OF YOU." That is so profound. Thank you, Ann Marie, for sharing it with us, and for giving me permission to share it here with others.





ATTENTION MANAGEMENT:

On Aug 16, 2007 I was on a plane to Denver to give a seminar on creating excellence in the work place. I sat by Suz Curry. She started to ask me about my seminars. I like to tell people who know very little about the software testing profession about my seminars, because I find they make some of the most profound observations. This was one of those occasions. I was telling her that part of the seminar was about conflict resolution in the work place. After listening to me talk for a few minutes about how hard it sometimes is for managers to get their employees to make suggestions, especially about how the manager could improve, here is what Suz suggested.

If you are having problems with your team and can't seem to get them to talk to you, try an anonymous suggestion box where team members can put their ideas, positive suggestions, and comments. That way they don't have to fear reprimand for pointing out things that you are doing as a manager that they feel could be improved. It is also a good way to get the team thinking of positive things that can improve processes, if you encourage them to fill the box with positive suggestions and ideas rather than problems they think need to be solved. Anyone can complain. Team builders make suggestions on how to improve processes.

Maybe that idea is not totally new, but I think it is a wonderful suggestion, and many a manager would be wise to take her advice. It might just improve your management style and give you lots of new ideas on how to improve processes that you might not otherwise have thought of. Thanks for the insightful suggestion, Suz....





THE POWER OF INFLUENCE













WHITE BOX TESTING QUESTIONS ON THE TEST ENGINEER CERTIFICATION TESTS:

Certification tests always include WHITE BOX testing questions as well as BLACK BOX testing questions. One question that always seems to be there is the definition of CYCLOMATIC COMPLEXITY. Ever try to find a layman's definition of it? Normally they say it is a measure of the "linearly independent paths" through a program. If you are not a developer, that can be too technical. In fact, I asked some programmers and they did not know what "linearly independent paths" meant either. So here is a little research from the Internet that may help you with that test question (note the last definition... it is for the layman):

http://en.wikipedia.org/wiki/Cyclomatic_complexity
Cyclomatic complexity is a software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.

http://www.onjava.com/pub/a/onjava/2004/06/16/ccunittest.html
Cyclomatic complexity essentially represents the number of paths through a particular section of code, which in object-oriented languages applies to methods.

http://www.codeproject.com/dotnet/Cyclomatic_Complexity.asp
Cyclomatic Code Complexity: This measure provides a single ordinal number that can be compared to the complexity of other programs. It is one of the most widely accepted static software metrics and is intended to be independent of language and language format. Code complexity is a measure of the number of linearly independent paths through a program module and is calculated by counting the number of decision points found in the code (if, else, do, while, throw, catch, return, break, etc.):

CC = E - N + p

where
CC = Cyclomatic Complexity
E = the number of edges of the graph
N = the number of nodes of the graph
p = the number of connected components

From a layman’s perspective the above equation can be pretty daunting to comprehend. Fortunately there is a simpler equation which is easier to understand and implement by following the guidelines shown below:
Start with 1 for a straight path through the routine.
Add 1 for each of the following keywords or their equivalent: if, while, repeat, for, and, or.
Add 1 for each case in a switch statement.

http://javaboutique.internet.com/tutorials/metrics/
Cyclomatic Complexity (CC) = number of decision points +1
Decision points are conditional statements such as if/else, while etc.
The following table summarizes the impact of CC values in terms of testing and maintenance of the code:

CC Value Risk
1-10 Low risk program
11-20 Moderate risk
21-50 High risk
>50 Most complex and highly unstable method

From the complexity perspective of a program, decision points—such as if-else, etc.—are not the only factor to consider. Logical operations such as AND, OR, etc. also impact the complexity of the program.
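To make that counting rule concrete, here is a little made-up Python routine (the function, its parameters, and its numbers are invented just for illustration, not taken from any real system). Start with 1 for the straight path, then add 1 for each if, for, and, or:

def shipping_cost(weight, is_rush, country):
    cost = 5.0                       # straight path  -> CC starts at 1
    if weight > 20:                  # if             -> +1 (CC = 2)
        cost += 10.0
    if is_rush and weight <= 20:     # if + and       -> +2 (CC = 4)
        cost += 7.5
    for surcharge in (1.0, 2.5):     # for            -> +1 (CC = 5)
        cost += surcharge
    if country != "US" or is_rush:   # if + or        -> +2 (CC = 7)
        cost += 3.0
    return cost

# Simplified cyclomatic complexity of shipping_cost = 7,
# which lands in the "low risk" band of the table above.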

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OK... now you should not miss that white box question on any certification exam....
And if you want to learn more, here are some excellent study guides to learn white box terms for the ISTQB certification exams:
www.astqb.org/documents/ISTQB_Exam_Guidelines_2005.pdf
www.istqb.org/fileadmin/media/SyllabusFoundation.pdf
www.istqb.org/fileadmin/media/SyllabusAdvanced.pdf

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Process Engineering Defined:

POLICY: Answers the question "WHY". Explains why the process is important, and establishes measurable goals and objectives.

STANDARD: Answers the question "WHAT". States what we will do to satisfy the objectives stated in the policy.

PROCEDURE: Answers the question "HOW". Describes how we will satisfy the intent of the standard. Procedures represent the tasks that are performed.

Tim Pelland
QAI World Wide
At QAI Canada 2006 International Conference, Toronto, Canada





GREAT DEFINITIONS OF LEVELS OF SEVERITY FROM AHAMAD'S BLOG AT THIS WEB PAGE:
http://testingsoftware.blogspot.com/2005/09/defect-severity-and-defet-priority.html

Defect Severity and Defect Priority

This document defines the defect Severity scale for determining defect criticality and the associated defect Priority levels to be assigned to errors found in software. It is a scale which can be easily adapted to other automated test management tools.

ANSI/IEEE Std 729-1983 Glossary of Software Engineering Terminology defines Criticality as,
"A classification of a software error or fault based on an evaluation of the degree of impact that error or fault on the development or operation of a system (often used to determine whether or when a fault will be corrected)."

The severity framework for assigning defect criticality that has proven most useful in actual testing practice is a five level scale. The criticality associated with each level is based on the answers to several questions.

First, it must be determined if the defect resulted in a system failure. ANSI/IEEE Std 729-1983 defines a failure as,
"The termination of the ability of a functional unit to perform its required function."

Second, the probability of failure recovery must be determined. ANSI/IEEE 729-1983 defines failure recovery as,
"The return of a system to a reliable operating state after failure."

Third, it must be determined if the system can do this on its own or if remedial measures must be implemented in order to return the system to reliable operation.

Fourth, it must be determined if the system can operate reliably with the defect present if it is not manifested as a failure.

Fifth, it must be determined if the defect should or should not be repaired.
The following five-level scale of defect criticality addresses these questions.


The five Levels are:

1. Critical

2. Major

3. Average

4. Minor

5. Exception

1. Critical - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.

2. Major - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives which will yield the desired result.

3. Average - The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.

4. Minor - The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.

5. Exception - The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred or even ignored.

In addition to the defect severity levels defined above, a defect priority level can be used with the severity categories to determine the immediacy of repair.


A five-level repair priority scale has also been used in common testing practice. The levels are:

1. Resolve Immediately

2. Give High Attention

3. Normal Queue

4. Low Priority

5. Defer


1. Resolve Immediately - Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used until the repair has been effected.

2. Give High Attention - The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.

3. Normal Queue - The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.

4. Low Priority - The defect is an irritant which should be repaired, but which can be repaired after more serious defects have been fixed.

5. Defer - The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved at all.
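If you track defects in your own tool or test scripts, here is one small sketch in Python of how the two five-level scales above might be recorded (the enum names follow the lists above; the sample defect record is made up just for illustration):

from enum import IntEnum

class Severity(IntEnum):
    CRITICAL = 1
    MAJOR = 2
    AVERAGE = 3
    MINOR = 4
    EXCEPTION = 5

class Priority(IntEnum):
    RESOLVE_IMMEDIATELY = 1
    GIVE_HIGH_ATTENTION = 2
    NORMAL_QUEUE = 3
    LOW_PRIORITY = 4
    DEFER = 5

# A hypothetical defect record pairing the two scales:
defect = {"id": "DEF-123",
          "severity": Severity.AVERAGE,        # wrong results, but no failure
          "priority": Priority.NORMAL_QUEUE}   # fix in the next build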















Most everything you ever wanted to learn about testing can be found at this web page:
www.testingeducation.org





If you don't think outsourcing of testing projects will be a problem in your future, check out this chart presented by Rex Black at the SQuAD Conference in Denver, CO in Feb. 2006: www.rexblackconsulting.com


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NOTE BY CORDELL VAIL: Data entry workers in the USA get about $10 an hour (about $20,000 a year). In Jamaica the data entry workers there get about 90 cents an hour (about $3,000 a year). In Beijing, China the data entry workers get about 30 cents an hour (about $1,000 a year). When the Chinese labor market is unleashed on the world, we will all be getting the same shock treatment as the "BUGGY WHIP MAKER" did in the early 1900's when cars were introduced. Now is the time to start thinking about those kinds of things as a future in your career, not after you are replaced by "OUTSOURCING".





"You can't manage what you don't measure"
Kerry K. Killinger
Chairman, President, and Chief Executive Officer
Washington Mutual Bank










Personal certification as a test engineer is not everything,
but it is way ahead of whatever is in second place!
Cordell Vail


























Some time back, IBM Canada Ltd. of Markham, Ont., ordered some parts from a new supplier in Japan. The company noted in its order that acceptable quality allowed for 1.5 per cent defective parts (a fairly high standard in North America at the time).

The Japanese sent the order, with a few parts packaged separately in plastic.

The accompanying letter said: "We don't know why you want 1.5 per cent defective parts, but for your convenience, we've packed them separately."

- Japanese Quality, from an article in The Toronto Globe and Mail




We need to "transition out of the corrective action mode and into the continuous improvement mode".
Vincent C. Guess, CMII for Business Process Infrastructure, page viii




          PLANNING
                    DOCUMENTATION
                              EXECUTION
                                        FOLLOW UP
                                                 
BRINGS
                                                            PREDICTABLE
                                                                      REPEATABLE
                                                                                SUCCESS

                                                                                                                                  Cordell Vail





Automated regression testing should be planned during the requirements-gathering and design phases of the software development life cycle. In practice, automated regression testing is normally planned, and scripts written, after the application has been released to production and it has been decided that some regression tests need to be run.


From my presentation on How To Improve Processes, given 16 Nov 2004 to WSIPC in Everett, WA








"The greatest value of a defect is to learn from it.”
Randy Rice - http://www.riceconsulting.com




HOW DO YOU STOP GETTING SO MUCH JUNK EMAIL FROM YOUR WEB PAGE?

If you put your email address on your web page, you are going to get killed with junk email. Junk email dealers have COMPUTER ROBOTS that read the text of every web page in existence, look for text with an @ and .com or .net and so on, and then put it on their lists. I have found an easy way to get around that. Instead of putting your email address in text, take a picture of it and put it on the web page as a graphic image. Then people can still see it but the ROBOTS can't. For example, try to click on my email address below:



It is just a picture. However, the URL above for Randy Rice's web page is an actual LINK you can click on. I never make the email address on my web pages a live HTML mailto: link. I just don't like junk email that much.





Have you ever had to figure out all the possible combinations of options on a screen or report so you can test every possible combination? Here is an example of how you can find every possible combination of 2 check boxes that can each be either checked or unchecked (Y or N). It is called a DECISION TABLE. With 2 options there are 2x2=4 combinations. Of course, if each option could instead be Y, N, or blank, then one option gives 3 combinations, two give 3x3=9, three give 3x3x3=27, and so on. It gets exponentially large very quickly if you have very many options.

My thanks to Louise Temples and Sue Vail Howard for helping me figure this out....
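If you would rather let the computer build the decision table for you, a few lines of Python will list every combination (the option names below are just placeholders, not from any real screen):

from itertools import product

options = ["Check box A", "Check box B"]   # add more options as needed
states = ["Y", "N"]                        # use ["Y", "N", "Blank"] for three states

for row, combo in enumerate(product(states, repeat=len(options)), start=1):
    print(row, dict(zip(options, combo)))

# 2 options with 2 states each = 4 rows; 3 options with 3 states each = 27 rows, and so on.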




In testing we seem to always have trouble using the words AFFECT and EFFECT correctly.

Is this the correct usage?

How will adding tables AFFECT our timeline?
What will the EFFECT be on our time line if we add tables?

Andrew Werth, who works with me at WSIPC, came up with a little test that will help you know the right word to use.

So here is his little test:

EFFECT is a noun
AFFECT is an action verb

Normally, if you can substitute the word RESULT or RESULTS, then the word should be EFFECT. If it is awkward to use RESULT or RESULTS in the sentence, then it probably should be AFFECT.

Example:

How will adding tables AFFECT (RESULT) our timeline? (Does not make sense, so we should use AFFECT)
What will the EFFECT (RESULTS) be on our timeline if we add tables? (Sounds OK, so we should use EFFECT)


Does that help? Thanks Andy.....





Here are some terms to help you know just how fast "FAST" is (information taken from http://whatis.techtarget.com , http://www.free-definition.com and http://www.physlink.com/Education/AskExperts/ae281.cfm ):

In Network testing, time is measured in milliseconds (one thousandth of a second). Does that seem fast? Well if there are 6000 send and receive commands on the network and each takes a millisecond, that is a 6 second delay. Would you wait that long for a web page to be displayed? Probably not!

So from that you can see in the computer world that there needs to be other clocked times that are much faster.

(This definition follows U.S. usage in which a billion is a thousand million and a trillion is a 1 followed by 12 zeros.)

Millisecond (ms or msec) - one thousandth (10 to the -3rd power) of a second; commonly used in measuring the time to read from or write to a hard disk or a CD-ROM player, or to measure packet travel time on the Internet.
Microsecond (us, or the Greek letter mu plus s) - one millionth (10 to the -6th power) of a second.
Nanosecond (ns or nsec) - one billionth (10 to the -9th power) of a second; a common measurement of read or write access time to random access memory (RAM).
Picosecond - one trillionth (10 to the -12th power) of a second, or one millionth of a microsecond.
Femtosecond - one quadrillionth (10 to the -15th power) of a second, or one millionth of a nanosecond; a measurement sometimes used in laser technology.
Attosecond - one quintillionth (10 to the -18th power) of a second; a term used in photon research.
Yoctosecond - one septillionth (10 to the -24th power) of a second.

and if you can think this small.......

Planck length - the scale at which classical ideas about gravity and space-time cease to be valid and quantum effects dominate. This is the 'quantum of length', the smallest measurement of length with any meaning, roughly equal to 1.6 x 10 to the -35 m, or about 10 to the -20 times the size of a proton. The Planck time is the time it would take a photon traveling at the speed of light to cross a distance equal to the Planck length. This is the 'quantum of time', the smallest measurement of time that has any meaning, and is equal to about 10 to the -43 seconds. No smaller division of time has any meaning. Within the framework of the laws of physics as we understand them today, we can say only that the universe came into existence when it already had an age of 10 to the -43 seconds.

("We can say only".... Some times scientists just make me smile.... But at least now you know how fast I can test an application and find a bug in it.... ha ha ha )











                        and how big is that cable in front of the power shovel used to pull it when it gets stuck?





A good tester will look both ways when crossing a one-way street
Cem Kaner, Jack Falk, Hung Quoc Nguyen - Testing Computer Software





THE ISSUE:
You have learned all these new testing processes you want to share with management.

THE RESULT:

(Used with permission of Dan Reynolds - cartoonist.)
You can see more of Dan's work in Reader's Digest, in his books on amazon.com, or visit his web page at:
www.reynoldsunwrapped.com

The right software, delivered defect free, on time and on cost, every time
SEI Capability Maturity Model





The two key rules for good testers are
Rule #1
Always be thorough and meticulous
Rule #2
When tempted to cut corners, refer to rule #1





A good tester is not the one who finds the most bugs, but rather the one who helps get the most bugs fixed.
Cem Kaner, Jack Falk, Hung Quoc Nguyen - Testing Computer Software









In God we trust. All others we test

This is the trademark of Medical Software, Inc. - Jim Kandler, CSQE, CSTE
www.medicalsoftwr.com

(shared here with Jim's permission. Thanks Jim)





Practices can be repeated. If you do not repeat an activity, there is no reason to improve it.
SEI Capability Maturity Model





Testers are ordinary people doing extraordinary things to make sure the software delivered is error free
Cordell Vail





Quality work will be long remembered after speed has been forgotten
Brian Tracy





Policies, practices and procedures commit the organization to implementing and performing consistently.
SEI Capability Maturity Model





Testing effectiveness is finding bugs that users care about and that developers will fix.
James A. Whittaker




In testing, JUST IN TIME, most of the time, is too late
Cordell Vail





Test sets that have been "designed" from the start as reusable "products" with the goal of effective software measurement permit testing to be executed over and over again during the life cycle and pay for themselves many times over.... Rather than "code and test", we should be saying "test and code".
Bill Hazel - Software Quality Engineering




TERMS FOR TESTING PROCESSES
THAT ARE OFTEN CONFUSED,
MISUSED OR USED AS SYNONYMS:

Stress Testing
Tests the server – peak volume over a short span of time

Load Testing
Tests the database – the largest load it can handle at one time

Volume Testing
Tests the server and the database – Heavy volumes of data over time (combination of Stress testing and Load testing over time)

Performance Testing
Tests the server – user response time

Contention Testing
Verifies that the server can handle multiple user demands on the same resource (that is, data records or memory).

Bench Mark Testing
Compares your standard to the same standard in other businesses in the industry

Base Line Testing
Setting a standard to be compared to later within your own company





TYPES OF TESTING WE WOULD RATHER NOT SEE

Aggression Testing
If this doesn’t work, I’m gonna kill somebody.

Compression Testing
{}

Confession Testing
Okay, okay, I did program that bug.

Congressional Testing
Are you now, or have you ever been a bug?

Depression Testing
If this doesn’t work, I’m gonna kill myself.

Egression Testing
Uh-oh, a bug... I’m outta here.

Digression Testing
Well, it should work, but let me tell you about my truck.

Expression Testing
#$(%&#*^#$

Obsession Testing
I’ll find this bug if it is the last thing that I do

Repression Testing
It’s not a bug, it’s a missing feature
(I actually had a vendor say that to me recently when I found a major bug in their software -
so that comment is the very reason I placed this list here.... they were supposed to be funny, not true)

Succession Testing
The system is dead. Long live the new system!

Suggestion Testing
Well, it works but wouldn’t it be better if....





TESTING TERMS THAT ARE OFTEN MISUNDERSTOOD

EQUIVALENCE CLASSES
(all test the same thing - same input)

EQUIVALENCE TESTING (Yes or No)
(y or Y vs. n or N, and everything that is not y or Y)

EQUIVALENCE PARTITIONING
(Less than - Between - Greater than)

BOUNDARY VALUE ANALYSIS
(Min +/- 1 and Max +/- 1)

NUMBERS THAT ARE GOOD TO HAVE MEMORIZED
-32767 to 32767 and 0 to 65535
If you don't know what those numbers are you need to do some night reading (smile)
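To make those ideas concrete, here is a small Python sketch of boundary value analysis for a hypothetical field that accepts 0 to 65535 (the function name is mine, invented just for illustration):

def boundary_values(minimum, maximum):
    # Classic boundary value analysis: each boundary, plus and minus one.
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

# Equivalence partitions for a 0..65535 field: below range, in range, above range.
print(boundary_values(0, 65535))   # [-1, 0, 1, 65534, 65535, 65536]

# 0 to 65535 is the unsigned 16-bit range; the other set of numbers in the tip
# is the signed 16-bit range, and the same plus-and-minus-one idea applies there.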





Words worth knowing as a Test Engineer

syn•chro•nous (Runs in the foreground)(Continuous connection)
Function: adjective

1 : happening, existing, or arising at precisely the same time
2 : recurring or operating at exactly the same periods
3 : involving or indicating synchronism
4 a : having the same period; also : having the same period and phase b : GEOSTATIONARY
5 : of, used in, or being digital communication (as between computers) in which a common timing signal is established that dictates when individual bits can be transmitted, in which characters are not individually delimited, and which allows for very high rates of data transfer
6: (digital communication) pertaining to a transmission technique that requires a common clock signal (a timing reference) between the communicating devices in order to coordinate their transmissions
7. <operating system, communications> Two or more processes
that depend upon the occurrences of specific events such as
common timing signals.
8. Occurring at the same time or at the same rate or with a
regular or predictable time relationship or sequence.


asyn•chro•nous (Runs in the background)(Runs independently)
Function: adjective

1 : not synchronous
2 : of, used in, or being digital communication (as between computers) in which there is no timing requirement for transmission and in which the start of each character is individually signaled by the transmitting device
3: (digital communication) pertaining to a transmission technique that does not require a common clock between the communicating devices; timing signals are derived from special characters in the data stream itself [ant: synchronous]
4. <operating system> A process in a multitasking system
whose execution can proceed independently, "in the
background". Other processes may be started before the
asynchronous process has finished.
5. <communications> A communications system in which data
transmission may start at any time and is indicated by a
start bit, e.g. EIA-232. A data byte (or other element
defined by the protocol) ends with a stop bit. A
continuous marking condition (identical to stop bits but not
quantized in time), is then maintained until data resumes.
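Here is how that difference looks in a small Python sketch you might use in a test harness (slow_call is made up; it just stands in for any long-running task such as a report job or a network request):

import threading
import time

def slow_call(name):
    time.sleep(1)                    # stand-in for a network call, report job, etc.
    print(f"{name} finished")

# Synchronous: the caller waits in the foreground until the call returns.
slow_call("synchronous call")
print("reached only after the synchronous call returns")

# Asynchronous: the work runs in the background while the caller keeps going.
worker = threading.Thread(target=slow_call, args=("asynchronous call",))
worker.start()
print("reached immediately, before the asynchronous call finishes")
worker.join()                        # wait at the end so the script exits cleanly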





Seems we are always trying to explain the difference
between QA and QC

Quality Assurance (QA) (Quality Analysts) (Certified Software Quality Analyst) (CSQA)
      Management Staff Function - Evaluating Practices & Procedures - normally done by Quality Assurance Analysts and QA Managers

Quality Control (QC) (Test Engineers) (Certified Software Test Engineer) (CSTE)
      Line Worker or Test Engineer Function - Testing Principles and Concepts - normally done by Quality Control Test Engineers





MORE WORD DEFINITIONS AND CLARIFICATIONS

DEFECT (BUG): An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction. An error in the hardware or software. (A missing feature... that is a joke of course)

FAULT: A manifestation of a defect in the software. A fault, if encountered, may cause a failure.

FAILURE: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

RISK: The probability that a failure will occur, causing a loss. If controls are inadequate to reduce the risk, a vulnerability is created.

RISK ANALYSIS: Analysis of the system's vulnerabilities, combining the loss potential with an estimated rate of occurrence to establish the potential level of damage that may occur.

RISK EXPOSURE: The risk exposure always exists, although the loss may not occur.

THREAT: Something capable of exploiting a vulnerability (a defect).

VULNERABILITY: A flaw or defect that may be exploited by a threat. The flaw, if exposed, would cause the system to operate in a fashion different from its intended use.

CONTROL: Anything that tends to cause the reduction of risk. Controls can accomplish this by reducing the frequency of occurrence.

Now you know





Too little testing is a crime
Too much testing is a sin
William Perry





VERIFICATION: Testing to see if the SYSTEM IS RIGHT
Looking for defects from the developer's perspective - a variance from the specifications - usually using white box testing, unit testing, and reviews. The process of determining whether or not the products of a given phase in the life cycle fulfil a set of established requirements.

VALIDATION: Testing to see if the RIGHT SYSTEM was created
Looking for defects from a customer or user perspective - a variance from what the user wanted - usually using black box testing, system testing, acceptance testing, and walk-throughs. The stage in the software life cycle, at the end of the development process, where the software is evaluated to ensure that it complies with the requirements.

Here is another set of similar definitions that is helpful, from the Software QA/Test Resource Center:
© 1996-2006 by Rick Hower
www.softwareqatest.com/qatfaq1.html#FAQ1_7

What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.

What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading thru the document; most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable in 'Why is it often hard for organizations to get serious about quality assurance?'. Their skill may have low visibility but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.





Quality Assurance Plans (Created by the QA staff - management)
   contain Test Plans (Created by the QC staff - test engineers)
       which contain Test Suites
         which contain Test Scenarios
            which contain Test Cases or Use Cases
               which contain Test Scripts
                  which contain Test Steps
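If you ever need to model that hierarchy in a test management tool or a script, nested data works well; here is a tiny Python sketch (every name in it is a placeholder, not a real plan):

test_plan = {
    "name": "Release 2.1 Test Plan",
    "suites": [{
        "name": "Login suite",
        "scenarios": [{
            "name": "Valid login",
            "cases": [{
                "name": "Standard user logs in",
                "scripts": [{
                    "name": "login_happy_path",
                    "steps": ["open the login page",
                              "enter valid credentials",
                              "verify the home page is shown"],
                }],
            }],
        }],
    }],
}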





If you think you can learn testing in one day, then you have no business testing!
James A. Whittaker




Testers must always remember that the "bugs" are the enemy, not the developers.
It would be good for developers to remember that too when testers show them the bugs.
Cordell Vail





Cave Men Testing The Soccer Is Fun Theory

If you have a copy of this cartoon with the artist's name still on it,
or if you know who the cartoonist is who drew it, would you please send
the name to me. I would like to get permission to use it in one of my presentations.

Please send the cartoonist's name to me at:





If you have other helpful tips related to software testing that you would like to post here, please send them to me. I will put them here with your name and contact information. (A good networking opportunity for you... look at the counter below.)

Please send them to me at:






COPYRIGHT

I grant permission to make digital or hard copies of anything on this web page for personal or classroom use, provided that (a) Copies are not made or distributed for profit or commercial advantage, (b) Copies bear my copyright notice or the copyright notice of the author as noted on this web page. The proper citation for information taken from this web page is "Weekly Testing Tips By Cordell Vail, www.vcaa.com/testengineer/weeklytips.htm (c)". Each item or image that you use from this web page must bear the copyright notice as noted on this web page if not mine or, if you modify the item or image for your own use, the modified version must bear the notice, "Modified information originally from Cordell Vail", and (d) If a large portion of the information you use is derived from the information on this web page, advertisements of that information must include the statement, "Partially based on materials provided by Cordell Vail, www.vcaa.com/testengineer/weeklytips.htm." To copy otherwise, to republish or post on servers, or to distribute to lists requires prior specific permission.

Request permission to republish from Cordell Vail at:








Vail Consulting home page

Last updated 20 Aug 2007