This is an attempt by a grassroots software tester to "GROW THE TESTING PROCESS"
where I work, as well as to do all in my power to have an influence on the entire
Software Testing / Quality Assurance / Quality Control profession. I have taken
the liberty of hanging a "testing tips" message on the outer wall of
my cubicle each week. As other testers, developers, and managers walk past, they
stop and read it. It seems to have had a very positive impact on our testing
effort. Now I am posting the tips here on this web page to share them with you in
hopes that they will help you improve the testing processes where you work.
Together, we can help to improve the whole profession!
If
there are tips posted here that are copyrighted and proper credit is not given,
I apologize. Sometimes things come to me by email from friends and I have no
way to know the source. If one of your sayings is posted here without proper
credit, please let me know and I will add your copyright notice and contact
information or, if you desire, remove it from my web page.
On Aug 16, 2007, I was on a plane going to Denver to give a seminar on Creating
Excellence in the Workplace. I sat by Suz Curry. She started to ask me about
my seminars. I like to tell people who know very little about the software testing
profession about my seminars because I find they make some of the most profound
observations. This was one of those occasions. I was telling her that part of the
seminar was about conflict resolution in the workplace. After listening to
me talk for a few minutes about how hard it sometimes is for managers to help
their employees want to make suggestions, especially about how you as a manager
could improve, here is what Suz suggested.
If you are having problems with your team and don't seem
to be able to get them to talk to you, try having an anonymous suggestion box
where team members can put their ideas, positive suggestions, and comments.
That way they don't have to fear reprimand for pointing out things that you
are doing as a manager that they feel could be improved. It is also a good way
to get the team thinking of positive things that can improve processes if you
encourage them to fill the box with positive suggestions and ideas they have
rather than problems they think need to be solved. Anyone can complain. Team
builders make suggestions on how to improve processes.
Maybe that idea is not totally new, but I think it is a wonderful suggestion
and I think many a manager would be wise to take her advice. Might just improve
your management style and give you lots of new ideas on how to improve processes
that you might not have otherwise thought of. Thanks for the insightful suggestion,
Suz....
http://www.onjava.com/pub/a/onjava/2004/06/16/ccunittest.html
Cyclomatic complexity essentially represents the number of paths through a particular
section of code, which in object-oriented languages applies to methods.
http://www.codeproject.com/dotnet/Cyclomatic_Complexity.asp
Cyclomatic Code Complexity
This measure provides a single ordinal number that can be compared
to the complexity of other programs. It is one of the most widely accepted static
software metrics and is intended to be independent of language and language
format. Code Complexity is a measure of the number of linearly-independent paths
through a program module and is calculated by counting the number of decision
points found in the code (if, else, do, while, throw, catch, return, break, etc.).
CC = E - N + 2p
Where
CC = Cyclomatic Complexity
E = the number of edges of the graph
N = the number of nodes of the graph
p = the number of connected components
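As a quick sanity check of the graph formula, here is a small sketch (the graph and the helper function are my own invention, not from any of the articles linked here). It uses the common McCabe form CC = E - N + 2p, which applies when each component's entry and exit nodes are distinct; the variant CC = E - N + p appears when the exit is connected back to the entry.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """CC = E - N + 2p (McCabe's form for graphs with distinct entry/exit)."""
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph of a single if/else:
#   D (decision) branches to A (then) and B (else); both rejoin at M (merge).
nodes = ["D", "A", "B", "M"]
edges = [("D", "A"), ("D", "B"), ("A", "M"), ("B", "M")]
print(cyclomatic_complexity(edges, nodes))  # 4 - 4 + 2 = 2
```

A complexity of 2 matches intuition: a single if/else has exactly two independent paths through it.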
From a layman's perspective, the above equation can be pretty daunting
to comprehend. Fortunately there is a simpler method which is easier to understand
and implement by following the guidelines shown below:
Start with 1 for a straight path through the routine.
Add 1 for each of the following keywords or their equivalent: if, while, repeat,
for, and, or.
Add 1 for each case in a switch statement.
http://javaboutique.internet.com/tutorials/metrics/
Cyclomatic Complexity (CC) = number of decision points +1
Decision points are conditional statements such as if/else, while etc.
The following table summarizes the impact of CC values in terms of testing and
maintenance of the code:
CC Value Risk
1-10 Low risk program
11-20 Moderate risk
21-50 High risk
>50 Most complex and highly unstable method
From the complexity perspective of a program, decision points—such as
if-else, etc.—are not the only factor to consider. Logical operations
such as AND, OR, etc. also impact the complexity of the program.
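The counting rules above (one for the straight path, plus one per decision point, including logical operators) can be sketched as a rough keyword counter. This is a toy illustration of my own, not a real parser; the keyword set is a Python-flavored assumption (elif, except, and case stand in for the equivalents listed above).

```python
import re

# Decision keywords per the counting rules above, in Python flavor.
# "and"/"or" are counted too, since logical operators also add paths.
DECISION_KEYWORDS = r"\b(if|elif|while|for|and|or|except|case)\b"

def estimate_cc(source: str) -> int:
    """Rough CC estimate: 1 (straight path) + one per decision keyword."""
    return 1 + len(re.findall(DECISION_KEYWORDS, source))

sample = """
def grade(score):
    if score >= 90 and score <= 100:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(estimate_cc(sample))  # 1 + if + and + elif = 4
```

A value of 4 puts this little function comfortably in the low-risk band of the table above.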
This document defines the defect Severity scale for determining defect criticality and the associated defect Priority levels to be assigned to errors found in software. It is a scale which can be easily adapted to other automated test management tools.
ANSI/IEEE Std 729-1983 Glossary of Software Engineering Terminology defines
Criticality as,
"A classification of a software error or fault based on an evaluation
of the degree of impact that error or fault on the development or operation
of a system (often used to determine whether or when a fault will be corrected)."
The severity framework for assigning defect criticality that has proven most useful in actual testing practice is a five level scale. The criticality associated with each level is based on the answers to several questions.
First, it must be determined if the defect resulted in a system failure.
ANSI/IEEE Std 729-1983 defines a failure as,
"The termination of the ability of a functional unit to perform its required
function."
Second, the probability of failure recovery must be determined. ANSI/IEEE
729-1983 defines failure recovery as,
"The return of a system to a reliable operating state after failure."
Third, it must be determined if the system can do this on its own or if remedial measures must be implemented in order to return the system to reliable operation.
Fourth, it must be determined if the system can operate reliably with the defect present if it is not manifested as a failure.
Fifth, it must be determined if the defect should or should not be repaired.
The following five level scale of defect criticality addresses these questions.
The five Levels are:
1. Critical
2. Major
3. Average
4. Minor
5. Exception
1. Critical - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
2. Major - The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable processing alternatives which will yield the desired result.
3. Average - The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.
4. Minor - The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
5. Exception - The defect is the result of non-conformance to a standard,
is related to the aesthetics of the system, or is a request for an enhancement.
Defects at this level may be deferred or even ignored.
In addition to the defect severity level defined above, defect priority level
can be used with severity categories to determine the immediacy of repair.
A five-level repair priority scale has also been used in common testing practice.
The levels are:
1. Resolve Immediately
2. Give High Attention
3. Normal Queue
4. Low Priority
5. Defer
1. Resolve Immediately - Further development and/or testing cannot occur until
the defect has been repaired. The system cannot be used until the repair has
been effected.
2. Give High Attention - The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.
3. Normal Queue - The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
4. Low Priority - The defect is an irritant which should be repaired but which can be repaired after more serious defects have been fixed.
5. Defer - The defect repair can be put off indefinitely. It can be resolved
in a future major system revision or not resolved at all.
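To make the two scales concrete, here is a minimal sketch of how a test team might record both values on a defect. The class and field names are my own and do not come from any particular test management tool.

```python
from dataclasses import dataclass

# The five-level scales defined above.
SEVERITY = {1: "Critical", 2: "Major", 3: "Average", 4: "Minor", 5: "Exception"}
PRIORITY = {1: "Resolve Immediately", 2: "Give High Attention",
            3: "Normal Queue", 4: "Low Priority", 5: "Defer"}

@dataclass
class Defect:
    title: str
    severity: int   # 1 (Critical) .. 5 (Exception)
    priority: int   # 1 (Resolve Immediately) .. 5 (Defer)

    def label(self) -> str:
        return f"{self.title}: {SEVERITY[self.severity]} / {PRIORITY[self.priority]}"

d = Defect("Login crashes on empty password", severity=1, priority=1)
print(d.label())  # Login crashes on empty password: Critical / Resolve Immediately
```

Keeping severity (how bad is it?) and priority (how soon must we fix it?) as two separate fields is the point: a cosmetic defect on the home page can be low severity but high priority, and vice versa.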
Cordell Vail
HOW DO YOU STOP GETTING SO MUCH JUNK EMAIL FROM YOUR WEB PAGE?
If you put your email address on your web page you are going to get killed with
junk email. Junk email dealers have COMPUTER ROBOTS that read the text of every
web page in existence, looking for text with an @ and .com or .net, etc.,
and then put it on their lists. I have found an easy way to get around that.
Instead of putting your email address in text, take a picture of it and put
it on the web page as a graphic image. Then people can still see it but the
ROBOTS can't. For example, try to click on my email address below:
It is just a picture. However, the URL above for Randy Rice's web page is an
actual LINK you can click on. I never make the email address on my web pages
an HTML MAILTO live link. I just don't like junk email that
much.
In testing we seem to always have trouble using the words AFFECT and EFFECT correctly.
Is this the correct usage?
How will adding tables AFFECT our timeline?
What will the EFFECT be on our time line if we add tables?
Andrew Werth, who works with me at WSIPC, came up with a little test that will help you know the right word to use.
So here is his little test:
EFFECT is a Noun
AFFECT is an Action Verb
Normally if you can substitute the word RESULT or RESULTS for the word then it should be EFFECT. If it is awkward to use the word RESULT or RESULTS in the sentence then it probably is AFFECT.
Example:
How will adding tables AFFECT (RESULT) our timeline? (Does not
make sense so we should use AFFECT)
What will the EFFECT (RESULTS) be on our time line if we add tables? (Sounds
OK so we should use EFFECT)
Does that help? Thanks Andy.....
Here are some terms to help you know just how fast "FAST" is:
(information taken from
http://whatis.techtarget.com ,
http://www.free-definition.com and http://www.physlink.com/Education/AskExperts/ae281.cfm
In Network testing, time is measured in milliseconds (one thousandth of a second). Does that seem fast? Well if there are 6000 send and receive commands on the network and each takes a millisecond, that is a 6 second delay. Would you wait that long for a web page to be displayed? Probably not!
So from that you can see in the computer world that there needs to be other clocked times that are much faster.
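The arithmetic above can be checked in a couple of lines:

```python
# 6000 send/receive commands, each taking one millisecond on the network
round_trips = 6000
latency_ms = 1.0
total_seconds = round_trips * latency_ms / 1000.0
print(total_seconds)  # 6.0 -- a very noticeable delay for a web page
```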
(This definition follows U.S. usage in which a billion is a thousand million and a trillion is a 1 followed by 12 zeros.)
Millisecond (ms or msec )- one thousandth of a second and
is commonly used in measuring the time to read to or write from a hard disk
or a CD-ROM player or to measure packet travel time on the Internet.
Microsecond (us or Greek letter mu plus s) - one millionth
(10 to the -6th power) of a second.
Nanosecond (ns or nsec) - one billionth (10 to the -9th power)
of a second and is a common measurement of read or write access time to random
access memory (RAM).
Picosecond - one trillionth (10 to the -12th power) of a second,
or one millionth of a microsecond.
Femtosecond - one millionth of a nanosecond, or 10 to the -15th
power of a second, and is a measurement sometimes used in laser technology.
Attosecond - one quintillionth (10 to the -18th power) of a
second and is a term used in photon research.
Yoctosecond - is one septillionth (10 to the -24th power) of a second
and if you can think this small.......
Planck length - is the scale at which classical ideas about gravity and space-time cease to be valid, and quantum effects dominate. This is the 'quantum of length', the smallest measurement of length with any meaning, roughly equal to 1.6 x 10 to the -35 m, or about 10 to the -20 times the size of a proton. The Planck time is the time it would take a photon traveling at the speed of light to cross a distance equal to the Planck length. This is the 'quantum of time', the smallest measurement of time that has any meaning, equal to 10 to the -43 seconds. No smaller division of time has any meaning. Within the framework of the laws of physics as we understand them today, we can say only that the universe came into existence when it already had an age of 10 to the -43 seconds.
("We can say only".... Some times scientists just make me smile.... But at least now you know how fast I can test an application and find a bug in it.... ha ha ha )
and how big is that cable in front of the power shovel used to pull it when it gets stuck?
THE ISSUE:
You have learned all these new testing processes you want to share with management.
THE RESULT:
(Used with permission of Dan Reynolds - cartoonist.)
You can see more of Dan's work in the Readers Digest, in his books on amazon.com or visit his web page at:
www.reynoldsunwrapped.com
The right software, delivered defect free, on time and on cost, every
time
SEI Capability Maturity Model
The two key rules for good testers are
Rule #1
Always be thorough and meticulous
Rule #2
When tempted to cut corners, refer to rule #1
In God we trust. All others we test
This is the Trade Mark of Medical Software, Inc. - Jim Kandler, csqe, cste(shared here with Jim's permission. Thanks Jim)
Quality work will be long remembered after speed has been forgotten
Brian Tracy
TERMS FOR TESTING PROCESSES
THAT ARE OFTEN CONFUSED,
MISUSED OR USED AS SYNONYMS:
Stress Testing
Tests the server – peak volume over a short span of time
Load Testing
Tests the database – Largest load it can handle at one time
Volume Testing
Tests the server and the database – Heavy volumes of data over time (combination
of Stress testing and Load testing over time)
Performance Testing
Tests the server – user response time
Contention Testing
Verifies that the server can handle multiple user demands on the same
resource (that is, data records or memory).
Bench Mark Testing
Compares your standard to the same standard in other businesses in the industry
Base Line Testing
Setting a standard to be compared to later within your own company
TYPES OF TESTING WE WOULD RATHER NOT SEE
Aggression Testing
If this doesn’t work, I’m gonna kill somebody.
Compression Testing
{}
Confession Testing
Okay, okay, I did program that bug.
Congressional Testing
Are you now, or have you ever been a bug?
Depression Testing
If this doesn’t work, I’m gonna kill myself.
Egression Testing
Uh-oh, a bug... I’m outta here.
Digression Testing
Well, it should work, but let me tell you about my truck.
Expression Testing
#$(%&#*^#$
Obsession Testing
I’ll find this bug if it is the last thing that I do
Succession Testing
The system is dead. Long live the new system!
Suggestion Testing
Well, it works but wouldn’t it be better if....
TESTING TERMS THAT ARE OFTEN MISUNDERSTOOD
EQUIVALENCE CLASSES
EQUIVALENCE TESTING (Yes or No)
(y or Y vs. n or N, and all that is not y or Y)
EQUIVALENCE PARTITIONING
(Less than - Between - Greater than)
BOUNDARY VALUE ANALYSIS
(Min ± 1 and Max ± 1)
NUMBERS THAT ARE GOOD TO HAVE MEMORIZED
-32768 to 32767 and 0 to 65535
If you don't know what those numbers are you need to do some night reading (smile)
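(They are the 16-bit signed and unsigned integer ranges.) Those ranges lend themselves to a small boundary value analysis sketch; the helper function below is my own illustration of the Min ± 1 / Max ± 1 rule, not part of any standard library.

```python
def boundary_values(lo: int, hi: int):
    """Min/max plus-or-minus one: the classic boundary value analysis set."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# The 16-bit signed and unsigned integer ranges worth memorizing:
print(boundary_values(-32768, 32767))
print(boundary_values(0, 65535))
```

Feeding each of those six values to an input field is usually enough to catch off-by-one and overflow defects at a partition boundary, which is exactly where they tend to hide.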
Words worth knowing as a Test Engineer
syn•chro•nous (Runs in the
foreground)(Continuous connection)
Function: adjective
1 : happening, existing, or arising at precisely the same time
2 : recurring or operating at exactly the same periods
3 : involving or indicating synchronism
4 a : having the same period; also : having the same period and phase b : GEOSTATIONARY
5 : of, used in, or being digital communication (as between computers) in which
a common timing signal is established that dictates when individual bits can
be transmitted, in which characters are not individually delimited, and which
allows for very high rates of data transfer
6: (digital communication) pertaining to a transmission technique that requires
a common clock signal (a timing reference) between the communicating devices
in order to coordinate their transmissions
7. <operating system, communications> Two or more processes
that depend upon the occurrences of specific events such as
common timing signals.
8. Occurring at the same time or at the same rate or with a
regular or predictable time relationship or sequence.
asyn•chro•nous (Runs in the
background)(Runs independent)
Function: adjective
1 : not synchronous
2 : of, used in, or being digital communication (as between computers) in which
there is no timing requirement for transmission and in which the start of each
character is individually signaled by the transmitting device
3: (digital communication) pertaining to a transmission technique that does
not require a common clock between the communicating devices; timing signals
are derived from special characters in the data stream itself [ant: synchronous]
4. <operating system> A process in a multitasking system
whose execution can proceed independently, "in the
background". Other processes may be started before the
asynchronous process has finished.
5. <communications> A communications system in which data
transmission may start at any time and is indicated by a
start bit, e.g. EIA-232. A data byte (or other element
defined by the protocol) ends with a stop bit. A
continuous marking condition (identical to stop bits but not
quantized in time), is then maintained until data resumes.
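The foreground/background contrast in the definitions above can be sketched in Python, whose asyncio module makes it concrete. This is a toy illustration of the scheduling distinction only; real synchronous and asynchronous communication protocols add the clocking and framing described above.

```python
import asyncio
import time

def fetch_sync():
    """Synchronous: the caller blocks (runs 'in the foreground')."""
    time.sleep(0.1)
    return "done"

async def fetch_async():
    """Asynchronous: the caller can do other work while this waits."""
    await asyncio.sleep(0.1)
    return "done"

async def main():
    # Two asynchronous waits overlap, so both finish in about 0.1 s, not 0.2 s.
    results = await asyncio.gather(fetch_async(), fetch_async())
    return results

print(fetch_sync())         # blocks for 0.1 s, then prints "done"
print(asyncio.run(main()))  # ['done', 'done']
```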
Seems we are always trying to explain the difference
between QA and QC
Quality Assurance (QA) (Quality Analysts)
(Certified Software Quality Analyst) (CSQA)
Management Staff Function - Evaluating Practices
& Procedures - normally done by Quality Assurance Analysts and QA Managers
Quality Control (QC) (Test Engineers)
(Certified Software Test Engineer) (CSTE)
Line Worker or Test Engineer Function -Testing
Principles and Concepts - normally done by Quality Control Test Engineers
MORE WORD DEFINITIONS AND CLARIFICATIONS
DEFECT (BUG): An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction. An error in the hardware or software. (A missing feature... that is a joke of course)
FAULT: A manifestation of a defect in the software. A fault, if encountered, may cause a failure.
FAILURE: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.
RISK: The probability that a failure will occur causing a loss. If controls are inadequate to reduce the risk it creates vulnerability.
RISK ANALYSIS: Analysis of the system's vulnerabilities, combining the loss potential with an estimated rate of occurrence to establish the potential level of damage that may occur.
RISK EXPOSURE: The risk exposure always exists, although the loss may not occur.
THREAT: Something capable of exploiting vulnerability (a defect)
VULNERABILITY: A flaw or defect that may be exploited by a threat. The flaw, if exposed, would cause the system to operate in a fashion different from its intended use.
CONTROL: Anything that tends to cause the reduction of risk. Controls can accomplish this by reducing the frequency of occurrence.
Now you know!
Too little testing is a crime
Too much testing is a sin
William Perry
VERIFICATION: Testing to see if the SYSTEM IS RIGHT
Looking for defects from the developer's perspective - a variance from the specifications
- usually using white box testing, unit testing, and reviews. The process of determining
whether or not the products of a given phase in the life-cycle fulfil a set of established
requirements.
VALIDATION: Testing to see if the RIGHT SYSTEM
was created
Looking for defects from a customer or user perspective - a variance from what
the user wanted - usually using black box testing, system testing, acceptance
testing, and walk-throughs. The stage in the software life-cycle at the end of the
development process where software is evaluated to ensure that it complies
with the requirements.
Here is another set of similar definitions that is helpful from Software
QA/Test Resource Center:
© 1996-2006 by Rick Hower
www.softwareqatest.com/qatfaq1.html#FAQ1_7
What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents,
plans, code, requirements, and specifications. This can be done with checklists,
issues lists, walkthroughs, and inspection meetings. Validation typically involves
actual testing and takes place after verifications are completed. The term 'IV
& V' refers to Independent Verification and Validation.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes.
Little or no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of
the inspection is typically a document such as a requirements spec or a test
plan, and the purpose is to find problems and see what's missing, not to fix
anything. Attendees should prepare for this type of meeting by reading thru
the document; most problems will be found during this preparation. The result
of the inspection meeting should be a written report. Thorough preparation for
inspections is difficult, painstaking work, but is one of the most cost effective
methods of ensuring quality. Employees who are most skilled at inspections are
like the 'eldest brother' in the parable in 'Why is it often hard for organizations
to get serious about quality assurance?'. Their skill may have low visibility
but they are extremely valuable to any software development organization, since
bug prevention is far more cost-effective than bug detection.
If you have a copy of this cartoon with the artist's name still on it,
or if you know who the cartoonist is who drew it, would you please send
the name to me. I would like to get permission to use it in one of my presentations.
Please send the cartoonist's name to me at:
If you have other helpful tips related to software testing that you would like
to post here please send them to me. I will put them here with your name and
contact information. (Good opportunity for networking for you... look at the
counter below)
Please send them to me at: