Testing strategies
The tests that you must write in XP are isolated and automatic.
First, each test doesn't interact with the others you write. That way
you avoid the problem that one test fails and causes a hundred other
failures. Nothing discourages testing more than false negatives. You
get this adrenaline rush when you arrive in the morning and find a
pile of defects. When it turns out to be no big deal, it's a big
letdown. Are you going to pay careful attention to the tests after
this has happened five or ten times? No way.
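As a minimal sketch of isolation (in Python with the standard unittest module; the Account class is my invention, not from the text), each test builds its own fresh fixture, so one failure cannot cascade into a hundred others:

```python
import unittest

class Account:
    """A toy class under test (hypothetical, for illustration only)."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class AccountTest(unittest.TestCase):
    def setUp(self):
        # Each test method gets its own Account, so no test can
        # corrupt the state another test depends on.
        self.account = Account()

    def test_deposit_adds_to_balance(self):
        self.account.deposit(100)
        self.assertEqual(self.account.balance, 100)

    def test_new_account_starts_empty(self):
        # Passes no matter what the other test did, thanks to setUp.
        self.assertEqual(self.account.balance, 0)

if __name__ == "__main__":
    unittest.main()
```

Because the fixture is rebuilt in setUp, the tests can run in any order and a single broken test points at exactly one problem.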
The tests are also automatic. Tests are most valuable when the stress
level rises, when people are working too much, when human judgment
starts to fail. So the tests must be automatic - returning an
unqualified thumbs up/thumbs down indication of whether the system is
behaving.
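"Automatic" can be reduced to a single command whose result needs no interpretation. A minimal sketch (my example, not the author's; the SmokeTest case stands in for a real suite) collapses the whole run to one exit code:

```python
import sys
import unittest

class SmokeTest(unittest.TestCase):
    """A trivial check standing in for the real test suite (illustrative)."""
    def test_arithmetic_still_works(self):
        self.assertEqual(2 + 2, 4)

def thumbs_up():
    """Run the suite and reduce it to a single yes/no answer."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # Exit code 0 = thumbs up, 1 = thumbs down. A script or a tired
    # human at the end of a long day can run this with zero judgment.
    sys.exit(0 if thumbs_up() else 1)
```

The point of the exit code is that a machine can check it: the tests stay trustworthy precisely when the people running them are least able to weigh ambiguous results.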
It is impossible to test absolutely everything, without the tests
being as complicated and error-prone as the code. It is suicide to
test nothing (in this sense of isolated, automatic tests). So, of all
the things you can imagine testing, what should you test?
You should test things that might break. If code is so simple that it
can't possibly break, then you shouldn't write a test for it.
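For instance (my example, not from the text), a plain accessor is too simple to break and earns no test, while date arithmetic is worth one, because rollover at month and year boundaries is exactly the kind of thing that might break:

```python
import datetime

def next_day(d):
    """Return the day after d. The month/year rollover and leap-year
    cases are where this might break, so those are what we test."""
    return d + datetime.timedelta(days=1)

# A bare accessor like d.year can't realistically break; skip it.
# Pin down the boundary cases instead:
assert next_day(datetime.date(2004, 12, 31)) == datetime.date(2005, 1, 1)
assert next_day(datetime.date(2004, 2, 28)) == datetime.date(2004, 2, 29)  # leap year
```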
One way a test can pay off is when a test works that you didn't expect
to work. Then you'd better go find out why it works, because the code is
smarter than you are. Another way a test can pay off is when a test
breaks when you expected it to work. In either case, you learn
something.