Discussion:
[SCRUMDEVELOPMENT] The Value of Unit Testing Per Se
Michael Wollin yahoo@mercurysw.com [SCRUMDEVELOPMENT]
2015-04-02 06:30:10 UTC
I am talking to a client dev manager who, while all on board with automated testing and continuous integration, does not see the value in unit tests. This I see as a problem. His argument is that if the automated system tests are fast enough, that is all you need to find any regressions. Moreover, if for some reason you can’t exercise all the cases at that level, the integration tests can. So writing unit tests is generally a waste of time.

I have two requests. First would be a very good article that addresses this specific argument and also explains the benefits of JUnit (XUnit). This is DISTINCT from the value of TDD. I suspect the answer is about code fragility but he argues that you can refactor and the system tests will catch any regressions. Second, would be a great article on the deeper value of TDD and why it yields better design and cleaner code, but that is less important to me right now.

The first argument is the primary. I want to explain why Unit testing specifically has value, even if the unit tests are written after the code.

Any great references would be appreciated.

Thanks,

Michael
georgievh@yahoo.co.uk [SCRUMDEVELOPMENT]
2015-04-02 07:08:04 UTC
Hi Michael,

Unit testing is mainly a design technique. Providing test coverage is a nice side effect.
You will benefit the most if you write the unit tests before you write the code. This will help you create a better design.
If you write tests after you have written the code, you will lose this benefit. In that sense, the value you get from unit testing is very little (if any).

I hope this helps.


Kind regards,

Hristo
Ron Jeffries ronjeffries@acm.org [SCRUMDEVELOPMENT]
2015-04-02 09:11:37 UTC
Hristo,
Post by ***@yahoo.co.uk [SCRUMDEVELOPMENT]
Unit testing is mainly a design technique. Providing test coverage is a nice side effect.
You will benefit the most if you write the unit tests before you write the code. This will help you create a better design
I would say “Test-First, and specifically TDD, is a design technique. Testing after the fact does not provide this benefit. There are other benefits to unit tests even if written after the fact.”

Ron Jeffries
ronjeffries.com <http://ronjeffries.com/>
You never know what is enough unless you know what is more than enough. -- William Blake
jonas@auken.net [SCRUMDEVELOPMENT]
2015-04-02 09:02:57 UTC
Hi Michael,

I made a simple Google search on "system versus unit testing" and the first two hits were these; they are both good reads:
http://en.wikipedia.org/wiki/Software_testing
http://www.mypersonalangle.com/tag/unit-tests-vs-system-tests/


The main difference between system test and unit test in my mind is that system tests are usually "black box" and unit tests are usually "white box" (please read the first link for an explanation, if you haven't seen this distinction before). In other words, system tests test the whole system while the unit tests test specific parts in various combinations. Writing system tests that cover every situation and every possible combination of data would be very hard and time consuming. And since the system tests DO take longer to run than unit tests, it's hardly feasible to write hundreds of system tests and run them every time you check in. (Read the second link now for more pointers.)


Unit tests are written by the programmers to check if their code works as expected. They require knowing the intimate details of the code. Their purpose is to quickly and efficiently prove (!) that the code works as expected. The unit tests should be executed on every commit or at least several times a day. The unit tests can show very precisely where the error has been made.
System tests are created (with some tool) by the business people to check that all parts of the system are still working nicely together. The system tests should be executed during deployment in your production environment and once or twice a day in a system test environment.

Another reason to make a distinction and do both kinds of tests is that unit tests can be run in a mocked environment without even deploying the application. The system tests only make sense if run in a production-like environment. Hence, programmers will find it much easier to run a suite of unit tests on their local system than to set up a complete environment on the local machine in order to run system tests.
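To make that concrete, here is a rough sketch in JUnit 4 (the class names and the 5% rate are hypothetical, purely for illustration): the real rate service is replaced by an in-memory fake, so nothing has to be deployed and the test runs in milliseconds on the programmer's machine.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InvoiceTotalTest {

        // Hypothetical collaborator that in production would call a database or a web service.
        interface TaxRateSource {
            double rateFor(String state);
        }

        // Hypothetical unit under test: plain logic with its dependency passed in.
        static class InvoiceTotal {
            private final TaxRateSource rates;

            InvoiceTotal(TaxRateSource rates) {
                this.rates = rates;
            }

            long totalCents(long subtotalCents, String state) {
                return Math.round(subtotalCents * (1 + rates.rateFor(state)));
            }
        }

        @Test
        public void addsStateTaxToTheSubtotal() {
            // An in-memory fake stands in for the real rate service:
            // nothing to deploy, nothing to connect to.
            TaxRateSource fakeRates = state -> 0.05;

            assertEquals(10500, new InvoiceTotal(fakeRates).totalCents(10000, "MD"));
        }
    }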

I hope this will help you.

Kind regards,
Jonas
Flavius Ștef flavius.stef@gmail.com [SCRUMDEVELOPMENT]
2015-04-02 09:08:21 UTC
You might want to follow the reasoning in JB's *Integration tests are a
scam*: http://www.infoq.com/presentations/integration-tests-scam
Ron Jeffries ronjeffries@acm.org [SCRUMDEVELOPMENT]
2015-04-02 09:09:57 UTC
Hi Michael,
Post by Michael Wollin ***@mercurysw.com [SCRUMDEVELOPMENT]
I am talking to a client dev manager who, while all on board with automated testing and continuous integration, does not see the value in unit tests. This I see as a problem. His argument is that if the automated system tests are fast enough, that is all you need to find any regressions. Moreover, if for some reason you can’t exercise all the cases at that level, the integration tests can. So writing unit tests is generally a waste of time.
No less a luminary than Jim Coplien would agree with your client dev manager. He has spoken a number of times against unit tests and perhaps written about it as well.

Of course, the automated system tests are NOT fast enough. When I have unit tests I run them every few minutes, since they run in seconds. I’ve never seen a system-level test running anywhere near that fast.

And I can think of no reason why one couldn’t run all the system tests one had and yet COULD run the integration tests. Perhaps someone more creative than I can think of such an example, preferably one not involving quantum chromodynamics.
Post by Michael Wollin ***@mercurysw.com [SCRUMDEVELOPMENT]
I have two requests. First would be a very good article that addresses this specific argument and also explains the benefits of JUnit (XUnit). This is DISTINCT from the value of TDD. I suspect the answer is about code fragility but he argues that you can refactor and the system tests will catch any regressions. Second, would be a great article on the deeper value of TDD and why it yields better design and cleaner code, but that is less important to me right now.
I don’t know of an article addressing that, offhand. Here are a few bullet points:

Unit tests are invariably faster than system tests;
Unit tests are more targeted. Rather than saying “The customer was charged $123.45 and should have been charged $124.01,” the unit test will say “State tax on item 16 was $0.00, expected $0.56.” (A small sketch of this appears a little further down.)
Unit tests support re-use of code (for great customer savings) and system tests do not.
Often system tests are not in the hands of the developer working on some module and therefore do not fit the developer’s needs.
Unit tests can test things that customer tests cannot or often do not.
And …
IT’S NONE OF A DEV MANAGER’S BUSINESS HOW THE PROGRAMMERS PROGRAM.

That last also applies to ScrumMasters, Product Owners, Tech Leads, Managers, and everyone who isn’t a Dev Team member: The team’s testing practice belongs to the team. That said, if they’re not operating at a very very low defect injection rate (like one bug reaching clients every six months), then they need to up their game. Unit tests might be one way to do that.
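A hedged sketch of the “targeted” bullet above (JUnit 4, with hypothetical names and a made-up 6% rate): the assertion pins the blame on one rule, so a failure points at the tax calculation rather than at a mismatch in the order total.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class StateTaxTest {

        // Hypothetical rule under test: 6% sales tax on a single line item, in cents.
        static long stateTaxCents(long itemPriceCents) {
            return Math.round(itemPriceCents * 0.06);
        }

        @Test
        public void chargesStateTaxOnItem16() {
            // A regression here fails with a message like
            //   State tax on item 16 expected:<56> but was:<0>
            // instead of "order total was $123.45, expected $124.01".
            assertEquals("State tax on item 16", 56, stateTaxCents(934));
        }
    }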

I’ve noticed that I am far more likely to write tests when I TDD than when I write them after the fact. After the fact, I almost never bother. But I am profoundly lazy and entirely undisciplined, as should be obvious.

However, the evidence is not clear-cut. In some situations, I use neither TDD nor unit tests. These are usually situations where things are small in scale, but there are also situations that are difficult enough to test that it does not pay off.

It’s worth noting that “difficult enough to test” often means “Ron’s skill in testing this kind of thing is not as high as it might be”. Often, when I learn some new way to test that kind of thing, I begin to test more.

Anyway, that’s what I’ve got ...

Ron Jeffries
ronjeffries.com <http://ronjeffries.com/>
Speed is a poor substitute for accuracy. -- Fortune Cookie
Avi Naparstek avi.naparstek@gmail.com [SCRUMDEVELOPMENT]
2015-04-02 10:40:19 UTC
Hristo,

You mean *TDD* (not unit tests) is a design technique.

TDD and Unit Tests are two distinct and different things.

All the best,
Avi
georgievh@yahoo.co.uk [SCRUMDEVELOPMENT]
2015-04-02 11:41:21 UTC
That is correct, Avi.

Doing TDD allows for creating better designs.
However, I find unit testing contributes the most to achieving this, in comparison with functional and integration testing. That is not to say that functional and integration testing do not contribute to better design.


Kind regards,

Hristo
Cass Dalton cassdalton73@gmail.com [SCRUMDEVELOPMENT]
2015-04-02 11:54:55 UTC
Would you ever consider building a car without the ability to test that
individual components meet their individual specifications? System and
integration tests happen behind the wheel. But unit tests make sure that
the wheels are aligned, that the rotors are the correct thickness, and that
the spark plugs fire when given the correct current.
To make the analogy more like software development, consider a client
asking for a custom crafted car built to unique specs.
georgievh@yahoo.co.uk [SCRUMDEVELOPMENT]
2015-04-02 13:37:05 UTC
Hi Cass,

Could I please ask you to elaborate a bit further?
Which bit of what I said do you disagree with, and why?


Kind regards,

Hristo
Cass Dalton cassdalton73@gmail.com [SCRUMDEVELOPMENT]
2015-04-02 13:47:12 UTC
I was not disagreeing with you. I was discussing the difference between
unit and system tests. The OP asked what the value of unit tests was
relative to system/integration tests. It looked like I was replying
directly to you because I hit reply on my phone and it auto-replied to the
latest post.
Tim Wright tim@tfwright.co.nz [SCRUMDEVELOPMENT]
2015-04-02 06:35:09 UTC
Hi,


No references, but I can tell you what we found in the last project:


* End-to-end integration tests are great. They tell you that the code has
stopped meeting requirements. But it can take ages to figure out why. Unit
tests provide a more focused test, and it's more obvious why something is
failing.
* Some conditions (generally related to timing) are tricky to test with
end-to-end tests; a small sketch follows after this list.
* Doing unit tests (before and after) forces a developer to think about
edge cases for the component, whereas an end-to-end test won't always
catch all the odd behaviours. Those tests focus more on the edge cases of
an application, not of a code component.
* Unit tests can give you information when you're being hit with a fragile
base class problem, whereas integration tests might not pick this up.
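For instance, here is a hedged sketch of the timing point in JUnit 4 (the Session class and the 30-minute timeout are hypothetical): injecting a fixed java.time.Clock lets a unit test sit exactly on the expiry boundary, which is awkward to arrange in an end-to-end run.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import java.time.Clock;
    import java.time.Duration;
    import java.time.Instant;
    import java.time.ZoneOffset;

    public class SessionExpiryTest {

        // Hypothetical unit under test: the expiry decision depends only on the injected clock.
        static class Session {
            private final Instant createdAt;
            private final Clock clock;

            Session(Instant createdAt, Clock clock) {
                this.createdAt = createdAt;
                this.clock = clock;
            }

            boolean isExpired(Duration timeout) {
                return Duration.between(createdAt, clock.instant()).compareTo(timeout) > 0;
            }
        }

        @Test
        public void expiresJustPastTheTimeoutBoundary() {
            Instant start = Instant.parse("2015-04-02T09:00:00Z");
            Duration timeout = Duration.ofMinutes(30);

            // Two fixed clocks pin the test exactly on, and one second past, the boundary.
            Session atBoundary = new Session(start, Clock.fixed(start.plusSeconds(1800), ZoneOffset.UTC));
            Session pastBoundary = new Session(start, Clock.fixed(start.plusSeconds(1801), ZoneOffset.UTC));

            assertFalse("still valid exactly at the 30-minute mark", atBoundary.isExpired(timeout));
            assertTrue("expired one second past the 30-minute mark", pastBoundary.isExpired(timeout));
        }
    }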


Tim
--
Tim
021 251 5593
http://www.linkedin.com/in/drtimwright
firepoet78@yahoo.com [SCRUMDEVELOPMENT]
2015-04-02 12:46:36 UTC
This is totally off-topic, but I must also question the manager’s desire to dictate what practices the team uses.


“Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”
http://www.agilemanifesto.org/principles.html


If the team believes in unit tests, and they are consistently producing quality work, the dev manager should leave them alone. That’s where I would start with mentoring him/her. Without team autonomy, agile implementation fails.


Now, if you’re attempting to convince the team that unit testing is good for them, that’s another question altogether, and I would suggest that they should not just look for studies, but also experience it themselves. Does it help them think better? Do they get good regression coverage? Is the resultant code better organized? Are they producing better software?


And don’t forget, TDD at the unit level should also be paired with merciless refactoring, synchronous continuous integration, collective code ownership, and a strong knowledge of good design at the very least. Otherwise it’s hard to tell if perceived failures in unit tests are related to poor design or the fact of writing tests. See http://www.extremeprogramming.org/rules/testfirst.html for a nice view of the interconnectedness of all the practices. Pair Programming doesn’t hurt either. ;-)


Happy to chat more one-on-one if it would help!


Stephen Starkey, ACC, CSP, CSM, CHP
empathy | connectedness | restorative | relator | belief
coreagile.co/coaching <http://coreagile.co/coaching>
steveropa@gmail.com [SCRUMDEVELOPMENT]
2015-04-02 16:12:41 UTC
I don’t think you are off topic at all with your comment. On the contrary, I would suggest that is the best response for that manager. I have had this conversation with managers many times. If we have the right relationship, I will respond with a question something like “How many for loops will you allow the programmers to write?” That usually gets a wry laugh and a realization that the manager is getting out of his wheelhouse.


Programmers have the right to not make crap. If unit testing is part of how they do that, which I strongly believe it is, then it’s not up to the manager to get in their way.


Steve






Michael Wollin yahoo@mercurysw.com [SCRUMDEVELOPMENT]
2015-04-03 01:38:09 UTC
Post by ***@yahoo.com [SCRUMDEVELOPMENT]
This is totally off-topic, but I must also question the manager’s desire to dictate what practices the team uses.
Some of the comments across this thread presuppose that these teams want to do unit testing. I’m not seeing that. To be clear, the manager would rather NOT dictate team practices wrt TDD or JUnit. The conversation came up in the context of developing the Definition of Done. The QA manager echoed what is a common practice, to have an X% code coverage requirement for automated testing. The conversation is at its root one about technical debt, removing it, avoiding it, and about an executive wish (mandate) to ship with fewer defects (which is a good thing).

Off topic, but relevant:
The teams are not in the practice of TDD. Personally, I prefer to take a coaching stance here, which has me be in the inquiry: “What is in the way of the teams wanting to do this? What is the lid on craftsmanship as a celebrated value?” My hope with that approach is to solve the root problem, so that instead of best practices being mandated, the teams are hungry to adopt them. This would thus generalize the problem way beyond unit testing and TDD to values and culture, and to fostering a culture of excellence.

In general, the majority of “senior developers” at the majority of companies do not see the value of unit testing or TDD, refactoring and emergent design. They see it as extra work, or at least a disruptive (and thus something to resist) change to what they’ve been doing all their careers. It’s another topic (and a worthy one) to ask how to shift this with the teams. [I don’t want to take us off topic. I debated whether to leave this last off-topic part in or take it out.]
Ron Jeffries ronjeffries@acm.org [SCRUMDEVELOPMENT]
2015-04-03 02:12:30 UTC
Michael,
Post by Michael Wollin ***@mercurysw.com [SCRUMDEVELOPMENT]
In general, the majority of “senior developers” at the majority of companies do not see the value of unit testing or TDD, refactoring and emergent design. They see it as extra work, or at least, a disruptive (and thus something to resist) change to what they’ve been doing all their careers.
Is their code free of defects? Then they do not need testing. Does it have defects? How and when do they find this out?

Do they fix the defects on their own time, having given us code that didn’t work while they were presumably getting paid? Probably not: they get paid when they write code that doesn’t work, and paid again when they fix it. This is, putting the best possible face on it, very wasteful.

Is some other agency testing the developers’ code and reporting some time later that it didn’t work? Where is the developers’ sense of pride if they are so doggone “senior”? Shouldn’t a senior developer be able to write code that works?

Of course if the code is free of defects, all is well. If, as I suspect, it is not in any sense free of defects, I’d like to see these “senior” developers challenged to solve the defect problem. They might find testing useful for that: I know that I would.
Post by Michael Wollin ***@mercurysw.com [SCRUMDEVELOPMENT]
It’s another topic (and a worthy one) to ask how to shift this with the teams. [I don’t want to take us off topic. I debated whether to leave this last off topic part in or take it out.]
I suggest that management should measure what they want. Presumably, they want features — done software.

A bug fix is not something we want. It is rework. It is waste. Furthermore, the code that is being fixed was not done, was not a feature, was not what we want.

Team gets five things “done”. They get “credit” (in no strict sense) for those five. But two have defects. Oops, team only got three things done. Bad. Frown at them.

Ask them what their plan is for making those defects, that waste, stop.

Ron Jeffries
ronjeffries.com <http://ronjeffries.com/>
I'm really pissed off by what people are passing off as "agile" these days.
You may have a red car, but that does not make it a Ferrari.
-- Steve Hayes
Adam Sroka adam.sroka@gmail.com [SCRUMDEVELOPMENT]
2015-04-03 03:13:15 UTC
Post by Michael Wollin ***@mercurysw.com [SCRUMDEVELOPMENT]
The teams are not in the practice of TDD. Personally, I prefer to take a
coaching stance here, which has me be in the inquiry, “What is in the way
of the teams wanted to do this? What is the lid on craftsmanship as a
celebrated value?” My hope with that approach is to solve the root problem.
So instead of mandating a best practice, the teams are hungry to do them.
This would thus generalize the problem way beyond unit testing and TDD to
values and culture, and fostering a culture of excellence.
It's perfectly within the purview of a high-level consultant, "coach" or
otherwise, to suggest that something will work better than something else,
or to suggest that something won't work at all.

IMO, if the team can self-organize effectively they probably don't need a
coach, and if they can't, some level of "this is better than what you are
doing" is precisely why you are there.
Post by Michael Wollin ***@mercurysw.com [SCRUMDEVELOPMENT]
In general, the majority of “senior developers” at the majority of
companies do not see the value of unit testing or TDD, refactoring and
emergent design. They see it as extra work, or at least, a disruptive (and
thus something to resist) change to what they’ve been doing all their
careers. It’s another topic (and a worthy one) to ask how to shift this
with the teams. [I don’t want to take us off topic. I debated whether to
leave this last off topic part in or take it out.]
Fear of change underlies most resistance. That fear is valid and worth
examining. However, people are going to feed you all kinds of BS excuses in
order to avoid examining it. It's best to move past that swiftly and get on
with trying new things.
Eric Gunnerson Eric.Gunnerson@microsoft.com [SCRUMDEVELOPMENT]
2015-04-02 17:09:48 UTC
This is a pretty common opinion in my neck of the woods, and we’ve had a number of discussions about it.

First off, I’m presuming that you are asserting that the team should adopt TDD globally. If I’m off the mark, what follows may not make sense.

I don’t think that your approach is likely to be effective in this situation, nor do I think it’s the right approach. Having a team adopt TDD or unit testing across their whole codebase is a big change, a disruptive change, and dev managers typically (and rightly) try to protect their team from big disruptive changes.

From the dev manager’s perspective, you are proposing that the team invest in this new way of working. The team members will need to learn how to write good unit tests, they will need to learn how to break dependencies, and they will probably need to do some refactoring along the way. All of that is sunk cost, and there will almost certainly be some disruption/confusion cost as well. The change will slow the team down and they will produce less business value. You are asserting that, over time, the impact of these changes will increase the velocity of the team and pay back the investment. For the team to try this, the dev manager not only needs to agree with your estimation; he may also need to make a compelling argument up the chain to explain why the team will be producing less.

Articles will not help in this situation. I have no idea what the politics are in this company, but in some companies, if this went badly, the dev manager could be choosing to spend more time with his family, if you know what I mean.

What you need is something that is cheap, low-risk, and quick. Something the dev manager can agree to without it being a big deal. Something that can set up a pattern of small incremental experiments for the team.

In all the integration-tested codebases I have worked in, there has been at least one bug farm, an area of the code that is hard to get right. It’s typically something that is hard to get to through integration tests. Find one where it’s straightforward to do so, write solid unit tests for that area, and close the bug farm. You have now fixed an issue that was slowing the team down. You have a small success that you can build on.
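As one hedged sketch of what closing such a bug farm might look like (the proration rule and names below are hypothetical, not from Eric's product): a handful of unit-level cases covers the boundaries that would be tedious to reach one at a time through integration tests.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ProrationBugFarmTest {

        // Hypothetical hard-to-get-right routine: prorate a monthly fee by days used,
        // rounding half-up on the final cent.
        static long prorateCents(long monthlyFeeCents, int daysUsed, int daysInMonth) {
            return (monthlyFeeCents * daysUsed + daysInMonth / 2) / daysInMonth;
        }

        @Test
        public void coversTheBoundariesInOneFastTest() {
            assertEquals("full month", 999, prorateCents(999, 30, 30));
            assertEquals("zero days used", 0, prorateCents(999, 0, 30));
            assertEquals("half a month rounds half-up", 500, prorateCents(999, 15, 30));
            assertEquals("28-day February is still a full month", 999, prorateCents(999, 28, 28));
        }
    }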

Not only is this more likely to work with the dev manager, it’s just a better approach. I have been in a group (4 teams, 30-some people) that tried to add unit tests to an integration-tested product, and we tried the “you shall write unit tests” approach. It’s easy to underestimate how hard it can be to break dependencies in code that has no unit tests even if you are pretty good at it; we had cases where it took a developer hours to be able to write a test for some small change, and – despite being committed to unit testing – the developer found it very frustrating. The return for the time spent just wasn’t there, we weren’t making progress, and it wasn’t good for morale.

We settled on what we called an “islands of goodness” approach. We looked for opportunities where making part of the codebase bright and shiny was a small investment, and applied TDD/added unit tests as appropriate when working in that area. Over time, we got better at doing this, the islands got closer together, and the code got better.

Hope that helps

Eric

George Dinwiddie lists@idiacomputing.com [SCRUMDEVELOPMENT]
2015-04-02 17:30:27 UTC
Michael,

Another value of unit tests that I haven't seen mentioned is the ability
to change a piece of code and know I haven't violated any
assumptions previously made about that code.

Another value is being able to test the behavior in the face of error
conditions that I don't know how to create at the whole system level.

Another value is that, by testing at the unit level, I don't have the
combinatorial explosion of tests that checking all possible combinations
of inputs and internal states through system tests produces.
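A small sketch of that second value (JUnit 4, hypothetical names): a test double can make the dependency fail on demand, which takes one line here but is genuinely hard to provoke against a deployed system.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    import java.io.IOException;

    public class PriceLookupFallbackTest {

        // Hypothetical dependency that would normally call a remote service.
        interface PriceFeed {
            long currentPriceCents(String sku) throws IOException;
        }

        // Hypothetical unit under test: falls back to a cached price when the feed fails.
        static class PriceLookup {
            private final PriceFeed feed;
            private final long cachedCents;

            PriceLookup(PriceFeed feed, long cachedCents) {
                this.feed = feed;
                this.cachedCents = cachedCents;
            }

            long priceCents(String sku) {
                try {
                    return feed.currentPriceCents(sku);
                } catch (IOException e) {
                    return cachedCents;
                }
            }
        }

        @Test
        public void fallsBackToTheCachedPriceWhenTheFeedIsDown() {
            // Forcing the failure is one line here; at the whole-system level it would mean
            // taking a real service down at just the right moment.
            PriceFeed brokenFeed = sku -> { throw new IOException("connection refused"); };

            assertEquals(1299, new PriceLookup(brokenFeed, 1299).priceCents("SKU-1"));
        }
    }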

- George
--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------
Adam Sroka adam.sroka@gmail.com [SCRUMDEVELOPMENT]
2015-04-02 18:44:49 UTC
I would start with this:
http://www.infoq.com/presentations/integration-tests-scam

The guys who came up with TDD (as we know it today) were pretty smart. They
started with a rule that they would test everything that could possibly
break. Then they figured out that automating that often involved
reconsidering design choices they had already made and possibly breaking
other things. Somewhere along the line someone figured out that if you
wrote the test first the code that passed the test couldn't possibly be
untestable. Somewhere along the line they also figured out that if you do
this consistently you can refactor with ease because everything in the
system has a test.

I am going to make a couple of assertions that your client manager probably
wouldn't like, but they are accurate based on my experience:

1) If you write *unit* tests after you write the implementation you will
not test things that someone performing TDD would. No matter how smart or
great you are at unit testing this is universally true. Even if you are
using TDD you likely won't remember to test everything, but you will
*always* end up testing more than if you did it another way.

2) If you write "system" tests you won't run them as often (or at all,
locally). This delays feedback. Delaying feedback is bad for all sorts of
reasons, not the least of which are that it slows overall development and
greatly increases the chances that the feedback will not be properly
addressed.

3) "System" tests are not informative. That is: they don't fail fast enough
and they don't tell you exactly what went wrong. Fixing a regression found
by "system" tests is more like debugging a production defect than making a
simple code change revealed by a failing unit test. It can take an
indeterminate amount of investigation and experimentation to find out what
actually needs to be fixed, and this often takes longer than the actual
fix.
George Dinwiddie lists@idiacomputing.com [SCRUMDEVELOPMENT]
2015-04-02 20:13:57 UTC
Another good article by JB is
http://www.jbrains.ca/permalink/how-test-driven-development-works-and-more
--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------
Giovanni Asproni aspro@acm.org [SCRUMDEVELOPMENT]
2015-04-03 09:05:32 UTC
Hi Michael,

I wrote a blog post some time ago about the problems with system
testing: http://asprotunity.com/blog/system-tests-can-kill-your-project/
It's based on my experience working on quite a few systems, both as a
contract developer and as a consultant.

In short, almost all legacy systems I've worked on had system tests, but
almost none of them had unit or even component tests. Despite the
presence of system tests, those systems were extremely difficult to
evolve and manage. The main reason was not the speed of the tests
(although that is important as well), but their coarse-grained nature,
which made it very difficult to use them for testing method-, class- and
component-level changes (not to mention systems developed by
distributed teams).

I hope you will find it useful.

Regards,
Giovanni
--
Asprotunity Limited
http://asprotunity.com
Linkedin: http://www.linkedin.com/in/gasproni
Twitter: @gasproni
Skype: giovanniasproni
Mobile: +44 (0) 791 746 0453
firepoet78@yahoo.com [SCRUMDEVELOPMENT]
2015-04-03 12:13:10 UTC
Post by Ron Jeffries ***@acm.org [SCRUMDEVELOPMENT]
Ask them what their plan is for making those defects, that waste, stop.
This. So much this.


Stephen Starkey, ACC, CSP, CSM, CHP
empathy | connectedness | restorative | relator | belief
coreagile.co/coaching <http://coreagile.co/coaching>