Discussion:
[SCRUMDEVELOPMENT] Stumped by 3rd party testing
'Richard Griffiths' richard@oneill-griffiths.net [SCRUMDEVELOPMENT]
2015-04-03 10:25:01 UTC
Permalink
I was asked a question yesterday and I’m stuck for an approach.



It’s not related to my workplace, where we have teams with a good mix of development, testing and design skills; I was asked this by an ex-co-worker who is looking to improve current work practices at his new place.



They are looking at scrum as an option.



So my first question was to ask what problem they are trying to solve, and then we could see what suits. I saw this as a coaching opportunity and a valuable learning experience for me, even if it’s just a conversation over a few pints.



So I started to dig deeper. Most of the questions revolved around scrum, user stories, acceptance criteria, definition of done, and the need to improve time to market. Ok so far.



Then we came to team structure. Through discussion it was mentioned that they have a 3rd party test team that they pass everything over to towards the end of an iteration and then they fix the issues in the next iteration.



Smelly alarm bells and lots of spinning cogs.



So some points I’d need to consider are whether it’s about throwing code over the wall, getting them to look at the perceived cost saving, and asking how this 3rd party actually works. How do they communicate which stories they are working on, how do they test, and how are things reported back?



Then thinking about this a bit more, the only way I can see this working is if the 3rd party can provide an agile testing service and be actively involved in that iteration; be it planning, any daily stand-ups, looking at remote pairing, and generally being in the loop. Now that could be done via skype, hangouts or webex/join.me. Looking at developing joint ownership would be key.



I don’t see scrum working for them otherwise, unless anyone has experience to the contrary? If not, there are some practices that might help them improve, such as TDD, CI, and looking at their refactoring approach, if one exists.



I’m just seeing lots of mini-waterfalls/shorter iterations.



Thanks



Richard
Ron Jeffries ronjeffries@acm.org [SCRUMDEVELOPMENT]
2015-04-03 10:56:34 UTC
Permalink
Hi Richard,
Then thinking about this a bit more, the only way I can see this working is if the 3rd party can provide an agile testing service and be actively involved in that iteration; be it planning, any daily stand-ups, looking at remote pairing, and generally being in the loop. Now that could be done via skype, hangouts or webex/join.me. Looking at developing joint ownership would be key.
I don’t see scrum working for them otherwise, unless anyone has any experience? If not, there are some practices that might help them improve such as TDD, CI, and looking at their refactoring approach if it exists.
You’re on the right track, I believe. I think about it this way:

If the 3rd party testing service is finding any defects that are worth fixing, the Scrum Product Owner must decide when to fix them, because the Product Owner is the only source of work for the team to do.

However, since the team is doing Scrum, all the Backlog Items that the 3rd party testers find bugs in were reported as “done” by the team. But Lo! they were not done after all. This is bad. Fixing the bugs is rework, and we all know that rework is waste.

Furthermore, having to schedule these items to be done over — which is what fixing a bug amounts to — makes it difficult for the Product Owner to deliver the best possible product by the desired date, because she never knows how much is actually done. This is bad.

Clearly the “problem” is that these items were not sufficiently tested during the Sprint. Scrum clearly states that the team is supposed to deliver a tested, integrated product increment at the end of every Sprint. If testing is not done inside the Sprint, this is just impossible.

Testing must be done inside the Sprint if the team is to deliver a suitable “done” increment. How do you do this? Scrum says that the Dev Team must have all the skills necessary to deliver the increment. The increment must be tested.

Therefore either the team must bring in these 3rd party people, as you suggest, or they must test inside the Sprint on their own, ensuring that the 3rd party people never find any bugs and that they starve.


The team might object, saying that they would go slower if they have to test the code themselves. This is not likely the case, unless they are somehow already magically not shipping bugs to the 3rd party folks. Otherwise, they have all their bug-fixing time (how much is that? 30%? Half?) available to prevent bugs instead of fixing them. This will quickly pay off: it takes less time to prevent a defect from escaping by finding it in-Sprint than it does to fix it later.
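A rough illustration of that arithmetic, with invented numbers (both percentages below are assumptions, not measurements):

    # Illustrative capacity arithmetic -- every figure here is an assumption.
    rework_share = 0.30      # share of each Sprint spent fixing escaped bugs (assumed)
    prevention_share = 0.15  # share needed to catch the same problems in-Sprint (assumed)

    freed_capacity = rework_share - prevention_share
    print("Capacity freed per Sprint: {:.0%}".format(freed_capacity))  # 15% with these numbers

With made-up numbers like those, the team ends up faster, not slower, and the Product Owner gets increments that are actually done.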

And, of course, you’re on the right track wanting to look at TDD and CI, whether they think they’re doing Scrum or not. I think they should still work in a one- or two-week cycle, shipping integrated increments to the 3p people.


The real bug, of course, is the whole notion of code-then-test, which is an organizational issue. Scrum identifies it (in this case YOU identified it, thinking about Scrum :) and the fix is “don’t do it that way”. The same discovery could be made by examining the workflow with Kanban, or drawing the value stream a la Lean. No matter how you look at it, building broken things and fixing them is much slower than building things that work.


Good luck!


Ron Jeffries
ronjeffries.com <http://ronjeffries.com/>
Sometimes I give myself admirable advice, but I am incapable of taking it.
-- Mary Wortley Montagu
'Richard Griffiths' richard@oneill-griffiths.net [SCRUMDEVELOPMENT]
2015-04-03 17:39:50 UTC
Permalink
Ron



Thanks for your excellent response.
Post by Ron Jeffries ***@acm.org [SCRUMDEVELOPMENT]
Testing must be done inside the Sprint if the team is to deliver a suitable “done” increment.
That’s the immediate problem that stands out for me. They’re never really done. It would be more honest for them to say that each iteration consists of fixing the work from the last one and then doing new development work. I need to talk to them in greater detail to determine how much testing they actually do, or whether they’re just throwing it over the wall.
Post by Ron Jeffries ***@acm.org [SCRUMDEVELOPMENT]
I think they should still work in a one- or two-week cycle, shipping integrated increments to the 3p people
I can see that reducing the cycle time would help in some way, but one of the other things to consider, again through more conversation, is how they work in detail.



It’s just been an informal chat so far.



I appreciate the comments.



Richard
'Steve Ash' steve@ootac.com [SCRUMDEVELOPMENT]
2015-04-03 10:59:20 UTC
Permalink
Hi Richard



You are correct – they need to weigh the perceived cost savings of working with a third party against the benefits of doing ‘integrated’ testing during the Sprint.



Again, you are correct that the only way you could call this Agile (let alone Scrum) is if the test team are fully integrated with the dev team; devs doing TDD and CI may not even need the test team!



What sort of tests are the 3rd party team doing? Do they have some sort of expensive test-suite?



Hope that helps



Steve

'Richard Griffiths' richard@oneill-griffiths.net [SCRUMDEVELOPMENT]
2015-04-03 17:39:48 UTC
Permalink
Steve
Post by 'Steve Ash' ***@ootac.com [SCRUMDEVELOPMENT]
What sort of tests are the 3rd party team doing? Do they have some sort of expensive test-suite?
That’s one of the questions to ask next; it’s all been pretty informal so far.



What does the team do, or not do, that requires a 3rd party testing team?



Thanks for your thoughts on this.



Richard

Tirrell Payton tpayton@payton-consulting.com [SCRUMDEVELOPMENT]
2015-04-03 11:03:28 UTC
Permalink
Hi Richard,


For companies where everyone is in the same location, the easiest way to get
them to stop doing this is to quantify the cost of delay and the feedback
loop for:
tossing it over the wall -> wait -> get feedback -> fix -> repeat
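A back-of-the-envelope sketch of that quantification; every figure below is a placeholder to be replaced with the team's real numbers:

    # Cost-of-delay sketch for the hand-off loop:
    #   toss over the wall -> wait -> get feedback -> fix -> repeat
    # All durations are assumed placeholders, not real data.
    wait_for_pickup_days = 5      # code sits until the 3rd party picks it up (assumed)
    external_test_days = 5        # time the 3rd party needs to test and report (assumed)
    fix_next_iteration_days = 10  # defects queue until the next iteration (assumed)
    retest_days = 5               # fixes go back over the wall for re-test (assumed)

    handoff_loop_days = (wait_for_pickup_days + external_test_days
                         + fix_next_iteration_days + retest_days)
    in_sprint_feedback_days = 1   # CI plus in-Sprint testing feedback (assumed)

    print("Hand-off feedback loop: {} days".format(handoff_loop_days))
    print("In-Sprint feedback:     {} day(s)".format(in_sprint_feedback_days))
    print("Extra delay per loop:   {} days".format(handoff_loop_days - in_sprint_feedback_days))

Multiply that delay by the number of loops per release and the "cheaper" external testing usually stops looking cheap.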


For other companies, it's a hard constraint, particularly when the company
has retained the services of a 3rd party/outsourced testing provider.
In that case, constraints being what they are, there are some steps this
company can take to become *more* agile, but they won't get the same result
as an organization that removes these kinds of constraints.
In particular:
*- Decrease Handoffs*
A headache associated with geographical distribution is meeting times. A
solution is to alternate the time of the team’s daily standup every
sprint. One sprint use a time that is convenient to the onshore team (and
let offshore suffer late nights/early mornings). The next sprint use a
time that is convenient to the offshore team (and let the onshore team
suffer late nights/early mornings).


*- Create Clear Separation of Testing Concerns*
An alternative is to create a clear separation of testing concerns. Engage
onshore developers for verification testing via automated unit tests (using
an xUnit framework). Engage offshore testers for validation testing (e.g.,
performance/integration/exploratory testing). This eliminates waiting for
initial verification testing while still ensuring that the deeper validation
testing happens, using your offshore testing experts (a minimal sketch of
such a developer-owned test follows below).


*- Enforce Modern Engineering Practices*
Companies tend to focus on "process"-based solutions when there are many
technical solutions that enable you to get the most from your offshore
partnership. Two crucial engineering practices are continuous integration
and automated unit testing.


These and other methods serve to eliminate the biggest bottleneck: Waiting
for the feedback loop to come back to you.
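To make the verification side concrete, here is a minimal sketch of the kind of developer-owned unit test meant above, using Python's unittest as the xUnit example; the function under test, apply_discount, is purely hypothetical:

    # Minimal xUnit-style verification test owned by the onshore developers.
    # The production function, apply_discount, is hypothetical -- substitute
    # whatever the team's stories actually cover.
    import unittest


    def apply_discount(price, percent):
        """Return price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)


    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.00, 25), 150.00)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.00, 150)


    if __name__ == "__main__":
        unittest.main()  # run on every commit in CI, inside the Sprint

Tests like these run in minutes on every commit, so verification feedback never has to wait for the hand-off, and the offshore team stays focused on the deeper validation work.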




For more:
http://www.payton-consulting.com/best-practices-for-agile-and-outsourced-qa-testing/


- Tirrell Payton
FREE Visual Guide to Scrum:
http://www.payton-consulting.com/free-book-a-visual-guide-to-scrum/
@tirrellpayton
http://www.linkedin.com/in/tirrellpayton


'Richard Griffiths' richard@oneill-griffiths.net [SCRUMDEVELOPMENT]
2015-04-03 17:39:47 UTC
Permalink
Tirrell



http://www.payton-consulting.com/best-practices-for-agile-and-outsourced-qa-testing/



Thanks for the link.



As soon as I heard 3rd party, I was concerned. I know they’ve looked at doing scrum, but I need to really ask what the problem is and then see what the dev team are doing that needs a 3rd party.



I’ve seen the approach you mentioned, splitting verification and validation testing, at one place I worked previously. It was game development. All teams worked on a fixed sprint cycle, with automation being used. Then there was a final testing cycle with a lot of manual testing, given the nature of the game. It worked, and we had at least 5 releases a year. We knew it wasn’t the best, but given the complex dynamics of the front end, manual testing was more cost effective.



I’ve also seen the internal QA role work, especially when the 3rd party is remote, albeit not on a scrum project.



That’s given me some extra ideas to consider.



Richard
Cass Dalton cassdalton73@gmail.com [SCRUMDEVELOPMENT]
2015-04-03 11:54:09 UTC
Permalink
There is real value in third party verification. Any development team, no
matter how good, will miss something because of the assumptions that get
built up in their heads as they develop. Bringing the 3rd party team into
the dev team is one approach, but it could be very difficult to really
integrate them as part of an agile team.
An alternative would be to consider the test team as a sort of customer.
One of the major problems with the way they are working is that their
process allows them to 1) skimp on the initial definition of done and 2)
let the DoD span multiple sprints and multiple teams. IMHO, the team
should ultimately see that as unacceptable. They should be working hard
not to allow those escaped defects. The team should analyze, in the retro,
every issue that comes out of the 3rd party team to figure out why they
continue to allow issues to escape the team and the sprint.


In either scenario, the test team should be part of the demo. Are they?
What is the interaction between the test team and the PO? Is the PO treating
the test team like stakeholders? Is the PO negotiating with the test team to
determine which of their issues are actually escaped defects and which are
really unimplemented scope? If EVERY issue that the test team finds is
blindly going into the next sprint, then the PO is not getting a chance to
differentiate bugs from unimplemented features, and that keeps the PO from
being able to prioritize work effectively.


'Richard Griffiths' richard@oneill-griffiths.net [SCRUMDEVELOPMENT]
2015-04-03 17:39:43 UTC
Permalink
Cass
Post by Cass Dalton ***@gmail.com [SCRUMDEVELOPMENT]
but it could be very difficult to really integrate them as part of an agile team.
True. It’s early days; this was just a conversation with someone who experienced the way we did things and wanted to see if it was possible to adopt scrum. As I’ve mentioned earlier, there are many more questions to ask and a lot more detail to consider.



I agree that the 3rd party needs to be in the loop from the beginning, as I’m unsure how the stories, ACs and tests are developed and handed over.



I’d need more detail on the number of defects and whether they are prioritized in any way or just fixed in the next iteration.



One of the teams I work with has a zero-tolerance policy on defects. They don’t raise a bug; they just discuss the issue with the PO and fix it if he thinks it’s needed. The other is gradually reducing the number of defects found, but logs them to alleviate the timezone and communication issues that come with their location. Both teams do have a conversation with the PO, which boils down to “do we need to fix this?”. More often than not they are edge cases and they’re closed off. When they do need to be fixed, they’re prioritized like any other work.



As for the people I’m talking to, I’ve just got a lot more to ask.



Thanks. I appreciate your thoughts.



Richard