How to deal with or prevent idle in the test team?

I'm currently in two Scrum projects. The team consists of 9 people (5 developers, 3 testers). We work with user stories, story-point estimates and two-week sprints. The team has had a great deal of Scrum training and reliably delivers finished (code + tests + documentation) software. The test automation is up to date and the CI pipeline passes every day. However, this does not generate enough tasks for the test team.



Nevertheless, we have the following problem:



At the beginning of each sprint it takes a while until the first testable user stories are completed. As a result, the testers are idle because there is nothing to test yet.



Countermeasures we have already taken (which have not solved the problem):




  • Testers start test preparation for all stories

  • Testers support the developers' work where possible

  • An "open issues" list maintained by the team for anyone who has nothing to do

  • Know-how transfer among the testers



Nevertheless, we have not gotten the problem under control so far.
Question: Has anyone had similar experiences? What can you do about it?
I am grateful for all suggestions!
manual-testing test-management team-management management scrum






asked Mar 25 at 16:57









Mornon

  • Are all developers full time on the project? Are all testers full time on the project?

    – user3067860
    Mar 26 at 1:39

  • Yes, all developers and all testers are fully involved in the project.

    – Mornon
    Mar 26 at 4:53

  • I would suggest you write a blog based on the best practices mentioned in the answers below and on the practices you have adopted. You have some idle time; such a document can be used by your future teams and boosts the productivity of fellow testers too.

    – Ja8zyjits
    Mar 27 at 8:11
6 Answers

"there is nothing to test"




That is a strong statement.



I like to use James Bach's definition of testing:




Testing is the process of evaluating a product by learning about it
through exploration and experimentation, which includes to some
degree: questioning, study, modeling, observation, inference, etc.




So, unless there is nothing new to learn about the product, yes, you don't have anything to test.



However, there may still be new things you can learn. If you do some of the following, you may uncover them:




  • Pair programming (yes, with the developer);

  • Investigate the results of your monitoring and logging instrumentation (a small log-mining sketch follows below);

  • Extend your monitoring and logging instrumentation;

  • Create chaos in your environments;

  • Refine backlog to remove duplication and increase simplicity;

  • Watch users (control groups or real ones) using your product;


  • Investigate competitors' systems;

  • Refactor any piece of code (production code, automated checking code, deployment code, etc).


These activities may put a tester in new places, expanding their understanding of the product.
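
As one concrete way to act on the monitoring-and-logging bullet, a tester can spend otherwise idle time mining the application logs for patterns worth a closer look. The sketch below is only an illustration: the log file name and the line format are hypothetical placeholders, so adapt the parsing to whatever your instrumentation actually emits.

    # log_survey.py - minimal sketch for mining application logs during idle time.
    # Hypothetical assumptions: a plain-text file "app.log" whose lines look like
    #   2019-03-25 10:15:02 ERROR /api/orders timeout after 30s
    import re
    from collections import Counter

    LOG_FILE = "app.log"                               # hypothetical path
    PATTERN = re.compile(r"\b(ERROR|WARN)\b\s+(\S+)")  # log level + endpoint

    def survey(path):
        counts = Counter()
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                match = PATTERN.search(line)
                if match:
                    counts[match.groups()] += 1   # count (level, endpoint) pairs
        return counts

    if __name__ == "__main__":
        for (level, endpoint), count in survey(LOG_FILE).most_common(10):
            print(f"{count:5d}  {level:5s}  {endpoint}")

The ten noisiest (level, endpoint) pairs are often a good starting point for new exploratory test charters.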






answered Mar 25 at 18:13 by João Farias
  • Most of the points have been fully implemented in the past few weeks. We are currently setting up penetration testing on a CI basis for our projects. Further training for the team is also on the agenda.

    – Mornon
    Mar 25 at 18:48



















In addition to some of the other suggestions, you could consider a few other options:




  • Build/run load tests for new/recent work (given the maturity of your test automation you may already have this under control; a minimal sketch follows below)

  • Review existing test automation for obsolete or ineffective tests (You have no idea how much I wish I could reach this point)

  • Review and refactor existing test automation code. In any rapid development environment, automated test code can get dated quite quickly.

  • Review and update older customer documentation. In my experience this can become out of date fairly rapidly if development is quick.

  • Review other documentation to make sure it's up to date. This can include (but is not limited to) use cases, business requirements, database dictionaries, functional requirements, test documentation...

  • Work with product owners to refine any stories in the backlog - or just go in there and review them and ask questions anyway. Testers typically have a unique combination of breadth and depth with a product they're familiar with and can often pick up potentially problematic changes before they go to code.

  • Review the user stories and defects in the current sprint and start planning how to test them. If there's configuration that's needed, it can save a lot of time to set up as much of the configuration as possible before the story/defect is coded.


A lot of these are things I've done when I found myself in a holding pattern.
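
To make the load-test bullet concrete: the sketch below is a deliberately tiny stand-in for a real tool such as JMeter or Locust. The endpoint URL, request count and concurrency are hypothetical placeholders; the point is just the shape of the exercise (fire N concurrent requests, collect latencies, summarise).

    # tiny_load_check.py - minimal load-test sketch (illustration only).
    # Hypothetical assumption: the service under test exposes http://localhost:8080/health.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"   # hypothetical endpoint
    REQUESTS = 200
    CONCURRENCY = 20

    def one_call(_):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            latencies = sorted(pool.map(one_call, range(REQUESTS)))
        print(f"requests: {len(latencies)}")
        print(f"median:   {statistics.median(latencies):.3f}s")
        print(f"p95:      {latencies[int(len(latencies) * 0.95)]:.3f}s")

Even a crude baseline like this makes response-time regressions visible from one sprint to the next.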






answered Mar 25 at 18:48 by Kate Paulk
If you have a set of regression tests, testers can start automating them, beginning with the ones that are easiest to automate. This will save you a lot of time in the long run during regression testing. Of course, this requires some programming skills, and if the testers do not have those yet, this is a great time for them to learn and apply them by automating the tests. This is a win-win for both the testers and the team, as the testers are adding a new skill to their personal tool-set which will eventually benefit the business too.
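
Since the comments below mention that the testers have just been trained in Selenium, a first automated regression case could look roughly like the sketch below. Everything specific in it (the URL, the element IDs, the expected text) is a hypothetical placeholder; the point is how small the first automated check can be.

    # test_login_smoke.py - minimal Selenium regression check (sketch, not the team's actual suite).
    # Hypothetical assumptions: a login page at http://localhost:8080/login with "username",
    # "password" and "login" elements, and a dashboard heading with id "welcome".
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_valid_login_shows_dashboard():
        driver = webdriver.Chrome()        # needs a matching chromedriver on PATH
        try:
            driver.get("http://localhost:8080/login")
            driver.find_element(By.ID, "username").send_keys("demo-user")
            driver.find_element(By.ID, "password").send_keys("demo-pass")
            driver.find_element(By.ID, "login").click()
            assert "Welcome" in driver.find_element(By.ID, "welcome").text
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_valid_login_shows_dashboard()
        print("login smoke check passed")

Starting with the easiest end-to-end paths keeps the learning curve flat while still chipping away at the manual regression suite.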






answered Mar 25 at 17:07 by Baljeet Singh
  • The test automation is up to date and the CI runs every day, but this does not generate enough tasks for the test team. We have also offered training courses, and the testers have just been trained in Selenium.

    – Mornon
    Mar 25 at 17:09

  • We are already adding penetration tests to our CI, and we are training our testers in security.

    – Mornon
    Mar 25 at 17:11

  • @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low-priority development tasks.

    – Baljeet Singh
    Mar 25 at 17:32



















Trying to add to the great suggestions here...
How is the test team utilized towards the end of a sprint?
Do they become the limiting factor then, i.e. are developers waiting for feedback from testers?

A little bit of spare capacity is good to have (something like 15-20%), though it rarely becomes as visible as in your case.
It's needed for keeping response times low; put simply, without spare capacity you get queues and thus delays.
In your case, I assume you could make good use of some of that capacity towards the end of a sprint... if only you were able to move it there, right?



First suggestion: do one-week sprints.
As a result, you should get some user stories whose implementation will span two one-week sprints.
Some of these will become "ready to test" at the beginning of a sprint, thus using some capacity.
Other user stories may also get "cancelled" from the weekly release (or whatever you call the working software resulting from a sprint), simply because they still have bugs that you weren't able to fix in time.
(Use feature flags to allow last-minute decisions about what gets released; a tiny sketch follows below.)
These will also become "ready to test" at the beginning of the next sprint.
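
As a tiny illustration of the feature-flag idea: the release decision is reduced to flipping a configuration value, so a story that is not fully tested yet simply ships dark. The flag name and the configuration source below are hypothetical; a real team would more likely use a configuration service.

    # feature_flags.py - minimal feature-flag sketch (illustration only).
    # Hypothetical assumption: flags come from environment variables such as FEATURE_NEW_CHECKOUT=on.
    import os

    def flag_enabled(name: str, default: bool = False) -> bool:
        value = os.getenv(f"FEATURE_{name.upper()}", "")
        return value.lower() in ("1", "true", "on", "yes") if value else default

    def new_checkout_flow(cart):
        return f"new flow for {len(cart)} items"      # story still under test

    def legacy_checkout_flow(cart):
        return f"legacy flow for {len(cart)} items"   # released behaviour

    def checkout(cart):
        # The untested path stays dark until the flag is flipped at release time.
        if flag_enabled("NEW_CHECKOUT"):
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

    if __name__ == "__main__":
        print(checkout(["book", "pen"]))   # legacy unless FEATURE_NEW_CHECKOUT is set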



Second suggestion: use Kanban instead of Scrum (there are hybrids of Kanban and Scrum as well).
With a Kanban system you'll get user stories arriving in the "Ready to Test" column continuously.
Then you can still collect a bunch of user stories in "Ready to Release" (i.e. development done, testing done) and make a weekly or biweekly release from those.

Finally, whenever that team composition (5 developers, 3 testers) was decided, you probably had a greater need for testing capacity than you have now.
With all your great automation and the other improvements, you may be able to put one tester into another team.

Whatever you do, be ready to accept some "idle time" -- think of it as providing tester availability for faster feedback.
If you reduce testing capacity too much, testing will become the bottleneck and slow down the whole development cycle.






answered Mar 27 at 9:10 by Matthias Berth
Since you mentioned it's Scrum: it is actually the Scrum Master's role to help shape the backlog into deliverable-level pieces built from the tiny user stories, so that testers can start preparing their test cases at that deliverable level.

For example, if a microservice cannot be tested on its own, it should not be included alone in the sprint; the connecting module (frontend or API) should be included with it. Speak to your Scrum Master and ask for that level of slicing -- the Product Owner needs to write the user stories in this manner.






edited Mar 27 at 17:41, answered Mar 25 at 19:02 by Manu
How to deal with or prevent idle in the test team?

First, focus on the overall value of quality assurance - the value added to the organization from the quality of the product, the quality of the testing, the number of defects recently found, the number of users affected, etc.

If your focus becomes short-term, such as "people look idle", you can very easily slip into a lot of bad practices: equating output with the number of hours worked (on a larger scale it doesn't hold), micromanaging, people living in fear, etc. This does not inspire quality work.

I think you have a good question, but I would suggest you consider phrasing and titling it as:

What should quality professionals do while waiting for a new release to test?

And remove that mention of "idle". I know it may seem like idle time from one perspective (testing a release right now), but when you follow the advice of the many answers here you'll see it need not be idle at all; in fact it can be some of the most productive and value-adding time for the quality professionals, who finally have time to do "all that other stuff" like:




  • Time off. We all need that.

  • Technical education. In this field it should be lifelong and constant. Every available moment not spent testing a release, writing automation, or on vacation should go into it.

  • Social events. When done appropriately (often low-key) these strengthen connections.

  • Domain education. Spend time with users and/or their representatives to better understand how they use the application in the real world.

  • Refactoring. The highlight of the day for many automation engineers.

  • Reducing technical debt. Who doesn't have that?






          Your Answer








          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "244"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsqa.stackexchange.com%2fquestions%2f38429%2fhow-to-deal-with-or-prevent-idle-in-the-test-team%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          6 Answers
          6






          active

          oldest

          votes








          6 Answers
          6






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          16















          "there is nothing to test"




          That is a strong statement.



          I like to use James Bach's definition of testing:




          Testing is the process of evaluating a product by learning about it
          through exploration and experimentation, which includes to some
          degree: questioning, study, modeling, observation, inference, etc.




          So, unless there is nothing new to learn about the product, yes, you don't have anything to test.



          However, there may be some new things that you can learn. Maybe if you do some of the following, you may uncover them:




          • Pair programming (yes, with the developer);

          • Investigate results of your monitoring and logging instrumentation;

          • Extend your monitoring and logging instrumentation;

          • Create chaos in your environments;

          • Refine backlog to remove duplication and increase simplicity;

          • Watch users (control groups or real ones) using your product;


          • Investigate competitors systems;

          • Refactor any piece of code (production code, automated checking code, deployment code, etc).


          These activities may put a tester in new places, expanding his/hers understanding of the product.






          share|improve this answer
























          • Most of the points have been completely implemented in the past few weeks. We are currently setting up Penetration Testing on a CI basis for our projects. Further education in the team are also on the agenda.

            – Mornon
            Mar 25 at 18:48
















          16















          "there is nothing to test"




          That is a strong statement.



          I like to use James Bach's definition of testing:




          Testing is the process of evaluating a product by learning about it
          through exploration and experimentation, which includes to some
          degree: questioning, study, modeling, observation, inference, etc.




          So, unless there is nothing new to learn about the product, yes, you don't have anything to test.



          However, there may be some new things that you can learn. Maybe if you do some of the following, you may uncover them:




          • Pair programming (yes, with the developer);

          • Investigate results of your monitoring and logging instrumentation;

          • Extend your monitoring and logging instrumentation;

          • Create chaos in your environments;

          • Refine backlog to remove duplication and increase simplicity;

          • Watch users (control groups or real ones) using your product;


          • Investigate competitors systems;

          • Refactor any piece of code (production code, automated checking code, deployment code, etc).


          These activities may put a tester in new places, expanding his/hers understanding of the product.






          share|improve this answer
























          • Most of the points have been completely implemented in the past few weeks. We are currently setting up Penetration Testing on a CI basis for our projects. Further education in the team are also on the agenda.

            – Mornon
            Mar 25 at 18:48














          16












          16








          16








          "there is nothing to test"




          That is a strong statement.



          I like to use James Bach's definition of testing:




          Testing is the process of evaluating a product by learning about it
          through exploration and experimentation, which includes to some
          degree: questioning, study, modeling, observation, inference, etc.




          So, unless there is nothing new to learn about the product, yes, you don't have anything to test.



          However, there may be some new things that you can learn. Maybe if you do some of the following, you may uncover them:




          • Pair programming (yes, with the developer);

          • Investigate results of your monitoring and logging instrumentation;

          • Extend your monitoring and logging instrumentation;

          • Create chaos in your environments;

          • Refine backlog to remove duplication and increase simplicity;

          • Watch users (control groups or real ones) using your product;


          • Investigate competitors systems;

          • Refactor any piece of code (production code, automated checking code, deployment code, etc).


          These activities may put a tester in new places, expanding his/hers understanding of the product.






          share|improve this answer














          "there is nothing to test"




          That is a strong statement.



          I like to use James Bach's definition of testing:




          Testing is the process of evaluating a product by learning about it
          through exploration and experimentation, which includes to some
          degree: questioning, study, modeling, observation, inference, etc.




          So, unless there is nothing new to learn about the product, yes, you don't have anything to test.



          However, there may be some new things that you can learn. Maybe if you do some of the following, you may uncover them:




          • Pair programming (yes, with the developer);

          • Investigate results of your monitoring and logging instrumentation;

          • Extend your monitoring and logging instrumentation;

          • Create chaos in your environments;

          • Refine backlog to remove duplication and increase simplicity;

          • Watch users (control groups or real ones) using your product;


          • Investigate competitors systems;

          • Refactor any piece of code (production code, automated checking code, deployment code, etc).


          These activities may put a tester in new places, expanding his/hers understanding of the product.







          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Mar 25 at 18:13









          João FariasJoão Farias

          3,151416




          3,151416













          • Most of the points have been completely implemented in the past few weeks. We are currently setting up Penetration Testing on a CI basis for our projects. Further education in the team are also on the agenda.

            – Mornon
            Mar 25 at 18:48



















          • Most of the points have been completely implemented in the past few weeks. We are currently setting up Penetration Testing on a CI basis for our projects. Further education in the team are also on the agenda.

            – Mornon
            Mar 25 at 18:48

















          Most of the points have been completely implemented in the past few weeks. We are currently setting up Penetration Testing on a CI basis for our projects. Further education in the team are also on the agenda.

          – Mornon
          Mar 25 at 18:48





          Most of the points have been completely implemented in the past few weeks. We are currently setting up Penetration Testing on a CI basis for our projects. Further education in the team are also on the agenda.

          – Mornon
          Mar 25 at 18:48











          9














          In addition to some of the other suggestions, you could consider a few other options:




          • Build/run load tests for new/recent work (given the maturity of your test automation you may already have this under control)

          • Review existing test automation for obsolete or ineffective tests (You have no idea how much I wish I could reach this point)

          • Review and refactor existing test automation code. In any rapid development environment, automated test code can get dated quite quickly.

          • Review and update older customer documentation. In my experience this can become out of date fairly rapidly if development is quick.

          • Review other documentation to make sure it's up to date. This can include (but is not limited to) use cases, business requirements, database dictionaries, functional requirements, test documentation...

          • Work with product owners to refine any stories in the backlog - or just go in there and review them and ask questions anyway. Testers typically have a unique combination of breadth and depth with a product they're familiar with and can often pick up potentially problematic changes before they go to code.

          • Review the user stories and defects in the current sprint and start planning how to test them. If there's configuration that's needed, it can save a lot of time to set up as much of the configuration as possible before the story/defect is coded.


          A lot of these are things I've done when I found myself in a holding pattern.






          share|improve this answer




























            9














            In addition to some of the other suggestions, you could consider a few other options:




            • Build/run load tests for new/recent work (given the maturity of your test automation you may already have this under control)

            • Review existing test automation for obsolete or ineffective tests (You have no idea how much I wish I could reach this point)

            • Review and refactor existing test automation code. In any rapid development environment, automated test code can get dated quite quickly.

            • Review and update older customer documentation. In my experience this can become out of date fairly rapidly if development is quick.

            • Review other documentation to make sure it's up to date. This can include (but is not limited to) use cases, business requirements, database dictionaries, functional requirements, test documentation...

            • Work with product owners to refine any stories in the backlog - or just go in there and review them and ask questions anyway. Testers typically have a unique combination of breadth and depth with a product they're familiar with and can often pick up potentially problematic changes before they go to code.

            • Review the user stories and defects in the current sprint and start planning how to test them. If there's configuration that's needed, it can save a lot of time to set up as much of the configuration as possible before the story/defect is coded.


            A lot of these are things I've done when I found myself in a holding pattern.






            share|improve this answer


























              9












              9








              9







              In addition to some of the other suggestions, you could consider a few other options:




              • Build/run load tests for new/recent work (given the maturity of your test automation you may already have this under control)

              • Review existing test automation for obsolete or ineffective tests (You have no idea how much I wish I could reach this point)

              • Review and refactor existing test automation code. In any rapid development environment, automated test code can get dated quite quickly.

              • Review and update older customer documentation. In my experience this can become out of date fairly rapidly if development is quick.

              • Review other documentation to make sure it's up to date. This can include (but is not limited to) use cases, business requirements, database dictionaries, functional requirements, test documentation...

              • Work with product owners to refine any stories in the backlog - or just go in there and review them and ask questions anyway. Testers typically have a unique combination of breadth and depth with a product they're familiar with and can often pick up potentially problematic changes before they go to code.

              • Review the user stories and defects in the current sprint and start planning how to test them. If there's configuration that's needed, it can save a lot of time to set up as much of the configuration as possible before the story/defect is coded.


              A lot of these are things I've done when I found myself in a holding pattern.






              share|improve this answer













              In addition to some of the other suggestions, you could consider a few other options:




              • Build/run load tests for new/recent work (given the maturity of your test automation you may already have this under control)

              • Review existing test automation for obsolete or ineffective tests (You have no idea how much I wish I could reach this point)

              • Review and refactor existing test automation code. In any rapid development environment, automated test code can get dated quite quickly.

              • Review and update older customer documentation. In my experience this can become out of date fairly rapidly if development is quick.

              • Review other documentation to make sure it's up to date. This can include (but is not limited to) use cases, business requirements, database dictionaries, functional requirements, test documentation...

              • Work with product owners to refine any stories in the backlog - or just go in there and review them and ask questions anyway. Testers typically have a unique combination of breadth and depth with a product they're familiar with and can often pick up potentially problematic changes before they go to code.

              • Review the user stories and defects in the current sprint and start planning how to test them. If there's configuration that's needed, it can save a lot of time to set up as much of the configuration as possible before the story/defect is coded.


              A lot of these are things I've done when I found myself in a holding pattern.







              share|improve this answer












              share|improve this answer



              share|improve this answer










              answered Mar 25 at 18:48









              Kate PaulkKate Paulk

              24.9k64085




              24.9k64085























                  5














                  If you have a set of Regression Tests, testers can start automating them starting with which are easiest to automate. This will save you a lot of time in the long run during Regression Testing. Of course, this requires some programming skills and if the testers do not have those at the moment then this is the great time for them to learn it and apply to automating the tests. This is a win-win for both the testers and the team. As, testers are adding a new skill to their personal tool-set which will eventually benefit the business/company too.






                  share|improve this answer
























                  • The test automation is up to date, and the CI runs through every day. However, this does not result in enough tasks for the test team. We have also offered training courses and so the testers just trained in Selenium.

                    – Mornon
                    Mar 25 at 17:09











                  • We are already implementing our CI on Penetration Test, and we are training our testers in terms of safety.

                    – Mornon
                    Mar 25 at 17:11











                  • @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low priority development tasks.

                    – Baljeet Singh
                    Mar 25 at 17:32
















                  5














                  If you have a set of Regression Tests, testers can start automating them starting with which are easiest to automate. This will save you a lot of time in the long run during Regression Testing. Of course, this requires some programming skills and if the testers do not have those at the moment then this is the great time for them to learn it and apply to automating the tests. This is a win-win for both the testers and the team. As, testers are adding a new skill to their personal tool-set which will eventually benefit the business/company too.






                  share|improve this answer
























                  • The test automation is up to date, and the CI runs through every day. However, this does not result in enough tasks for the test team. We have also offered training courses and so the testers just trained in Selenium.

                    – Mornon
                    Mar 25 at 17:09











                  • We are already implementing our CI on Penetration Test, and we are training our testers in terms of safety.

                    – Mornon
                    Mar 25 at 17:11











                  • @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low priority development tasks.

                    – Baljeet Singh
                    Mar 25 at 17:32














                  5












                  5








                  5







                  If you have a set of Regression Tests, testers can start automating them starting with which are easiest to automate. This will save you a lot of time in the long run during Regression Testing. Of course, this requires some programming skills and if the testers do not have those at the moment then this is the great time for them to learn it and apply to automating the tests. This is a win-win for both the testers and the team. As, testers are adding a new skill to their personal tool-set which will eventually benefit the business/company too.






                  share|improve this answer













                  If you have a set of Regression Tests, testers can start automating them starting with which are easiest to automate. This will save you a lot of time in the long run during Regression Testing. Of course, this requires some programming skills and if the testers do not have those at the moment then this is the great time for them to learn it and apply to automating the tests. This is a win-win for both the testers and the team. As, testers are adding a new skill to their personal tool-set which will eventually benefit the business/company too.







                  share|improve this answer












                  share|improve this answer



                  share|improve this answer










                  answered Mar 25 at 17:07









                  Baljeet SinghBaljeet Singh

                  649




                  649













                  • The test automation is up to date, and the CI runs through every day. However, this does not result in enough tasks for the test team. We have also offered training courses and so the testers just trained in Selenium.

                    – Mornon
                    Mar 25 at 17:09











                  • We are already implementing our CI on Penetration Test, and we are training our testers in terms of safety.

                    – Mornon
                    Mar 25 at 17:11











                  • @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low priority development tasks.

                    – Baljeet Singh
                    Mar 25 at 17:32



















                  • The test automation is up to date, and the CI runs through every day. However, this does not result in enough tasks for the test team. We have also offered training courses and so the testers just trained in Selenium.

                    – Mornon
                    Mar 25 at 17:09











                  • We are already implementing our CI on Penetration Test, and we are training our testers in terms of safety.

                    – Mornon
                    Mar 25 at 17:11











                  • @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low priority development tasks.

                    – Baljeet Singh
                    Mar 25 at 17:32

















                  The test automation is up to date, and the CI runs through every day. However, this does not result in enough tasks for the test team. We have also offered training courses and so the testers just trained in Selenium.

                  – Mornon
                  Mar 25 at 17:09





                  The test automation is up to date, and the CI runs through every day. However, this does not result in enough tasks for the test team. We have also offered training courses and so the testers just trained in Selenium.

                  – Mornon
                  Mar 25 at 17:09













                  We are already implementing our CI on Penetration Test, and we are training our testers in terms of safety.

                  – Mornon
                  Mar 25 at 17:11





                  We are already implementing our CI on Penetration Test, and we are training our testers in terms of safety.

                  – Mornon
                  Mar 25 at 17:11













                  @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low priority development tasks.

                  – Baljeet Singh
                  Mar 25 at 17:32





                  @Mornon That's great! Not sure what else can be done. I personally tend to help out on some low priority development tasks.

                  – Baljeet Singh
                  Mar 25 at 17:32











                  2














                  Trying to add to the great suggestions here...
                  How is the test team utilized towards the end of a sprint?
                  Do they become the limiting factor then, i.e. are developers waiting for feedback from testers?



                  A little bit of spare capacity is good to have (something like 15-20%), though it rarely becomes as visible as in your case.
                  It's needed for keeping response times low; put simply, without spare capacity you get queues and thus delays.
                  In your case, I assume you could make good use of some of that capacity towards the end of a sprint... if only you were able to move it there, right?



                  First suggestion: Do one week sprints.
                  As a result, you should get some user stories whose implementation will span two 1-week sprints.
                  Some of these will become "ready to test" at the beginning of a sprint, thus using some capacity.
                  Other user stories may also get "canceled" from the weekly release (or however you call the working software resulting from a sprint),
                  just because they still have bugs that you weren't able to fix in time
                  (Use feature flags to allow last-minute decisions of what gets released.)
                  These will also become "ready to test" at the beginning of the next sprint.



                  Second suggestion: Use Kanban instead of Scrum (there are hybrids of Kanban and Scrum as well).
                  With a Kanban system you'll get user stories arriving in the "Ready to Test" column continuously.
                  Then you can still collect a bunch of user stories in "Ready to Release" (i.e. development done, testing done) and
                  make a weekly or biweekly release from those.



                  Finally, whenever that team composition (5 developers, 3 testers) was decided,
                  you problably had a greater need for testing capacity than you have now.
                  With all your great automation and the other improvements, you may be able to put one tester into another team.



                  Whatever you do, be ready to accept some "idle time" -- think of it as providing tester availability for faster feedback.
                  If you reduce testing capacity too much, that will become the bottleneck and slow down the whole development cycle.






                  share|improve this answer








                  New contributor




                  Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.

























                    2














                    Trying to add to the great suggestions here...
                    How is the test team utilized towards the end of a sprint?
                    Do they become the limiting factor then, i.e. are developers waiting for feedback from testers?



                    A little bit of spare capacity is good to have (something like 15-20%), though it rarely becomes as visible as in your case.
                    It's needed for keeping response times low; put simply, without spare capacity you get queues and thus delays.
                    In your case, I assume you could make good use of some of that capacity towards the end of a sprint... if only you were able to move it there, right?



                    First suggestion: Do one week sprints.
                    As a result, you should get some user stories whose implementation will span two 1-week sprints.
                    Some of these will become "ready to test" at the beginning of a sprint, thus using some capacity.
                    Other user stories may also get "canceled" from the weekly release (or however you call the working software resulting from a sprint),
                    just because they still have bugs that you weren't able to fix in time
                    (Use feature flags to allow last-minute decisions of what gets released.)
                    These will also become "ready to test" at the beginning of the next sprint.



                    Second suggestion: Use Kanban instead of Scrum (there are hybrids of Kanban and Scrum as well).
                    With a Kanban system you'll get user stories arriving in the "Ready to Test" column continuously.
                    Then you can still collect a bunch of user stories in "Ready to Release" (i.e. development done, testing done) and
                    make a weekly or biweekly release from those.



                    Finally, whenever that team composition (5 developers, 3 testers) was decided,
                    you problably had a greater need for testing capacity than you have now.
                    With all your great automation and the other improvements, you may be able to put one tester into another team.



                    Whatever you do, be ready to accept some "idle time" -- think of it as providing tester availability for faster feedback.
                    If you reduce testing capacity too much, that will become the bottleneck and slow down the whole development cycle.






                    share|improve this answer








                    New contributor




                    Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                    Check out our Code of Conduct.























                      2












                      2








                      2







                      Trying to add to the great suggestions here...
                      How is the test team utilized towards the end of a sprint?
                      Do they become the limiting factor then, i.e. are developers waiting for feedback from testers?



                      A little bit of spare capacity is good to have (something like 15-20%), though it rarely becomes as visible as in your case.
                      It's needed for keeping response times low; put simply, without spare capacity you get queues and thus delays.
                      In your case, I assume you could make good use of some of that capacity towards the end of a sprint... if only you were able to move it there, right?



                      First suggestion: Do one week sprints.
                      As a result, you should get some user stories whose implementation will span two 1-week sprints.
                      Some of these will become "ready to test" at the beginning of a sprint, thus using some capacity.
                      Other user stories may also get "canceled" from the weekly release (or however you call the working software resulting from a sprint),
                      just because they still have bugs that you weren't able to fix in time
                      (Use feature flags to allow last-minute decisions of what gets released.)
                      These will also become "ready to test" at the beginning of the next sprint.



                      Second suggestion: Use Kanban instead of Scrum (there are hybrids of Kanban and Scrum as well).
                      With a Kanban system you'll get user stories arriving in the "Ready to Test" column continuously.
                      Then you can still collect a bunch of user stories in "Ready to Release" (i.e. development done, testing done) and
                      make a weekly or biweekly release from those.



                      Finally, whenever that team composition (5 developers, 3 testers) was decided,
                      you problably had a greater need for testing capacity than you have now.
                      With all your great automation and the other improvements, you may be able to put one tester into another team.



                      Whatever you do, be ready to accept some "idle time" -- think of it as providing tester availability for faster feedback.
                      If you reduce testing capacity too much, that will become the bottleneck and slow down the whole development cycle.






                      share|improve this answer








                      New contributor




                      Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.










                      Trying to add to the great suggestions here...
                      How is the test team utilized towards the end of a sprint?
                      Do they become the limiting factor then, i.e. are developers waiting for feedback from testers?



                      A little bit of spare capacity is good to have (something like 15-20%), though it rarely becomes as visible as in your case.
                      It's needed for keeping response times low; put simply, without spare capacity you get queues and thus delays.
                      In your case, I assume you could make good use of some of that capacity towards the end of a sprint... if only you were able to move it there, right?



                      First suggestion: Do one week sprints.
                      As a result, you should get some user stories whose implementation will span two 1-week sprints.
                      Some of these will become "ready to test" at the beginning of a sprint, thus using some capacity.
                      Other user stories may also get "canceled" from the weekly release (or however you call the working software resulting from a sprint),
                      just because they still have bugs that you weren't able to fix in time
                      (Use feature flags to allow last-minute decisions of what gets released.)
                      These will also become "ready to test" at the beginning of the next sprint.



                      Second suggestion: Use Kanban instead of Scrum (there are hybrids of Kanban and Scrum as well).
                      With a Kanban system you'll get user stories arriving in the "Ready to Test" column continuously.
                      Then you can still collect a bunch of user stories in "Ready to Release" (i.e. development done, testing done) and
                      make a weekly or biweekly release from those.



                      Finally, whenever that team composition (5 developers, 3 testers) was decided,
                      you problably had a greater need for testing capacity than you have now.
                      With all your great automation and the other improvements, you may be able to put one tester into another team.



                      Whatever you do, be ready to accept some "idle time" -- think of it as providing tester availability for faster feedback.
                      If you reduce testing capacity too much, that will become the bottleneck and slow down the whole development cycle.







                      share|improve this answer








                      New contributor




                      Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.









                      share|improve this answer



                      share|improve this answer






                      New contributor




                      Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.









                      answered Mar 27 at 9:10









                      Matthias BerthMatthias Berth

                      1213




                      1213




                      New contributor




                      Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.





                      New contributor





                      Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.






                      Matthias Berth is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                      Check out our Code of Conduct.























                          2














                          Since you mentioned, it's a scrum.. its actually the scrum master role to make a deliverable level module to prepare from tiny user stories, so that tester can start preparing their test case based on the deliverable level.



                          Say for example, if a micro service which cannot be tested, it should not be included alone in the sprint, rather connecting module (frontend or api) should be included.speak to your scrum master, ask him for such level, product owner need to make the user stories innthis manner.






                          share|improve this answer






































                              edited Mar 27 at 17:41

























                              answered Mar 25 at 19:02









Manu
























                                  1














                                  How to deal with or prevent idle in the test team?



                                  First focus on the overall value of quality assurance - the value added to the organization from the quality of the product, the quality of the testing, the number of defects recently found, the number of users affected, etc.



If your focus becomes short-term, such as "people look idle", you can very quickly slip into a lot of bad practices: equating output with hours worked (at a larger scale they don't correlate), micromanaging, people living in fear, etc. This does not inspire quality work.



I think you have a good question, but I would suggest rephrasing and retitling it as:



                                  What should quality professionals do while waiting for a new release to test?



And remove the mention of idle. I know it may look like 'idle' time from one perspective (there is nothing to test right now), but if you follow the advice in the many answers here you'll see it need not be 'idle' at all. In fact, it can be some of the most productive and value-adding time for quality professionals, who finally have time to do 'all that other stuff', such as:




• Time off. We all need that.

• Technical education. In this field it should be lifelong and constant; every available moment not spent testing a release, writing automation, or on vacation is a chance to learn.

• Social events. When done appropriately (often low-key), these strengthen connections.

• Domain education. Spend time with users and/or their representatives to better understand how they use the application in the real world.

• Refactoring. The highlight of the day for many automation engineers.

• Reducing technical debt. Who doesn't have that?






                                  share|improve this answer






































                                      answered Mar 26 at 21:24









Michael Durrant






























