Is it a good idea to write all possible test cases after transforming the team to TDD to achieve a full...












16














Assume we have a large enterprise-level application without any unit/functional tests. There was no test-driven development process during development due to very tight deadlines (I know we should never promise tight deadlines when we are not sure, but what's done is done!).



Now that all the deadlines have passed and things are calm, everyone has agreed to transform us into a productive TDD/BDD-based team... Yay!



Now the question is about the code we already have: (1) Is it still a good idea to stop most development and start writing all the possible test cases from the beginning, even though everything is working completely okay (so far)? Or (2) is it better to wait until something bad happens and then write new unit tests during the fix? Or (3) should we forget about the existing code, write unit tests only for new code, and postpone everything else to the next major refactor?



There are a few good, related articles, such as this one. I'm still not sure whether it's worth investing in this, considering we have very limited time and many other projects waiting for us.



Note: This question describes an imaginary, awkward situation in a development team. It is not about me or any of my colleagues. You may think this should never happen, or that the development manager is responsible for such a mess, but what's done is done. If possible, please do not downvote just because you think this should never happen.










unit-testing testing tdd bdd acceptance-testing






edited yesterday









Peter Mortensen

asked Nov 25 '18 at 19:16









Michel Gokan

  • Possible duplicate of Do I need unit test if I already have integration test?
    – gnat
    Nov 25 '18 at 19:35






  • 6




    You should probably be preparing for the next time deadlines arrive and you're not allowed to do TDD any more. Possibly by telling whoever drove the last round of technical debtvelopment why that wasn't a great idea.
    – jonrsharpe
    Nov 25 '18 at 19:39






  • 1




    @gnat I think it is not a duplicate question. The team in question doesn't have any kind of tests (not even integration tests).
    – Michel Gokan
    Nov 25 '18 at 19:51






  • 1




    @gnat the question is: what will happen to our new unit tests? They might seem incomplete, or even worthless, without writing all the unit tests for the previously written code. The question that you mention does not cover this specific concern.
    – Michel Gokan
    Nov 25 '18 at 20:06








  • 1




    It's not possible to write all possible test cases. It's only useful to write the test cases you care about. For example, if you need a function that accepts an int value and returns something specific, it's not possible to write a unit test for every possible int value, but it probably makes sense to test a handful of useful values that might trip up the code, such as negative numbers (including minint), zero, maxint, etc., to make sure that some edge cases are covered.
    – Christopher Schultz
    Nov 26 '18 at 19:38
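(A minimal sketch of the edge-case approach described in the last comment above; the classify_sign function and the chosen boundary values are hypothetical, purely for illustration.)

import unittest

MIN_INT = -2**31          # assuming 32-bit boundaries, for illustration only
MAX_INT = 2**31 - 1

def classify_sign(value):
    # Hypothetical function under test: returns 'negative', 'zero' or 'positive'.
    if value < 0:
        return "negative"
    if value == 0:
        return "zero"
    return "positive"

class TestClassifySignEdgeCases(unittest.TestCase):
    # Testing every int is impossible; test the values most likely to
    # trip up the code instead.
    def test_boundary_values(self):
        self.assertEqual(classify_sign(MIN_INT), "negative")
        self.assertEqual(classify_sign(-1), "negative")
        self.assertEqual(classify_sign(0), "zero")
        self.assertEqual(classify_sign(1), "positive")
        self.assertEqual(classify_sign(MAX_INT), "positive")

if __name__ == "__main__":
    unittest.main()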


















7 Answers
36















There was no test-driven development process during the development due to very tight deadlines




This statement is very concerning. Not because it means you developed without TDD or because you aren't testing everything. This is concerning, because it shows you think TDD will slow you down and make you miss a deadline.



As long as you see it this way you aren't ready for TDD. TDD isn't something you can gradually ease into. You either know how to do it or you don't. If you try doing it halfway you're going to make it and yourself look bad.



TDD is something you should first practice at home. Learn to do it, because it helps you code now. Not because someone told you to do it. Not because it will help when you make changes later. When it becomes something you do because you're in a hurry then you're ready to do it professionally.



TDD is something you can do in any shop. You don't even have to turn in your test code. You can keep it to yourself if the others disdain tests. When you do it right, the tests speed your development even if no one else runs them.



On the other hand if others love and run your tests you should still keep in mind that even in a TDD shop it's not your job to check in tests. It's to create proven working production code. If it happens to be testable, neat.



If you think management has to believe in TDD or that your fellow coders have to support your tests then you're ignoring the best thing TDD does for you. It quickly shows you the difference between what you think your code does and what it actually does.



If you can't see how that, on its own, can help you meet a deadline faster then you're not ready for TDD at work. You need to practice at home.



That said, it's nice when the team can use your tests to help them read your production code and when management will buy spiffy new TDD tools.




Is it a good idea to write all possible test cases after transforming the team to TDD?




Regardless of what the team is doing it's not always a good idea to write all possible test cases. Write the most useful test cases. 100% code coverage comes at a cost. Don't ignore the law of diminishing returns just because making a judgement call is hard.



Save your testing energy for the interesting business logic. The stuff that makes decisions and enforces policy. Test the heck out of that. Boring obvious easy-to-read structural glue code that just wires stuff together doesn't need testing nearly as badly.




(1) Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning, even though everything is working completely OKAY (yet!)? Or




No. This is "let's do a complete rewrite" thinking. This destroys hard won knowledge. Do not ask management for time to write tests. Just write tests. Once you know what your doing, tests won't slow you down.




(2) it's better to wait for something bad happen and then during the fix write new unit tests, or



(3) even forget about previous codes and just write unit tests for the new codes only and postpone everything to the next major refactor.




I'll answer 2 and 3 the same way. When you change the code, for any reason, it's really nice if you can slip in a test. If the code is legacy it doesn't currently welcome a test. Which means it's hard to test it before changing it. Well, since you're changing it anyway you can change it into something testable and test it.



That's the nuclear option. It's risky. You're making changes without tests. There are some creative tricks to put legacy code under test before you change it. You look for what are called seams that allow you to change the behavior of your code without changing the code. You change configuration files, build files, whatever it takes.
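To make the seam idea concrete, here is a minimal sketch. It assumes a hypothetical legacy function whose only obstacle to testing is a hard-wired database call; all names are invented for illustration and are not taken from the answer.

# Before: hard-wired dependency, hard to test without a real database.
# def convert_to_eur(amount_usd):
#     rate = database.fetch_rate("USD", "EUR")   # talks to a live system
#     return amount_usd * rate

# After: the rate lookup becomes a seam. Its behavior can be changed from
# the outside (here via a default parameter) without editing the body again.
def convert_to_eur(amount_usd, fetch_rate=None):
    if fetch_rate is None:
        fetch_rate = _fetch_rate_from_database   # production default
    return amount_usd * fetch_rate("USD", "EUR")

def _fetch_rate_from_database(src, dst):
    raise RuntimeError("stand-in: would talk to the real database in production")

# A test substitutes the dependency through the seam:
def test_convert_to_eur_applies_the_rate():
    assert convert_to_eur(10, fetch_rate=lambda src, dst: 0.5) == 5.0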



Michael Feathers gave us a book about this: Working Effectively with Legacy Code. Give it a read, and you'll see that you don't have to burn down everything old to make something new.

























  • 37




    "The tests speed your development even if no one else runs them." - I find this to be patently false. This is pro'lly not the place to start a discussion about this, but readers should keep in mind that the viewpoint presented here is not unanimous.
    – Martin Ba
    Nov 26 '18 at 10:47






  • 4




    Actually, tests often speed up your development in the long run, and for TDD to be really efficient everyone needs to believe in it; otherwise half of your team would spend their time fixing tests broken by others.
    – hspandher
    Nov 26 '18 at 13:24






  • 13




    "you think TDD will slow you down and make you miss a deadline." I think that probably is the case. Nobody uses TDD because they expect it to make their first deadline be met faster. The real benefit (at least in my estimation) is the ongoing dividends that test play in the future, to catch regressions, and to build confidence in safe experimentation. I think this benefit outweighs the up-front cost to writing tests, most would probably agree, but if you have to meet the tight deadline coming up, you don't really have a choice.
    – Alexander
    Nov 26 '18 at 15:16






  • 1




    I find it analogous to buying a house. If you had the lump sum to pay a house off, you would save a lot on interest, and it would be great in the long run. But if you need a house immediately... then you're forced to take a short term approach that's long-term suboptimal
    – Alexander
    Nov 26 '18 at 16:12






  • 3




    TDD =can= increase performance if the tests and code are developed in parallel, while the functionality is fresh in the mind of the developer. Code reviews will tell you if another human being thinks the code is correct. Test cases will tell you if the specification, as embodied in a test case, is being implemented. Otherwise, yeah, TDD can be a drag especially if there is no functional spec and the test writer is also doing reverse engineering.
    – Julie in Austin
    Nov 26 '18 at 17:30



















21















Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning [...] ?




Given legacy¹ code, write unit tests in these situations:




  • when fixing bugs (see the sketch below)

  • when refactoring

  • when adding new functionality to the existing code


As useful as unit tests are, creating a complete unit test suite for an existing¹ codebase probably isn't a realistic idea. The powers that be have pushed you to deliver on a tight deadline. They didn't allow you time to create adequate unit tests as you were developing. Do you think they will give you adequate time to create tests for the "program that works"?



¹Legacy code is code without unit tests. This is the TDD definition of legacy code. It applies even if the legacy code is freshly delivered [even if the ink hasn't dried yet].
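As a hedged sketch of the first bullet above (writing a test while fixing a bug), assume a hypothetical parse_quantity function with a reported whitespace bug; everything here is invented for illustration.

# Step 1: reproduce the reported bug as a failing test.
# Step 2: fix the code; the test now passes and guards against regressions.

def parse_quantity(text):
    # Hypothetical legacy function. The reported bug: it crashed on
    # surrounding whitespace ("  3 "). The .strip() call is the fix.
    return int(text.strip())

def test_parse_quantity_accepts_surrounding_whitespace():
    # Written first, it fails against the unfixed code and then stays
    # in the suite as a regression guard.
    assert parse_quantity("  3 ") == 3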





























  • But then our new unit tests for new features may seem incomplete, or even worthless, without the missing unit tests for the old code. Won't they?
    – Michel Gokan
    Nov 25 '18 at 20:03






  • 7




    (1) Worthless? Certainly not. At the very least, they test the new feature. Next time somebody wants to modify this feature, they will reuse much of the existing tests. (2) Incomplete? Maybe, maybe not. If you also create unit tests which test the legacy functionality that the new feature depends on, then the tests may be complete enough for practical purposes. In other words, do create additional unit tests that penetrate the legacy functionality. Penetrate to what depth? It depends on the program's architecture, resources available, and institutional support.
    – Nick Alexeev
    Nov 25 '18 at 20:15










  • The downside to "write tests when you stumble upon needing them" is that there's an increased risk of ending up with a patchwork of tests written by different developers with different ideas. I'm not saying this answer is wrong, but it does require a firm hand that keeps the quality and style of the tests uniform.
    – Flater
    Nov 26 '18 at 9:30






  • 4




    @Flater uniformity offers false comfort. I want tests that make the production code easy to read. Not tests that all look the same. I'll forgive mixing completely different testing frameworks if it makes it easier to understand what the production code does.
    – candied_orange
    Nov 26 '18 at 9:48






  • 2




    @Flater I didn't assert for ugly production code. I assert that the point of tests is to make production code readable. I will gladly accept an eclectic mob of tests that make the production code easier to read. Be careful about making uniformity a goal in itself. Readability is king.
    – candied_orange
    Nov 26 '18 at 10:06





















12














In my experience, tests do not need total coverage to be helpful. Instead, you start reaping different kinds of benefits as coverage increases:




  • more than 30% coverage (aka a couple of integration tests): if your tests fail, something is extremely broken (or your tests are flaky). Thankfully the tests alerted you quickly! But releases will still require extensive manual testing.

  • more than 90% coverage (aka most of the components have superficial unit tests): if your tests pass, the software is likely mostly fine. The untested parts are edge cases, which is fine for non-critical software. But releases will still require some manual testing.

  • very high coverage of functions/statements/branches/requirements: you're living the TDD/BDD dream, and your tests are a precise reflection of the functionality of your software. You can refactor with high confidence, including large scale architectural changes. If the tests pass, your software is almost release ready; only some manual smoke testing required.


The truth is, if you don't start with BDD you're never going to get there, because the work required to test after coding is just excessive. The issue is not so much writing the tests as being aware of the actual requirements (rather than incidental implementation details) and being able to design the software in a way that is both functional and easy to test. When you write the tests first or together with the code, this is practically free.



Since new features require tests, but tests require design changes, but refactoring also requires tests, you have a bit of a chicken and egg problem. As your software creeps closer to decent coverage, you'll have to do some careful refactoring in those parts of the code where new features occur, just to make the new features testable. This will slow you down a lot – initially. But by only refactoring and testing those parts where new development is needed, the tests also focus on that area where they are needed most. Stable code can continue without tests: if it were buggy, you'd have to change it anyway.



While you try adapting to TDD, a better metric than total project coverage would be the test coverage in parts that are being changed. This coverage should be very high right from the start, though it is not feasible to test all parts of the code that are impacted by a refactoring. Also, you do reap most of the benefits of high test coverage within the tested components. That's not perfect, but still fairly good.



Note that while unit tests seem to be common, starting with the smallest pieces is not a suitable strategy to get a legacy software under test. You'll want to start with integration tests that exercise a large chunk of the software at once.



E.g. I've found it useful to extract integration test cases from real-world logfiles. Of course running such tests can take a lot of time, which is why you might want to set up an automated server that runs the tests regularly (e.g. a Jenkins server triggered by commits). The cost of setting up and maintaining such a server is very small compared to not running tests regularly, provided that any test failures actually get fixed quickly.
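A rough sketch of that log-replay idea, assuming a hypothetical JSON-lines request log and a handle_request entry point (both invented for illustration); the test runs under pytest using its tmp_path fixture.

import json

def handle_request(method, path):
    # Hypothetical application entry point exercised by the test.
    if method == "GET" and path == "/health":
        return 200
    return 404

def load_cases(log_path):
    # Each log line is assumed to record the request plus the observed status,
    # e.g. {"method": "GET", "path": "/health", "status": 200}
    with open(log_path) as fh:
        return [json.loads(line) for line in fh if line.strip()]

def test_replayed_requests_get_the_same_status(tmp_path):
    log_file = tmp_path / "requests.log"
    log_file.write_text('{"method": "GET", "path": "/health", "status": 200}\n')
    for case in load_cases(log_file):
        assert handle_request(case["method"], case["path"]) == case["status"]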





























  • "more than 90% coverage (aka most of the components have superficial unit tests): if your tests pass, the software is likely mostly fine. The untested parts are edge cases, which is fine for non-critical software." This sounds a bit off to me, FWIW, I would prefer having 30% coverage consisting of mostly edge cases than 90% coverage consisting entirely of expected path behavior (which is easy for manual testers to do); I recommend thinking "outside the box" when writing tests and basing them off of (unusual) test cases discovered manually whenever possible.
    – jrh
    Nov 26 '18 at 16:53





















5














Don't write tests for existing code. It's not worth it.



What you made is already somewhat tested in a completely informal way -- you tried it out by hand constantly, people did some nonautomated testing, it's being used now. That means that you won't find many bugs.



What's left are the bugs you didn't think about. But those are exactly the ones you won't think to write unit tests for either, so you probably still won't find them.



Also, a reason for TDD is to get you thinking about what the exact requirements of a bit of code are before writing it. In whatever different way, you already did that.



In the meantime, it's still just as much work to write these tests as it would have been to write them beforehand. It'll cost a lot of time, for little benefit.



And it's extremely boring to write lots and lots of tests with no coding in between and finding hardly any bugs. If you start out doing this, people new to TDD will hate it.



In short, devs will hate it and managers will see it as costly, while not many bugs are found. You will never get to the actual TDD part.



Use it on things that you want to change, as a normal part of the process.























  • 1




    I disagree strongly with "Don't write tests for existing code. It's not worth it." If the code is functioning reasonably properly tests may be the only specification in existence. And if the code is in maintenance, adding tests is the only way to ensure those functions which do work aren't broken by seemingly unrelated changes.
    – Julie in Austin
    Nov 26 '18 at 17:34






  • 3




    @JulieinAustin: On the other hand, without a spec, you don't know exactly what the code is supposed to do. And if you don't already know what the code is supposed to do, you may well write useless tests -- or worse, misleading ones that subtly change the spec -- and now accidental and/or wrong behavior becomes required.
    – cHao
    Nov 26 '18 at 20:21



















2














A test is a means to communicate understanding.



Therefore only write tests for what you understand should be true.



You can only understand what should be true when you work with it.



Therefore only write tests for code that you are working with.



When you work with the code you will learn.



Therefore write and rewrite tests to capture what you have learnt.



Rinse and repeat.



Have a code-coverage tool run with your tests, and only accept commits to the mainline that do not reduce coverage. Eventually you'll reach a high level of coverage.
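A minimal sketch of such a gate, assuming you already extract the current coverage percentage from your coverage tool and pass it in; the baseline file name is invented for illustration.

import sys
from pathlib import Path

BASELINE_FILE = Path("coverage_baseline.txt")   # hypothetical location

def check_ratchet(current_percent):
    # Fail if coverage dropped below the recorded baseline; raise the baseline otherwise.
    baseline = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0
    if current_percent < baseline:
        print(f"Coverage fell from {baseline:.1f}% to {current_percent:.1f}%")
        return 1
    BASELINE_FILE.write_text(f"{current_percent:.2f}")
    return 0

if __name__ == "__main__":
    # e.g.  python coverage_ratchet.py 83.4   (value taken from your coverage report)
    sys.exit(check_ratchet(float(sys.argv[1])))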



If you haven't worked with the code in a while, a business decision needs to be made. It is now quite possibly so legacy that no one on your team knows how to work with it. It probably has out-of-date libraries/compilers/documentation which is a massive liability in just about every way.



Two options:




  1. Invest the time to read it, learn from it, write tests for it, and refactor it. Small sets of changes with frequent releases.

  2. Find a way to ditch that software. You could not possibly make a modification to it when asked to anyway.





































    0















    (1) Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning, even though everything is working completely OKAY (yet!)?

    (2) it's better to wait for something bad happen and then during the fix write new unit tests




    One of the main purposes of tests is to ensure that a change didn't break anything. This is a three step process:




    1. Confirm that the tests succeed

    2. Make your changes

    3. Confirm that the tests still succeed


    This means that you need to have working tests before you actually change something. Should you choose the second path, that means that you're going to have to force your developers to write tests before they even touch the code. And I strongly suspect that when already faced with a real-world change, the developers are not going to give the unit tests the attention they deserve.



    So I suggest splitting the test-writing and change-making tasks to avoid developers sacrificing the quality of one for the other.




    even though everything is working completely OKAY (yet!)?




    Just to point this out specifically, it's a common misconception that you only need tests when the code isn't working. You need tests when the code is working too, for example to prove to someone that [newly occurring bug] was not caused by your code, because the tests are still passing.
    Confirming that everything still works as it did before is an important benefit of testing that you're omitting when you imply that you don't need tests when the code is working.




    (3) even forget about previous codes and just write unit tests for the new codes only and postpone everything to the next major refactor




    Ideally, all of the existing source code should now get unit tests. However, there is a reasonable argument that the time and effort (and cost) needed to do so are simply not justified for certain projects.

    For example, for applications that are no longer being developed and are not expected to change anymore (e.g. the client no longer uses it, or the client isn't a client anymore), you can argue that it's no longer relevant to test that code.



    However, it's not as clear cut where you draw the line. This is something that a company needs to look at in a cost benefit analysis. Writing tests costs time and effort, but are they expecting any future development on that application? Do the gains from having unit tests outweigh the cost of writing them?



    This is not a decision you (as a developer) can make. At best, you can offer an estimate for the needed time to implement tests on a given project, and it's up to management to decide if there is a sufficient expectation of actually needing to maintain/develop the project.




    and postpone everything to the next major refactor




    If the next major refactor is a given, then you do indeed need to write the tests.



    But don't put it off until you're faced with major changes. My initial point (not combining the writing of tests and updating the code) still stands, but I want to add a second point here: your developers currently know their way around the project better than they will in six months if they spend that time working on other projects. Capitalize on periods of time where the developers are already warmed up and don't need to figure out how things work again in the future.



































      0














      My two cents:



      Wait for a major technical upgrade to the system and write the tests then... officially, with the support of the business.



      Alternatively, let's say you're a Scrum shop: your workload is represented by capacity, and you can allocate a percentage of that to unit testing, but...



      Saying you're going to go back and write the tests is naive. What you're really going to do is write tests, refactor, and write more tests after the refactor has made the code more testable, which is why it's best to start with tests, as you are already aware, and...



      It's best for the original author to write tests for and refactor the code they wrote previously. It's not ideal, but from experience you want the refactor to make the code better, not worse.



























        Your Answer








        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "131"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        onDemand: false,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsoftwareengineering.stackexchange.com%2fquestions%2f381996%2fis-it-a-good-idea-to-write-all-possible-test-cases-after-transforming-the-team-t%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown




















        StackExchange.ready(function () {
        $("#show-editor-button input, #show-editor-button button").click(function () {
        var showEditor = function() {
        $("#show-editor-button").hide();
        $("#post-form").removeClass("dno");
        StackExchange.editor.finallyInit();
        };

        var useFancy = $(this).data('confirm-use-fancy');
        if(useFancy == 'True') {
        var popupTitle = $(this).data('confirm-fancy-title');
        var popupBody = $(this).data('confirm-fancy-body');
        var popupAccept = $(this).data('confirm-fancy-accept-button');

        $(this).loadPopup({
        url: '/post/self-answer-popup',
        loaded: function(popup) {
        var pTitle = $(popup).find('h2');
        var pBody = $(popup).find('.popup-body');
        var pSubmit = $(popup).find('.popup-submit');

        pTitle.text(popupTitle);
        pBody.html(popupBody);
        pSubmit.val(popupAccept).click(showEditor);
        }
        })
        } else{
        var confirmText = $(this).data('confirm-text');
        if (confirmText ? confirm(confirmText) : true) {
        showEditor();
        }
        }
        });
        });






        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        36















        There was no test-driven development process during the development due to very tight deadlines




        This statement is very concerning. Not because it means you developed without TDD or because you aren't testing everything. This is concerning, because it shows you think TDD will slow you down and make you miss a deadline.



        As long as you see it this way you aren't ready for TDD. TDD isn't something you can gradually ease into. You either know how to do it or you don't. If you try doing it halfway you're going to make it and yourself look bad.



        TDD is something you should first practice at home. Learn to do it, because it helps you code now. Not because someone told you to do it. Not because it will help when you make changes later. When it becomes something you do because you're in a hurry then you're ready to do it professionally.



        TDD is something you can do in any shop. You don't even have to turn in your test code. You can keep it to yourself if the others disdain tests. When you do it right, the tests speed your development even if no one else runs them.



        On the other hand if others love and run your tests you should still keep in mind that even in a TDD shop it's not your job to check in tests. It's to create proven working production code. If it happens to be testable, neat.



        If you think management has to believe in TDD or that your fellow coders have to support your tests then you're ignoring the best thing TDD does for you. It quickly shows you the difference between what you think your code does and what it actually does.



        If you can't see how that, on its own, can help you meet a deadline faster then you're not ready for TDD at work. You need to practice at home.



        That said, it's nice when the team can use your tests to help them read your production code and when management will buy spiffy new TDD tools.




        Is it a good idea to write all possible test cases after transforming the team to TDD?




        Regardless of what the team is doing it's not always a good idea to write all possible test cases. Write the most useful test cases. 100% code coverage comes at a cost. Don't ignore the law of diminishing returns just because making a judgement call is hard.



        Save your testing energy for the interesting business logic. The stuff that makes decisions and enforces policy. Test the heck out of that. Boring obvious easy-to-read structural glue code that just wires stuff together doesn't need testing nearly as badly.




        (1) Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning, even though everything is working completely OKAY (yet!)? Or




        No. This is "let's do a complete rewrite" thinking. This destroys hard won knowledge. Do not ask management for time to write tests. Just write tests. Once you know what your doing, tests won't slow you down.




        (2) it's better to wait for something bad happen and then during the fix write new unit tests, or



        (3) even forget about previous codes and just write unit tests for the new codes only and postpone everything to the next major refactor.




        I'll answer 2 and 3 the same way. When you change the code, for any reason, it's really nice if you can slip in a test. If the code is legacy it doesn't currently welcome a test. Which means it's hard to test it before changing it. Well, since you're changing it anyway you can change it into something testable and test it.



        That's the nuclear option. It's risky. You're making changes without tests. There are some creative tricks to put legacy code under test before you change it. You look for what are called seams that allow you change the behavior of your code without changing the code. You change configuration files, build files, whatever it takes.



        Michael Feathers gave us a book about this: Working Effectively with Legacy Code. Give it a read, and you'll see that you don't have to burn down everything old to make something new.






        share|improve this answer



















        • 37




          "The tests speed your development even if no one else runs them." - I find this to be patently false. This is pro'lly not the place to start a discussion about this, but readers should keep in mind that the viewpoint presented here is not unanimous.
          – Martin Ba
          Nov 26 '18 at 10:47






        • 4




          Actually often tests increase your development in the long run and for TDD to be really efficient, everyone needs to believe in it, otherwise, you would spend half of your team fixing tests broken by others.
          – hspandher
          Nov 26 '18 at 13:24






        • 13




          "you think TDD will slow you down and make you miss a deadline." I think that probably is the case. Nobody uses TDD because they expect it to make their first deadline be met faster. The real benefit (at least in my estimation) is the ongoing dividends that test play in the future, to catch regressions, and to build confidence in safe experimentation. I think this benefit outweighs the up-front cost to writing tests, most would probably agree, but if you have to meet the tight deadline coming up, you don't really have a choice.
          – Alexander
          Nov 26 '18 at 15:16






        • 1




          I find it analogous to buying a house. If you had the lump sum to pay a house off, you would save a lot on interest, and it would be great in the long run. But if you need a house immediately... then you're forced to take a short term approach that's long-term suboptimal
          – Alexander
          Nov 26 '18 at 16:12






        • 3




          TDD =can= increase performance if the tests and code are developed in parallel, while the functionality is fresh in the mind of the developer. Code reviews will tell you if another human being thinks the code is correct. Test cases will tell you if the specification, as embodied in a test case, is being implemented. Otherwise, yeah, TDD can be a drag especially if there is no functional spec and the test writer is also doing reverse engineering.
          – Julie in Austin
          Nov 26 '18 at 17:30
















        36















        There was no test-driven development process during the development due to very tight deadlines




        This statement is very concerning. Not because it means you developed without TDD or because you aren't testing everything. This is concerning, because it shows you think TDD will slow you down and make you miss a deadline.



        As long as you see it this way you aren't ready for TDD. TDD isn't something you can gradually ease into. You either know how to do it or you don't. If you try doing it halfway you're going to make it and yourself look bad.



        TDD is something you should first practice at home. Learn to do it, because it helps you code now. Not because someone told you to do it. Not because it will help when you make changes later. When it becomes something you do because you're in a hurry then you're ready to do it professionally.



        TDD is something you can do in any shop. You don't even have to turn in your test code. You can keep it to yourself if the others disdain tests. When you do it right, the tests speed your development even if no one else runs them.



        On the other hand if others love and run your tests you should still keep in mind that even in a TDD shop it's not your job to check in tests. It's to create proven working production code. If it happens to be testable, neat.



        If you think management has to believe in TDD or that your fellow coders have to support your tests then you're ignoring the best thing TDD does for you. It quickly shows you the difference between what you think your code does and what it actually does.



        If you can't see how that, on its own, can help you meet a deadline faster then you're not ready for TDD at work. You need to practice at home.



        That said, it's nice when the team can use your tests to help them read your production code and when management will buy spiffy new TDD tools.




        Is it a good idea to write all possible test cases after transforming the team to TDD?




        Regardless of what the team is doing it's not always a good idea to write all possible test cases. Write the most useful test cases. 100% code coverage comes at a cost. Don't ignore the law of diminishing returns just because making a judgement call is hard.



        Save your testing energy for the interesting business logic. The stuff that makes decisions and enforces policy. Test the heck out of that. Boring obvious easy-to-read structural glue code that just wires stuff together doesn't need testing nearly as badly.




        (1) Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning, even though everything is working completely OKAY (yet!)? Or




        No. This is "let's do a complete rewrite" thinking. This destroys hard won knowledge. Do not ask management for time to write tests. Just write tests. Once you know what your doing, tests won't slow you down.




        (2) it's better to wait for something bad happen and then during the fix write new unit tests, or



        (3) even forget about previous codes and just write unit tests for the new codes only and postpone everything to the next major refactor.




        I'll answer 2 and 3 the same way. When you change the code, for any reason, it's really nice if you can slip in a test. If the code is legacy it doesn't currently welcome a test. Which means it's hard to test it before changing it. Well, since you're changing it anyway you can change it into something testable and test it.



        That's the nuclear option. It's risky. You're making changes without tests. There are some creative tricks to put legacy code under test before you change it. You look for what are called seams that allow you change the behavior of your code without changing the code. You change configuration files, build files, whatever it takes.



        Michael Feathers gave us a book about this: Working Effectively with Legacy Code. Give it a read, and you'll see that you don't have to burn down everything old to make something new.






        share|improve this answer



















        • 37




          "The tests speed your development even if no one else runs them." - I find this to be patently false. This is pro'lly not the place to start a discussion about this, but readers should keep in mind that the viewpoint presented here is not unanimous.
          – Martin Ba
          Nov 26 '18 at 10:47






        • 4




          Actually often tests increase your development in the long run and for TDD to be really efficient, everyone needs to believe in it, otherwise, you would spend half of your team fixing tests broken by others.
          – hspandher
          Nov 26 '18 at 13:24






        • 13




          "you think TDD will slow you down and make you miss a deadline." I think that probably is the case. Nobody uses TDD because they expect it to make their first deadline be met faster. The real benefit (at least in my estimation) is the ongoing dividends that test play in the future, to catch regressions, and to build confidence in safe experimentation. I think this benefit outweighs the up-front cost to writing tests, most would probably agree, but if you have to meet the tight deadline coming up, you don't really have a choice.
          – Alexander
          Nov 26 '18 at 15:16






        • 1




          I find it analogous to buying a house. If you had the lump sum to pay a house off, you would save a lot on interest, and it would be great in the long run. But if you need a house immediately... then you're forced to take a short term approach that's long-term suboptimal
          – Alexander
          Nov 26 '18 at 16:12






        • 3




          TDD =can= increase performance if the tests and code are developed in parallel, while the functionality is fresh in the mind of the developer. Code reviews will tell you if another human being thinks the code is correct. Test cases will tell you if the specification, as embodied in a test case, is being implemented. Otherwise, yeah, TDD can be a drag especially if there is no functional spec and the test writer is also doing reverse engineering.
          – Julie in Austin
          Nov 26 '18 at 17:30














        36












        36








        36







        There was no test-driven development process during the development due to very tight deadlines




        This statement is very concerning. Not because it means you developed without TDD or because you aren't testing everything. This is concerning, because it shows you think TDD will slow you down and make you miss a deadline.



        As long as you see it this way you aren't ready for TDD. TDD isn't something you can gradually ease into. You either know how to do it or you don't. If you try doing it halfway you're going to make it and yourself look bad.



        TDD is something you should first practice at home. Learn to do it, because it helps you code now. Not because someone told you to do it. Not because it will help when you make changes later. When it becomes something you do because you're in a hurry then you're ready to do it professionally.



        TDD is something you can do in any shop. You don't even have to turn in your test code. You can keep it to yourself if the others disdain tests. When you do it right, the tests speed your development even if no one else runs them.



        On the other hand if others love and run your tests you should still keep in mind that even in a TDD shop it's not your job to check in tests. It's to create proven working production code. If it happens to be testable, neat.



        If you think management has to believe in TDD or that your fellow coders have to support your tests then you're ignoring the best thing TDD does for you. It quickly shows you the difference between what you think your code does and what it actually does.



        If you can't see how that, on its own, can help you meet a deadline faster then you're not ready for TDD at work. You need to practice at home.



        That said, it's nice when the team can use your tests to help them read your production code and when management will buy spiffy new TDD tools.




        Is it a good idea to write all possible test cases after transforming the team to TDD?




        Regardless of what the team is doing it's not always a good idea to write all possible test cases. Write the most useful test cases. 100% code coverage comes at a cost. Don't ignore the law of diminishing returns just because making a judgement call is hard.



        Save your testing energy for the interesting business logic. The stuff that makes decisions and enforces policy. Test the heck out of that. Boring obvious easy-to-read structural glue code that just wires stuff together doesn't need testing nearly as badly.




        (1) Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning, even though everything is working completely OKAY (yet!)? Or




        No. This is "let's do a complete rewrite" thinking. This destroys hard won knowledge. Do not ask management for time to write tests. Just write tests. Once you know what your doing, tests won't slow you down.




        (2) it's better to wait for something bad happen and then during the fix write new unit tests, or



        (3) even forget about previous codes and just write unit tests for the new codes only and postpone everything to the next major refactor.




        I'll answer 2 and 3 the same way. When you change the code, for any reason, it's really nice if you can slip in a test. If the code is legacy it doesn't currently welcome a test. Which means it's hard to test it before changing it. Well, since you're changing it anyway you can change it into something testable and test it.



        That's the nuclear option. It's risky. You're making changes without tests. There are some creative tricks to put legacy code under test before you change it. You look for what are called seams that allow you change the behavior of your code without changing the code. You change configuration files, build files, whatever it takes.



        Michael Feathers gave us a book about this: Working Effectively with Legacy Code. Give it a read, and you'll see that you don't have to burn down everything old to make something new.






        – candied_orange, answered Nov 26 '18 at 1:27

        • 37




          "The tests speed your development even if no one else runs them." - I find this to be patently false. This is pro'lly not the place to start a discussion about this, but readers should keep in mind that the viewpoint presented here is not unanimous.
          – Martin Ba
          Nov 26 '18 at 10:47






        • 4




          Actually, tests often speed up your development in the long run, and for TDD to be really efficient everyone needs to believe in it; otherwise you would spend half of your team's time fixing tests broken by others.
          – hspandher
          Nov 26 '18 at 13:24






        • 13




          "you think TDD will slow you down and make you miss a deadline." I think that probably is the case. Nobody uses TDD because they expect it to make their first deadline be met faster. The real benefit (at least in my estimation) is the ongoing dividends that test play in the future, to catch regressions, and to build confidence in safe experimentation. I think this benefit outweighs the up-front cost to writing tests, most would probably agree, but if you have to meet the tight deadline coming up, you don't really have a choice.
          – Alexander
          Nov 26 '18 at 15:16






        • 1




          I find it analogous to buying a house. If you had the lump sum to pay a house off, you would save a lot on interest, and it would be great in the long run. But if you need a house immediately... then you're forced to take a short term approach that's long-term suboptimal
          – Alexander
          Nov 26 '18 at 16:12






        • 3




          TDD =can= increase performance if the tests and code are developed in parallel, while the functionality is fresh in the mind of the developer. Code reviews will tell you if another human being thinks the code is correct. Test cases will tell you if the specification, as embodied in a test case, is being implemented. Otherwise, yeah, TDD can be a drag especially if there is no functional spec and the test writer is also doing reverse engineering.
          – Julie in Austin
          Nov 26 '18 at 17:30

















        21















        Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning [...] ?




        Given legacy1 code, write unit tests in these situations:




        • when fixing bugs

        • when refactoring

        • when adding new functionality to the existing code
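
        To make the first item concrete, a cheap way to start is a characterization test: before fixing a bug in a legacy routine, pin down what it does today so the fix can't silently change anything else. A rough sketch (the function and the numbers are hypothetical stand-ins):

            import unittest

            def legacy_shipping_cost(weight_kg, country):
                # Stand-in for a tangled legacy routine nobody fully understands.
                cost = 4.99 + weight_kg * 1.5
                if country == "SE":
                    cost *= 1.25
                return round(cost, 2)

            class ShippingCostCharacterization(unittest.TestCase):
                # The expected values below were captured by running the existing
                # code, not taken from a spec; that is what makes these
                # characterization tests rather than specification tests.
                def test_domestic_order(self):
                    self.assertEqual(legacy_shipping_cost(2, "SE"), 9.99)

                def test_foreign_order(self):
                    self.assertEqual(legacy_shipping_cost(2, "US"), 7.99)

            if __name__ == "__main__":
                unittest.main()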


        As useful as unit tests are, creating a complete unit test suite for an existing1 codebase probably isn't a realistic idea. The powers that be have pushed you to deliver on a tight deadline. They didn't allow you time to create adequate unit tests as you were developing. Do you think they will give you adequate time to create tests for the "program that works"?



        1Legacy code is the code without unit tests. This is the TDD definition of legacy code. It applies even if the legacy code is freshly delivered [even if the ink hasn't dried yet].






        – Nick Alexeev, answered Nov 25 '18 at 19:44

        • But then our new unit tests for new features may seem incomplete, or even worthless, without the missing unit tests for the older code. Isn't that so?
          – Michel Gokan
          Nov 25 '18 at 20:03






        • 7




          (1) Worthless? Certainly not. At the very least, they test the new feature. Next time somebody wants to modify this feature, they will reuse much of the existing tests. (2) Incomplete? Maybe, maybe not. If you also create unit tests which test the legacy functionality that the new feature depends on, then the tests may be complete enough for practical purposes. In other words, do create additional unit tests that penetrate the legacy functionality. Penetrate to what depth? It depends on the program's architecture, resources available, institutional support.
          – Nick Alexeev
          Nov 25 '18 at 20:15










        • The downside to "write tests when you stumble upon needing them" is that there's an increased risk of ending up with a patchwork of tests written by different developers with different ideas. I'm not saying this answer is wrong, but it does require a firm hand that keeps the quality and style of the tests uniform.
          – Flater
          Nov 26 '18 at 9:30






        • 4




          @Flater uniformity offers false comfort. I want tests that make the production code easy to read. Not tests that all look the same. I'll forgive mixing completely different testing frameworks if it makes it easier to understand what the production code does.
          – candied_orange
          Nov 26 '18 at 9:48






        • 2




          @Flater I didn't argue for ugly production code. I assert that the point of tests is to make production code readable. I will gladly accept an eclectic mob of tests that make the production code easier to read. Be careful about making uniformity a goal in itself. Readability is king.
          – candied_orange
          Nov 26 '18 at 10:06


















        12














        In my experience, tests do not need total coverage to be helpful. Instead, you start reaping different kinds of benefits as coverage increases:




        • more than 30% coverage (aka a couple of integration tests): if your tests fail, something is extremely broken (or your tests are flaky). Thankfully the tests alerted you quickly! But releases will still require extensive manual testing.

        • more than 90% coverage (aka most of the components have superficial unit tests): if your tests pass, the software is likely mostly fine. The untested parts are edge cases, which is fine for non-critical software. But releases will still require some manual testing.

        • very high coverage of functions/statements/branches/requirements: you're living the TDD/BDD dream, and your tests are a precise reflection of the functionality of your software. You can refactor with high confidence, including large scale architectural changes. If the tests pass, your software is almost release ready; only some manual smoke testing required.


        The truth is, if you don't start with BDD you're never going to get there, because the work required to test after coding is just excessive. The issue is not writing the tests so much as being aware of the actual requirements (rather than incidental implementation details) and being able to design the software in a way that is both functional and easy to test. When you write the tests first or together with the code, this is practically free.



        Since new features require tests, but tests require design changes, but refactoring also requires tests, you have a bit of a chicken and egg problem. As your software creeps closer to decent coverage, you'll have to do some careful refactoring in those parts of the code where new features occur, just to make the new features testable. This will slow you down a lot – initially. But by only refactoring and testing those parts where new development is needed, the tests also focus on that area where they are needed most. Stable code can continue without tests: if it were buggy, you'd have to change it anyway.



        While you try adapting to TDD, a better metric than total project coverage would be the test coverage in parts that are being changed. This coverage should be very high right from the start, though it is not feasible to test all parts of the code that are impacted by a refactoring. Also, you do reap most of the benefits of high test coverage within the tested components. That's not perfect, but still fairly good.



        Note that while unit tests seem to be common, starting with the smallest pieces is not a suitable strategy to get legacy software under test. You'll want to start with integration tests that exercise a large chunk of the software at once.



        E.g. I've found it useful to extract integration test cases from real-world logfiles. Of course running such tests can take a lot of time, which is why you might want to set up an automated server that runs the tests regularly (e.g. a Jenkins server triggered by commits). The cost of setting up and maintaining such a server is very small compared to not running tests regularly, provided that any test failures actually get fixed quickly.
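
        As a rough illustration of that last point, the recorded cases can be replayed through the system's entry point with a parametrized test. Everything here is a hypothetical stand-in (the log format, the handle_request() entry point, pytest as the test runner):

            import json
            import pytest

            def handle_request(payload):
                # Stand-in for the real entry point of the application under test.
                return {"status": "ok", "echo": payload["order_id"]}

            # In practice these lines would be parsed out of real-world logfiles,
            # e.g. one JSON object per line with the request and the observed response.
            RECORDED_CASES = [
                json.loads(line)
                for line in [
                    '{"request": {"order_id": 42}, "response": {"status": "ok", "echo": 42}}',
                    '{"request": {"order_id": 7}, "response": {"status": "ok", "echo": 7}}',
                ]
            ]

            @pytest.mark.parametrize("case", RECORDED_CASES)
            def test_replay_of_logged_request(case):
                assert handle_request(case["request"]) == case["response"]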






        – amon, answered Nov 25 '18 at 21:01


        • "more than 90% coverage (aka most of the components have superficial unit tests): if your tests pass, the software is likely mostly fine. The untested parts are edge cases, which is fine for non-critical software." This sounds a bit off to me, FWIW, I would prefer having 30% coverage consisting of mostly edge cases than 90% coverage consisting entirely of expected path behavior (which is easy for manual testers to do); I recommend thinking "outside the box" when writing tests and basing them off of (unusual) test cases discovered manually whenever possible.
          – jrh
          Nov 26 '18 at 16:53


















        5














        Don't write tests for existing code. It's not worth it.



        What you made is already somewhat tested in a completely informal way -- you tried it out by hand constantly, people did some non-automated testing, and it's being used now. That means that you won't find many bugs.



        What's left are the bugs you didn't think about. But those are exactly the ones you won't think to write unit tests for either, so you probably still won't find them.



        Also, a reason for TDD is to get you thinking about what the exact requirements of a bit of code are before writing it. In whatever different way, you already did that.



        In the meantime, it's still just as much work to write these tests as it would have been to write them beforehand. It'll cost a lot of time, for little benefit.



        And it's extremely boring to write lots and lots of tests with no coding in between and finding hardly any bugs. If you start out doing this, people new to TDD will hate it.



        In short, devs will hate it and managers will see it as costly, while not many bugs are found. You will never get to the actual TDD part.



        Use it on things that you want to change, as a normal part of the process.






        – RemcoGerlich, answered Nov 26 '18 at 14:53

        • 1




          I disagree strongly with "Don't write tests for existing code. It's not worth it." If the code is functioning reasonably properly, tests may be the only specification in existence. And if the code is in maintenance, adding tests is the only way to ensure those functions which do work aren't broken by seemingly unrelated changes.
          – Julie in Austin
          Nov 26 '18 at 17:34






        • 3




          @JulieinAustin: On the other hand, without a spec, you don't know exactly what the code is supposed to do. And if you don't already know what the code is supposed to do, you may well write useless tests -- or worse, misleading ones that subtly change the spec -- and now accidental and/or wrong behavior becomes required.
          – cHao
          Nov 26 '18 at 20:21
















        2














        A test is a means to communicate understanding.



        Therefore only write tests for what you understand should be true.



        You can only understand what should be true when you work with it.



        Therefore only write tests for code that you are working with.



        When you work with the code you will learn.



        Therefore write and rewrite tests to capture what you have learnt.



        Rinse and repeat.



        Have a code-coverage tool run with your tests, and only accept commits to the mainline that do not reduce coverage. Eventually you'll reach a high level of coverage.
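
        A minimal sketch of such a ratchet (the file name, the wiring, and how the CI job extracts the percentage from your coverage tool are all hypothetical): the build passes the current total coverage to a small script that fails if the number went down and raises the committed baseline if it went up.

            import sys
            from pathlib import Path

            BASELINE_FILE = Path("coverage_baseline.txt")  # committed alongside the code

            def check(current):
                baseline = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0
                if current < baseline:
                    print(f"Coverage dropped: {current:.1f}% < baseline {baseline:.1f}%")
                    return 1
                if current > baseline:
                    # Ratchet up: the new, higher level becomes the minimum.
                    BASELINE_FILE.write_text(f"{current:.1f}\n")
                print(f"Coverage OK: {current:.1f}% (baseline {baseline:.1f}%)")
                return 0

            if __name__ == "__main__":
                sys.exit(check(float(sys.argv[1])))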



        If you haven't worked with the code in a while, a business decision needs to be made. It is now quite possibly so legacy that no one on your team knows how to work with it. It probably has out-of-date libraries/compilers/documentation which is a massive liability in just about every way.



        Two options:




        1. Invest the time to read it, learn from it, write tests for it, and refactor it. Small sets of changes with frequent releases.

        2. Find a way to ditch that software. You couldn't make a modification to it when asked to anyway.






            answered Nov 26 '18 at 7:51 by Kain0_0 (edited by Peter Mortensen)























                0















                (1) Is it still okay or good idea to stop most of the development and start writing whole possible test cases from the beginning, even though everything is working completely OKAY (yet!)?

                (2) it's better to wait for something bad happen and then during the fix write new unit tests




                One of the main purposes of tests is to ensure that a change didn't break anything. This is a three-step process:




                1. Confirm that the tests succeed

                2. Make your changes

                3. Confirm that the tests still succeed


                This means that you need working tests before you actually change something. Should you choose the second path, you're going to have to force your developers to write tests before they even touch the code, and I strongly suspect that when already faced with a real-world change, the developers are not going to give the unit tests the attention they deserve.



                So I suggest splitting the test-writing and change-making tasks to avoid developers sacrificing the quality of one for the other.
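
                Since the legacy code has no tests yet, the tests you write first are essentially characterization tests: they record what the code returns today, so that steps 1 and 3 have something to check. A minimal sketch, assuming a Python project tested with pytest; the module and function names are hypothetical:

                    # test_characterization.py -- pin down current behaviour before changing it (sketch).
                    # `legacy_pricing` and `calculate_discount` stand in for whatever legacy code is
                    # about to be touched; the expected values are whatever the code returns *today*,
                    # not what a spec says it should return.
                    from legacy_pricing import calculate_discount

                    def test_regular_customer_discount_is_unchanged():
                        assert calculate_discount(order_total=100.0, customer_type="regular") == 5.0

                    def test_unknown_customer_type_currently_returns_zero():
                        # Possibly a bug, but it is today's behaviour; record it so a
                        # refactor cannot change it silently.
                        assert calculate_discount(order_total=100.0, customer_type="unknown") == 0.0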




                even though everything is working completely OKAY (yet!)?




                Just to point this out specifically: it's a common misconception that you only need tests when the code isn't working. You need tests when the code is working too, for example to prove to someone that [newly occurring bug] was not caused by your part of the code, because the tests are still passing.
                Confirming that everything still works like it did before is an important benefit of testing, and you're omitting it when you imply that tests aren't needed while the code is working.




                (3) even forget about previous codes and just write unit tests for the new codes only and postpone everything to the next major refactor




                Ideally, all of the existing source code should now get unit tests. However, there is a reasonable argument that the time and effort (and cost) needed to do so is simply not relevant for certain projects.

                For example, for applications that are no longer being developed and are not expected to change anymore (e.g. the client no longer uses it, or the client isn't a client anymore), you can argue that testing that code is no longer worthwhile.



                However, it's not always clear-cut where to draw that line. This is something the company needs to weigh in a cost-benefit analysis. Writing tests costs time and effort, but is any future development expected on that application? Do the gains from having unit tests outweigh the cost of writing them?



                This is not a decision you (as a developer) can make. At best, you can offer an estimate for the needed time to implement tests on a given project, and it's up to management to decide if there is a sufficient expectation of actually needing to maintain/develop the project.




                and postpone everything to the next major refactor




                If the next major refactor is a given, then you do indeed need to write the tests.



                But don't put it off until you're faced with major changes. My initial point (not combining the writing of tests and updating the code) still stands, but I want to add a second point here: your developers currently know their way around the project better than they will in six months if they spend that time working on other projects. Capitalize on periods of time where the developers are already warmed up and don't need to figure out how things work again in the future.






                    answered Nov 26 '18 at 9:28 by Flater























                        0














                        My two cents:



                        Wait for a major technical upgrade to the system and write the tests then... officially, with the support of the business.



                        Alternatively, let's say you're a Scrum shop: your workload is represented by capacity, and you can allocate a percentage of that capacity to unit testing, but...



                        Saying you're going to go back and write the tests is naive. What you're really going to do is write tests, refactor, and then write more tests once the refactor has made the code more testable, which is why it's best to start with tests, as you are already aware. And...



                        It's best for the original author to write the tests for, and refactor, the code they wrote previously. It's not ideal, but from experience you want the refactor to make the code better, not worse.
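
                        As a small sketch of what that write-tests/refactor/write-more-tests loop tends to produce (Python, with entirely made-up names): the refactor extracts the business rule into a pure function, so the newer tests no longer need the database.

                            # Sketch only; all names are hypothetical. Before the refactor, the bonus
                            # rule lived inside a function that also loaded and saved employees from
                            # the database, so it could only be exercised through integration tests.

                            def year_end_bonus(salary, years_of_service):
                                # The extracted business rule: a pure function, no database dependency.
                                return salary * (0.10 if years_of_service > 5 else 0.05)

                            def apply_year_end_bonus(employee_id, db):
                                # The thin I/O wrapper that remains after the extraction.
                                employee = db.load_employee(employee_id)
                                db.save_bonus(employee_id, year_end_bonus(employee.salary, employee.years_of_service))

                            def test_long_service_employees_get_ten_percent():
                                # The extracted rule can now be unit-tested without a database or mocks.
                                assert year_end_bonus(salary=50_000, years_of_service=8) == 5_000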






                            answered Nov 26 '18 at 21:01 by RandomUs1r





























