I’m not particularly a big evangelist of code testing, and I don’t care if you
have “unit tests” or “integration tests”. I do think tests are great, and I’m
actually a strong advocate of having 100% (or very close to that) test coverage,
but I don’t have particularly strong opinions about how your test suites work.
As long as the test suite reasonably matches what happens in production, I’m
cool with it, regardless of what steps it took to get working. Pretty much all
of my projects that are of my provenance at work have 100% test coverage
(including branches) or at least 99%+ coverage.
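To illustrate what "including branches" buys you, here's a minimal sketch in Python (the `clamp` function is hypothetical, not from any project mentioned here): line coverage can hit 100% while still never exercising one arm of a conditional, whereas branch coverage forces every arm to be tested.

```python
def clamp(value, low, high):
    """Clamp value into the inclusive range [low, high]."""
    if value < low:
        return low
    if value > high:
        return high
    return value

# Branch coverage requires exercising each arm of each conditional,
# so a thorough suite needs all three of these cases, not just one.
def test_clamp_below():
    assert clamp(-5, 0, 10) == 0   # takes the `value < low` branch

def test_clamp_above():
    assert clamp(99, 0, 10) == 10  # takes the `value > high` branch

def test_clamp_inside():
    assert clamp(7, 0, 10) == 7    # falls through both conditions
```

A tool like coverage.py, run in branch mode, will flag the suite as incomplete if any of those arms is never taken.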
But if you don’t have tests at all, that is unacceptable. If your only tests
are along the lines of “does this thing even build,” that is also unacceptable. I
should be able to make any change to your code repo, and be reasonably sure that
it will at least work in some sense of that word. If I add a new feature, it’s
reasonable and expected that I’ll add new tests. In fact, nearly all non-trivial
changes should have tests associated with them.
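The difference between a "does this thing even build" test and a real one can be sketched in a few lines of Python (using the standard library's `json` module purely as a stand-in for your own code): the first only proves the module imports, while the second pins down actual behavior that a future change could break.

```python
import importlib
import json

# A "does this thing even build" test: nearly worthless on its own,
# since it passes as long as the module merely imports.
def test_imports():
    importlib.import_module("json")

# A behavioral test: asserts what the code actually does, so a
# regression in the round-trip behavior will fail loudly.
def test_round_trip():
    payload = {"id": 1, "tags": ["a", "b"]}
    assert json.loads(json.dumps(payload)) == payload
```
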
The number of times I’ve made a “trivial” change to a code repository
(admittedly, without adding new tests, which sometimes is not possible) and
then been blamed for the resulting horkage drives me crazy. All code should be
tested. If I land a “trivial” change that completely breaks some project, even
if I did not add new tests, I blame that on poor test coverage, regardless of
what steps I could have taken to add new tests for my esoteric/trivial change.
Also, for the love of god, write your code to be testable in the first place.
This is an engineering skill that is generally learned through practice,
but once you’ve been in the field a few years you should be writing
code that is easily testable. I totally get why new engineers fresh out of
school write code that is hard to test, but in my opinion it’s unacceptable for
10+ year developers to be writing code that can’t be easily tested.
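One common pattern behind hard-to-test code is a hidden dependency, like reading the clock or the network directly inside a function. A minimal Python sketch (the `greeting` functions are hypothetical, invented for illustration) shows the fix: pass the dependency in as a parameter so a test can control it.

```python
import time

# Hard to test: reads the system clock directly, so the result
# depends on when the test happens to run.
def greeting_hardcoded():
    hour = time.localtime().tm_hour
    return "good morning" if hour < 12 else "good afternoon"

# Testable: the hour is a parameter, so a test can pass any value
# and the function becomes deterministic.
def greeting(hour):
    return "good morning" if hour < 12 else "good afternoon"

def test_greeting():
    assert greeting(9) == "good morning"
    assert greeting(15) == "good afternoon"
```

The same idea generalizes: anything a function reaches out and grabs (time, randomness, files, network) is something a test can't control, and threading it through as an argument or injected dependency is usually the cheapest fix.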