Not every product has smoke tests (a series of tests you can run after each build to make sure the product works well enough to continue development and testing). Smoke tests provide early feedback to developers about their work. So, for the last several years, I've been suggesting to my clients that as they develop a feature, they include one short automated test for that feature in the smoke test. This test allows the developers to know their changes didn't break the product.
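To make the practice concrete, here is a minimal sketch of what "one short automated test per feature" might look like. The feature here, a hypothetical `parse_price` helper, is just a stand-in for whatever a developer implemented; the point is the shape of the test: one fast check of the main path, added to the suite that runs after every build.

```python
# Hypothetical feature under test: parse_price, a helper that turns a
# price string such as "$1,234.50" into a float. Stand-in for whatever
# feature a developer just finished.
def parse_price(text: str) -> float:
    return float(text.replace("$", "").replace(",", ""))


# The one smoke test for that feature: a single, fast check of the
# main path. It won't catch every bug, but it tells the developer
# whether a change broke the product badly enough to stop.
def test_parse_price_smoke():
    assert parse_price("$1,234.50") == 1234.50


test_parse_price_smoke()
print("smoke test passed")
```

A test this small runs in milliseconds, so an entire suite of them (one per feature) stays cheap enough to run on every build.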
Most of the time, that suggestion works. Some groups have found that just discussing what to include in the smoke test was a great way to surface the exception cases and make sure they'd handled them. And building a smoke test this way seemed relatively painless.
But one group has had trouble with this suggestion. A few of their developers are quite happy to create lots of automated tests per feature. Yet their automated tests carefully tiptoe around the feature, avoiding anything that anyone would remotely call testing of it. The result is a large automated smoke test that tells them nothing. So for them, as it stands, this is not a useful practice.
I've suggested that each developer limit the number of smoke tests he or she develops to one per use case. Each developer gets to choose, and presents that choice, along with the smoke test, to the rest of the group during a walkthrough.
This isn't optimal, but the people in the group are becoming accustomed to receiving feedback about their work. With the way they'd been implementing "smoke" tests, they had prevented themselves from seeing any feedback at all. This technique might help.
So consider creating one automated test per feature as you implement. It might be a useful practice. But if people are preventing themselves from seeing feedback on their code, consider some other practice.