One reason to fail
August 18, 2015
Resisting the temptation of using utility methods in a test.
Clunky mechanic
The restriction system the Restricted Content framework plugin puts in place is based around taxonomy terms applied to posts.
I've taken care of finding posts that do not have a default restriction applied and am now developing the class responsible for applying those default terms.
In doing so I've written this test code:
/**
 * @test
 * it should apply default term restriction to posts
 */
public function it_should_apply_default_term_restriction_to_posts() {
    register_post_type( 'post_type_1' );
    register_taxonomy( 'tax_1', 'post_type_1' );
    wp_insert_term( 'term_1', 'tax_1', [ 'slug' => 'term_1' ] );

    $user_slug_provider = Test::replace( 'trc_Public_UserSlugProviderInterface' )
                              ->method( 'get_default_post_terms', [ 'term_1' ] )->get();
    $this->sut->set_user_slug_provider_for( 'tax_1', $user_slug_provider );

    $posts = $this->factory->post->create_many( 5, [ 'post_type' => 'post_type_1' ] );

    $this->sut->apply_default_restrictions( [ 'tax_1' => $posts ] );

    foreach ( $posts as $id ) {
        $terms = wp_get_object_terms( $id, 'tax_1', [ 'fields' => 'names' ] );
        Test::assertEquals( [ 'term_1' ], $terms );
    }
}
The final foreach loop checks that each post got the default restriction term, term_1, applied.
I already have a class method that does that. The final loop's job is to make sure that no unrestricted posts remain, and I have just implemented a class responsible for exactly that, along with its tests.
While using the trc_Core_PostDefaults::get_unrestricted_posts() method would have been easier, it would also introduce one more reason for the above test to fail: had I broken that class while refactoring it or adding functions to it, I could have gotten false negatives in this test or, even worse, false positives.
One reason to fail
And I had just done exactly that.
The test code I've pasted is a second and wiser iteration of earlier code that was, in fact, using the trc_Core_PostDefaults::get_unrestricted_posts() method, because it was convenient and seemed made for the task.
Sadly, it took me several debug sessions to find out that the passing test was, in fact, a false positive caused by the "helper" class code: I had not written a single line of production code at that point, and the test was supposed to fail.
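To make the trap concrete, here is a minimal, self-contained sketch in plain PHP. It does not use WordPress, and every name in it is hypothetical rather than the plugin's actual API; it only shows how a buggy helper can turn a check that should fail into a false positive.

```php
<?php
declare(strict_types=1);

// Maps post ID => list of restriction terms applied to it. No code has
// applied any term yet, so a correct test should fail at this point.
$applied_terms = [
    1 => [],
    2 => [],
];

// Buggy "helper" (hypothetical): it is meant to return the IDs of posts
// that have no restriction term, but the filter condition is inverted.
function get_unrestricted_posts( array $applied_terms ): array {
    $unrestricted = array_filter( $applied_terms, function ( array $terms ): bool {
        return ! empty( $terms ); // bug: should be empty( $terms )
    } );

    return array_keys( $unrestricted );
}

// Asserting through the helper yields a false positive: the broken
// helper reports no unrestricted posts, so the check passes even
// though nothing has been restricted.
var_dump( get_unrestricted_posts( $applied_terms ) === [] ); // bool(true)

// Checking the post-condition "manually" catches the problem: post 1
// does not have the default term applied.
var_dump( $applied_terms[1] === [ 'term_1' ] ); // bool(false)
```

The manual check depends only on the subject under test's observable effects, so it has exactly one reason to fail; the helper-based check adds the helper's correctness as a second one.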
The TDD tradition sees a first failing test as the standard opening of the "red to green" game, and that is what made the passing test stand out; had the false positive been buried in a suite of hundreds of tests, though, I would not have noticed it.
In this case there is little trade-off in checking the post-conditions of the subject under test's actions "manually", but that might not always be the case.
Probably doing too much
I'd say that if checking the post-conditions cannot be done in at most 20 lines of code, then the problem lies with the subject under test: it is either doing too much, affecting too many post-conditions, not playing by the underlying framework's rules, or not returning meaningful values.
The concept of "testing in isolation" is, in my opinion, more about being able to isolate failures than about isolating objects. I struggled with the latter for a long time before understanding that pursuing the former leads to the latter by a less convoluted road.