WPBrowser scaffold WP-CLI command 04

Mocking user input while testing a wp-cli command with Behat.

Not everything can be an option

The wp-cli tool allows for a degree of interactivity that goes beyond passing everything as an option.
A command like wp core install requires some options to work, and those have to be specified inline with the command, like this:

wp core install --url="http://example.com" --title="Example Site" \
    --admin_user=admin --admin_password=admin --admin_email=admin@example.com

Entering options inline is fine when the command's main use will be in scripts, like those used by Travis CI, but it is not the most accessible approach when the command is meant to be user-friendly and interactive, as wpcli-wpbrowser-tests aims to be.
The command is meant to be an easy entry point for WordPress developers to scaffold and set up Codeception and wp-browser based tests for a WordPress theme or plugin.
Furthermore, with an options-only command line interface the user would have to know the answer to every question beforehand: if that were the case the user would not be reaching for an interactive mode in the first place.

Interactivity is built-in

I use wp-cli quite a lot but never took the time to look into its inner workings as I did while developing my own package for it. The amount of interactivity I had experienced so far amounted to the confirmation prompt presented by commands like wp db reset: if not called with the --yes parameter the command will (thankfully) ask for a confirmation before resetting the database.
That’s just a glimpse of what wp-cli commands can provide in terms of interactivity, and I want the wp wpb-scaffold plugin-tests command to offer as much interactivity as possible.
The real problem proved to be testing and mocking the user interaction using Behat.

Testing my stuff

Developing a package for wp-cli starts with installing the wp scaffold package command from its repository.
After that it’s a matter of registering the commands and, in my case, adding as many Behat tests for the new commands as I can think of.
I’ve added some features already and got to the point where the user can decide whether to update Composer immediately or not, so it made sense to ask for a confirmation.

The method in which I’m prompting the user for a confirmation is in the PluginTests class:

/**
 * @param array $args
 * @param array $assocArgs
 *
 * @throws BadArgumentException
 */
public function scaffold( array $args, array $assocArgs ) {
    $this->args      = $args;
    $this->assocArgs = $assocArgs;
    $this->dryRun    = isset( $assocArgs['dry-run'] );

    $targetDir = $this->getScaffoldTargetDir( $args, $assocArgs );

    $this->setTargetDir( $targetDir );

    // In dry-run mode nothing is written, so there is nothing to confirm.
    if ( $this->dryRun ) {
        return;
    }

    $this->scaffoldOrUpdateComposerFile( $assocArgs );

    // Ask whether Composer dependencies should be updated right away.
    $shouldGoOn = $this->promptForComposerUpdate();

    if ( ! $shouldGoOn ) {
        $this->end();
    }
}

private function promptForComposerUpdate() {
    return \cli\confirm( 'Do you want to update Composer dependencies now', true );
}

Where the function \cli\confirm will print the question to STDOUT and wait for a user answer on STDIN; in code terms an “answer” is any string followed by a newline on the STDIN stream.
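To make the mechanics concrete, here is a stripped-down, hypothetical stand-in for such a confirm prompt; it is not the real \cli\confirm implementation (which, among other things, re-prompts on invalid answers), but it shows the core loop: print the question, read one newline-terminated answer, interpret it. Injecting the input stream makes the same logic testable without a real terminal:

```php
<?php
// Hypothetical minimal confirm, not the real \cli\confirm implementation:
// print the question, read one newline-terminated answer from the given
// stream, and fall back to the default on an empty answer.
function confirm_from_stream( $question, $stream, $default = false ) {
    echo $question . ' [y/n] ';

    $answer = strtolower( trim( fgets( $stream ) ) );

    if ( '' === $answer ) {
        return $default;
    }

    return 'y' === $answer;
}
```

Because the stream is injected, a unit test can feed the mocked answer from an in-memory stream; the Behat steps shown later apply the same idea one level up, at the process STDIN level.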
If I were developing a command for the Symfony Console I would use the test support classes provided by the framework, but wp-cli offers nothing quite like that.

Mocking user inputs

The Behat based tests scaffolded along with the boilerplate command code offer a lot in terms of testing, and I wanted to be able to validate the scenario below from the wpcept-launch.feature file:

Feature: Test that the command will optionally launch Composer and wpcept after the scaffold

  Background:
    Given a WP install

  Scenario: the command will end if the user wants to manually update composer dependencies
    Given I will answer 'n' to the 'composer update' question
    When I run `wp scaffold plugin some-plugin --plugin_name="Some Plugin" --plugin_description="Description of the plugin." --plugin_author="Your Name" --plugin_author_uri="http://example.com"`
    When I run `wp wpb-scaffold plugin-tests some-plugin` with input
    Then STDOUT should contain:
    """
    All done
    """
    Then STDOUT should contain:
    """
    Run `composer update` to install or update wp-browser
    """
    Then STDOUT should contain:
    """
    Run `./vendor/bin/wpcept bootstrap --interactive-mode` to start wp-browser interactive test setup
    """

The key to the “input mocking” lies in the Given I will answer... step at the start and the When I run...with input step later.
The wp-cli scaffold package command will create a custom feature context extending the Behat base one; a reference to it is passed to the step methods in the $world variable (file given.php):

$steps->Given( '/^I will answer \'([^\']*)\' to the \'([^\']*)\' question$/', function ( $world, $answer, $question ) {
    // The captured $question only makes the feature file readable; the
    // answers are consumed in registration order.
    if ( ! isset( $world->variables['input'] ) ) {
        $world->variables['input'] = array();
    }

    /** @var FeatureContext $world */
    $world->variables['input'][] = $answer;
} );

Registering an answer simply means pushing it onto a stack of answers that will be sent to the process input when the command runs (file when.php):

$steps->When( '/^I run `([^`]+)` with input$/', function ( $world, $cmd ) {
    /** @var FeatureContext $world */
    $mockInput     = implode( "\n", $world->variables['input'] ) . "\n";
    $world->result = $world->proc( $cmd )->run_with_input( $mockInput );
} );

The WP_CLI\Process::run_with_input method is a rewrite of the run one, intended to replace the process input pipe (file Process.php):

public function run_with_input( $input ) {
    $cwd = $this->cwd;

    $descriptors = array(
        0 => array( 'pipe', 'r' ),
        1 => array( 'pipe', 'w' ),
        2 => array( 'pipe', 'w' ),
    );

    $proc = proc_open( $this->command, $descriptors, $pipes, $cwd, $this->env );

    fwrite( $pipes[0], $input );
    // Close the input pipe to signal EOF to the child process; without
    // this, a command waiting for more input would hang.
    fclose( $pipes[0] );

    $stdout = stream_get_contents( $pipes[1] );
    fclose( $pipes[1] );

    $stderr = stream_get_contents( $pipes[2] );
    fclose( $pipes[2] );

    $r = new ProcessRun( array(
        'stdout'      => $stdout,
        'stderr'      => $stderr,
        'return_code' => proc_close( $proc ),
        'command'     => $this->command,
        'cwd'         => $cwd,
        'env'         => $this->env
    ) );

    if ( $r->return_code || ! empty( $r->stderr ) ) {
        throw new \RuntimeException( $r );
    }

    return $r;
}
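The same wiring can be seen in isolation with a throwaway example; the snippet below is a demo, not part of the package, and uses cat as the child process since it simply echoes its STDIN back to STDOUT. Note how closing the input pipe after writing signals EOF to the child, avoiding the deadlock mentioned above:

```php
<?php
// Demo only: open `cat` as a child process, feed it two mocked answers
// on its STDIN pipe, then read them back from its STDOUT pipe.
$descriptors = array(
    0 => array( 'pipe', 'r' ), // child STDIN: we write to it
    1 => array( 'pipe', 'w' ), // child STDOUT: we read from it
    2 => array( 'pipe', 'w' ), // child STDERR: we read from it
);

$proc = proc_open( 'cat', $descriptors, $pipes );

fwrite( $pipes[0], "n\ny\n" );
fclose( $pipes[0] ); // EOF: without this `cat` would wait forever

$stdout = stream_get_contents( $pipes[1] );
fclose( $pipes[1] );
fclose( $pipes[2] );

$return_code = proc_close( $proc );

echo $stdout;
```

The two newline-terminated answers come back untouched, which is exactly what a prompting command would consume, one line per question.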

This will amount to the scenario failing on the next step and that, in testing terms, is huge progress.

[Screenshot: Behat interactive scenario failing one step later]