How to use Behat for BDD in Windows with Selenium in Docker

A few definitions before diving in (the title is a mouthful):

  • BDD = Behavior Driven Development, a more top-down approach than traditional Test Driven Development (TDD)
  • Behat = a tool for implementing BDD, written in PHP
  • Selenium = an open source browser automation tool, which can be used with Behat through the Mink extension
  • Docker = an open source containerization tool for running software in isolated environments (lighter weight than a full virtual machine)
  • XAMPP = an open source web hosting package for any (X) operating system, containing Apache, MySQL (switched to MariaDB now, but at least it still starts with M), Perl, and PHP

Nobody wants to be surprised by bugs after launch and manual testing is a painfully tedious process. Fully automated testing of web applications is like the holy grail. Unit testing can be useful, but ideally integration and even performance and user testing are automated as much as possible. I was excited to learn about BDD as a top down approach to test development as opposed to traditional TDD which is more of a bottom up approach.

I found Behat while looking for a way to do BDD in PHP. I first tried following the Behat quick start, but I had to modify some steps. First, I already had Composer installed on my PC (from using the Windows installer), so I had to change step 1 from:

php composer.phar require --dev behat/behat  # maybe this works in Linux, but not on my PC

to:

# make sure composer is in your PATH so this will find it - the Windows installer does that for you
composer require --dev behat/behat

Whenever I saw “php composer.phar” in the quick start for behat I replaced it with “composer” and it worked for me.

The basic quick start example got me started with behat, and I found the PHPUnit appendix on assertions to be helpful in order to create my own tests. But I wanted to control a web application, so I then reviewed the documentation about using Mink with Behat.

I was successful in getting the Behat feature/scenario driven testing to work with the Goutte driver. But for some reason (actually several errors I could not get past easily) I was not able to get Selenium to connect properly on my PC using the latest jar file with the Chrome or Firefox webdrivers, even after trying several Stack Overflow suggestions from people with similar challenges. That meant I could only test using the headless driver, which was not enough: I needed both browser and headless testing because there are always JavaScript events and actions on the page. The root problem seemed to be that I was using Windows locally instead of Linux.

While researching how to get the Chrome driver to work with Selenium, I saw a mention of using Docker to avoid environment and software version mismatch issues. That led me to an article explaining how to use Docker with Selenium. This article was very valuable for me (I just followed the Windows instructions, ignoring the Ubuntu steps): it gave me a way past the confusing driver downloads and Behat configuration, letting me test a website with Chrome or Firefox through Selenium.

In order to control the browser in the Docker container (and watch it using TightVNC, as mentioned in the Docker article) I set up the following in the MinkExtension section of my behat.yml file (port 4445 is mapped into Docker per the article's instructions, and the IP comes from the docker-machine ip command, also mentioned in the article):

wd_host: ''

The next challenge was how to connect to localhost (I do my testing in XAMPP on my PC first because that's where I develop) from within Docker. Using ipconfig on my PC I was able to find the internal IPv4 address for my PC on my Wi-Fi network (if you are hard wired, look for the Ethernet connection address). Then I updated this line in my behat.yml file to use it for testing within Docker:

base_url: #this ip is from the wifi ipv4 address from ipconfig

My full behat.yml file then became this (I initially had SSL connection issues with Goutte, so I had to turn off certificate verification):

browser_name: chrome #I want to test with chrome for now, I'm using that docker image
# base_url: #this is an example of an external address to test behat with
# base_url: http://localhost/dsdistribution/ #this doesn't work from within docker container
base_url: #this ip is from the wifi ipv4 address from ipconfig on my PC
verify: false
wd_host: ''

The dsdistribution folder is the webroot of the new application I was developing in my local environment.
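For context, here is roughly where those settings live in behat.yml. This is only a sketch of the structure, assuming Behat 3 with MinkExtension 2 and the Goutte and Selenium2 drivers; the IP addresses and the dsdistribution path are placeholders you must replace with your own ipconfig / docker-machine values:

```yaml
default:
  extensions:
    Behat\MinkExtension:
      browser_name: chrome
      base_url: http://192.168.1.50/dsdistribution/   # placeholder: your PC's Wi-Fi IPv4 from ipconfig
      sessions:
        default:
          goutte:
            guzzle_parameters:
              verify: false                           # skip SSL certificate verification
        javascript:
          selenium2:
            wd_host: http://192.168.99.100:4445/wd/hub  # placeholder: docker-machine ip, port mapped to 4445
```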

My next challenge was to connect to the database on localhost from within Docker. I first updated the database host to be the same Wi-Fi IPv4 address, but then my root user (which has no password) did not have permission to connect. I found a recommendation to uncomment a line in c:/xampp/mysql/bin/my.ini and set it to allow remote connections:

# Change here for bind listening
bind-address=0.0.0.0 #this allows any address to connect to mysql, only safe if network is internal

I just confirmed, using an online port forwarding tester, that my public IP does not have port 3306 open. A closed port is standard for home networks on a PC, but it's good to check anyway.

Actually, the new bind address alone did not fix my connection to the database from within Docker (even after restarting Apache, MySQL, and even XAMPP itself). There were too many rules in the mysql.user table for the root user, so for some reason it was denying access from the ‘remote’ Docker host. The next step was to create a new user using the MySQL command line interface (after logging in as the admin user, which I had named root) like this:

CREATE USER 'behat'@'%' IDENTIFIED BY 'behat';
GRANT ALL PRIVILEGES ON *.* TO 'behat'@'%';

I planned to use this new behat user for testing all my applications locally so I gave it all privileges on every database. Once I followed that last step I just had to update my db connection file in my web application to have a clause like this:

if ($_SERVER['HTTP_HOST']==''){ // the request came in on my PC's Wi-Fi IP, so it's from the docker container
$servername = ''; // same Wi-Fi IP - mysql is not on localhost from the container's point of view
$username = 'behat';
$password = 'behat';
}
Then I was able to open TightVNC (as directed in the Docker article) and watch my Behat test scenarios execute after running a command like this in a command prompt (run from where I installed Behat, pointing at the specific feature file to test):

c:\xampp\htdocs\behat>vendor\bin\behat features\productType.feature

I was doing this with @javascript added above my rather large testing scenario (which logs in and interacts with a form). I don’t think Goutte will work within Docker (at least not with any of the Selenium images I see listed), so I’ll need to change the base_url parameter back to localhost for the headless testing to work again. I don’t know yet whether I can set base_url within a driver-specific section; I will update this post when I figure that out, but please leave a comment if you already know.
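To make the scenario side concrete, a feature tagged to run through the Selenium session looks something like this. This is a hypothetical example using MinkExtension's built-in step definitions; the paths, field names, and expected text are made up:

```gherkin
@javascript
Feature: Product type management
  Scenario: Log in and view the product type form
    Given I am on "/dsdistribution/login.php"
    When I fill in "username" with "admin"
    And I fill in "password" with "secret"
    And I press "Log in"
    Then I should see "Product Types"
```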

How to install phantomjs and casperjs on bluehost linux vps

Last week I needed to automate an internet lookup in real time which required a several step login process (no API available). I was having trouble getting it to work with cURL. I had heard good things about CasperJS and PhantomJS, so I figured I would try them out. I decided to start on my own Bluehost VPS server, because it is always easier to develop on a machine you have full control over.

CasperJS logo

I started with PhantomJS by itself, because I figured if I could get that working I wouldn’t need Casper. However, after attempting to follow the installation instructions and use Phantom directly, I was getting errors that the client class was not found. I tried manually including the class file, but that just led to more and more missing class and object errors, because the whole library is set up to autoload things. After manually including the vendor/autoload.php file I at least didn’t get any more errors, but the simple examples were returning 0, so I decided I needed a different approach.

Installing CasperJS was relatively easy, but let me share the steps I followed to actually get phantom installed (it could be there is something I missed which prevented it from working by itself, but since I got casper working later I was satisfied):

  1. Login to the server through SSH (I use putty for this) as root (this is needed later. If you have a non privileged user who can login through ssh then you can start with that. Or if you’ve disabled root logins then login with the user who can become root)
  2. Become a non privileged user (type ‘sudo su [username]’, where username is the owner of your existing web files – the specific user here is important to avoid permission errors later).
  3. Create a directory for casper and phantom, like automation or browsercontrol, in a directory above an existing domain or subdomain so it’s not accessible in a browser (for security reasons)
  4. CD to the new directory and install composer there (even if composer is already a recognized command, do this anyway): curl -s | php
  5. Create a file in that directory called composer.json with these contents (this is straight from the installation guide):
            {
                "require": {
                    "jonnyw/php-phantomjs": "4.*"
                },
                "config": {
                    "bin-dir": "bin"
                },
                "scripts": {
                    "post-install-cmd": [
                        "PhantomInstaller\\Installer::installPhantomJS"
                    ],
                    "post-update-cmd": [
                        "PhantomInstaller\\Installer::installPhantomJS"
                    ]
                }
            }
  6. Try this command to install phantomjs:
    php composer.phar install
  7. If that doesn’t work (for example no bin folder is created, and/or phantomjs does not appear anywhere locally), then pick a link on the PhantomJS downloads page and manually download it to your server with wget (into your new folder):
  8. Extract the downloaded file (x means extract, j means it’s a bz2 file, and f means the file name is coming next):
    tar xjf phantomjs-2.1.1-linux-x86_64.tar.bz2
  9. Use this command to figure out what folders are currently in your path:
    echo $PATH
  10. Pick one like /usr/bin which is not for system programs (/bin and /sbin are, so don’t use those to avoid confusion later)
  11. Figure out the absolute path to phantomjs in your system now by finding it and then using pwd to get the path (likely ending with something like phantomjs-2.1.1-linux-x86_64/bin/phantomjs)
  12. Become root (if you logged in as root you can be root again by typing ‘exit’ at the prompt)
  13. Create a symbolic link to phantomjs inside the folder you picked in step 10 (like /usr/bin). Something like this:
    ln -sf /full/path/to/phantomjs /usr/bin/phantomjs
  14. Validate it worked by becoming the non-privileged user again and typing “phantomjs --version”. You should see a version number, not a “command not found” complaint
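If you want to rehearse the symlink trick from steps 11–13 before touching /usr/bin, you can do a dry run with a scratch directory. Everything below is a stand-in: the fake phantomjs script just prints a version number, and /tmp/fakebin plays the role of the PATH folder you picked in step 10:

```shell
# Rehearse the symlink approach with throwaway directories instead of /usr/bin
mkdir -p /tmp/fakebin /tmp/phantom-install/bin
# stand-in for the real phantomjs binary
printf '#!/bin/sh\necho 2.1.1\n' > /tmp/phantom-install/bin/phantomjs
chmod +x /tmp/phantom-install/bin/phantomjs
# absolute path to the binary -> link placed in a directory on PATH
ln -sf /tmp/phantom-install/bin/phantomjs /tmp/fakebin/phantomjs
PATH="/tmp/fakebin:$PATH" phantomjs --version   # prints 2.1.1
```

The real steps are identical, just with the extracted phantomjs path on the left of ln and /usr/bin on the right (which is why you need to be root for that one command).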

Then for CasperJS:

  • Use slightly modified instructions to install Casper from git and put a link to it into your path (run the clone as the non-privileged user, starting in the new folder you created, then become root for the ln command):
    git clone git://
    cd casperjs
    ln -sf `pwd`/bin/casperjs /usr/bin/casperjs

    If you picked a different folder in step 10, use it instead of /usr/bin in the last command.

  • Validate casper install by typing casperjs from the command line from any folder. You should see info about casperjs, not a complaint about a missing command.
  • In order to use casper within a PHP file, you will need to use the exec command (make sure that is allowed in your php.ini file). Here is a test php file you can use to make sure it is setup fully:

    if(function_exists('exec')) {
    	echo "exec is enabled<br>";
    } else echo "exec is not enabled<br>";
    exec("phantomjs --version", $outArr1);
    exec("casperjs", $outArr2);
    echo "<br>PhantomJS version output:<br>";
    echo implode("<br>", $outArr1);
    echo "<br>CasperJS output:<br>";
    echo implode("<br>", $outArr2);

    If you see a version number like 2.1.1 for phantom and a bunch of info about casper (after the CasperJS output line) you are good to go. The next step is to follow the casper instructions to create your javascript logic and then change your casper exec command to be something like this:

    exec("casperjs casperLogic.js --getParam1=\"This is passed to the script\" --getParam2=\"This is also passed to the script\"", $outArr);

    Happy automating!

    How to Add a Form in a WordPress Post

    This is a simple step by step guide to creating a form within a WordPress post that will capture data and store it in a MySQL database on your server.

    Note: the below example is a live, working form on this page

    Step 1:

    Create a simple form for a petition, contact request, or registration:


    Here is the html code for it:

    <table>
    <tr><th>Name:</th><td><input id="userName" /></td></tr>
    <tr><th>Address:</th><td><input id="userAddress" /></td></tr>
    <tr><th>City:</th><td><input id="userCity" /></td></tr>
    <tr><th>State:</th><td><input id="userState" /></td></tr>
    <tr><th>Zip:</th><td><input id="userZip" /></td></tr>
    <tr><th>Email:</th><td><input id="userEmail" /></td></tr>
    <tr><th>Comment:</th><td><textarea id="userComment"></textarea></td></tr>
    <tr><td colspan="2"><button type="button" onclick="saveUser();">Save</button></td></tr>
    </table>

    Step 2:

    Create a JavaScript function to process the form and send it to the server. The following function was added to the page using the CSS & Javascript Toolbox plugin (it uses jQuery because WordPress already has it available):

    //Process the form and save the data record
    function saveUser() {
      //First gather the form parameters and make sure name and email at least are populated
      var name = jQuery("#userName").val();
      var address = jQuery("#userAddress").val();
      var city = jQuery("#userCity").val();
      var state = jQuery("#userState").val();
      var zip = jQuery("#userZip").val();
      var email = jQuery("#userEmail").val();
      var comment = jQuery("#userComment").val();
      if (name.length<1 || email.length<1) {
        alert("Please at least enter your name and email.");
        return false;
      } else {
        //Now send the data to a server side function to really validate it and save it.
        jQuery.ajax({
          type: "POST",
          url: "/ajax/saveUser.php",
          // key names match what saveUser.php reads out of $_POST
          data: { userName:name,address:address,city:city,state:state,zip:zip,email:email,comment:comment }
        }).done(function( results ) {
          if(results.length<1){ // network error
            alert("There was a network error, please try again or contact support and tell them what you are trying to do.");
          } else { // this is a successful interaction
            var resultObj = jQuery.parseJSON(results);
            if (resultObj.errorMsg.length>0) {
              alert(resultObj.errorMsg); //show the server side validation error
            } else {
              //Record save successful
              alert("Thanks for your information, it was saved successfully!");
              //Show the user what they have entered (userList is whatever element you display results in):
              jQuery("#userList").html(resultObj.userList);
            }
          }
        });
      }
    }
    Step 3:

    Make sure you have a database table ready to store the information. Below is a simple table used to store the info in this example:

    CREATE TABLE IF NOT EXISTS `user_info` (
      `userID` int(11) NOT NULL AUTO_INCREMENT,
      `userName` varchar(80) DEFAULT NULL,
      `address` varchar(100) DEFAULT NULL,
      `city` varchar(80) DEFAULT NULL,
      `state` varchar(40) DEFAULT NULL,
      `zip` varchar(5) DEFAULT NULL,
      `email` varchar(80) DEFAULT NULL,
      `comment` text DEFAULT NULL,
      `userIP` VARCHAR( 30 ) DEFAULT NULL,
      `dateAdded` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      PRIMARY KEY (`userID`),
      KEY `userIP` (`userIP`)
    );

    Step 4:

    Create the server side code to accept the form data, really validate it, and put it in the database or send back an error. Below is the server side code which does all that for the example data above. Since this form is “in the wild”, I’m capturing the IP address because I will only show people the results they entered themselves:

    <?php
    /**
     * File: saveUser.php
     * Description: This file takes the passed in user information and validates it before
     * 	saving it to the database and returning content to show on the page.
     * Inputs (in POST):
     * 	userName, address, city, state, zip, email, comment
     * Outputs:
     * 	either an error message or the list of records entered from this IP (in JSON)
     */
    include("db.php"); // define your database credentials ($servername, $username, $password, $dbname) in another file
    #connect to the database
    $db = mysqli_connect($servername, $username, $password, $dbname);
    mysqli_select_db($db,$dbname) or die('Could not select database');
    #Get the passed in parameters into local variables, escaped for database input
    $userName = empty($_POST['userName'])?'':addslashes($_POST['userName']);
    $address = empty($_POST['address'])?'':addslashes($_POST['address']);
    $city = empty($_POST['city'])?'':addslashes($_POST['city']);
    $state = empty($_POST['state'])?'':addslashes($_POST['state']);
    //only accept 5 numbers for zip
    $zip = empty($_POST['zip'])?'':substr(preg_replace('/\D/','',$_POST['zip']),0,5);
    $email = empty($_POST['email'])?'':addslashes($_POST['email']);
    $comment = empty($_POST['comment'])?'':addslashes($_POST['comment']);
    $userIP = $_SERVER['REMOTE_ADDR'];
    #This is an array used for gathering all the outputs of this file
    $jsonObj = array();
    $jsonObj['errorMsg'] = "";
    $jsonObj['userList'] = "";
    #Validate inputs
    if (empty($userName) or empty($email)) {
    	$jsonObj['errorMsg'] = "Please at least enter your name and email.";
    } else if (strpos($email,'@')===false) {
    	//there are many more validations that can be made for emails but this is a start
    	$jsonObj['errorMsg'] = "Please enter a valid email.";
    }
    #Enter the data record
    if (empty($jsonObj['errorMsg'])) {
    	$sql = "insert into user_info (userName,address,city,state,zip,email,comment,userIP) ".
    		"values ('$userName','$address','$city','$state','$zip','$email','$comment','$userIP')";
    	if (!empty($debug)) echo "about to run $sql ";
    	mysqli_query($db, $sql); $err = mysqli_error($db); if (!empty($err)) $jsonObj['errorMsg'] = $err;
    }
    #Now get the list of data entered by this IP, without any script tags to prevent XSS:
    if (empty($jsonObj['errorMsg'])) {
    	$sql = "select userName,city,state from user_info where userIP='".$userIP."'";
    	$rs = mysqli_query($db, $sql); $err = mysqli_error($db); if (!empty($err)) $jsonObj['errorMsg'] = $err;
    	while($row = mysqli_fetch_assoc($rs)) {
    		$jsonObj['userList'].= "<tr><td>".strip_tags($row['userName'])."</td><td>".strip_tags($row['city']).
    			"</td><td>".strip_tags($row['state'])."</td></tr>";
    	}
    }
    #Now send back the data in json format
    echo json_encode($jsonObj);

    Step 5:

    Do something with the results! You could export them, feed them into another system using an API, send them in an email to someone, or just display them like so:

    Name City State
    You have not entered any user records yet, try it out!
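    The "display them" option can be as simple as a table whose body is filled from the userList markup that saveUser.php sends back. This is only a sketch; the userList id is a made-up hook matching whatever your jQuery callback targets:

```html
<table>
  <tr><th>Name</th><th>City</th><th>State</th></tr>
  <tbody id="userList">
    <!-- <tr><td>...</td></tr> rows built by saveUser.php land here -->
  </tbody>
</table>
```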

    Google Maps API Version 3 Geocoding Example

    Today I realized that my location markers were not being displayed in any of the Google Maps implementations I had put into place online. After digging into it (trying to figure out what a code 610 response meant), I realized that I was using version 2 of the Google Maps API for retrieving latitude and longitude based on a given address (i.e., “geocoding”).

    My actual maps were being displayed using version 3 of the API but the example I found a couple years ago and followed for geocoding used version 2, which was deprecated and will no longer be supported sometime in 2013 (I’ve heard both March and September, but since my code isn’t working in April I suppose the March date was more accurate).

    Since I wasn’t able to find an example for just what I needed (getting the latitude and longitude based on an address, so I could show markers on a google map) I read through the docs and updated my code.  See below for my implementation (I use this as an included file wherever I need to pull latitude/longitude, it could be put into a function if you want). The address, city, state, and zip should be stored in the $address, $city, $state, and $zip params, and I am also saving the latitude and longitude in both a local database and a markers array to be used in a google map later:

    <?php
    /**
     * File: incGetGeoCodes.php
     * Description: this file pulls the lat/long coordinates using Version 3 of the google maps api
     * 	The key parameter caused it to fail so I removed that and it works, but we may be under a
     * 	lower daily usage limit because of that
     */
    // Initialize delay in geocode speed
    $delay = 0;
    // $base_url is set earlier to the version 3 geocode endpoint (json output)
    $geoaddress = $address . ", " . $city . ", " . $state . " " . $zip;
    $request_url = $base_url . "&address=" . urlencode($geoaddress);
    usleep($delay); // back off if a previous call was throttled
    $resultArr = json_decode(file_get_contents($request_url), true); // fetch and parse the json response
    $status = $resultArr['status'];
    if (strcmp($status, "OK") == 0) {
    	// Successful geocode
    	$markers[$shop_id]['lat'] = $resultArr['results'][0]['geometry']['location']['lat'];
    	$markers[$shop_id]['long'] = $resultArr['results'][0]['geometry']['location']['lng'];
    	$markers[$shop_id]['html'] = $name."<br>".$address."<br>".$city." ".$state." ".$zip."<br>".$phone;
    	#Now update the db so we don't have to pull this again
    	$query = "update entities ".
    		"set shop_latitude=".addslashes($markers[$shop_id]['lat']).", ".
    		"shop_longitude=".addslashes($markers[$shop_id]['long'])." ".
    		"where EntityID=".intval($EntityID);
    	mysql_query($query); $err=mysql_error(); if (!empty($err)) echo "query:$query, error: ".$err."<br>";
    } else if (strcmp($status, "620") == 0) {
    	// sent geocodes too fast
    	$delay += 100000;
    } else {
    	// failure to geocode
    	$error .= urlencode("Address " . $geoaddress . " failed to be geocoded. ");
    	$error .= urlencode("Received status " . $status . "%0D%0A");
    }

    I was surprised to see no mention of an API key parameter to be used, if someone knows if that is an option (to increase the daily quota of geocoding api calls that can be made) please leave a comment and let me know. I’m just happy to get it working again for now. 🙂

    How to integrate existing website into Amazon Simple Email Service

    I was preparing to integrate a client’s business to send emails through AWeber last week when I realized that their API does not support sending transactional emails to one or more people on an as-needed basis. I confirmed with AWeber’s tech support that their API is read only; it does not allow sending emails at all (they have a web interface for that). I asked what they would use in my situation, and they said the other big newsletter names I had heard of (MailChimp, Constant Contact, etc.) also only support newsletter-type messages.

    What my client needed was a more robust email system because every week they had situations where people were not getting emails (sent from their own server using the php mail command). I recommended AWeber because I knew their deliverability was very high and they would keep up with any changes in email standards. I figured since they had an API that I could specify email addresses and email contents to send using it but after looking for that functionality I came up empty handed.

    The Aweber tech I spoke to mentioned the possibility of using SalesForce for this type of account message emailing, but I knew that would be overkill and overpriced for just sending emails. After a quick search I was happy to find out that Amazon provides an email service called “Simple Email Service” (SES) that allows for a certain number of emails for free if you already have an Amazon Web Services (AWS) account. Since my client had signed up for the Amazon S3 service (a storage solution) a few months prior in order to save backups of their weblogs they already had an AWS account.

    After reading a few documents about different ways to use the Amazon Simple Email Service, I decided it would be simplest for me to integrate using an authenticated SMTP connection. Since there were only half a dozen files that use the mail command (found by running ‘grep "mail(" *php’, ‘grep "mail(" */*php’, and so on in the webroot), I only needed to update those files after getting the Pear modules installed.
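    Incidentally, GNU grep can walk the whole webroot in one pass instead of repeating the command for each directory depth. The paths and file contents below are just an example setup to show the recursive form:

```shell
# Find every PHP file that calls mail(), recursively, in one command (GNU grep)
mkdir -p /tmp/webroot/inc            # example webroot
echo '<?php mail($to, $subj, $msg);' > /tmp/webroot/inc/notify.php
echo '<?php echo "no mail call";'    > /tmp/webroot/index.php
grep -rl 'mail(' --include='*.php' /tmp/webroot   # prints /tmp/webroot/inc/notify.php
```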

    I started using just the pear Mail module but then when I tried to send html emails they showed up as source code and not rendered, so I added the Mail_Mime files too. The way I did it was to first try installing it using “pear install Mail” (and pear install Mail_Mime) as root, but after a bunch of trouble with include paths I ended up downloading the tarballs to the server using wget, then extracting them into a subdirectory under the webroot (which I later protected using htaccess to deny all connections directly to the files). Next I tried including the main Mail.php file with error printing on and updated several of the pear files to refer to the right relative path for inclusion. I did the same thing with the Mail/mime.php file, adjusting paths as needed until the errors were all gone.

    I had a common included file at the top of each of my php files so inside that common file I included the pear files and defined a few constants for connecting to the smtp server at amazon (pear is the name of the folder in my webroot where I put the pear files):

    #Show errors - after I got the paths right I commented this section
    ini_set('display_errors', true);
    ini_set('html_errors', true);
    #Pear email modules
    require_once "pear/Mail.php";
    require_once "pear/Mail/mime.php";

    Then in each file where I wanted to send email, I used this framework to do it:

    # Constructing the email
    $sender = "Sender Name <>";                        // The sender's name and email address
    $recipient = "";                                   // The Recipient's name and email address
    $text = "this will be sent as text";               // Text version of the email
    $html = "<h1>This will be rendered as html</h1>";  // HTML version of the email
    $crlf = "\n";
    $headers = array(
            'From'          => $sender,
            'To'            => $recipient,
            'Subject'       => $subject
    );
    # Creating the Mime message
    $mime = new Mail_mime($crlf);
    # Setting the body of the email
    if (!empty($text)) {
    	$mime->setTXTBody($text);
    }
    if (!empty($html)) {
    	$mime->setHTMLBody($html);
    }
    #Get the header and body into the right format
    $body = $mime->get();
    $headers = $mime->headers($headers);
    $headers['From'] = $sender;  //I heard some people had trouble with this header getting messed up
    #Setup the connection parameters to connect to Amazon SES
    $smtp_params["host"]     = MAILHOST;
    $smtp_params["port"]     = MAILPORT;
    $smtp_params["auth"]     = true;
    $smtp_params["username"] = MAILUSER;
    $smtp_params["password"] = MAILPWD;
    # Sending the email using smtp
    $mail =& Mail::factory("smtp", $smtp_params);
    $result = $mail->send($recipient, $headers, $body);
    #Below is only used for debugging until you get it working
    if (PEAR::isError($result)) {
       echo("<p>" . $result->getMessage() . "</p>");
    } else {
       echo("<p>Message successfully sent!</p>");
    }

    Amazon doesn’t put your emails into a queue if you send them too fast, so in order to stay under their sending limits when sending batches of messages you can use the PHP usleep command to delay execution. I found that this delay didn’t actually work until I added “set_time_limit(0);” to the top of the file sending the batch of emails, however. Test everything; different server environments will respond differently (just like Browsers are like Churches). I used an echo date(‘h:i:s’) command between delays to see whether the delay worked or not.
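    The throttling pattern itself is simple and worth sanity checking outside PHP first. Here is the same idea in shell ("send" is just an echo; in the PHP batch it is the Mail::send call, with usleep() between iterations and set_time_limit(0) at the top of the file):

```shell
# Minimal throttling sketch: pause between sends to stay under a rate limit
start=$(date +%s)
for i in 1 2 3; do
  echo "$(date +%H:%M:%S) sending message $i"   # timestamps show the delay is real
  sleep 1
done
end=$(date +%s)
echo "waited $((end - start)) seconds total"
```

Printing a timestamp per iteration is the same trick as the echo date('h:i:s') check mentioned above.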