How to use Behat for BDD in Windows with Selenium in Docker

A few definitions before diving in (the title is a mouthful):

  • BDD = Behavior Driven Development, a more top-down approach than traditional Test Driven Development (TDD)
  • Behat = Tool for implementing BDD written in PHP
  • Selenium = open source browser automation tool, which can be used with Behat through the Mink extension
  • Docker = open source containerization tool for running software in isolated environments (lighter weight than a full virtual machine)
  • XAMPP = open source web hosting stack for any (X) operating system containing Apache, MySQL (switched to MariaDB now, but at least it still starts with M), Perl, and PHP

Nobody wants to be surprised by bugs after launch and manual testing is a painfully tedious process. Fully automated testing of web applications is like the holy grail. Unit testing can be useful, but ideally integration and even performance and user testing are automated as much as possible. I was excited to learn about BDD as a top down approach to test development as opposed to traditional TDD which is more of a bottom up approach.

I found Behat while looking for a way to do BDD in PHP. I first tried following the Behat quick start, but I had to modify some steps. First, I already had Composer installed on my PC (from using the Windows installer), so I had to change step 1 from:

php composer.phar require --dev behat/behat  #maybe this works in linux but not on my pc

to:

#make sure composer is in your path so this will find it - the windows installer does that for you
composer require --dev behat/behat

Whenever I saw “php composer.phar” in the Behat quick start, I replaced it with “composer” and it worked for me.

The basic quick start example got me started with behat, and I found the PHPUnit appendix on assertions to be helpful in order to create my own tests. But I wanted to control a web application, so I then reviewed the documentation about using Mink with Behat.
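To give a flavor of what the scenarios look like, here is a minimal feature file of the kind Behat runs. The feature name, page, and form fields below are made-up examples (not from the quick start), and the Given/When/Then steps shown are ones the MinkExtension provides out of the box once it is configured:

```gherkin
Feature: Product type management
  In order to keep the catalog organized
  As an admin
  I need to be able to add product types

  Scenario: Add a new product type
    Given I am on "/productTypes.php"
    When I fill in "name" with "Widgets"
    And I press "Save"
    Then I should see "Product type saved"
```

Running `vendor\bin\behat` picks up every .feature file and executes each step against the configured driver.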

I was successful in getting the Behat feature/scenario driven testing to work with the Goutte driver, but I was not able to get Selenium to connect properly on my PC using the latest jar file with the Chrome or Firefox webdrivers (I hit several errors I could not get past easily, even after trying several Stack Overflow suggestions from people with similar challenges). That meant I could only test using the headless driver. I needed both real-browser and headless testing because there are always javascript events and actions which occur on the page, and the root of the problem seemed to be that I was running Windows locally instead of linux.

While researching how to get the Chrome driver to work with Selenium, I saw a mention of using Docker to avoid environment and software version mismatch issues. That led me to an article explaining how to use Docker with Selenium. The article was very valuable for me (I just followed the Windows instructions, ignoring the Ubuntu steps), because it let me skip the confusing driver downloads and Behat configuration and go straight to testing a website with Chrome or Firefox through Selenium.

In order to control the browser in the docker container (and watch it using TightVNC as mentioned in the docker article) I set up the following in the MinkExtension section of my behat.yml file (port 4445 is mapped to docker following the article instructions, and the ip is from the docker-machine ip command as mentioned in the article):

    wd_host: '' 
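The value itself is just the standard Selenium WebDriver hub URL. Assuming a docker-machine IP of 192.168.99.100 (a common VirtualBox default; check yours with the docker-machine ip command), the fragment looks something like this:

```yaml
selenium2:
    wd_host: 'http://192.168.99.100:4445/wd/hub'
```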

The next challenge was how to connect to localhost (I do my testing in xampp on my PC first because that’s where I develop) from within docker. Using ipconfig on my pc I was able to find the internal network IPV4 address for my PC on my wifi network (if you are hard wired look for the ethernet connection address). Then I updated this line in my behat.yml file to use it for testing within docker:

base_url: #this ip is from the wifi ipv4 address from ipconfig

My full behat.yml file then became this (I initially had ssl connection issues with goutte so I had to turn off the verification):

default:
    extensions:
        Behat\MinkExtension:
            browser_name: chrome  #I want to test with chrome for now, I'm using that docker image
#            base_url:  #this is an example of an external address to test behat with
#            base_url: http://localhost/dsdistribution/  #this doesn't work from within docker container
            base_url:  #this ip is from the wifi ipv4 address from ipconfig on my PC
            goutte:
                guzzle:
                    verify: false
            selenium2:
                wd_host: ''

The dsdistribution folder is the webroot of the new application I was developing in my local environment.

My next challenge was to connect to the database on localhost from within docker. I first updated the host to be, but then my root user (which has no password) did not have permission to connect. I found a recommendation to uncomment a line in c:/xampp/mysql/bin/my.ini and set it to allow remote connections:

# Change here for bind listening
bind-address='' #this allows any address to connect to mysql, only safe if network is internal

I just confirmed that my public IP does not have port 3306 open using a port forwarding tester at; a closed 3306 is standard for home networks behind a router, but it’s good to check anyway.

Actually the new bind address alone did not fix my connection to the database on localhost from within docker (even after restarting apache, mysql, and even xampp itself). The mysql.user table had several rules for the root user, and for some reason access was being denied when connecting from the ‘remote’ docker host. So the next step was to create a new user from the mysql command line interface (after logging in as my admin user, root) like this:

CREATE USER 'behat'@'%' IDENTIFIED BY 'behat';
GRANT ALL PRIVILEGES ON *.* TO 'behat'@'%';
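Depending on the MySQL version, you may also need to flush the privilege cache, and it’s worth verifying the new user from another machine on the network. The 192.168.1.10 below is a made-up example standing in for the PC’s wifi IPV4 address:

```sql
FLUSH PRIVILEGES;
-- then, from another machine (or from inside the docker container), verify the login works:
--   mysql -h 192.168.1.10 -u behat -pbehat
```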

I planned to use this new behat user for testing all my applications locally so I gave it all privileges on every database. Once I followed that last step I just had to update my db connection file in my web application to have a clause like this:

if ($_SERVER['HTTP_HOST']==''){
    $servername = '';
    $username = 'behat';
    $password = 'behat';
}
Then I was able to open TightVNC (as directed in the docker article) and watch my behat test scenarios execute after running a command like this at a command prompt (from where I installed behat, pointing to a specific feature file to test):

c:\xampp\htdocs\behat>vendor\bin\behat features\productType.feature

I was doing this with @javascript added above my rather large testing scenario (which logs in and interacts with a form). I don’t think goutte will work within docker (at least not with any image I see in the list at, so I’ll need to change the base_url parameter back to localhost for the headless testing to work again. I don’t know yet whether I can set the base_url within a driver section. I will update this post when I figure that out, but please leave a comment if you know already.
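One approach I plan to try (a sketch, not something I have verified yet): Behat supports profiles in behat.yml, and a profile can override base_url, so a separate docker profile could use the wifi IP while the default profile stays on localhost. The 192.168.1.10 address below is a made-up example:

```yaml
# run with: vendor\bin\behat -p docker features\productType.feature
docker:
    extensions:
        Behat\MinkExtension:
            base_url: http://192.168.1.10/dsdistribution/
```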

How to install phantomjs and casperjs on bluehost linux vps

Last week I needed to automate an internet lookup in real time which required a several step login process (no API available). I was having trouble getting it to work with Curl. I had heard good things about CasperJS and PhantomJS so I figured I would try it out. I decided to use my own bluehost vps server to start with because it is always easier to develop a program which you have full control over.


I started with PhantomJS by itself, because I figured if I could get that working I didn’t need casper. However, after attempting to follow the installation instructions and using Phantom directly, I was getting errors that the client class was not found. I tried manually including the class file, but that just led to more and more missing class and object errors because the whole library is set up to autoload things. After manually including the vendor/autoload.php file I at least didn’t get any more errors, but the simple examples were returning responses of 0, so I decided I needed a different approach.

Installing CasperJS was relatively easy, but let me share the steps I followed to actually get phantom installed (maybe I missed something that prevented it from working by itself, but since I got casper working later I was satisfied):

  1. Login to the server through SSH (I use putty for this) as root (this is needed later. If you have a non privileged user who can login through ssh then you can start with that. Or if you’ve disabled root logins then login with the user who can become root)
  2. Become a non privileged user (type ‘sudo su [username]’, where username is the owner of your existing web files – the specific user here is important to avoid permission errors later).
  3. Create a directory for casper and phantom, like automation or browsercontrol, in a directory above an existing domain or subdomain so it’s not accessible in a browser (for security reasons)
  4. CD to the new directory and install composer there (even if composer is already a recognized command, do this anyway): curl -s | php
  5. Create a file in that directory called composer.json with these contents (this is straight from the installation guide):
            "require": {
                "jonnyw/php-phantomjs": "4.*"
            "config": {
                "bin-dir": "bin"
            "scripts": {
                "post-install-cmd": [
                "post-update-cmd": [
  6. Try this command to install phantomjs:
    php composer.phar install
  7. If that doesn’t work (for example there is no bin folder created, and/or phantomjs is not created anywhere locally), then go to and pick a link to manually download to your server (in your new folder, for example this is the command I used from my new directory):
  8. Extract the downloaded file (x means extract, j means it’s a bz2 file, and f means the file name is coming next):
    tar xjf phantomjs-2.1.1-linux-x86_64.tar.bz2
  9. Use this command to figure out what folders are currently in your path:
    echo $PATH
  10. Pick one like /usr/bin which is not for system programs (/bin and /sbin are, so don’t use those to avoid confusion later)
  11. Figure out the absolute path to phantomjs in your system now by finding it and then using pwd to get the path (likely ending with something like phantomjs-2.1.1-linux-x86_64/bin/phantomjs)
  12. Become root (if you logged in as root you can be root again by typing ‘exit’ at the prompt)
  13. Create a symbolic link to phantomjs inside the folder you picked in step 10 (like /usr/bin). Something like this:
    ln -sf /full/path/to/phantomjs /usr/bin/phantomjs
  14. Validate it worked by becoming the non-privileged user again and typing “phantomjs --version”. You should see a version number, not a “command not found” complaint

Then for CasperJS:

  • Use slightly modified instructions from to install casper from git and put a link to it into your path (as the non privileged user, starting in the new folder you created):
    git clone git://
    cd casperjs
    ln -sf `pwd`/bin/casperjs /usr/bin/casperjs

    If you picked a different folder in step 10, use it in the second part of the third command instead of /usr/bin. (As with phantomjs, you will need to become root for the ln command, since /usr/bin is not writable by a normal user.)

  • Validate casper install by typing casperjs from the command line from any folder. You should see info about casperjs, not a complaint about a missing command.
  • In order to use casper within a PHP file, you will need to use the exec command (make sure that is allowed in your php.ini file). Here is a test php file you can use to make sure it is setup fully:

    <?php
    if(function_exists('exec')) {
    	echo "exec is enabled<br>";
    } else {
    	echo "exec is not enabled<br>";
    }
    exec("phantomjs --version", $outArr1);
    exec("casperjs", $outArr2);
    echo "<br>PhantomJS version output:<br>";
    echo implode("<br>", $outArr1);
    echo "<br>CasperJS output:<br>";
    echo implode("<br>", $outArr2);

    If you see a version number like 2.1.1 for phantom and a bunch of info about casper (after the CasperJS output line) you are good to go. The next step is to follow the casper instructions to create your javascript logic and then change your casper exec command to be something like this:

    exec("casperjs casperLogic.js --getParam1=\"This is passed to the script\" --getParam2=\"This is also passed to the script\"", $outArr);

    Happy automating!

    How to integrate existing website into Amazon Simple Email Service

    I was preparing to integrate a client’s business to send emails through AWeber last week when I realized that their API does not support sending transactional emails to one or more people on an as-needed basis. I confirmed with AWeber’s tech support that their API is read only; it does not allow the sending of emails at all (they have a web interface for sending emails). I asked them what they would use in my situation and they said that the other big newsletter names I had heard of before (MailChimp, Constant Contact, etc.) also only supported newsletter type messages.

    What my client needed was a more robust email system because every week they had situations where people were not getting emails (sent from their own server using the php mail command). I recommended AWeber because I knew their deliverability was very high and they would keep up with any changes in email standards. I figured since they had an API that I could specify email addresses and email contents to send using it but after looking for that functionality I came up empty handed.

    The Aweber tech I spoke to mentioned the possibility of using SalesForce for this type of account message emailing, but I knew that would be overkill and overpriced for just sending emails. After a quick search I was happy to find out that Amazon provides an email service called “Simple Email Service” (SES) that allows for a certain number of emails for free if you already have an Amazon Web Services (AWS) account. Since my client had signed up for the Amazon S3 service (a storage solution) a few months prior in order to save backups of their weblogs they already had an AWS account.

    After reading a few documents about different ways to use the Amazon Simple Email Service I decided that it would be simplest for me to integrate using an authenticated SMTP connection. Since there were only half a dozen files that use the mail command (found by running ‘grep "mail(" *php’ and ‘grep "mail(" */*php’ and so on in the webroot), I only needed to update those files after getting the Pear modules installed.

    I started with just the pear Mail module, but the html emails I sent showed up as source code rather than rendered, so I added the Mail_Mime files too. I first tried installing with “pear install Mail” (and “pear install Mail_Mime”) as root, but after a bunch of trouble with include paths I ended up downloading the tarballs to the server using wget and extracting them into a subdirectory under the webroot (which I later protected using htaccess to deny all direct connections to the files). Next I included the main Mail.php file with error printing on and updated several of the pear files to refer to the right relative path for inclusion. I did the same with the Mail/mime.php file, adjusting paths as needed until the errors were all gone.

    I had a common included file at the top of each of my php files so inside that common file I included the pear files and defined a few constants for connecting to the smtp server at amazon (pear is the name of the folder in my webroot where I put the pear files):

    #Show errors - after I got the paths right I commented this section
    ini_set('display_errors', true);
    ini_set('html_errors', true);
    #Pear email modules
    require_once "pear/Mail.php";
    require_once "pear/Mail/mime.php";

    Then in each file where I wanted to send email, I used this framework to do it:

    # Constructing the email
    $sender = "Sender Name <>";
    $recipient = "";                           // The Recipients name and email address
    $text = "this will be sent as text";                                  // Text version of the email
    $html = "<h1>This will be rendered as html</h1>";  // HTML version of the email
    $subject = "The email subject";
    $crlf = "\n";
    $headers = array(
            'From'          => $sender,
            'To'            => $recipient,
            'Subject'       => $subject
    );
    # Creating the Mime message
    $mime = new Mail_mime($crlf);
    # Setting the body of the email - set both so the client can pick which to render
    if (!empty($text)) {
        $mime->setTXTBody($text);
    }
    if (!empty($html)) {
        $mime->setHTMLBody($html);
    }
    #Get the header and body into the right format
    $body = $mime->get();
    $headers = $mime->headers($headers);
    $headers['From'] = $sender;  //I heard some people had trouble with this header getting messed up
    #Setup the connection parameters to connect to Amazon SES
    $smtp_params["host"]     = MAILHOST;
    $smtp_params["port"]     = MAILPORT;
    $smtp_params["auth"]     = true;
    $smtp_params["username"] = MAILUSER;
    $smtp_params["password"] = MAILPWD;
    # Sending the email using smtp
    $mail =& Mail::factory("smtp", $smtp_params);
    $result = $mail->send($recipient, $headers, $body);
    #Below is only used for debugging until you get it working
    if (PEAR::isError($result)) {
       echo("<p>" . $result->getMessage() . "</p>");
    } else {
       echo("<p>Message successfully sent!</p>");
    }

    Amazon doesn’t queue your emails if you send them too fast, so in order to stay under their sending limits when sending batches of messages you can use the php usleep command to delay execution. I found that the delay didn’t actually take effect until I added “set_time_limit(0);” to the top of the file sending the batch of emails, however. Test everything; different server environments respond differently (just like Browsers are like Churches). I used an echo date(‘h:i:s’) command between delays to verify whether the delay worked.
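    The throttling loop looked roughly like this. It is a sketch: sendOneEmail is a hypothetical stand-in for the Mail_mime/Mail::factory code above, the addresses are placeholders, and the delay should be tuned to your own SES sending rate:

```php
<?php
set_time_limit(0);  #without this, usleep had no effect on my server
$recipients = array("one@example.com", "two@example.com");  #placeholder addresses
foreach ($recipients as $recipient) {
    #sendOneEmail($recipient);  #hypothetical wrapper around the send code above
    echo date('h:i:s') . "<br>";  #watch the timestamps to confirm the delay is real
    usleep(500000);  #half a second between sends to stay under the SES rate limit
}
```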

    How to Transfer a Folder Full of Files and Sub-Directories to an FTP Server

    Today I had to figure out how to transfer 65GB of data (including dozens of files in separate subfolders) from a linux server (where it was stored after being pulled from a FilesAnywhere account using a recursive wget command, “wget -r -l 0”) to another storage account at

    I first tried tarring up all the files and folders and transferring that as one file, but I couldn’t get the support people at egnyte to untar it for me. So I began writing a perl script to do the task. After a while reading about how to make multidimensional arrays in perl for the file structure (technically this is not possible, but interacting with arrays of references in perl is very similar to multidimensional arrays in php), I decided to look for a simpler solution.

    What I found was a CPAN module that allows for recursive gets and puts (rget and rput, respectively). The details of the module are described at. So after a bunch of trial and error I came up with the following script, which you can copy from below.

    All you need to do (once you have perl and the Net::FTP::Recursive cpan module installed - see links below for help with that) is put your ftp credentials into the file, put the perl script in the directory where you have files and folders to transfer (I actually put the perl file in the folder above the one I wanted to transfer from, so I wouldn’t transfer the perl file too), then execute it by typing ‘perl “”‘ (or in my case, perl ../ “”).

    # Filename: - this script will recursively transfer files in a directory tree to an ftp server
    # Inputs:
    #	$host is an ip address of the ftp server to connect to
    # Configurable parameters:
    #	$user is the ftp username
    #	$password is the ftp password
    #	$remoteDirectory to transfer files and folders of files to (defaults to top level directory)
    # Example usages:
    #	perl ""
    #	perl ""
    use strict;
    use Net::FTP::Recursive;
    use FileHandle;
    my $numParams = scalar(@ARGV);
    if ($numParams<1) {
    	print "Usage:\n";
    	print 'perl "ftphost"'."\n";
    	print "ftphost is an ip address or domain name of the ftp server to connect to\n";
    	exit;
    }
    my $host=$ARGV[0];
    my $user='ftpuser'; #ftp username, use single quotes so special characters are not interpreted
    my $password='ftppassword'; #ftp password, use single quotes so special characters are not interpreted
    my $remoteBaseDir='/path/to/sub/directory'; #ftp remote base directory where to put all the files under
    my $ftp = Net::FTP::Recursive->new("$host", Debug => 1); #use Debug=>0 if you dont want to see what is happening or detailed error messages
    $ftp->login($user, $password) or die "Could not login: " . $ftp->message;
    $ftp->binary(); #transfer in binary mode so files are not corrupted
    if (length($remoteBaseDir)>0) {
    	$ftp->cwd($remoteBaseDir) or die "Could not cwd to $remoteBaseDir: " . $ftp->message;
    }
    $ftp->rput(); #recursively put every file and folder under the current local directory
    $ftp->quit;
    print "Done transferring files\n";

    In order to install the Net::FTP::Recursive CPAN module you can follow the instructions at Alternatively, if you don’t even have CPAN yet you can start with

    How to Export One Table Structure in Mysql and Infobright

    I have a database with tables in it which have over 300 fields each. I wanted to create a new table like the others but I didn’t have the table structure saved in a file. Due to the number of fields it was impractical to copy the screen after executing a “show create table tablename” command in that database, so I had to find another way.

    I found that I could use the mysqldump command to export just the table structure, and this command would probably have worked if my database engine was regular mysql:

    # /usr/local/infobright/bin/mysqldump -d -u adminUserName -p'adminPassword' dbName tableName > file.sql
    mysqldump: Got error: 1031: Table storage engine for 'BRIGHTHOUSE' doesn't have this option when doing LOCK TABLES

    So I looked up how to avoid the lock tables error and found the single transaction flag which made it work in Infobright (which is based on mysql but has some differences):

    # /usr/local/infobright/bin/mysqldump -d -u adminUserName -p'adminPassword' --single-transaction dbName tableName > file.sql
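    To actually create the new table from that dump, you can then edit the table name inside file.sql and feed it back in. This is a sketch; I’m assuming the Infobright install also ships a mysql client next to mysqldump (dbName and the credentials are the same placeholders as above):

```shell
# edit file.sql to change the table name first, then:
/usr/local/infobright/bin/mysql -u adminUserName -p'adminPassword' dbName < file.sql
```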

    How to enable SOAP on a LAMP server

    originally posted at
    I wanted to use the MSN search API on a VPS (Virtual Private Server) I have set up for a client, but the existing CentOS 5 operating system only had PHP5.1.6, and I needed at least PHP5.2.1 with SOAP enabled for the search API to work. So I downloaded all my code files, created a dump of the MySQL database (I had to use mysqldump from the command line for this, as the DB was 147MB, way over the limit for phpmyadmin to export) and downloaded the db dump. Then I re-imaged the server using the OpenSUSE 10.3 (with plesk) operating system, which the 1and1 server support person I spoke with told me would have at least PHP5.2.4 on it.

    After the re-imaging process was done (it takes about an hour), I used plesk to easily recreate my accounts, domain, database, and scheduled job. Then I used winscp to upload my gzipped database dump and the code files, logged into the newly created db from the command line through ssh and used the command “source dbdump.sql” from a mysql prompt (and from the same directory as where I gunzipped the dbdump.sql file) to populate the large database in under a minute. I also had to change the ownership of my webroot and everything in it to “default”, since that’s the name of the user I have running my scheduled job which reads and writes to the server. One requirement of running php in safe mode is that scripts can only read and write to files/directories owned by the same user who is running the script. There’s probably a way for me to turn off the PHP safe mode but the workaround was easy so I haven’t messed with it.

    Once I got everything put back together I checked the phpinfo page and I was pleased to see that I now had php5.2.6 running. But when I tried using MSN search it still didn’t work, because SOAP wasn’t enabled. Then I did a bunch of searching and found several people in similar situations but I didn’t find any good answers. I tried to update the php configuration using yum and apt-get, but those commands were not on my system. Then I found an RPM file (from that contained a file in it, but when I tried to install it I got the message “rpmlib(PayloadIsLzma) <= 4.4.2-1”, which I learned was some kind of dependency that wasn’t met by my system. I was starting to get frustrated and the kids were playing kind of loud behind me, so I decided it would be best to take a break and try to feel better about it.

    So I turned around and saw all 5 of my kids jumping on the couch. It was our basement couch so I didn’t mind, but I began to feel better and grabbed my camera. I started taking pictures of them and showing them the pictures, which they enjoyed, so we all had a good time. Here’s one with all of them jumping on the couch:

    It reminded me of the children’s book, Eight Little Monkeys Jumping on the Bed. The last page (after they all fall off and bump their head and the doctor says “No more monkeys jumping on the bed!”) has them all on the couch and says “But no one said anything about jumping on the couch!”. So later after putting the children to bed I was feeling better and got an idea.

    I remembered reading a post which suggested going to the specific operating system’s home page to get the appropriate version of the rpm file, so I searched for the opensuse home page and found, where I just chose my operating system from the dropdown and searched for soap. They actually have many operating systems there, including RHEL, CentOS, Debian, Ubuntu, openSUSE, Fedora, Mandriva, and SLES. From the results page I chose the php5-soap rpm for my version of php (5.2.6) and my system (I’m using a 64 bit system so I chose the x86_64 link; I assumed the other link was for 32 bit systems), downloaded it to my PC, uploaded it to my server, used gunzip to decompress it, and ran the “rpm -i php5-soap-5.2.6-35.6.x86_64.rpm” command to install soap. This file passed all the dependency checks, put the file in the right place on my system, and modified my php.ini file. Then all I had to do was restart the apache service and my LiveSearch sample page (using SOAP to interact with the MSN search engine) worked!

    By the way, for dedicated servers and virtual private servers I’ve had good experiences with 1and1; their server department is usually very helpful and quick to respond to questions or concerns. For shared hosting, I prefer Blue Host (which I use for this site).
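    A quick way to confirm the extension actually made it into PHP after restarting apache (instead of scanning the whole phpinfo page) is a short check like this:

```php
<?php
#Prints whether the SOAP extension is loaded in this PHP build
if (extension_loaded('soap')) {
    echo "SOAP is enabled\n";
} else {
    echo "SOAP is NOT enabled\n";
}
```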