Slowly developing Costs to Expect

I’m developing Costs to Expect in a slightly different manner to my previous large personal projects, for two reasons: firstly, the design, and secondly, I’m deliberately developing at a slower, more iterative pace.


There are two parts to Costs to Expect: the Open Source API and the website; the website will include the secret sauce for the service.

The API has not been explicitly written for the Costs to Expect website; the intention has always been to use it for separate projects and ideally give back to the community, hence being Open Source. So far, I have one other project that uses the API, and I have plans for another rattling around my head. I’m confident that eventually, tomorrow maybe, I will add a feature to the API specifically for the website. I intend to try and ensure features are only added to the API if they make sense in the context of the API.

Slow, iterative development.

I released the API during the summer and developed a simple app to allow my wife to enter expenses for our children. The API is not anywhere near complete; I am continually fleshing out the features, prioritising those I need to develop the website. The latest example is sorting: the web app my wife uses does not require sorting, but the public Costs to Expect website does; that is why I’m adding it now.

There are three initial phases of the development of the website; once all three are complete, the real project begins.

Phase 1: The website consumes the API and shows the current expenses for our children in an engaging way for public consumption.
Phase 2: My wife is able to transition from using the web app I developed in the summer; she should be able to do the expenses management for our children using the website.
Phase 3: I develop the forecasting and budgeting systems and start opening up the website to a restricted number of test users.

I’m very much in Phase 1. I envision I have at least another four weeks of development before I can progress to Phase 2.

Phase 2 isn’t complicated, but I need to ensure I make the correct decisions: I need to work out my authentication (probably OAuth), decide how I’m going to handle users and groups, and then develop the management pages.

Phase 3 will be when the fun begins, and it is the part I’m most looking forward to reaching. Phases 2 and 3 require time; I mostly know what is needed, I just need to design and then code the systems.

The majority of my year off will be spent in Phase 3; I know what I need to end up with, however, I’m currently not quite sure how I’m going to get there.

For each phase, I’m developing very slowly. The visual design is mostly solved thanks to a highly skilled designer I employed late last year, so I mostly only need to think about what I need and how to develop it.

As a single developer with limited development time and resources, I can’t afford to waste any of my development time. I’m prototyping every feature and then slowly developing it; as an example, the website won’t be connecting to the API until at least v1.04.0. Up until then I’ll be working on the responsive layout, building the view components I require and experimenting with the UX.

API Summary routes

In a previous blog post, I outlined my solution to summary routes. In short, every route in my API will have a matching summary route; the summary route is simply the route with a summary/ prefix.

If the GET endpoint of a route supports parameters, the summary route should support the same parameters; in my experience, it will also need additional parameters. As always, this is best shown with an example.

In my API I have the following route: /resource-types/[resource-type]/resources/[resource]/items. The GET endpoint returns a collection of the items assigned to the resource; it has four parameters for filtering: year, month, category and subcategory.

The matching summary route lives at summary/resource-types/[resource-type]/resources/[resource]/items; its GET endpoint has eight parameters. Below I show the expected output for each parameter.

No parameters | Total sum of items
years | Summary for each year
year | Summary for the requested year
months | Summary for each month
month | Summary for the requested month
categories | Summary for each category
category | Summary for the requested category
subcategories | Summary for each subcategory
subcategory | Summary for the requested subcategory
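The table above can be sketched as a small dispatch function. This is my own client-side illustration, not the API’s actual implementation, and the precedence of plural over singular parameters is an assumption:

```python
# Hypothetical sketch of the parameter-to-summary mapping in the table above;
# the real Costs to Expect API resolves this server-side, and the precedence
# of plural over singular parameters is my assumption.

PLURAL = {
    "years": "year",
    "months": "month",
    "categories": "category",
    "subcategories": "subcategory",
}

def summary_type(params: dict) -> str:
    """Return which of the nine summaries a set of GET parameters selects."""
    for plural, singular in PLURAL.items():
        if params.get(plural):  # e.g. ?years=true -> a summary for each year
            return f"Summary for each {singular}"
    for singular in PLURAL.values():
        if singular in params:  # e.g. ?year=2019 -> a single-year summary
            return f"Summary for the requested {singular}"
    return "Total sum of items"
```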

So far I’ve not come across any negatives to this structure. Yes, there are numerous parameters for the GET endpoint, but I don’t consider that an issue; I’d rather have fewer routes than fewer GET parameters.

Dart Sass with the Windows Subsystem for Linux

Follow the steps below to set up Sass, specifically dart-sass, in the Windows Subsystem for Linux (WSL).

  1. Download and extract the dart-sass release you are going to use, at the time of this post I opted for dart-sass-1.18.0-linux-x64.tar.gz
  2. Find your .bashrc; if you set up WSL after the Fall Creators update it will be located at C:\Users\USERNAME\AppData\Local\Packages\{CanonicalGroupLimited.UbuntuonWindows_...}\LocalState\rootfs\home\LINUXUSER\ or similar.
  3. Add export PATH="/path/to/dart-sass:$PATH" to the end of the file, in my case export PATH="/mnt/c/Users/USERNAME/Documents/GitHub/dart-sass:$PATH"

Open Bash and type sass --version to check that everything worked.

Where to put summary routes in your API?

I believe the answer to the question is going to be different for every API. In my case, I initially added them where I thought it made sense, at the level I thought I wanted to summarise.

To view the list of items assigned to a resource in the Costs to Expect API you browse to /resource-type/[id]/resource/[id]/items; to view the TCO (total cost of ownership) for a resource, I added a summary route at /resource-type/[id]/resource/[id]/summary/tco.

This initially made sense; however, later, as I was adding year and month summaries, my solution began to become unwieldy. Long term, I would end up with two mismatching trees, the main tree for the API and then another summary tree; secondly, using this structure, where would I put a summary for multiple resources?

I’ve come up with a solution that I think will solve my problems: there should be a summary route for every API endpoint, and the summary routes are then merely the route prefixed with /summary.

In the case of the TCO for a resource, the summary route would be summary/resource-type/[id]/resource/[id]/items. No TCO in the URI; it is not necessary, you are summarising the items collection, so you should expect a total.

My solution doesn’t fix all the issues; presently the annual summary for a resource lives at /resource-type/[id]/resource/[id]/summary/years, and there is no matching endpoint for this summary route. The solution: GET parameters. The items collection has four filtering parameters, year, month, category and subcategory; the summary route should support the same parameters.
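The convention is mechanical enough to capture in a small helper; a sketch from a client-side perspective (the function name is mine, it is not part of the Costs to Expect API):

```python
from urllib.parse import urlencode

# A sketch of the convention: the summary route is the route prefixed with
# /summary, carrying the same GET parameters. The helper name is mine, it
# is not part of the Costs to Expect API.

def summary_uri(route: str, **params) -> str:
    """Derive the summary URI for an API route, preserving any parameters."""
    uri = "/summary" + route
    return f"{uri}?{urlencode(params)}" if params else uri
```

For example, summary_uri("/resource-types/1/resources/2/items", year=2019) yields /summary/resource-types/1/resources/2/items?year=2019.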

I’m confident that if I had spent a little more time researching I would have been able to find this solution in an article somewhere online, I didn’t and unfortunately, it took me a little while to realise. Hopefully, this blog post will help at least one other person.

Do not change URIs, oops

A core rule of the Internet: don’t change a URI. I’m going to change some. Why? We all make mistakes.

I released the initial version of the Costs to Expect API during the summer of ’18. It turns out when you review your code/API after a short break you spot all the issues you were oblivious to during development.

Over the next 12 months, I’m going to extend the Costs to Expect API, an API that I intend on maintaining for a significant period; I need to record data for at least the next 13 years.

If there were one small issue with the URIs, I’d deal with it: change a couple of URIs and add redirects. There are, however, numerous issues: I am not happy with the initial summary routes, I favour dashes in the URIs over underscores, some of the words are incorrect, and there are other minor issues.

We haven’t pushed the service; it doesn’t exist yet. As far as I am aware I am the only consumer of the API, so I believe it is OK to modify the URIs; as long as I do not modify them again, they should remain the same for twenty times longer than they have existed.
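As an illustration of the clean-up, a redirect map for the underscore-to-dash change might look like the below; the example routes are hypothetical, not the API’s actual URIs:

```python
# Hypothetical illustration of the URI clean-up: underscores become dashes,
# and a redirect map keeps any published URI working. The example routes
# are mine, not the API's actual URIs.

def dashed(uri: str) -> str:
    """Rewrite an underscore-style URI to the preferred dashed style."""
    return uri.replace("_", "-")

OLD_ROUTES = ["/resource_types", "/resource_types/1/resources"]

# old URI -> new URI, served as 301 redirects
REDIRECTS = {old: dashed(old) for old in OLD_ROUTES}
```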

Self-documenting APIs

During my professional career I’ve worked on many APIs; however, I’ve never been responsible for the design, development and support of an API of more than trivial scale. When designing my own, I wanted to do as good a job as I possibly could.

I’ve heard the term self-documenting APIs and never really looked into it; regardless, I’m going to describe what it means to me. Hey, it is the internet, we all have opinions.

To me, there are five points.

  • The initial entry route or an obviously named route should display all the endpoints.
  • The initial entry route or an obviously named route should show the current version of the API as well as provide links to the README, CHANGELOG and anything else useful.
  • There should be a changelog route to describe all changes and updates to the API.
  • An OPTIONS request should exist for every route. The OPTIONS request should detail the purpose of the route, all possible verbs, as well as any fields and parameters.
  • No redundant information in either the payload or the response, for example, the payload should not be wrapped in an envelope, the verb is the envelope.

As a bonus, API versioning should be controlled via the route, any payloads or responses should be free of any version information.
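To make the OPTIONS point concrete, a response might carry something like the structure below; the field names and layout are illustrative only, not the Costs to Expect API’s actual schema:

```python
# A sketch of what an OPTIONS response for an items route might contain;
# the field names and structure are illustrative only, not the Costs to
# Expect API's actual schema.

options_response = {
    "description": "Return the items assigned to the requested resource",
    "verbs": {
        "GET": {
            "parameters": ["year", "month", "category", "subcategory"],
        },
        "POST": {
            "fields": {
                "description": {"type": "string", "required": True},
                "total": {"type": "decimal", "required": True},
            },
        },
    },
}
```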

OPTIONS requests

When I published the Costs to Expect API I was initially bemused; none of my OPTIONS requests worked, they all returned “405 Method Not Allowed”.

After searching, I diagnosed the problem. The HTTP OPTIONS request fails because the default PHP-CGI handler does not handle the “OPTIONS” verb.

The fix is simple, update or add the handler in web.config.

<remove name="PHP72_via_FastCGI" />
<add name="PHP72_via_FastCGI" path="*.php" verb="GET,PUT,POST,DELETE,HEAD,OPTIONS" modules="FastCgiModule" scriptProcessor="PATH\To\File\php-cgi.exe" resourceType="Either" requireAccess="Script" />


I have a long-term project, Dlayer, that I very much believe in; after many false starts, it still doesn’t exist, but it will. I’m due to restart it later this year with a slightly different focus 🙂

Why did I never finish or release the project? I never managed to hit what I considered to be the minimum viable product (MVP); my idea of the MVP for Dlayer was too vast, far too much for one developer, and I never managed to get enough done.

Early last year I needed to develop a small library, my PHP Quill renderer. I developed just what I needed at the time and quickly released it; within six months I bumped it to v1.00. I bumped it to v1.00 because I was confident in the library; it had warts, but people seemed to be using it, it wasn’t just solving my personal problem.

I’m not suggesting tagging it with a v1.00 release number made it slightly popular, I’m saying that because I was confident enough to add the v1.00 release number it became more popular.

Tagging the library with v1.00 meant I had to handle it differently: I needed to ensure the README and CHANGELOG were accurate and up to date, tests were required, and I needed to learn how to handle the pull requests I started receiving. All of this made the library better; as I’ve said in the past, nothing makes your code better than knowing there is a minuscule chance other developers will look at your library.

During the summer I released two apps, both v1.00: the REST API for Costs to Expect and the companion web app. Neither of these apps is complete. The REST API is missing quite a few features; however, it supports enough to get the job done: replace my spreadsheet and allow my wife to submit data.

I’m not saying anything new; there is a blog post by Jeff Atwood referencing a blog post by Chuck Jazdzewski stating our job is to ship. Knowing that is one thing, doing it is another, I wish I had had the confidence to ship earlier, and I hope I keep it up.

Dlayer, vNext

This is a cross post from the blog; it is relevant here because there is a good chance I will be releasing new Open Source libraries related to the tool development.

Development work on Dlayer vNext is deliberately slow; I’m attempting to ensure that any and all of the core architecture issues I noticed developing version 1 get resolved.

I spent a considerable amount of time on the UI and UX for version 1 of Dlayer; very little of that will change for vNext. My issues with version 1 can be summed up in one word: modular, as in, the app wasn’t.

There are two parts to the problem: one, the designers were not modular (in Zend Framework 1, modules were a bit of a hack), and two, the tools were not plug-and-play.


This one is easy: by switching to Zend Framework 3, and being careful as I develop, the modules can behave as real modules; turn them on or off, transfer them to different apps, all possible.


The tools in version 1 of Dlayer had too many hooks into the system; they could be disabled dynamically, but you could not drop a tool into another module or quickly remove the code for a disabled tool.

On paper, I have a new plan; and I am in the middle of prototyping to see if it solves the core issues.

The solution is complicated; it will take time to develop examples of each tool type. However, as soon as I am sure the design is right, it simplifies the rest of the app.

Simpler app

The tools in version 1 were solely responsible for data management within the designers; in vNext, this changes.

The tools in vNext are responsible for everything relating to the content item, including how the content is displayed in the WYSIWYG designer. The app at this point is merely a vehicle to provide access to the tools.
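One way to read that design is as an interface each tool implements, with the app reduced to dispatching. This is my sketch of the idea, not Dlayer’s actual code; the class and method names are hypothetical:

```python
from abc import ABC, abstractmethod

# My sketch of the vNext idea, not Dlayer's actual code: each tool owns
# everything for its content type, including the markup shown in the
# WYSIWYG designer, so the app only has to dispatch to a tool.

class Tool(ABC):
    @abstractmethod
    def validate(self, data: dict) -> bool:
        """Check the submitted data for this content type."""

    @abstractmethod
    def render(self, data: dict) -> str:
        """Return the markup displayed in the WYSIWYG designer."""

class HeadingTool(Tool):
    def validate(self, data: dict) -> bool:
        return bool(data.get("text"))

    def render(self, data: dict) -> str:
        return f"<h1>{data['text']}</h1>"
```

Dropping a tool into another app then means registering one class, rather than unpicking hooks spread across the system.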



Ubuntu: Install Apache, PHP, MySQL, SASS, Bower and PhpStorm for local development

Whenever I need to set up a Linux machine or VM for local PHP development I either recall what I need to do from memory or cobble together what I need from the web. Years ago it was very simple: Apache, MySQL and PHP; that is no longer the case. These days a simple setup could be Apache, MySQL, PHP, Git, Node, Bower, Ruby and SASS. Rather than have to recall it all from memory again, I decided to document the process when I installed Ubuntu on my laptop.

After the steps below the following will be installed and configured.

  • Apache, two sites, one for a Bootstrap HTML site, the other for a site using Zend Framework 1
  • MySQL
  • PHP
  • Git
  • Bower, via Node
  • SASS, via Ruby
  • And just for good measure PhpStorm


We need to install Apache and configure two websites; one will be a simple PHP website and the other will include rewrites for Zend Framework 1. The sites are going to be created in /var/www/html, but we will create symbolic links in our Documents folder that point to the directories.

Install Apache

$ sudo apt-get install apache2

Configure Apache

Create directories for sites and set permissions, replace site1 and site2 with whatever names you want to use for your sites, in my case dlayer and site.

$ sudo mkdir -p /var/www/html/site1
$ sudo mkdir -p /var/www/html/site2
$ sudo chown -R $USER:$USER /var/www/html/site1
$ sudo chown -R $USER:$USER /var/www/html/site2
$ sudo chmod -R 755 /var/www/html

Create conf files for the new sites, as above replace site1 and site2 with whatever you used previously.

$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/site1.conf
$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/site2.conf

Edit the conf files

$ sudo gedit /etc/apache2/sites-available/site1.conf

You need to set ServerName and update DocumentRoot, in my case for site1 which is a Zend Framework site I have the below.

DocumentRoot /var/www/html/dlayer/public

Because this site is a Zend Framework based website we need to add in the rewrite rules; the below needs to be added, I add it just before the closing VirtualHost tag.

<Location />
   RewriteEngine On
   RewriteCond %{REQUEST_FILENAME} -s [OR]
   RewriteCond %{REQUEST_FILENAME} -l [OR]
   RewriteCond %{REQUEST_FILENAME} -d
   RewriteRule ^.*$ - [NC,L]
   RewriteRule ^.*$ /index.php [NC,L]
</Location>

Set the ServerName and update the DocumentRoot for the second site, my second site is a simple Bootstrap based site, the settings are as below.

$ sudo gedit /etc/apache2/sites-available/site2.conf
DocumentRoot /var/www/html/site

We can now enable the sites and disable the default site

$ sudo a2dissite 000-default.conf
$ sudo a2ensite site1.conf
$ sudo a2ensite site2.conf

Enable the rewrite module and restart Apache

$ sudo a2enmod rewrite
$ sudo service apache2 restart

We can now update the hosts file so we can reach the sites.

$ sudo gedit /etc/hosts

Add entries for the ServerNames you defined in the conf files, pointing each hostname at 127.0.0.1.

Create the symbolic links to each of the sites; you can then point your IDE or editor at your projects folder rather than /var/www/html/…. Replace site1 and site2 with whatever directory names you created in the first step, and Site1 and Site2 with whatever you want the directory names to be in your projects directory.

$ sudo ln -s /var/www/html/site1 ~/Documents/Projects/Site1
$ sudo ln -s /var/www/html/site2 ~/Documents/Projects/Site2


Install MySQL and MySQL Workbench; if you don’t want to use MySQL Workbench, simply don’t include it in the install command.

$ sudo apt-get install mysql-server mysql-workbench

Follow all the prompts and make sure to set a password for the root user. If you have problems copying a password from your password manager into the password field, just press return at the prompts, setting the password to nothing, and update the password afterwards by following the steps below.

$ mysql -u root -p
mysql> USE mysql;
mysql> UPDATE user SET Password=PASSWORD('PASSWORD you want to use') WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> quit;


Install PHP, we are also going to include the GD library and the MySQL PDO driver.

$ sudo apt-get install php5 libapache2-mod-php5 php5-gd php5-mysql


Install Git and set your name and email address

$ sudo apt-get install git
$ git config --global user.name "Your Name"
$ git config --global user.email "Your email"

If you need to generate an SSH key for GitHub, check their website or execute the below.

$ ssh-keygen -t rsa -b 4096 -C "Your Email"
$ ssh-add ~/.ssh/id_rsa

To add the key to your account, do the below.

$ sudo apt-get install xclip
$ xclip -sel clip < ~/.ssh/

Paste the contents of the clipboard into the relevant section of your GitHub profile settings.


Install Bower

$ sudo apt-get install npm
$ sudo npm install -g bower

If the above doesn't work, enter the below and try again.

$ sudo ln -s /usr/bin/nodejs /usr/bin/node


Install SASS

$ sudo apt-get install ruby
$ gem install sass


On my Linux machines I use PhpStorm; I have used PhpED on my Windows machines for the last 12 years but can see myself moving to PhpStorm on both platforms, if only for consistency.

Download PhpStorm and unpack

$ tar -xvf [package name]

Move it to /opt and execute the shell script.

$ sudo mv [Directory to move] /opt
$ cd /opt/[PhpStorm directory]/bin
$ ./

You should now have a functional development server and IDE; check out your sites into your project directories and start coding.