Side projects of side projects are taking over!

This month I released a Yahtzee game scorer and a Yatzy game scorer, both simple Apps which use the Costs to Expect API. They are a little bit of fun and allowed me to test some ideas.

As with a lot of the Apps in the Costs to Expect service, the game scorers are Open Source; you are free to play around with them.

I have shipped lots of updates in the last few weeks and learned a lot about the new user experience; with something as simple as the scorers, it is very easy to focus on the initial experience for new users.

The lessons I have learned are already starting to improve Budget, the next App in the Costs to Expect service.

Budget is not a simple App like Yahtzee; it is going to be quite involved. There is no ETA yet, but I would like to have a Beta out well before the end of the year; it very much depends on how much time I have outside of freelance work and contracts.

Side project of a side project

I’ve been working away building the Costs to Expect Service for a while and we are starting to make some real progress. Before I started work on the next app for the service, I wanted to test a couple of small ideas; enter my Yahtzee game scorer.

I released v1.00.0 of the scorer on the 3rd of August; the initial usable version was released on the 21st of July. I worked on it for an hour here and there and quite quickly got it to a point where I deemed it feature-rich enough for a v1 label. It isn’t done, but when is anything?

Building the scorer was freeing; I quickly tested some ideas and designs without having to worry about conforming to the design of the main Costs to Expect App. I discovered a couple of bugs within the API and also created new routes on the API to deal with requirements unique to the Yahtzee app. I suspect that if I had ‘added’ the scorer to the existing App, I wouldn’t have come up with the same design.

Looking at the API from a different perspective has been worthwhile; it forced us to rethink. Rather than have a monolith App that is capable of everything, we decided to break everything up. The API is the core of the service and we are now going to have smaller, focused Apps that can more easily be designed for their intended purpose.

The current App is going to be slimmed down to expenses only; a budgeting app is in the works, and Pro versions of both will come along some time next year.

There are going to be more side projects of side projects; we have a few in the works. They have to take a back seat for now, as I need to get on with building the Budgeting app and I’ve a freelance project or two coming up that will steal some of my focus.

Returning to Costs to Expect, finally!

My own projects have taken a back seat whilst I’ve been busy working on a freelance project which scaled well beyond the initial scope. That project is almost over, and I suspect I’m going to be free for a little while to work on my own projects again; I can start pushing towards the soft release of the Costs to Expect API and App.

The API

The API has been feature ready for a while; its issue is unnecessary complexity. When I started adding new “item-types” I opted for a little too much abstraction. I’ve been designing the budgeting system (on paper) for a while; it is not going to work in the same way as the existing “item-types”, so I’ve been busy removing all the abstractions and in general refactoring the shit out of the API.

I didn’t look at the API code for a year, and that time gave me a fresh perspective. I’m simplifying the majority of the “item-type” code; there is no need for everything to extend from a base class if there is going to be very little overlap in functionality.

In addition to the refactoring, I’m moving our tests local; rather than relying on a Postman collection, I’m moving all the tests to PHPUnit. I can replicate every test in the Postman collection locally, as well as test things that could never be caught by only looking at responses. I expect I will keep the Postman collection around because there are tests that are simpler to do with it, but having everything local is going to be a major benefit and, of course, necessary for the official release.

I have 11 stories left in my tracker; once they are complete, the API will hit our soft release milestone. This milestone has been a long time coming, and I still can’t quite believe we are this close.

The App

The App needs more work; there are 24 stories in my tracker, and my wife and I are yet to pick the App apart, so more stories will get added. App development tasks are simpler because the API does the heavy lifting; the majority of the work on the App is presentation.

The App is closed source and the API is open source, so we handle each slightly differently. With the App we are focused on the features and not so much on the overall design, assuming it works; with the API, the design takes a front seat as it potentially has to be maintainable by more than just us. The reality is that I develop everything, so I’m not really switching styles; we just might approach the backlog in a different way.

Official release

Once the App and API both hit their soft release milestones, I’ll cry. No really, this has been such a long time coming; I’ve been planning the budgeting and forecasting system forever.

There might be a slight detour whilst I develop another App rather than get on with the Budgeting and Forecasting. As mad as that sounds given how long it has taken to get here, it can be thought of as an experiment/proof of concept. The budgeting and forecasting will not behave like anything else in the App, so I need to test the API and my front-end skills.

The next four months are going to be full-on, all we can do is see where we are in September and go from there.

Jobs and tasks gamble paid off

I have spent the last 12 months developing a product for one of my long-term clients, I’m going to refer to the product as EWAQO for the rest of this post.

I initially thought EWAQO would be three to four months of work, lockdowns, homeschooling, the pandemic in general and feature creep put paid to that.

EWAQO is far from the largest product I’ve worked on; I’ve been a professional developer for over twenty years and have worked at scale. It is, however, the largest product I’ve created single-handedly. The Costs to Expect API, Website and App are big when combined, but EWAQO is in a world of its own; it has so many different systems and is immense.

I decided early on to go deep with queues and scheduled tasks; I wanted to keep the App fast for the user and do as much of the heavy lifting as possible behind the scenes. Typically, I’m using queues for complex user-initiated tasks/actions, and scheduled tasks for processing.

It is easy to handle the complex stuff behind the scenes, but there is a problem: you need some visibility. Before I wrote the first job, I created the interface; that way, the client can see what is happening. Is a job running? Did it fail? What was the output? Who initiated it? etc.

This was the masterstroke and made so much of the complexity possible. The client could see what was happening, and job classes and tasks were simpler to create because I didn’t need to worry about the front-end; it all came together brilliantly.

I’m not done with EWAQO. As of writing, there are 27 jobs and 26 scheduled tasks, and I have visibility of each one; I can see when they are running, what is due, and when and why they fail. It all just works.

We are working toward the soft-release of Costs to Expect this year, part of that is developing two smallish apps that act as companions. Both apps will borrow heavily from what I learned developing EWAQO.

Delegate non-core tasks

I’ve decided to simplify my life as a Developer. Up until last weekend, I managed my servers using one of the big providers, no more!

As simple as web server management is these days, there is always something to do; installing that extension you need, setting up SSL certificates, the list goes on.

As a solo developer, time is far more precious to me than money. I can always earn more money; I will never get back all the time it took to install the first certificate and set up the web job to renew it.

Enter a service like Laravel Forge.

I click a button, and Forge instructs Linode to set up a server; provisioning starts. I click another button: automatic deployments from GitHub. I click yet another button: Let’s Encrypt certificate installed.

This post isn’t about Forge; I opted for Forge, I’m sure there are thousands of similar services for whatever stack you use.

The point is simple; it is always a good idea to delegate non-core tasks to someone else.

These days I use a service for server provisioning, another for website monitoring, another for x; you get the idea.

Spend your time doing what you do best, working towards your goal. Let someone else deal with the day-to-day tasks that help but don’t directly contribute to that goal.

We all need a Development Manager

As a solo developer, it is easy to get lost in what you are doing and not necessarily work towards where you should be going. An excellent example of this is refactoring: “If I refactor this method/class, it will be easier to maintain.”

The chances are, if you smell an issue with a class or method, it needs to go through your refactoring process; it isn’t as though anyone else is going to do it, you are just one person.

The question is, does it need to be dealt with now? As developers, we are all guilty of wanting to refactor, and then persuading ourselves that the task is more urgent than it is.

I’m working towards releasing our SaaS product. Mid-way through last year, I decided I needed a little help to ensure I get to the release as efficiently as possible and don’t stray too far off target.

I decided to recruit the Wife.

Every Sunday, I gather all the tasks I’m planning to work on over the next week, the Wife and I then have a short meeting.

My Wife is not a Developer and has never worked in a field related to development. That doesn’t matter, all that matters is that your sounding board knows you.

I start by stating what I’m aiming to achieve for the week and then go through each of my planned tasks. For each task, I give a one-line explanation of what it is and how it will help achieve my goal. If I’m unable to provide a reasonable justification, or my Wife questions the validity of the task, it gets moved off the list.

Your sounding board needs to be someone that understands you. Your partner isn’t deciding whether you are working on the correct tasks; they are there to listen. You’ll both know if a task needs to go to the back of the list.

The longer you keep this up, the better the result.

Dart Sass with Windows Subsystem for Linux (WSL)

This is a minor update to an earlier post; the setup is simpler than previously.

  1. Download and extract the dart-sass release you are going to use, at the time of this post I opted for https://github.com/sass/dart-sass/releases/download/1.26.10/dart-sass-1.26.10-windows-x64.zip
  2. Open your .bashrc file in WSL – vi ~/.bashrc
  3. Add export PATH="/path/to/dart-sass:$PATH" to the end of the file, in my case export PATH="/mnt/c/Users/USERNAME/Documents/GitHub/dart-sass:$PATH"

To test everything is working, open WSL and enter sass --version; you should see the version number.

Connect multiple Docker compose apps

Much of the time, when I’m working on the Costs to Expect App and Website, I code against the live Costs to Expect API. I’m reading data, so it makes sense to read live data; in my case, the expenses for our children.

This approach isn’t suitable when I’m working on editing, creation, and deletion tasks; I don’t want lots of test data appearing on the API, and I don’t want to modify or remove live data.

In this instance, I want a local version of my Apps to use my local development instance of the API; this is simple to set up with docker networks.

I use Docker for development; the App, Website and API development environments all use Docker Compose.

To connect two or more Compose apps, you need to create a shared network and then update each docker-compose file to use the newly created network.

To create a network, enter the following in your CLI:

docker network create <network-name>

Now, update your docker-compose files; we need to add a networks section.

  networks: 
    default: 
      external: 
        name: <network-name>

Now, when you bring your apps up, they will be able to communicate. Assuming your apps connect over HTTP, they can talk to each other using the format http://<service-name>:<port>. In my case, my Website and App connect to the API using http://api:<port>.
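As a sketch, assuming a shared network named costs-network and compose services named api and app (all three names are illustrative), the relevant parts of the two docker-compose.yml files might look like this:

```yaml
# api/docker-compose.yml (sketch; the service definition is illustrative)
version: "3.7"
services:
  api:
    build: .
networks:
  default:
    external:
      name: costs-network

# app/docker-compose.yml (sketch)
version: "3.7"
services:
  app:
    build: .
networks:
  default:
    external:
      name: costs-network
```

Because both projects use the same external network as their default, a container in the app project can reach the API by its compose service name, api.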

Website caching: simple now, dependent caches later

Two weeks ago, I quickly added caching to the Costs to Expect Website. My goal was to reduce the number of unnecessary API requests the Website made to the Costs to Expect API, mission accomplished.

The problem: I did a quick job, and improvements are needed.

The Costs to Expect Website is a long-term social experiment; my wife and I are tracking the total cost of raising humans. We want to know how far off the £250,000 per child figure we will be when our sons reach the age of 18.

Back to the caching. Issue one: I added caching to the top four or five pages; the rest of the Website needs some love. Issue two: for some unknown reason, I decided 15 minutes was a sensible cache lifetime.

Solution one

I hacked in the caching; I added a simplified version of the much more fully featured caching in the Costs to Expect App. I need to refactor the ‘request’ code and add caching for all requests.

Solution two

I set the cache lifetime at 15 minutes; why, I don’t know. The content on the Website changes at most daily, and additionally there is no need for the data to be live; people are not going to ‘throw a fit’ if they can’t see the latest expense we have added for Jack or Niall.

I am going to set the cache lifetime to four hours.

Four hours, you say; why not 24? Well, I figured four hours is a sensible limit to ensure there isn’t too much of a mismatch between cached data while still dramatically reducing API requests.

Imagine a scenario whereby a user browses to the site and visits the summary page; the results are cached, but the user never makes it to the lists for that summary. If a second user comes along three hours later and views the listings, there is a good chance the data will mostly match the cached summary. If I set the cache lifetime at 24 hours, a value that initially seems reasonable, I am increasing the chance of the summaries and data mismatching.

There is a solution to the inconsistent data problem: dependent caches.

I need to add support for linking cached data, for example, a summary and the related lists, and, more importantly, for controlling the maximum period allowable between dependent cache item creation.

With the current implementation, there can be a difference of up to four hours between summary and list cache creation; realistically, the limit for dependent data should be closer to five minutes.
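The dependent-cache rule can be sketched in a few lines. This is a toy illustration, not the App’s implementation; the variable names are made up, and the five-minute limit is simply the value suggested above:

```shell
# Toy sketch of the dependent-cache rule: a summary cache entry and its
# related list entry only count as consistent if they were created
# within MAX_SKEW seconds of each other.
MAX_SKEW=300                 # five minutes, as suggested above

summary_created=1000         # example creation timestamps (seconds)
list_created=1240            # created four minutes after the summary

diff=$((list_created - summary_created))

# ${diff#-} strips a leading minus sign, so order of creation is irrelevant
if [ "${diff#-}" -le "$MAX_SKEW" ]; then
  echo "consistent pair, serve both from cache"
else
  echo "stale pair, rebuild the dependent entry"
fi
```

With a 24-hour lifetime and no such check, the same pair could drift much further apart, which is exactly the mismatch risk described above.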

I will eventually update the caching system for the Costs to Expect App and at some point, trickle the implementation down to the Costs to Expect Website.

Costs to Expect App: v1.00.0

I released the alpha of the Costs to Expect App yesterday. It is later than planned, but I don’t want to dwell on that; I have other posts that explain the delay.

I am now going to work towards the public alpha. I’m not going to adjust the release date; I am still hoping to have it ready for the 1st of April. As we get closer to the release, I will update the roadmap accordingly.