Delegate non-core tasks

I’ve decided to simplify my life as a Developer. Up until last weekend, I managed my servers using one of the big providers. No more!

As simple as web server management is these days, there is always something to do: installing that extension you need, setting up SSL certificates; the list goes on.

As a solo developer, time is far more precious to me than money. I can always earn more money; I will never get back the time it took to install the first certificate and set up the web job to renew it.

Enter a service like Laravel Forge.

I click a button, and Forge instructs Linode to set up a server, which starts provisioning. I click another button, and I have automatic deployments from GitHub. I click yet another button, and a Let’s Encrypt certificate is installed.

This post isn’t about Forge; I opted for Forge, but I’m sure there are plenty of similar services for whatever stack you use.

The point is simple: it is always a good idea to delegate non-core tasks to someone else.

These days I use a service for server provisioning, another for website monitoring, another for x; you get the idea.

Spend your time doing what you do best, working towards your goal. Let someone else deal with the day-to-day tasks that help but don’t directly contribute to that goal.

We all need a Development Manager

As a solo developer, it is easy to get lost in what you are doing and lose sight of where you should be going. An excellent example of this is refactoring: “If I refactor this method/class, it will be easier to maintain.”

The chances are, if you smell an issue with a class or method, it needs to go through your refactoring process; it isn’t as though anyone else is going to do it, you are just one person.

The question is, does it need to be dealt with now? As developers, we are all guilty of wanting to refactor and then persuading ourselves that the task is more urgent than it is.

I’m working towards releasing our SaaS product. Mid-way through last year, I decided I needed a little help to ensure I get to the release as efficiently as possible and don’t stray too far off target.

I decided to recruit the Wife.

Every Sunday, I gather all the tasks I’m planning to work on over the next week, and then the Wife and I have a short meeting.

My Wife is not a Developer and has never worked in a field related to development. That doesn’t matter; all that matters is that your sounding board knows you.

I start by stating what I’m aiming to achieve for the week and then go through each of my planned tasks. For each task, I give a one-line explanation of what it is and how it will help achieve my goal. If I’m unable to provide a reasonable justification, or my Wife questions the validity of the task, it gets moved off the list.

Your sounding board needs to be someone who understands you. Your partner isn’t deciding whether you are working on the correct tasks; they are there to listen. You’ll both know if a task needs to go to the back of the list.

The longer you keep this up, the better the result.

Dart Sass with Windows Subsystem for Linux (WSL)

This is a minor update to an earlier post; the setup is simpler than before.

  1. Download and extract the dart-sass release you are going to use, at the time of this post I opted for https://github.com/sass/dart-sass/releases/download/1.26.10/dart-sass-1.26.10-windows-x64.zip
  2. Open your .bashrc file in WSL – vi ~/.bashrc
  3. Add export PATH="/path/to/dart-sass:$PATH" to the end of the file, in my case export PATH="/mnt/c/Users/USERNAME/Documents/GitHub/dart-sass:$PATH"

To check that everything is working, open WSL and enter sass --version; you should see the version number.
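The steps above can be condensed into a short shell session. This is a sketch: the directory is the example path from step 3 (adjust USERNAME and the location to wherever you extracted dart-sass).

```shell
# Example install location from step 3; adjust USERNAME to your Windows user
DART_SASS_DIR="/mnt/c/Users/USERNAME/Documents/GitHub/dart-sass"

# Append the export line to ~/.bashrc only if it isn't already there,
# so re-running the snippet doesn't add duplicates
grep -qF "$DART_SASS_DIR" ~/.bashrc 2>/dev/null || \
  echo "export PATH=\"$DART_SASS_DIR:\$PATH\"" >> ~/.bashrc

# Make the directory available in the current shell too
export PATH="$DART_SASS_DIR:$PATH"

# With dart-sass extracted there, `sass --version` should now print the
# version number; here we just confirm the PATH change took effect
echo "$PATH" | grep -q "$DART_SASS_DIR" && echo "PATH updated"
```

The `grep` guard is worth keeping; .bashrc is sourced on every shell, and duplicate PATH entries accumulate quickly otherwise.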

Connect multiple Docker Compose apps

Much of the time, when I’m working on the Costs to Expect App and Website, I code against the live Costs to Expect API. I’m reading data, so it makes sense to read live data; in my case, the expenses for our children.

This approach isn’t suitable when I’m working on editing, creation, and deletion tasks; I don’t want lots of test data appearing on the API, and I don’t want to modify or remove live data.

In this instance, I want a local version of my Apps to use my local development instance of the API; this is simple to set up with Docker networks.

I use Docker for development; the App, Website, and API development environments are all Docker Compose.

To connect two or more Compose apps, you need to create a shared network and then update each docker-compose file to use the newly created network.

To create a network, enter the following in your CLI:

docker network create <network-name>

Now, update your docker-compose files; we need to add a networks section.

  networks: 
    default: 
      external: 
        name: <network-name>

Now, when you bring your apps up, they will be able to communicate. Assuming your apps connect over HTTP, they can talk to each other using the format http://&lt;service-name&gt;:&lt;port&gt;; in my case, my Website and App connect to the API using http://api:&lt;port&gt;.
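Putting the pieces together, here is a minimal sketch. The network name shared-net, the service name api, and port 8080 are illustrative assumptions, not values from my setup.

```shell
# Create the shared network once; `|| true` keeps the sketch re-runnable
# (and harmless on a machine without Docker installed)
docker network create shared-net 2>/dev/null || true

# The networks section each project's docker-compose.yml needs; written
# to a file here purely to show the exact shape of the fragment
cat > compose-network-fragment.yml <<'EOF'
networks:
  default:
    external:
      name: shared-net
EOF

# With both projects up (`docker-compose up -d` in each), a container in
# one project reaches a service in the other by its service name, e.g.:
#   curl http://api:8080
cat compose-network-fragment.yml
```

Because the network is marked external, Compose attaches to it rather than creating a per-project network, which is what allows containers from different projects to resolve each other by service name.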

Website caching: simple now, dependent caches later

Two weeks ago, I quickly added caching to the Costs to Expect Website. My goal was to reduce the number of unnecessary API requests the Website made to the Costs to Expect API, mission accomplished.

The problem: I did a quick job, and improvements are needed.

The Costs to Expect Website is a long-term social experiment; my wife and I are tracking the total cost of raising humans. We want to know how far off the £250,000 per child figure we will be when our sons reach the age of 18.

Back to the caching. Issue one: I added caching to the top four or five pages; the rest of the Website needs some love. Issue two: for some unknown reason, I decided 15 minutes was a sensible cache lifetime.

Solution one

I hacked in the caching; I added a simplified version of the much more featureful caching in the Costs to Expect App. I need to refactor the ‘request’ code and add caching for all requests.

Solution two

I set the cache lifetime at 15 minutes; why, I don’t know. The content on the Website changes at most daily, and there is no need for the data to be live; people are not going to ‘throw a fit’ if they can’t see the latest expense we have added for Jack or Niall.

I am going to set the cache lifetime to four hours.

Four hours, you say; why not 24? Well, I figured four hours is a sensible limit to ensure there isn’t too much of a mismatch between cached data while still dramatically reducing API requests.

Imagine a scenario whereby a user browses to the site and visits the summary page; the results are cached; they never, however, make it to the lists for that summary. If a second user comes along three hours later and views the listings, there is a good chance the data will mostly match the cached summary. If I set the cache lifetime at 24 hours, a value that initially seems reasonable, I am increasing the chance of the summaries and data mismatching.

There is a solution to the inconsistent data problem: dependent caches.

I need to add support for linking cached data, for example, a summary and the related lists, and more importantly, for controlling the maximum period allowable between the creation of dependent cache items.

With the current implementation, there can be a difference of up to four hours between summary and list cache creation; realistically, the limit for dependent data should be closer to five minutes.
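The rule I have in mind can be sketched with two timestamps. This is purely illustrative (the file names and the comparison are mine, not the App’s implementation): two dependent cache items are only considered consistent if their creation times are within the maximum skew.

```shell
# Sketch of the dependent-cache rule: a summary cache and its related
# list cache must have been created within MAX_SKEW seconds of each
# other, otherwise both should be rebuilt. File names are illustrative.
MAX_SKEW=300  # five minutes, per the post

touch summary.cache
sleep 1
touch list.cache

# Compare creation times (GNU stat, as found on Linux/WSL)
summary_created=$(stat -c %Y summary.cache)
list_created=$(stat -c %Y list.cache)
skew=$(( list_created - summary_created ))
if [ "$skew" -lt 0 ]; then skew=$(( -skew )); fi

if [ "$skew" -le "$MAX_SKEW" ]; then
  echo "caches consistent"
else
  echo "rebuild dependent caches"
fi
```

The real implementation would live in the App’s caching layer, but the invariant is the same: link the items, record when each was created, and refuse to serve a pair whose creation times are too far apart.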

I will eventually update the caching system for the Costs to Expect App and at some point, trickle the implementation down to the Costs to Expect Website.

Costs to Expect App: v1.00.0

I released the alpha of the Costs to Expect App yesterday. It is later than planned, but I don’t want to dwell on that; I have other posts that explain the delay.

I am now going to work towards the public alpha. I’m not going to adjust the release date; I am still hoping to have it ready for the 1st of April. As we get closer to the release, I will update the roadmap accordingly.

Rotator cuff and programming

At the start of the year, I had an accident; the result was a rotator cuff injury that is slowly healing. I still have a long way to go before my shoulder is back to normal strength and mobility.

A shoulder injury doesn’t sit well with programming; my productivity for the first three weeks after my injury dropped to zero. I’m back at work; however, I am not yet at 100%, so I’m working in bursts.

I initially planned to release the private alpha of the Costs to Expect App in November. Minor delays meant I pushed it back to December. Due to injury and typical development delays, I released the alpha yesterday.

Delays and setbacks in development are common. I use Pivotal Tracker; after years of use, my iterations are typically spot-on. In my case, an iteration is two weeks long; it might contain 60 points worth of work, and I’ll create ‘n’ new tickets and ‘m’ bugs. By the end of the iteration, I’ve mostly completed everything I had planned.

There are weeks when I work less, and I’ll adjust the team strength value to take absence into account, but when you don’t work for three weeks, the impact is enormous. It would be easy to think I’m three weeks behind; that isn’t true. I didn’t do three weeks of development, but in that period I would typically have created ‘n’ new tickets and ‘m’ bugs.

I can never recover the lost time; that is gone. I’m going to use this as an opportunity to review the ‘plan’ and re-prioritise. As good as the ‘plan’ is, I’m reasonably confident there are some ‘likes’ I can move further down the list and some ‘needs’ that can move up it.

How we expose Open Source REST API usage within our app

The Costs to Expect App is not an Open Source product; we intend to create a viable service, so for now, we are keeping our secret sauce secret.

The App is built upon the Costs to Expect API; the API is Open Source, and technically, anything we can do with the API, you can too. We aren’t gatekeeping your data: you can access your data through our App with the UI/UX we are creating, or you can use other tools to fetch the data directly from the API.

To this end, we expose all GET, HEAD, and OPTIONS requests; we hide POST, PUT, and PATCH requests, as you can’t review them after the fact without unnecessary data being cached.

At the bottom of every page within the App, there is a table showing the required API requests. The table shows the request URI, the request METHOD, and the response time, as well as whether the request was asynchronous or fetched from our cache.

I appreciate that to the majority of users this data is redundant (yes, there will be a visibility toggle); however, for the minority of users who are interested, I think the extra effort will be appreciated.

API requests table for the Costs to Expect App

Costs to Expect API v2.04.0 in development

House decorating continues, we expect to be finished sometime within the next two weeks. Decorating is limiting my development time; as such, I have decided it would be a bad idea to release v2.03.0.

If I release v2.03.0 now, it will probably be two weeks before v2.04.0 is ready. These releases are interim releases; combined, they complete a significant feature. A two-week gap between the versions could cause issues, so I’m releasing v2.03.0 internally and will make another public release when v2.04.0 is ready.

We aren’t due to start another round of decorating until February 2020, so typical development cadence should return in November.

v2.03.0 of the API is on the way.

The development of v2.03.0 is almost complete; I expect to have it out before the end of the week.

For this release, I am refactoring the multiple item types code. Adding support for simple expenses increased the item code significantly. New item types are due to be added to the API, so it makes sense to refactor before the complexity increases further.

We are decorating our house, and the scope of our plans has increased, so there is going to be a negative impact on my development time.

Although the pace of development has slowed, the refactoring is going to pay dividends in the future. I expect the pace to pick up once we complete decorating and I have pushed out v2.04.0 of the API.