Value-Based Governance in the Impact Sector: A Collection of Essays
Part 1: Foundations
1. The Case Against Traditional Governance
Why Project Governance Fails So Often
- Traditional governance is broken: It prioritizes scope over actual business outcomes, causing misaligned priorities and chaotic execution.
- Key failures: No continuity between planning and delivery phases, bloated scopes, and no mechanism to connect decisions with original goals.
- Value-based governance is the fix: It bridges the gap and ensures every decision supports clear, measurable results.
Traditional project governance is so broken, I’m surprised projects only fail 52% of the time (according to the Project Management Institute 2024 report).
I’m an unapologetic fan of the alternative value-based governance model because it prioritises commercial outcomes from the start of a project, rather than waiting until everything’s done and dusted. I credit Bjarte Bogsnes and his "Beyond Budgeting" approach as a standout in this space.
But some argue that traditional project governance is also focused on value delivery: the “benefit” side of the “cost/benefit” equation, or the “business case.”
Theoretically, that’s true. Practically, it’s far from the truth.
Let’s dissect the traditional model to highlight how flawed it is in real-world scenarios.
The Two Phases of All Projects
No matter the delivery approach (traditional, agile, cowboy…), projects have two broad phases: pre-delivery and delivery.
Pre-delivery: This is where we develop the business case and secure funding.
Delivery: After funding is approved, a team is assembled to deliver on the promises (in theory).
However, there’s often a significant gap between these phases due to annual budgeting cycles. This delay is precisely what Bogsnes challenges in "Beyond Budgeting".
In this essay, I’m arguing two key points:
Clarity of the business benefit (or commercial outcome) rarely survives the gap between pre-delivery and delivery phases.
Even if it does, the traditional delivery model lacks mechanisms to govern the delivery phase in alignment with value. Instead, it relies on scope, project plans, or timelines.
Pre-Delivery: The Lack of Clear Purpose
Having worked on 300+ projects, many of them rescues or turnarounds, I’ve seen recurring patterns when it comes to clarity of purpose:
Benefit statements vs. funding criteria: Benefit statements in business cases are often misaligned with the funding justification. For instance, a project may claim to “improve customer satisfaction,” but the funding hinges on a hard savings promise, like cutting call centre costs by $80 million annually through a 30% reduction in average call handling time.
Too many objectives: More than one objective leads to chaos. Different objectives are often owned by different stakeholders. For example, customer satisfaction may be owned by marketing, while efficiency improvements belong to the CFO. Projects with multiple objectives inevitably suffer from competing priorities. In turnarounds, my first step is often splitting these into smaller, sequenced projects.
Pre-delivery and delivery teams are separate: Pre-delivery is handled by part-time resources juggling other responsibilities. By the time funding is secured, those people are busy elsewhere or no longer available. Consequently, the delivery team lacks continuity and critical knowledge.
Business benefit models aren’t shared: Often, benefit models (usually spreadsheets) are treated as commercially sensitive or simply overlooked. This means delivery teams don’t know the actual goals they’re supposed to achieve.
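To make this concrete, here is a minimal sketch (in Python, with invented figures) of the kind of benefit model that too often stays locked in a hidden spreadsheet. The call volumes and cost rates below are assumptions chosen to roughly reproduce the $80 million example above, not real client numbers.

```python
# A minimal sketch of a benefit model that usually lives in a spreadsheet.
# All figures are illustrative assumptions, not real client numbers.

CALLS_PER_YEAR = 20_000_000      # assumed inbound call volume
AVG_HANDLE_TIME_MIN = 8.0        # assumed average call handling time (minutes)
COST_PER_AGENT_MINUTE = 1.67     # assumed fully loaded agent cost ($/minute)
TARGET_REDUCTION = 0.30          # the 30% reduction the funding hinges on

def annual_saving(calls, handle_time_min, cost_per_min, reduction):
    """Hard savings from reducing average handle time by `reduction`."""
    minutes_saved = calls * handle_time_min * reduction
    return minutes_saved * cost_per_min

saving = annual_saving(CALLS_PER_YEAR, AVG_HANDLE_TIME_MIN,
                       COST_PER_AGENT_MINUTE, TARGET_REDUCTION)
print(f"Annual hard saving: ${saving:,.0f}")  # ~$80 million with these inputs
```

If the whole delivery team can read a model like this, scope debates become arithmetic: does a given feature move the handle-time number or not?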
The Result?
Ask a delivery team, “What are we trying to do here?” and they won’t tell you “reduce average call handling time by 30%.” Without clarity of purpose, a project has no hope of success. As Lewis Carroll’s Cheshire Cat said: "If you don’t know where you are going, any road will get you there."
Delivery: No Link Between Benefits and Decisions
Let’s imagine the pre-delivery team perfectly articulated the business goal: “reduce average call handling time by 30%.” Everyone on the delivery team knows this is the aim.
Now comes the next problem: traditional delivery provides no way to connect day-to-day decisions with the stated goal.
Scope-driven delivery: Decisions in traditional delivery are dictated by scope. If “the business” deems a feature “in scope,” it’s included. There’s no mechanism to challenge requirements based on their impact on business outcomes. In our example, no one pushes back on a scope item by asking, “How does this reduce call handling time?”
Kitchen sink mentality: To avoid post-project criticism, stakeholders include every conceivable requirement, relevant or not. This results in a bloated scope that detracts from the original objective.
Delayed validation: Features are typically released all at once, leaving no room to test whether the project is delivering value incrementally. By the time the project is complete, it’s too late to fix mistakes or pivot.
In the end, the connection between the original business benefit and delivery decisions is completely severed. We are driving a car with no compass, map, or steering wheel.
Silly, No?
Based on my experience with turnarounds, I believe the true failure rate of traditional projects is closer to 75% (as suggested by older Standish Group and US Department of Defense studies), not the PMI’s 52%. While the PMI’s definition of success ("Was it worth the money and effort?") is more meaningful than "Was it on time and on budget?", traditional project governance still falls short.
The good news? There’s a better way. Value-based governance eliminates the pre-delivery/delivery gap and focuses on ensuring that every decision supports the project’s commercial outcomes.
Mapping Current and Future State Business Processes Is a Waste of Money
Business process mapping is a relic of the past that squanders resources on elaborate documentation rather than on creating tangible solutions. In today’s agile world, it’s more effective to prioritize and rapidly implement solutions based on user feedback, bypassing the costly and often unnecessary steps of diagramming current and future states.
Back in my EY days, I was responsible for requirements documents that were several hundred pages long. These things were a work of art! To be fair, in the 80s and 90s, we didn’t have any other options because these documents formed the basis of our promise to the governing bodies who trusted us with their precious funds to deliver a solution. But even back then, I didn’t really understand why we were mapping today’s processes. Sure, it provided interesting context at a summary level, but anyone reading through pages of diagrams is going to glaze over pretty quickly.
Somewhere between then and now, writing code became cheaper than writing documents. This change happened around 2010 with the advent of what I call cloud V2 technologies. Coupled with the already mature plethora of open-source tools, this means we can write code and deploy it without any lead time to consider infrastructure at all. More tangibly, I can deploy real solutions within hours or a couple of days of starting a project.
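As a rough illustration of what "code instead of documents" looks like, here is a minimal sketch of a day-one web app. It assumes a cloud V2 platform (App Service, App Engine, or similar) that runs standard Python apps unchanged; Flask is just one example of the free, mature open-source tools mentioned above, and the endpoints are invented for the example.

```python
# app.py - a minimal sketch of a "day one" solution, assuming a cloud V2
# platform that runs standard Python web apps with no infrastructure work.
from flask import Flask, jsonify, request

app = Flask(__name__)
applications = []  # in-memory store; good enough for a first-hours demo


@app.post("/applications")
def create_application():
    # Capture a student application; validation comes later, with the users.
    applications.append(request.get_json(force=True))
    return jsonify(count=len(applications)), 201


@app.get("/applications")
def list_applications():
    return jsonify(applications)


if __name__ == "__main__":
    app.run(debug=True)  # runs locally now; the same file deploys unchanged
```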
Despite this silent revolution, every project rescue I’ve worked on in the past decade was accompanied by volumes of current and future state process diagrams. In all cases, the research needed to produce these diagrams cost more than building a solution after restarting the project. It is no longer necessary to map out current or future states in detail to make promises to governing bodies, so seeing these documents irks me a little. This is especially true in a sector where every $100,000 of waste could have gone towards making a bigger impact.
It’s also interesting that such process documents delve into the same level of detail for all processes, with no connection to the business case. In a recent example, a transformation program considered 70 business processes in its initial investigations. No filter was applied to ask, "Does it make sense to re-engineer this process?" A cursory scan showed at least 20 processes where a digital solution could not pay back its build cost in less than 20 years, because each affected a small group of users (typically administrators) and ran weekly, monthly, or quarterly rather than hourly or daily. It’s difficult to build a benefits case for such processes.
The earlier you apply this type of filter, the more money you save by excluding low-priority processes from the scope. There’s no need to map those current or future state processes at all—it’s just a waste of money. On the other end of the spectrum, a few of these 70 processes have an impressive benefits case because they affect the efficiency of the largest group of users almost every day throughout the year. By not identifying this before conducting process mapping activities, we incur a massive opportunity cost. We delay implementing solutions for the most valuable five or ten processes until we’ve completed mapping all processes, including the lower-value ones.
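Here is a hedged sketch of that filter in Python: rank candidate processes by rough payback before mapping anything. The process names and figures are invented placeholders; the point is that the quarterly, three-user process disqualifies itself in seconds.

```python
# A sketch of the early prioritisation filter described above.
# All names and numbers are invented placeholders, not from a real program.
from dataclasses import dataclass


@dataclass
class Process:
    name: str
    users: int                     # how many people run this process
    runs_per_year: int             # daily processes dwarf quarterly ones
    minutes_saved_per_run: float
    build_cost: float              # rough cost of a digital solution

    def annual_benefit(self, cost_per_minute: float = 1.0) -> float:
        return self.users * self.runs_per_year * self.minutes_saved_per_run * cost_per_minute

    def payback_years(self) -> float:
        benefit = self.annual_benefit()
        return float("inf") if benefit == 0 else self.build_cost / benefit


candidates = [
    Process("Record attendance", users=400, runs_per_year=250,
            minutes_saved_per_run=5, build_cost=80_000),
    Process("Quarterly asset audit", users=3, runs_per_year=4,
            minutes_saved_per_run=60, build_cost=120_000),
]

# Keep only processes that pay for themselves quickly; map nothing else.
shortlist = sorted((p for p in candidates if p.payback_years() <= 2),
                   key=Process.payback_years)
for p in shortlist:
    print(f"{p.name}: payback {p.payback_years():.2f} years")
```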
Creating a list of processes (as opposed to mapping the processes themselves) takes only a couple of days. However, analysing and mapping the processes in detail (both current and future states) takes months. So it makes sense to do everything possible to prioritize that list roughly (according to benefit) before we do any significant work, including building a solution. Even if you believe in mapping these processes before delivering a solution, there is a massive benefit in tackling them one at a time rather than all at once in a single document. Handling them individually allows you to start building solutions before finishing analyzing all the relevant processes.
And, of course, I challenge the value of documenting these processes at all. I would rather invest the same amount of money into building a solution and getting people to use it. The reality is, 99 out of 100 solutions are obvious if you simply talk to the users themselves—drawing diagrams is often just a waste. I have been challenged on this many times, usually by IT or program team members, and I have yet to encounter a situation where creating current or future state diagrams made sense—at least not from the perspective of the sponsor funding the project.
- [New essay needed: “The Hidden Costs of Traditional Governance in Charities”]
2. Defining Value-Based Governance
5 Tips for Successfully Stating Project Goals
To ensure project success, craft a short, memorable, problem-focused success statement that can be tested incrementally and assessed unambiguously. This single, clear objective will keep your team aligned and significantly boost project completion rates.
For over two decades, my most important tool for rescuing or successfully delivering technology projects has been reworking the project’s objective into a high-quality success statement. For instance, I attribute the success of a £70 million project rescue to this method. The project involved redeveloping the hardware, software, and network for a point-of-sale system (till) for one of the UK’s largest retailers, with over 1,100 stores.
The reason is simple: if the goal isn’t on everyone’s mind every day, the project cannot succeed because the destination is unclear. As I’ve said before, I believe this is the single biggest contributing factor to what research shows: over 50% of projects fail. I can’t over-stress the power of a well-formed success statement. While many of my other techniques are specific to digital builds, this one is broadly applicable to technology projects.
- Make It Memorable
Firstly, the success statement needs to be part of the whole team’s DNA, so it must be short and memorable, without jargon. Remember, this phrase is for everyone: users, stakeholders, management, and the delivery team. One of my favourites is "Judy goes home on time." I love this because it is particularly short and punchy, but it is possible to have something slightly longer and still memorable.
The key is that anyone should be able to hear the sentence once or twice and retain its core meaning, even if they use different words or order. For example, "dramatically reduce the number of inbound contacts to a valid student application" is a longer statement, but the extended team reliably replicated it because the core words and concept were easy to understand.
- Focus on One Thing
The tendency to use commas or the word "and" brings us to the second important attribute of a good success statement: it should focus on one thing, not many. The whole team must concentrate on one problem, not several. In my experience, the business case for most projects lists many objectives without highlighting the most important one. This was something I looked for in my project rescue days because I learned early on that multiple objectives just don’t work.
Different objectives typically align with separate sponsors, which means having more than one boss. In practice, this meant the team was pulled in different directions depending on the day and the sponsors’ personalities. Often, their objectives were opposing. For example, in a call centre project, the objectives of "increase customer satisfaction" and "reduce cost through self-service" were at odds. The goal of increasing customer satisfaction was owned by the marketing department, while the goal of reducing cost was owned by the call-centre manager. The project I took over was tasked with building a solution for both, which is often impossible in practical situations.
The solution is to break these into separate initiatives. In almost all situations, better results are achieved by making these sequential, not parallel, because they often impact the same systems and people. In most cases, as long as the objectives are not directly opposing, solving one problem often positively impacts solving others. Thus, a follow-on project dealing with problem number two is sometimes avoided because 80% has been addressed by project number one. The key is that removing problem number two from project number one dramatically increases focus and the likely success of project number one.
- Problem Focus
The third thing we’re looking for in a good success statement is (ironically) a problem focus, rather than a solutions focus. When I first ask, "What is our objective?" on a new project, I typically get solutions-focused responses like "we need to replace the student application form," "we need a like-for-like replacement of the call centre application," "we need a workflow management system," "we need a CRM," or "we need Dynamics 365."
The problem with solutions-focused goals is twofold: firstly, they are ambiguous; secondly, they cannot be tested on a small scale before the project is complete. By "ambiguous," I mean there is no clear finish line. For example, how do we know when we have finished "replacing the student application form"? We could go on forever and never finish, with new features being suggested every week. This is the reason for scope creep in all projects. We simply haven’t drawn a clear line in the sand.
A problem-focused goal, like "Have I recorded attendance before the class has finished?" has a black-and-white feel to it. In practice, I may include "… for 90% of classes" to be pragmatic, but even in its first form, it’s immediately more useful as a yardstick. I can talk to the program coordinators, ask that question, and get a yes/no answer. If the answer is affirmative, then I am done.
- Test Incrementally
This approach can also be tested on a small scale, which is the second important aspect. In all the solutions-focused examples I gave earlier, the criteria can only be fulfilled once the whole project is completed. I cannot test "replacing the student application form" until it has been replaced for everyone, everywhere, and for all cases.
However, "dramatically reducing inbound contacts from agents" can be tested immediately with one agent out of several hundred without making any changes to internal systems or committing to new technologies. Even better, we can test it effectively with any number of agents, from one to hundreds. This is revolutionary because it allows us to govern success incrementally and even fund projects incrementally. I can ask for 20% of the expected full budget and be expected to deliver at least 10% of the benefit to 10% of the agents before coming back and asking for more money.
- Define Clear Assessment
The fourth tip in coming up with a success statement is to be clear on who should be answering the question or assessing success. In almost all cases, this should be the primary group of internal users. It should be the group of people who are experiencing the pain today due to not having an adequate solution. For ministerial briefings, am I asking the paper’s authors, the coordinators, the approvers, the ministers’ aides, or the ministers?
In this case, it was the Judies of the organisation (the coordinators) who were feeling the pain of today’s broken process, which resulted in extensive stress and negative impacts on their personal lives due to working till midnight for two nights of every fortnightly sitting of Parliament.
Bonus Tip: Pose the Statement as a Yes/No Question
Tip number 5 is to pose the success statement as a question with a yes or no answer. Regarding ambiguity, there is nothing inspiring about getting to the end of a 12-month program of work and asking, "Have we been successful?" only to get responses like "I guess so" or "meh." We are looking for a resounding "hells yeah!" A sub-tip here is that for an ambiguous question like "Did we reduce…?", simply adding the word "dramatically" converts it from a potentially grey answer to something clear-cut, like "Did we dramatically reduce the number of inbound contacts?"
A good success statement does not need to be quantitative. In fact, numbers are typically less emotive than qualitative statements, and setting a quantitative question often requires an existing tracking mechanism or baseline against which to measure an uplift or reduction. The only common exception is measures involving time.
For example, "Can we offer a place in less than three weeks?" (where the current average is around 13 weeks) is clear and time-bound. On the other hand, "Increase customer satisfaction by 20%" is a poor success statement if you are not already tracking customer satisfaction reliably. In my experience, these tips are easy to train and easy to retain. I was using these as step one in my project rescue handbook before I came across the idea of Flintstones pilots, and as I say, the benefits of a good success statement outweigh the benefits of a pilot.
That said, a good success statement paired with the Flintstones pilot approach is a match made in heaven.
[New essay needed: “What is Value-Based Governance?”]
[New essay needed: “Why Value-Based Governance Matters More in the Impact Sector”]
Part 2: The Flintstones Method
3. Starting Small, Learning Fast
Pilot-first Transformations (aka "Flintstones")
I specialize in the for-purpose sector, and I’ve yet to hear a positive story of a big consultancy coming in and running a multi-year transformation program. Even with heavy discounts and pro bono work from well-intentioned folks like KPMG, I hear stories of big budgets, long projects, 6-month stalls, failed attempts, and disillusioned staff, but not a single example of "hells yeah, they’ve transformed my life."
To be honest, this isn’t unique to the charity sector; I saw the same thing in big corporates for a decade with EY.
I’d like to share the main tool in my kitbag from my digital project rescue days, my secret sauce for turning failed digital programs into "hells yeah" successes in short order. That trick is really simple: don’t do a transformation program at all. Do a bunch of quick-wins, one business process at a time. Thanks to advances in hosting technologies and the widespread adoption of free “open source” software libraries, this has been possible since around 2010 without the limitations we suffered before that (in architecture, primarily).
Actually, I was first introduced to the idea by a customer back in 2001.
Let’s call it pilot/scale, pilot/scale. Or as one of my clients used to call it, "a bunch of Flintstones projects". For those who didn’t watch The Flintstones, it’s the metaphor of a stone-age car that looks like the real thing but is powered by the driver’s feet.

First, we don’t spend 6 months cataloguing all our "as-is" and "to-be" business processes because we’re delivering nothing of value, just documents. Instead, we hold a one-day workshop to brainstorm a list of the most costly and risky process issues, then narrow down to the worst one. It’ll likely be a labour-intensive manual process propped up by spreadsheets with a bunch of fragile macros.
And we don’t get sucked into analysing the whole list; we get stuck into the first cab off the rank, and carry on prioritising the rest in parallel (not analysing or designing them though).
Then we set a 1-month timebox to build a pilot solution for the worst problem, using free technologies on almost-free cloud services. Hit me up for a list of those.
The process will start with a bullet-point list of five high-level things that this system can do, just to give the developers a running start. We won’t be speccing, scoping, defining, or doing UX for this project before we start. The developers will work directly with the people who will eventually use the system (in this example, frontline workers). The other stuff will happen as a step 2 (because the risk of deploying on a tiny scale is, by definition, tiny).
Within the first week, the developers will be able to get something up and running that will tick off each of those five bullet points in a basic way. They’ll be demoing that stuff, and depending on how fast they can work, we may have a working solution that one worker can use for a day before putting it back down and returning to the original process.
The rest of that month will be rinse and repeat. There are some tips and tricks to avoid, and I can provide a list of those if you hit me up via email. I’ll also be writing an article on some of that stuff.
It’s crucial to understand that we’re trying to demonstrate we can solve the problem, not that we can solve it at a human-supportable scale.
So during the pilot, it’s normal and acceptable to re-key the information captured in the system into downstream systems. This is fine because it’s a pilot, not the scale phase.
What’s interesting is that within the first week, or certainly by the end of the second week, we have almost certainly solved the problem we’re trying to fix on a small scale. This creates a huge amount of excitement within a small group of people without setting broader community expectations.
We’re running a small, temporary pilot with a limited blast radius. As I like to say, “no humans were hurt in the filming of this pilot”.
In 15 years of using and refining this approach on over 250 small and large projects in government, corporates, and scale-ups, I’ve never had a single example where we couldn’t solve the business problem at this point.
I used to think failing quickly would be common (and cheaper than failing after six months of documentation), but I’ve never seen a situation where we failed to build technology to solve a business problem, which surprised me. In the early days, I assumed there would be more failures, just detected super early (in the spirit of The Lean Startup). Not so, at least on my projects: in every case, we were able to build something useful that solved the problem.
At the end of the pilot, we have a small group of people with huge smiles on their faces because they’ve had developers building a solution that works for them. We then get those people to present back to a committee, management, or team leaders to get the funding to scale that solution.
Depending on the organization’s size, budgets, and problems, a fully outsourced pilot will typically cost between $60,000 and $120,000 over two to four weeks. We’re talking relatively small amounts of money for a potentially massive upside when you multiply out the numbers.
After the pilot, people might go back to work as normal, so we can think about what we’ve learned. We’ve built a custom solution that we could throw away if we find an off-the-shelf solution with a strong fit. But even in this case we’ve saved time and money by taking the “Flintstones” approach, because we’ve seen what “good” looks like in detail, and this confidence leads to a shorter, lower-cost software selection process by focusing on the features and functions that have actually worked, not the 90% that are “nice to haves”.
In many cases though, it’s faster, cheaper, and better to simply connect up the user experience we’ve created in the pilot with the various existing back-end “systems of record”.
We can firm up a budget for this and ask for the money to scale it out in a project that looks pretty traditional. I would still use many of the concepts from the pilot, but it’s not necessary. You can run it as a traditional project, spec it up, and scope it up.
If you have to go out to tender (which is often a zero sum game) then it still makes sense to start with a pilot for the aforementioned reasons.
I’ve found that the pilot, plus the time it takes to request and get the money, plus the time to deliver the project for your top business problem, is always shorter than any project where we’ve defined the as-is, gone to market, bought software, implemented it, modified it, and changed it. Typically, even the most conservative pilot-first project costs about half as much and takes about half as long as its traditional product-first equivalent.
What’s next? We go back to the list from our first workshop and spend two hours in a follow-up workshop. Eventually, you’ll get to the point where you can do that follow-on workshop in 30-45 minutes and bang out quite a few of those.
We can set up two or three teams, especially if we have many business processes. One of my recent clients had 70. We might want to split it into teams that do pilots with the same group of pilot communities all the time and teams that come in behind them to scale.
There are many ways to slice and dice it, but the point is to use excitement to generate more excitement and kick off the next program. We’re never looking for funding for a whole business transformation program, only a fast payback for a small investment in a pilot and then again to scale it out based on demonstrable, real-world success.
From Flintstone to Full Solution in 9 Weeks (Offer Letters)
Reduced offer letter generation time from 13 weeks to 3 days, reclaiming hundreds of thousands in revenue.
Initially launched with a basic pilot, later developed into a fully integrated system for all administrators.
Fostered widespread enthusiasm and demand across the organization through a user-driven, streamlined solution.
In the charity sector, large programs are quite rare, so let’s look at the most common type of digital project, which usually lasts about 9 to 12 weeks. For example, an education charity needed a solution to generate offer letters for successful applicants of a specific, high-value course. Like many projects I’ve worked on, this problem had existed for years. Several solutions had been attempted, but none were completed or adopted. Initially, the goal was described to me as, "we need a system for offer letters." Previous attempts involved extending the student administration system, using low-code technologies, and customizing their SaaS CRM product. Despite these efforts, none succeeded, and all took more than a year.
Using a Flintstones approach, we rephrased the goal before starting the project to "we want to get offer letters out within three weeks of accepting the applicant." The existing process took an average of 13 weeks to generate an offer letter, causing applicants to accept offers from other institutions before receiving theirs from this establishment. To put this in context, the lost revenue for each missed opportunity was around $80,000.
Rephrasing our success criteria in business terms gave us the opportunity to build and test a small solution with a subset of internal users to prove we could issue offer letters faster, without committing to the full project spend. We quickly identified one of about 10 internal administrators who was keen to try something new. It was difficult to find time with Judy because she was under a huge amount of stress most of the day, chipping away at an impossible backlog.
To avoid adding to her problems, we recorded the screens on the two PCs she used for an hour or so. From this, we saw that a fairly simple solution would solve the problem. We didn’t scope or specify the solution014it was just obvious that a very simple application would dramatically reduce the 80 to 100 clicks and copy-paste actions necessary for Judy to produce and email one letter to a successful applicant. We also developed a proxy measure for success for the pilot, which was to cut the minutes required to administer one offer letter by 80%.
Within seven business days, Judy had tested a better solution on about ten of her applications. For all other applicants, she reverted to the normal process. Even this rudimentary solution, which was not integrated into any backing systems, had a dramatic impact. Of course, Judy was not responsible for managing the temporary (spreadsheet-style) integrations during the pilot; that was our job. Judy used the system as if it were fully integrated, just like its namesake, a Flintstones car.
By the end of the three-week Flintstones Pilot, Judy was using the new solution full-time and churning through her backlog at a great pace. Because the success was so obvious, the sponsor approved our work to continue and scale the solution by integrating it into backend systems. We had planned for this eventuality before we started the project, so we did not have to wait for those integrations to be created for us. By the end of the fifth week, Judy was using the system full-time and was reliably getting her quota of offer letters out in less than three days.
But the system wasn’t complete yet. There were still several back-office tasks, like editing offer letter templates and email templates, that the project team had been handling on Judy’s behalf until now. Obviously, to extend the solution to all administrators, we needed to complete this work; otherwise, the project team would still be needed to do it. Weeks 6 to 9 of the project saw us gradually building out these administrative backend user experiences so the administration team could be self-sufficient. As we did this, we were able to onboard more administrators, one by one. By the end of week 7, the new solution was in use by all administrators, so the final couple of weeks were spent on further enhancements – all focused on throughput and efficiency.
The sponsor estimated that the whole project was paid for during the phase when only Judy was using the solution because so many more applicants had accepted their offers than would normally be the case, resulting in hundreds of thousands of dollars in retained revenue. As you can imagine, the administrators were stoked. It’s also hard to overstate the impact of early success in a project like this because it creates excitement around watercooler conversations and lunch chats. Rather than foisting an ivory tower solution upon all users at the end of the project, the users were demanding access to the solution based on Judy’s feedback during the first few days. This is a much healthier dynamic and much more enjoyable for all concerned, including sponsors and the delivery team itself.
The solution was hosted on cloud v2 technologies, as I call them, for around $30/month all up. As a result, no IT staff were required to monitor, upgrade, or patch the solution. Additionally, because we avoided building on vendor technologies, there were no licence fees to pay either.
The Successful US Government Project That Started With a "Flintstones" Pilot
The IRS’s new "Direct File" tool lets taxpayers file directly, bypassing third-party vendors.
Using private-sector methods like user testing and iterative development, the project met early success with a targeted pilot group.
Positive feedback and successful scalability point to a promising expansion across more states.
Thanks to Adrian Furby (shared with permission) for highlighting this gem.
It’s refreshing to see a government project (a US one, no less) achieve such positive impact.
Highlights of the "Direct File" pilot:
Direct Filing for All: Eventually, "Direct File" will allow all taxpayers to file their returns independently, which isn’t currently an option.
Breaking Tradition: The IRS abandoned the typical approach of outsourcing to vendors.
Adopting Private-Sector Methods: They adopted best practices from the private sector, such as user testing, iterative development, and ongoing issue resolution.
Pilot Success with Small Scale: An internal team launched an initial pilot with less than 1% of the final target user base.
User-Centric Design: The solution was developed with extensive user consultation.
Simplified Pilot Group: The pilot was targeted to a group with straightforward needs, allowing the team to avoid building a full-scale system initially.
Successful Rollout: The pilot met user traffic demands and received strong positive feedback.
Scaling Up: The next phase will expand the programme to include users from an additional 12 states.
This is a great example of a "Flintstones" pilot, a simple yet powerful method I first encountered through a client in 2001.
4. Managing Risk and Resources
Don’t Wait
One of my customers asked me a common question this week: should I build the automation we’re planning now, or wait until after another change that will affect it, and build the automation then? The automation in question was between the client management system and payroll, involving a whole bunch of spreadsheets. This setup poses significant business continuity and cybersecurity risks.
However, we were also questioning whether the client management system was fit for purpose. There’s a chance we might build an integration for the current system only to replace it later. Naturally, the concern is whether this would be a waste of money, throwing good money after bad.
Another version of this question involves creating joined-up user experiences around core systems. For example, should I build that joined-up experience around my client management system and my expense management system now? Or, if my client management system might be replaced, should I wait and build it later?
These are essentially the same question with the same answer for the same reason. So, I’ll generalise them as: should I build my thing now and adjust it later, or wait and build it then? Anyone who’s read my previous articles can probably guess the answer: build it now and adjust it later.
The first reason is that the cost of reworking something isn’t as high as you might think. About 80% of the cost of building integrations or joined-up experiences is reusable. The additional cost of making adjustments is only around 20%. For an integration, most of the expense comes from analysing the problem, examining the core systems, and understanding how data flows between them.
While the data formats and structures can change between systems, this is usually a minor part of the coding and thinking challenges. Most of the effort you put into building the integration remains relevant, especially if you’re replacing it within a 12-month period. If documented well, it’s not as complex as it sounds.
When building a joined-up user experience or interface in a user-first fashion, that experience remains consistent regardless of the system behind the scenes. If you replace a system, like a client management system, it will be with something that has similar flows, data models, and structures. The problem and the experience stay the same; you’re just rewiring that experience to a different backend system later. This task is closer to trivial than to overwhelming.
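For readers who want to see why the rework is so small, here is a minimal sketch of the pattern, assuming invented class and method names rather than any real product’s API: the joined-up experience depends on a narrow interface, and replacing the client management system means writing one new adapter.

```python
# A minimal sketch of why swapping the backend later is closer to trivial
# than overwhelming. All class and method names are illustrative.
from abc import ABC, abstractmethod


class ClientRecords(ABC):
    """The narrow contract the user experience actually depends on."""

    @abstractmethod
    def get_client(self, client_id: str) -> dict: ...

    @abstractmethod
    def record_expense(self, client_id: str, amount: float) -> None: ...


class CurrentCmsAdapter(ClientRecords):
    def get_client(self, client_id: str) -> dict:
        return {"id": client_id, "source": "current CMS"}  # real call goes here

    def record_expense(self, client_id: str, amount: float) -> None:
        print(f"POST to current CMS: {client_id} ${amount:.2f}")


class ReplacementCmsAdapter(ClientRecords):
    """The ~20% rework: one new adapter when the system is replaced."""

    def get_client(self, client_id: str) -> dict:
        return {"id": client_id, "source": "new CMS"}

    def record_expense(self, client_id: str, amount: float) -> None:
        print(f"POST to new CMS: {client_id} ${amount:.2f}")


def reimburse(records: ClientRecords, client_id: str, amount: float) -> None:
    # The user experience and business logic never change when the CMS does.
    client = records.get_client(client_id)
    records.record_expense(client["id"], amount)


reimburse(CurrentCmsAdapter(), "C-42", 125.50)
reimburse(ReplacementCmsAdapter(), "C-42", 125.50)  # same flow, new backend
```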
In my experience, both with integrations and user experiences, the cost of rework is minor compared to what people often think. Many misconceptions arise from using the wrong technologies. When using the right technologies, like serverless solutions and ubiquitous free tools, the effort to rebuild or rework and reintegrate them into different systems is minimal compared to the original work.
My second argument is that even if you end up discarding the code, the benefits are usually realised within a few months. This could be from reducing cyber risks, addressing business continuity risks associated with running things on spreadsheets, or simply improving time efficiencies. The benefits often outweigh the costs of building these integrations and experiences sooner than people expect.
We’re focusing on integrations and experiences used frequently, like daily, weekly, or biweekly, or those accessed by large audiences. The more often they’re used and the larger the audience, the quicker the return on investment. This logic applies to any question of whether to do something now or wait to combine tasks for perceived efficiency. In almost every case, doing things now is better. Making a project bigger increases risk, likelihood of failure, and delays business benefits far more than anticipated. Small projects succeed; big projects fail. Therefore, do small things more often and sooner.
Perishable Software
Software isn’t a long-term asset; it’s a tool with a short shelf life.
Build fast, get returns, and don’t fear replacing outdated systems.
Clinging to "big iron" mindsets in the digital age stifles growth.
In the 60s and 70s, computers were a massive investment, and so were the associated projects. I remember in the late 90s, a major Australian telecommunications company considered its billion-dollar investment in a CRM (Siebel) a capital investment to be depreciated over 20 years. Today, thinking of computer systems as a capital investment to be depreciated over long periods is a mistake.
Even then, the status quo was absurd. The telco’s system was so bad that only a select few (highly trained, with years of experience) could navigate it to record the sale of a new phone to a new customer. Anyone who went into the store in the mid-2000s to buy a mobile phone will remember that the system was already overdue for replacement. I used to offer a $200 cash bonus to anyone who could complete my order in less than 45 minutes. I did this at least five times, and no one ever managed to get it done.
But because the telco had committed to a 20-year depreciation schedule, replacing the system would have been political suicide: they’d have to write off a massive asset and take a loss. Career-ending stuff.
There’s a lesson here for all of us. The pace of change in business systems has accelerated dramatically over the past few decades, partly due to the near-zero cost of computing. In this context, viewing digital solutions as long-term assets is no longer appropriate. Thinking this way holds us back, as it did for the telco.
Some of the old terminology is an interesting lens on this. For example, "big iron" is ’70s-era slang for an extremely large, expensive, and fast computer. It often referred to oversized machines like Cray’s supercomputers or IBM’s mainframes. The phrase was used to differentiate those large computers from the new, smaller minicomputers that came on the scene at that time. But "big iron" is also a mentality that has outlasted the physical reality of those old computers. I still see too many organisations thinking of their digital systems as if they were "big iron".
My client Trevor at Monash University summed it up perfectly in a recent talk at UniMutual. The talk is short and worth watching; I’ve fast-forwarded to the relevant section. Trevor makes a few strong points:
Many solutions have a much shorter shelf life than in the past.
Building quickly and replacing or rebuilding makes sense if you get a return on investment from the temporary solution (i.e., it pays for itself).
Short timeframes (days, not months) and immediate ROI are often possible.
Modern digital technology changes so fast, there’s no guarantee a particular tool will still be around in ten years.
I like this response to "this was built in a hurry years ago and now we have to fix it":
"[This was a] big problem. Would have been millions of dollars, multiple years, committees, everyone and their dog trying to get involved. And it probably wouldn’t have ended up a better project.
Instead:Modern technologies, modern approaches, just a few people, ripped it out.
ROI on that for six years!
And now another quick rebuild to get it going in the next
technology.
Systems are now temporary stepping stones to better business. As long as they give a return on investment at each stage, it makes perfect sense to throw them away. The alternative is nonsense.
Big Projects Need Full-Time Team Members
Full-time team members prevent delays and wasted time on big projects.
Part-time roles create bottlenecks and derail timelines in complex organisations.
Hybrid roles and full-time staffing are crucial to keep projects on track.
In smaller organisations, I rarely see the need for a project to last more than 4 to 6 weeks if the right decisions are made upfront. In larger organisations, however, bigger projects with larger teams often stretch across months, sometimes even a year or two. This is largely due to the increased complexity of managing change across a bigger team and organisation. Additionally, a larger budget tends to attract more scrutiny, which leads to more controls and, consequently, more overhead.
Based on my experiences with large corporate digital project rescues in the ’90s and early 2000s, I developed a policy that my project teams should consist only of full-time members. This doesn’t necessarily mean employees; full-time contractors are equally suitable.
This is so important because people in a typical project often come from different parts of the organisation and are managed by leaders with varying014and sometimes competing014priorities. This setup works if individuals are seconded for a continuous period, as they are effectively managed by the project manager during that time, rather than their original manager.
However, I frequently encounter situations where resources like business analysts, testers, and developers are spread thin across two or three projects, sometimes even more. This issue is especially common in teams still using cloud V1.0 technologies, which require a range of internal resources to architect, design, procure, and provision computing and networking equipment.
In these cases, project teams are often left waiting for critical personnel to finish tasks on other projects. Even scheduling meetings with these part-time team members becomes a chore each time, often resulting in longer lead times just to have conversations and make decisions.
These frequent, seemingly minor hold-ups (which are often not minor at all) are rarely tracked by project managers because the overhead of monitoring each delay is often greater than the value of managing it.
My commitment to full-time resources emerged after conducting a retrospective analysis of two large projects I ran, where I struggled to control the timeline and budget. It quickly became clear that, for a significant portion of the project’s duration, full-time team members were sitting idle, waiting on tasks dependent on part-time contributors.
This challenge is especially pronounced with roles that typically aren’t needed full-time, such as quality assurance personnel. In these cases, I address the issue by combining roles. For example, if I need 1.5 business analysts and 0.5 of a quality assurance person, I would hire two business analysts, with one of them assisting in quality assurance. While some people resist this type of hybrid role, my experience has generally been positive, as it provides an opportunity for cross-skilling.
I also developed several techniques for managing dependencies on part-time team members when dealing with cloud V1.0 infrastructure. However, I won’t cover those here, as these legacy technologies should no longer be used for new projects.
Part 3: Digital Implementation
5. Technology Choices
Digital: From Impossible to Possible Thanks to Cloud Services
Cloud services have revolutionised digital capabilities for charities, making previously cost-prohibitive infrastructure affordable and manageable. By transitioning from early cloud infrastructure (cloud v1) to advanced, automated services (cloud v2), organisations can eliminate hefty setup costs, enhance service quality, and allocate budgets more effectively towards solution development.
Today, it is entirely feasible for charities to have in-house digital capabilities, whereas a few years ago, this would have been cost-prohibitive. Cloud services have driven this revolution. In 2006, Amazon offered its computing infrastructure to organisations to reduce the costs of procuring, installing, and managing the physical computing power required for modern businesses. Google and Microsoft followed suit in 2008 and 2010, respectively. Today, these services are referred to as "Infrastructure as a Service" (IaaS), or in my terms, "cloud v1."
While cloud v1 eliminated the need for physical infrastructure, organisations still required many technical resources to design, set up, administer, and operate the computers and their operating systems. These activities included post-implementation network monitoring, firewall configuration, and upgrading and patching various components of the infrastructure. All these tasks had to be completed before any work on building solutions could begin.
In reality, even in the corporate world, the resources we had for these tasks were not as qualified or certified as those available to Google, Amazon, and Microsoft. As a result, the quality of the infrastructure varied considerably in terms of uptime, security risk, and performance for end users. To address this, organisations hired specialist resources and set up various control gates and committees to mitigate the risk of poorly designed infrastructure.
A typical project during this period had a six-month ramp-up phase, requiring several team members whose sole purpose was to coordinate and manage meetings and sign-offs with over 10 individuals in 10 separate teams across the IT landscape. Each of these individuals and teams had separate management, budgets, committees, and administrative processes. Learning about the existence of each team and how to engage with them was a significant learning curve for every project.
This was infeasible for small charities. Commercially, any project below a certain size was not cost-justifiable due to the high costs of designing and setting up the infrastructure. Every project also had to account for increased IT operations costs (more staff) required to monitor, upgrade, and patch that infrastructure. This is what managers refer to as the "legacy" of having an internal digital capability.
I did a retrospective analysis of three of my projects during this period to understand the percentage of my budget spent on infrastructure. I found that an eye-watering 75% of a typical project’s budget and timeline were attributable to managing cloud infrastructure. In this context, it was impossible to build software and deploy it to users on day one of the project, and also impossible to have a one-person digital capability.
However, in 2008, Google previewed a new product called "App Engine," which kicked off a new wave of technologies referred to as "Platform as a Service" (PaaS) and "Function as a Service" (FaaS) from all three vendors. I call these cloud v2. Cloud v2 products are pre-architected, pre-configured, and fully managed by the cloud vendor. Now, we don’t need a single IT person to manage this in our organisation. For example, I have managed over 100 of these setups for one client without requiring a single IT resource.
This meant I could completely eliminate the pre-build ramp-up phase of every digital project I managed, which is significant when you consider that this phase accounted for 75% of the budget on most of my projects. The second major benefit of these newer cloud services is that even the smallest charity inherits a much higher quality of service in terms of uptime, resilience, security, and performance. Major cloud vendors can attract, pay, and retain a much higher calibre of infrastructure engineer than even the largest corporations in Australia.
I worked for many of these large corporates during the cloud v2 era, who were still emotionally tied to cloud v1 technologies. Despite their confidence in and defence of internal resources, they were never able to match the quality and reliability of the fully managed service offered by vendors. Not even close. A simple and important example of this is resilience to power outages. All cloud v2 services from major vendors have redundant computing across two power grids, which I’ve rarely seen implemented properly even in major Australian corporations.
Another way to think about cloud v2 is that it is fully automated, including provisioning and operation. This automation is developed and paid for by the cloud vendor, so you don’t need your own team to handle it. It’s now possible to have a one-person development team in a small organisation without inheriting 75% of the legacy of yesteryear. So all we have to do is avoid using cloud v1 technologies. This is easy: just hit me up and I’ll give you a list for each cloud vendor. For example, in Microsoft Azure, the core services are App Service, Azure SQL Database, Cosmos DB, and Blob Storage.
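As a small illustration of "fully managed," here is a hedged sketch of application code consuming Azure Blob Storage directly via the azure-storage-blob Python package; the connection string is a placeholder you would copy from the Azure portal. There is no server to provision, patch, or monitor anywhere in the picture.

```python
# A sketch of consuming a fully managed cloud v2 service directly from
# application code. Requires the azure-storage-blob package; the connection
# string below is a placeholder, not a working credential.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "DefaultEndpointsProtocol=...;AccountKey=..."  # placeholder

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("reports")

# Upload a document; redundancy, uptime, and patching are the vendor's job.
container.upload_blob(name="weekly-report.txt", data=b"hello, cloud v2",
                      overwrite=True)
print([blob.name for blob in container.list_blobs()])
```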
In the digital world, there is no sensible reason to use cloud v1 technologies, at least none that withstand two minutes of questioning. Cloud v2 ensures that 100% of your budget for integration and joined-up experiences can go into the solution instead of being wasted. From my experiences over the past decade, specialising in cloud v2 technologies has been one of the biggest factors in attracting and retaining the industry’s best developers. Even better, with some simple guardrails in place, it’s feasible to take your first steps with an external organisation before committing to hiring your own resource. Contrary to industry perception, attracting and retaining talent isn’t as hard as the industry has been led to believe.
Let’s unpack this by comparing the old world to the new and explaining the difference between old and new cloud technologies from a practical and commercial perspective. The legacy we used to inherit is no longer necessary if we simply make smarter choices about which cloud technologies to use. This shift in economics can be transformative for organisations willing to jump on board, and it’s easy to dip your toe in the water before committing to hiring your first resource.
Why We Don’t Need to Architect Digital Solutions Anymore
We can ditch traditional architects because cloud V2 services are pre-architected and vastly more efficient. Small teams now outperform larger ones by leveraging high-quality, cost-effective prefab digital solutions.
In the olden days (before 2010), a small development team would be roughly 12 people. Today, I achieve more output with just 3 to 6 people, depending on the mix of senior and junior resources. Interestingly, and counterintuitively, a team of three typically produces more than double the output of a team of six… But that’s a conversation for another day.
Most of this increase in efficiency comes from removing several architect-style roles, like enterprise architects, technical architects, network architects, security architects, and solution designers. Because I focus on building new solutions on cloud V2 services rather than inheriting old technology, I don’t need any of these architects. In other words, all the technologies I use are "pre-architected," and the quality of these pre-architected services is far superior to anything I could afford in the old infrastructure days.
The best analogy I can come up with (and it’s not amazing, I warn you) is that of buying a prefab mansion. When we think of prefab housing, we often imagine cheap, low-end options. However, there’s a massive industry in custom-made but prefabricated structures at scale, like hospitals.
The advantages of custom designing something and pre-fabricating it are numerous. The two that stand out for me are quality and cost, both of which resemble the use of cloud V2 technologies, which are preassembled and fully automated. Building something and then transporting it in pieces has the hidden benefit of using much higher quality designers and builders because they are centralised with their peers. In cloud terms, I’m alluding to San Francisco, Silicon Valley, and Seattle, not Sydney. Besides the resources themselves being more senior craftsmen than I can afford to employ directly, assembling these centrally makes use of supporting technologies in massive warehouses. The end result is a higher-quality structure made to a higher standard in terms of precision and workmanship.
The second benefit of pre-fabricating a structure is that they use common subcomponents for the more mundane elements (a bit like IKEA). So, while the overall assembly is unique, the design optimises the use of these subcomponents. Again, there’s a parallel in cloud V2. All cloud vendors rely on proprietary subcomponents to run high-efficiency data centres. They can afford to design and manage these subcomponents because of their massive scale.
Ultimately, for both physical buildings and cloud V2 services, the widespread use and scale at which they operate introduce significant cost efficiencies. Some of these efficiencies are retained by the vendor as profit, but the vast majority are passed on to customers, especially when compared with equivalent services built by customers.
Even in the IT industry, automation is gradually making some jobs redundant, just like in every other industry before it. The good news for those at risk is that their skills are easily transferable to other areas of IT. Plus, the legacy systems that still require these skills are substantial.
I will say in closing, though, that many teams are using cloud V1 technologies to build new digital solutions, which is simply wasteful. This isn’t the fault of any individual; it’s just the way the industry has evolved. In most corporate settings, project funding approval involves all of these architects, and because of this heritage, they have a blind spot. They don’t realise that building new solutions in the way I refer to is possible. If we ask an infrastructure architect what infrastructure we need for our project, they will just do what they have always done, partly because there’s a conflict of interest in making themselves redundant. However, I don’t believe this conflict of interest is the core reason for overlooking cloud V2 in most cases.
Make APIs the #1 Requirement for Any Software You Buy
Demand high-quality, well-documented, publicly testable APIs as your top criterion when evaluating software. Strong APIs are essential for effective integration, efficiency, and maximizing your organisation’s impact; skip any software that falls short in this area.
When I evaluate software for a customer, my first criterion has nothing to do with the software’s functionality. My top requirement is a well-documented, publicly testable API that provides access to all the software’s features.
I prioritise this check for two reasons. First, it’s easy to test since it doesn’t require vendor interaction and takes little time. Second, it’s the requirement most often failed, so placing it at the start of the selection process greatly improves our efficiency.
An API provides automated access to a system’s functionality for integration with other systems and the development of new user experiences, greatly increasing user efficiency for tasks spanning multiple systems. As your organisation grows, the need for new integrations and unified tasks across systems also grows. Organisations that avoid these necessities create disproportionate manual activities as a coping mechanism. This avoidance reduces client impact because funding is wasted on unnecessary human effort to work around these inefficiencies every week, fortnight, month, quarter, and so on.
There is no point in buying software that perfectly matches your needs if it has no APIs or poor APIs. Likewise, APIs that don’t provide access to all system features are useless. This partial coverage is a fail because you can’t predict which features will be needed for your integrations and custom experiences. Almost always, the features you need are the ones not provided by the vendor for some bizarre reason.
The APIs must be well-documented. Without proper documentation, the cost of creating custom integrations and experiences becomes so high that it renders the APIs effectively pointless. Defining a standard for documentation quality is straightforward: you can use an existing example. My go-to standard is the documentation provided by Stripe. I’ve never had a developer complain about it. They even use open standards that other software vendors can freely adopt.
Additionally, the APIs must be publicly testable. This ensures the efficiency of building custom integrations and experiences. If vendor interaction is required at any point, the effort to build these customisations becomes impractical. We’re talking about a difference of a few hours versus several weeks. I can provide examples where we’ve tried both approaches on the same project.
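To make this concrete, here’s a minimal sketch of the kind of check I mean, written in Python. The base URL, sandbox key, and resource names are all hypothetical; the point is that a test like this should be possible against any candidate product using nothing but its public documentation, with no vendor involvement.

```python
import requests  # widely used HTTP client

BASE_URL = "https://api.example-vendor.com/v1"  # hypothetical vendor API
API_KEY = "sandbox-key-from-public-docs"        # assumes the vendor publishes a test key

def covered(resource: str) -> bool:
    """Return True if a read-only endpoint for this feature area responds with JSON."""
    response = requests.get(
        f"{BASE_URL}/{resource}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    return response.ok and response.headers.get("content-type", "").startswith("application/json")

# Every feature area we might ever integrate with should be reachable this way.
for resource in ("clients", "case-notes", "invoices", "reports"):
    print(resource, "OK" if covered(resource) else "NOT COVERED: fail the evaluation")
```

If you can’t write and run a script like this within an hour of reading the vendor’s documentation, the software has already failed the test.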
In summary, rock-solid, high-quality APIs are essential for any organisation aiming to maximise its impact within its budget. Don’t bother assessing the business fit of any software until you’ve confirmed that it has APIs of sufficient quality.
There’s a chance you won’t find any software with strong APIs. This scenario is easier to manage than buying software with low-quality or no APIs. I will write about handling this situation soon.
6. Building vs Buying
Buying Software Is Often Not the Best Solution
Don’t waste money on software when a simple, custom solution suffices. Build lightweight tech around your existing systems to save time, cut costs, and avoid operational headaches.
A friend in the industry shared the story of a failed expense management project today. It’s the second such story I’ve heard this month. I’m recounting it here as an example of when to build a thin solution around your existing backend systems rather than buying software. The failed package selection, implementation, and training project took 12 months and wasted over $200,000. I’ve seen a similar problem solved in the corporate world in less than a month for under $40,000.
The problem in all these situations is the huge amount of manual work and triple keying of client expense information to control and approve expenses and ensure they are reimbursed in client billing. Because these processes are manual and involve several teams, there’s always a heavy tax in supplemental reconciliation work downstream. In one case, there were three sets of individual reconciliations at different steps. I estimated at least one full FTE was needed to keep this process together across a couple of programs.
Most non-technical people understandably think in terms of solutions (expense management software) rather than evaluating the complexity of the problem itself. This is often true of many people in IT as well. Jumping to software package solutions without considering other options is very costly. It’s costly in terms of time, as buying software adds 6 to 12 months to the timeframe. It’s costly financially, as package selection, implementation, and training often cost hundreds of thousands or even millions of dollars. Most importantly, it’s costly in terms of the load it puts on operations staff throughout the process.
In general, generalisations are a bad idea. That contradiction is deliberate: the broad topic of expense management ranges from extremely complex situations where buying software is necessary, to more common, trivial cases where a small tech solution suffices. The cases I mentioned today sit at the trivial end. All we need to do is track a date, an amount, and some text. The only complexity is a three-step process that can be managed through emails, SMSs, or other messaging forms.
The most important thing is that the process must be slick for everyone involved, and the data must end up in the financial system. Essentially, this solution is just a thin, joined-up experience around the core system, which is the general ledger. Adding another oversized piece of software simply complicates things. In both build and buy scenarios, we still need to integrate with the general ledger. However, buying software adds a huge amount of work to understand its complexity, build a compatible solution, and integrate it. That integration becomes far more complex with new software in the mix, because you spend your time paring the oversized product back to the small overlap the solution actually needs.
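To show how small the build really is, here’s a minimal sketch of the three-step expense flow in Python. The field names and the general-ledger call are illustrative assumptions, not a real finance API; in practice the last step would post to your ledger’s API.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"    # step 1: worker records the expense (email/SMS/form)
    APPROVED = "approved"      # step 2: team leader approves
    REIMBURSED = "reimbursed"  # step 3: posted to the general ledger and client billing

@dataclass
class Expense:
    client_id: str
    incurred_on: date
    amount: float
    description: str
    status: Status = Status.SUBMITTED

def approve(expense: Expense) -> None:
    expense.status = Status.APPROVED

def post_to_ledger(expense: Expense) -> None:
    # Stub: the real version would call the general ledger's API here.
    assert expense.status is Status.APPROVED, "only approved expenses are posted"
    print(f"GL entry: {expense.client_id} {expense.incurred_on} ${expense.amount:.2f}")
    expense.status = Status.REIMBURSED

expense = Expense("client-042", date(2024, 5, 1), 83.50, "Taxi to appointment")
approve(expense)
post_to_ledger(expense)
```

That’s the whole data model: a date, an amount, and some text, plus a status to drive the three steps. Everything else is plumbing into the ledger.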
Using license-free, open-source software hosted on modern cloud services minimises the work and ongoing costs. Nine times out of ten, building solutions like these is much easier than achieving the same result with niche software. Of course, this doesn’t apply to core systems like client management, HR, payroll, and the general ledger. These systems are typically already in place and stable when considering satellite user experiences.
My rule of thumb is that buying software is better than building when it comes to these generic core systems. In all other cases, building lightweight, joined-up experiences (aka digital solutions) is my default, and I only revert to buying software if the scope turns out to be unusually large.
(1 of 3) Cheaper: Accelerating Customisations Off-Platform
Building new functionality around SaaS software is cheaper due to lower resource and licensing costs.
On-platform builds increase usage and licensing fees, inflating costs unnecessarily.
Cost of Implementation & Operation: On-Platform vs. Off-Platform
Let me start by saying that I’m a massive fan of products like Sitecore, Salesforce, and Microsoft Dynamics 365 when they do almost everything we need “out of the box.” And what I mean by “out of the box” is that we can turn it on and start using it with perhaps a bit of training and a week or so of configuration (as opposed to coding) to tweak our workflows. I’m not talking about a 6-month project to “configure” entirely new workflows from scratch, effectively turning the SaaS product into a low-code platform or, worse, a coding tool.
Sitecore, like WordPress and Adobe, offers “content management systems” (CMSs). These were designed to run websites and intranets, not as development tools for building secure digital solutions.
Salesforce is, at its core, a customer relationship management (CRM) system. It was designed to make sales and marketing teams more efficient and transparent, not to serve as a platform for building new digital solutions in other parts of the business.
The same applies to Dynamics 365, which started as a CRM like Salesforce and later incorporated a separate accounting product through acquisition. Again, neither was created to build entirely new modules for other areas of the business.
And yet, it’s become standard practice to build whole new modules and features on top of the SaaS software we’ve implemented. This is called "on-platform build." While this approach benefits the vendor by driving wider use (and therefore increasing licensing costs), it’s the worst place to build and operate the unique technology components needed within the organisation.
To understand why on-platform builds are such a poor choice, let’s look at it purely from a cost perspective. The next two articles will address speed and quality.
If you ask any group of businesspeople whether it’s cheaper to build new functionality on or off their shiny Sitecore, Salesforce, or Dynamics system, almost everyone will say “on-platform.” This is what we’ve been conditioned to believe by every software vendor since the 70s. And, in the 70s, it was true. But the economics have flipped from the last millennium to this one.
Today, we have access to license-free, cheap, and ubiquitous technologies upon which we can build new modules. These options didn’t exist in the 60s, 70s, or 80s. Additionally, if we are careful about the SaaS software we buy, we now have better (and cheaper) connection points through which our customisations can cooperate with the SaaS software. These are called APIs, and they should be a core requirement when selecting a SaaS product.
Why Is Building Around SaaS Software Cheaper?
Firstly, the resources needed to build customisations are significantly cheaper when using free and widely available technologies compared to SaaS specialists. SaaS specialists require numerous certifications and, therefore, command higher daily rates and salaries. This cost is compounded because, in most cases, you need multiple specialists for the same SaaS software to make any customisations useful. In contrast, with free and ubiquitous technologies, you can often find one person who can get the job done. The high barriers to entry mean there are fewer proprietary specialists, so supply is low and rates are high.
From my experience, the human cost of building customisations on-platform is 2 to 3 times higher than building equivalent customisations off-platform.
Secondly, the cost of operating the software is much lower if we use more open, licence-free products. This lower cost partly comes from reduced human resource expenses, following the same logic as during the build phase. But the bigger impact comes from the licensing costs incurred when building on-platform.
On-platform customisations typically result in a need to extend the software to more users within the organisation. For example, where the initial team may have been using 20 licences, on-platform customisations often lead to wider adoption, requiring 100 people to use the solution. This drives up licensing costs significantly, and these costs recur year after year.
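The arithmetic is worth doing explicitly. The per-seat price below is a made-up illustration, but the shape of the result holds for any per-seat SaaS pricing:

```python
# Illustrative figures only: what happens to licensing when an on-platform
# build pulls the wider organisation onto per-seat SaaS pricing.
seat_cost_per_year = 1_800   # assumed per-user annual licence
initial_seats = 20           # the team the product was originally bought for
seats_after_build = 100      # users added by the on-platform customisation

before = seat_cost_per_year * initial_seats
after = seat_cost_per_year * seats_after_build
print(f"Before the build: ${before:,}/year")
print(f"After the build:  ${after:,}/year")
print(f"Recurring uplift: ${after - before:,}/year, every year the solution lives")
```

With these example numbers, the customisation quietly adds $144,000 a year in licences before a single line of build cost is counted.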
For some reason I haven’t been able to work out, these costs are rarely considered when choosing the best technology for customisations. It seems we’ve been conditioned by our vendors not to look too closely; we’ve been taught, and therefore assume, that it’s better to build on-platform.
(2 of 3) Faster: Accelerating Customisations Off-Platform
On-platform customisations drag down project speed due to specialised resource needs and platform constraints.
SaaS platforms lack local build support, adding time-consuming hassle to the most common task in a developer’s day.
Off-platform solutions can often be delivered in a fraction of the time and cost.
Speed of Implementation & Operation: On-Platform vs. Off-Platform
As discussed in my last article, it’s standard practice to build customisations on top of the SaaS software we’ve implemented. This approach is known as "on-platform customisation". While beneficial for vendors, driving higher software usage (and licensing costs), it’s often the worst place to build and operate the unique technology our organisations need.
When considering implementation speed, it might seem logical that building ("configuring") new modules on platforms like Sitecore, Salesforce or Dynamics would be quicker than starting from scratch with open-source tools, right? Actually, no. For decades, we’ve been led to believe this, but the world has changed, upending the economic and technical assumptions that held true in the 60s and 70s.
Building customisations by configuring existing software is fundamentally different from starting fresh with powerful open-source tools. With a pre-existing SaaS product, there’s a much longer lead-in time for two key reasons:
Specialised Resource Requirements : First, we need resources who understand the software and know how to leverage its modules for customisation. Sourcing and scheduling such experts can take weeks or months, while finding open-market resources skilled in open-source tech is simpler and quicker.
Solution Complexity and Constraints : Second, solution design isn’t straightforward. We can’t just build from a user’s perspective; every decision must factor in how to assemble the solution using the platform’s existing tools. This drags down every conversation, making even small projects feel unnecessarily complex.
Challenges with Major SaaS Platforms
This problem isn’t limited to large SaaS products. It also affects low-code tools like the Power Platform. For example, discussions often start with decisions about using Canvas apps, Power Pages or Model-Driven apps. Most solutions need a mix of these, which immediately adds unnecessary complexity.
The same issues arise with platforms like Sitecore or WordPress when used for building secure digital experiences instead of simple content-focused websites. We end up working around proprietary knobs and levers, inheriting a massive amount of complexity for what should be relatively simple tasks.
Development Inefficiencies: Local Build and Deployment
Another reason on-platform development is slower is the lack of best-practice support for customisations in most SaaS software. The biggest bottleneck is the local build process. Off-platform open-source tools optimise this because it impacts productivity every minute during development. The person building the solution needs to make hundreds of small changes and see them work locally before pushing to shared environments.
Local development is crucial because deploying changes to shared environments is time-consuming. For something that happens continuously, any delay in this process results in a major productivity hit. In my project rescue days, ensuring solutions could run locally on every developer’s machine was a key focus for boosting productivity.
Why SaaS Platforms Fall Short
Few major SaaS providers allow local running of solutions, meaning even minor changes must be deployed to a common environment. These shared environments aren’t always easy to set up and require time-consuming deployments, making even small updates a hassle.
Each of these factors alone can add significant time to a project. Combined, they create massive delays: closer to 90% additional time, not just 10%.
Real-World Impact: Timeline Differences
To put this into perspective, let’s consider a small solution to automate the process of getting information from a client management system into billing. This would typically need a basic user interface for ongoing data entry or configuration. For an off-platform solution, I’d expect the following timeline:
Initial Solution Development : A couple of days
Parallel Testing with Manual Process : Within two weeks
Full Project Completion : Four weeks in total
That’s one month from identifying the problem to having the solution fully in place.
Contrast this with the SaaS-based approach, where I’ve seen many confident assurances from teams claiming to match this timeline. In practice, it never takes less than three months, and it often drags out to six or nine months due to false starts and complexities. There’s always a reason for the delays, but at the end of the day, it just takes longer in practice.
Final Thoughts
Ultimately, building custom solutions off-platform is not only cheaper but significantly faster. The flexibility of open-source tools, combined with the ability to work locally and iterate quickly, makes off-platform development a superior choice for many organisations. As the landscape evolves, understanding these dynamics will be key to ensuring technology investments are not just cost-effective, but also time-efficient.
(3 of 3) Better: Accelerating Customisations Off-Platform
Decoupling code from vendor systems ensures compatibility with future updates.
Your IP remains portable and independent, reducing long-term costs.
Custom builds outside vendor platforms last longer and offer better UX.
Quality of Solution: On-Platform vs. Off-Platform
We’ve discussed why it’s faster and cheaper to build new functionality around vendor software (rather than using it directly for those builds) – now let’s explore why it also results in a higher-quality solution.
This discussion looks at the difference between using a vendor’s proprietary tools to modify and extend their software versus relying on free and widely available technologies with cost-effective, licence-free hosting. [Reference to previous article on this topic]
To build around a vendor’s product, it needs to have strong APIs. These “application programming interfaces” provide easy hooks into and out of the software, making it possible to build customisations around the software instead of inside it (see “Make APIs the #1 Requirement for Any Software You Buy”).
Under this banner of benefits, perhaps the most important is avoiding “upgrade lockout”.
Understanding Upgrade Lockout
To understand upgrade lockout, we need to distinguish between configuration and customisation of vendor software.
Most large software vendors offer great flexibility in how information is displayed and used to fit the specific practices of any organisation. These aspects are configured by users during the initial setup, without needing IT intervention. A simple example might be renaming labels to match internal jargon instead of using Americanisms.
When vendors sell the flexibility of their software, they are selling this type of configuration.
Customisations, on the other hand, involve technical changes (code) that create a tight dependency between the specific version of the vendor’s software and the added internal customisations.
For instance, building a new secure digital experience in Sitecore or WordPress will inevitably require writing code.
Unfortunately, once a SaaS solution is initially rolled out to users, it’s in the vendor’s interest to encourage on-platform builds. These new builds introduce new functionality for more users, leading to increased licence fees—hence the push for on-platform builds.
One or two years down the track, the vendor might release a new major version, incorporating big changes to their internal code.
At this point, configuration changes are usually safe and don’t disrupt the upgrade process.
Code customisations, however, often make automatic upgrades impossible. This is what we mean by upgrade lockout: the inability to automatically benefit from new versions because our customisations are deeply entangled with the vendor’s code.
This leaves us with two options: accept that we’re stuck and never upgrade, or pay hundreds of thousands (often millions) in consultant fees to reapply the customisations to the new version. Ironically, these are often built the same way, leading to another dead end when the next version is released. This scenario still happens every day.
When customisations are built outside the system, using APIs, the upgrade path is protected—similar to configuration changes. Because our customisations aren’t tightly coupled to the vendor’s code, they remain stable regardless of changes to the underlying system. This is one of the core principles behind offering APIs: they stay stable even if the internal workings change.
For some reason, upgrade lockout is both very common and yet rarely considered when tech teams decide where to build their new modules. My best guess is that the people deciding where these customisations are built aren’t the ones responsible for upgrading the vendor’s software down the track, so they go along with what the vendor sells them.
Vendor Lock-In
The second major reason on-platform builds are a bad idea is that a company’s intellectual property (customisations) becomes tied to the vendor’s proprietary technology, even though the vendor isn’t supporting it—because it’s the company’s work, not the vendor’s.
In the future, if a company wants to move off the vendor’s software, all their customisations are still completely dependent on that software.
This is called vendor lock-in because the company must keep paying for licences of the old vendor software even if they’ve moved to new software. So they’d be paying for two SaaS products, despite using only one.
Since this setup doesn’t make financial sense, companies often stick with the old software, even if it no longer meets their needs. They are locked into a commercial relationship with the vendor because of technical reliance on their software, even though they’re no longer using the core product.
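One way to see the difference is in code. The sketch below (hypothetical names throughout) keeps the organisation’s IP behind a small interface of its own, so the vendor’s API is touched in exactly one place:

```python
from typing import Protocol

class CrmPort(Protocol):
    """The narrow slice of CRM behaviour our customisations actually need."""
    def get_client(self, client_id: str) -> dict: ...
    def add_note(self, client_id: str, note: str) -> None: ...

class VendorACrm:
    """Adapter for vendor A's public API (endpoints are illustrative)."""
    def get_client(self, client_id: str) -> dict:
        # e.g. GET https://api.vendor-a.example/v2/clients/{client_id}
        return {"id": client_id, "name": "Jane Doe"}

    def add_note(self, client_id: str, note: str) -> None:
        # e.g. POST to vendor A's case-note endpoint
        print(f"note for {client_id}: {note}")

def record_visit(crm: CrmPort, client_id: str) -> None:
    # Our IP depends only on CrmPort. Moving vendors means writing one new
    # adapter, not rebuilding every customisation.
    client = crm.get_client(client_id)
    crm.add_note(client_id, f"Visited {client['name']} today")

record_visit(VendorACrm(), "client-042")
```

The customisation itself never imports anything vendor-specific; the adapter is the only thing that would be rewritten if the organisation changed products.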
User-Centric Design
The third benefit of building customisations off-platform is that we can design them in a user-centric way, rather than being constrained by the limitations of the vendor’s software.
This freedom leads to a better user experience—optimised for fewer clicks and greater convenience, without being restricted by the platform’s built-in functionality.
There’s also a second dimension to this user experience benefit: we can create seamless experiences across multiple vendor solutions when using independent software. By definition, we can’t build a unified experience across three platforms if we’re using just one of them as the build tool. This results in missed digital opportunities and often forces staff to rely on swivel chair integration—where people switch between systems and re-enter the same data multiple times (see “The Cost of Swivel-Chair Integration”).
Solutions that deliver a unified user experience across systems are the ones that provide the biggest returns in terms of efficiency and morale—and these aren’t possible when customisations are built on-platform.
Longevity
The final benefit of using open-source, widely adopted tools for building customisations is longevity.
Before Salesforce, the market leader was Siebel. Good luck finding a Siebel developer today. They exist, but they’re generally paid thousands of dollars a day to jetset around the world and fix customisations from the 90s that are now falling apart.
Open-source technologies have a longer shelf life because they’re used by far more organisations than any single vendor’s software. Back in 1997, Java was the go-to technology for “off-platform” customisations. There are still millions of Java developers out there, even though it’s no longer the trendiest language.
In a nutshell, building customisations around vendor software, not within it, is smarter. It’s not just faster and cheaper—it’s better. We avoid upgrade nightmares, sidestep vendor lock-in, create user-friendly experiences, and ensure our work stands the test of time. By keeping our custom code separate and using widely-adopted tech, we’re enabling sustainability over the long term. Remember this the next time a vendor pushes their proprietary tools—your future self (and budget) will thank you.
Part 4: People and Change
7. User-Centric Transformation
Implementing SaaS is the Opposite of Human-Centred Design
SaaS implementation is fundamentally product-focused, not user-focused
True user-first design starts with understanding user problems, not product requirements
We should separate SaaS implementation from user-centric digital solutions
This separation leads to faster implementation, lower costs, and better user experiences
I think of the world of our internal technologies in two buckets. In one bucket, we have core systems, including the SaaS products we implement, like HR, payroll, accounting, client management, and so on. In the second bucket are the automations we build between these systems and the joined-up user experiences we create around them. I call this second bucket "digital."
I find it confusing when my IT colleagues use the phrases "human-centred design" and "user-first" when they talk about package selection and implementation processes around SaaS packages. After all, if we are focused on selecting a software product and implementing it, our focus is on the product, not on the users. There is a difference between consulting users and following a user-first process.
The normal process of capturing requirements by talking to users is just consultation. Listing out the top ten must-have capabilities of a product starts by asking, "What do you need in the product?" Even though you are asking the users, the question is product-focused. A user-first process starts by sitting with users and understanding their problems in line with their current work practices. In this scenario, it’s not up to the users to come up with the solution; it’s up to the project team. This is genuinely user-first because we start by understanding their problems, not assuming they can translate those problems into requirements.
The only way to validate a potential solution is to put it in front of the users. There is no point in trying to achieve this with a document because humans react differently to a document than they do to a computer system. Spending time mapping a user-centric solution to existing systems or those we are considering in package selection gets in the way of the process. The same is true if we try to solve users’ problems with products like low-code. It’s still product-first. We spend more time thinking about the technology, which gets in the way of building the simplest thing.
In the long term, some of the digital solutions we build might end up integrating into back-end systems, but inserting those integrations into the design process slows us down and, in my opinion, means the process can no longer be considered cleanly user-centric. I always separate these two into sequential phases. If there is a big capability hole in the organisation because there is no client management system, then we need to do package selection in the traditional way (well, there are some efficiency tips we can offer here). My goal in these situations is to get the package in, in its out-of-the-box form, as quickly as possible and train up the people who are going to use it. This means accepting some workarounds initially. My goal here is zero customisations, for reasons I’ve explained in “Vendor Visits and Earplugs”.
Getting the package installed quickly helps us avoid the customisation trap. Once the package has settled in, I always follow up with a user-centred approach to remove those workarounds if they affect a large number of users. This process can be (must be) user-centric because it is about creating a joined-up experience, and it’s the users who are having the experience, not the system. This joined-up experience is built with low-cost (open source, cloud V2, unlicensed) technologies that make use of the SaaS product’s APIs, hence the importance of having rock-solid APIs.
Even when we are building these thin digital solutions around our core systems, I would first build the solution standalone, and only after this is working would I look for opportunities to hook parts of the experience into the various packages we have as core systems. This is at the heart of the Flintstones approach I use everywhere. Put another way: package selection is systems-first (buy, don’t build). Everything else should be user-first (build, don’t buy). That’s true digital.
Delineating between these two things saves a huge amount of money in terms of package implementation and solution build because we don’t mix up the two modes of project (systems-first, user-first). Our ongoing maintenance costs are also much lower because we’ve avoided the package customisation trap, and all of our organisation-specific intellectual property is kept away from SaaS, where it is super expensive to operate due to licensing. Whenever these two modes are combined into one initiative, they end up being a candidate for project rescue, and someone gives me a call.
User Experience: The Missing Link in Scheduling
Scheduling inefficiencies in care organisations waste thousands of hours and necessitate excessive support staff due to poorly designed software that prioritises algorithms over user experience. By improving user interfaces and addressing practical needs, we can drastically reduce waste and enhance resource utilisation without duplicating existing systems.
If you ask an executive in a care organisation about their biggest operational challenge, the answer is usually scheduling. Based on my assessments during consulting engagements, software vendors are approaching the problem from a scientific or algorithmic perspective rather than focusing on user experience. This backwards approach has resulted in software that doesn’t address the practical realities of day-to-day scheduling and fails to leverage human input. Ironically, this failure requires more schedulers and customer support staff than necessary.
In a small operation with 100 workers, this inefficiency can waste hundreds or even over a thousand hours of worker time per month due to underutilised staff. Essentially, we’re wasting thousands of client care hours each year despite the hard-won funding. I’ve also noticed that organisations of this size need more schedulers to overcome the system’s shortcomings. Additionally, some organisations have extra call centre staff to handle exceptions, many of which are avoidable.
Complexity theory distinguishes between a complicated system and a complex one. An automatic car is complicated but relatively straightforward to operate. Each part behaves predictably, and the operation of most parts doesn’t affect others. A gearless bicycle, while uncomplicated, illustrates complexity because steering affects balance, which affects direction, and all are influenced by speed. Changing any one of these controls has complex interactions with the overall operation.
Complex systems are hard to optimise mathematically, but complicated systems are not, especially with well-tested algorithms like those taught in university operational maths. While advanced scheduling problems can theoretically be both complex and complicated, I argue that in care, the problem is neither if we consider practicalities. In my experience, vendors have built their scheduling solutions to cover theoretical edge cases, resulting in software that doesn’t work well for routine, day-to-day realities.
First, let’s address the planning challenge; then we can deal with the exceptions that cause much of the waste and work in daily operations. In theory, if I draw a circle around a client’s address, there are several workers available on a given day to schedule services. We can then look at the available times for each worker and cycle through every permutation of matching workers and clients for a particular time slot, eventually optimising resource use.
In reality, it’s not practical to have a different worker visiting a client every day or a different timeslot for each visit. This constraint significantly simplifies the problem for a scheduling algorithm. In an organisation with 100 to 150 workers delivering services under several funding schemes, I believe schedulers could handle most of the work without an algorithm if they had a better user interface.
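As a rough illustration of how simple the routine case is, here’s a greedy matcher in Python that honours positive preferences first and fills remaining slots with whoever is free. All names and slots are invented; a real solution would read these from the scheduling system’s API.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    free_slots: set[str]      # e.g. {"Mon 09:00", "Mon 10:00"}

@dataclass
class Visit:
    client: str
    slot: str
    preferred: list[str]      # workers this client gets on well with

def assign(visits: list[Visit], workers: list[Worker]) -> dict[str, str]:
    """Greedy pass: preferred pairings first, then anyone free in the slot."""
    roster: dict[str, str] = {}
    for visit in visits:
        candidates = sorted(workers, key=lambda w: w.name not in visit.preferred)
        for worker in candidates:
            if visit.slot in worker.free_slots:
                worker.free_slots.remove(visit.slot)
                roster[f"{visit.client} @ {visit.slot}"] = worker.name
                break
    return roster

workers = [Worker("Asha", {"Mon 09:00", "Mon 10:00"}), Worker("Ben", {"Mon 09:00"})]
visits = [Visit("Client 1", "Mon 09:00", ["Ben"]), Visit("Client 2", "Mon 09:00", ["Asha"])]
print(assign(visits, workers))  # {'Client 1 @ Mon 09:00': 'Ben', 'Client 2 @ Mon 09:00': 'Asha'}
```

This is deliberately naive; the point is that once regular appointments are stable, the day-to-day problem is closer to this than to a combinatorial optimisation exercise.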
A team leader recently demonstrated an example of this. The graphical view of worker schedules did not distinguish between ad hoc appointments and regular scheduled appointments. The team leader wanted to isolate regular appointments to create a consistent schedule and then manage the short-term exceptions separately. If she had this information presented clearly, she could have manually allocated regular slots for up to 50 people quite easily. Instead, she had to print an A3 page for each worker, showing all their appointments, and manually sift through them for patterns, a task that took her an entire weekend.
Another example of planning shortcomings is in the visibility and configuration of worker-to-client preferences. Most applications allow for recording positive and negative preferences. A positive preference means a client gets on well with a particular worker, while a negative preference indicates a client or worker prefers not to be paired, for various reasons. Although these preferences can reduce system efficiency, they are a reality. Handling them as an afterthought adds significant work to the scheduling process.
In reality, most systems bury these settings so deeply that accessing them regularly is difficult. This information is often not included when displaying work patterns to team leaders and schedulers during the planning process. Consequently, after initial schedules are generated, team leaders and schedulers typically have to manually reassign tasks to account for practicalities.
These problems are exacerbated when support personnel deal with exceptions on the day. The most common issue is redistributing client work triggered by a worker’s short-notice absence, typically due to sickness. In theory, software offers the ability to redistribute this work with a single click. However, due to the limitations we’ve described, support staff often have to manually navigate through each remaining worker’s schedule for each appointment in question to determine the best matches and then implement the changes. I’ve seen instances where redistributing the work of one person for one day required over 100 clicks across several screens, many of which were visited multiple times.
In the longer term, the best solution is for vendors to improve their user interfaces, working forward from user needs into the product’s features rather than the current backwards approach. In the meantime, we can make dramatic improvements to utilisation and reduce the number of scheduling support staff required by building a better experience around the existing vendor platform.
While building this type of solution might seem daunting, it is quite straightforward because we’re not duplicating the functionality of the underlying system. We’re just providing a simplified interface to it. As discussed previously, this assumes you’ve purchased software with strong APIs. This is the type of project we could easily pilot in less than a month. It’s feasible that the pilot could be gradually adopted without significant additional work and deliver major savings. To be conservative, I’d allow another month for enhancements and feedback.
In an organisation of, say, 100 workers, this approach could pay for itself in less than six months. Taking control in this way typically accelerates conversations with vendors. In my corporate life, we had great success with this approach for a customer onboarding problem that had been discussed with the vendor for years. Within a year of building our one-month solution, the vendor used our system as a specification and built similar features into the base product. While this might seem like duplication, it’s highly productive because it brings forward the business benefits of removing waste from the organisation.
It’s a shame that this back-to-front approach to building off-the-shelf products is so prevalent, but if we accept this reality and take control of our own destiny, the benefits are great. This approach allows us to deliver more client care with the funding we’ve already secured.
Unleashing Team Motivation: The Power of Purpose and Autonomy
Clear purpose ignites motivation; micromanagement kills it.
Teams excel when they control the "how" and chase a meaningful "why".
In my early days as a project manager, I thought my main job was to take the requirements and technical architecture documents created before the project began, break them into tasks, and assign those tasks to my team. Tracking progress and reporting to the sponsor seemed like a natural part of the process.
Only later did I realise that by taking on these responsibilities myself, I left no room for the team’s creativity, and as a result, they weren’t very motivated. They were as motivated as other teams around me, so it took a while to see there was another way.
The shift in my thinking happened over a decade of reading, experimentation and coincidences. A pivotal moment came when I watched Simon Sinek’s TED Talk, "How Great Leaders Inspire Action". His core message resonated with me: "If we don’t know why we’re doing something, our project will fail".
This idea is perfectly captured in Lewis Carroll’s Alice’s Adventures in Wonderland , when the Cheshire Cat says:
"If you don’t know where you are going, any road will get you there."
I’ve previously written about the importance of governing towards an outcome rather than scope or plans. This approach means articulating the "why" of every project so that every team member is aligned and moving in the same direction.
The other side of this is that autonomy fuels motivation. To give a team autonomy, a clear purpose must first be established. The only way to avoid micromanaging a team is to give them a goal, not a list of tasks.
Setting a scope-based goal doesn’t work because it doesn’t address the "why". In project management, the "why" should align with the business case’s benefits, not just the cost side of things.
When a team fully understands the "why" behind their work, they become highly motivated, because they’re chasing a real-world goal, not just completing a feature, phase, document or bit of infrastructure.
However, being clear about the goal and still micromanaging the team can be just as demotivating. When we dictate the steps, we block their creativity and enthusiasm. Often, the team will disagree with how we’ve broken down the tasks, and most of the time, they’re right.
So, autonomy is just as important as clear goals when building a high-performance team. An unmotivated team simply can’t perform at the same level as a motivated one.
We can set guardrails if needed, but these should be minimal. Teams should have the freedom to choose their technologies and the scope of what they build. All we need to do is hold them accountable for the outcome, not the path they take to get there.
I’ve worked this way for two decades, and the teams I’ve been part of remain largely the same, even though we’ve moved through various projects and roles over time. We keep coming back together because we know it’s going to be fun and highly motivating.
It’s the holy grail of teamwork, but it’s much simpler to achieve than you might think.
8. Managing Stakeholders
A Client Management Example of Building an Interim Solution
The new client management system is a disaster, failing to meet basic requirements and creating more work than the spreadsheet solution it was meant to replace. Instead of clinging to outdated assumptions about the cost of building interim solutions, we need to invest in a practical fix now to boost efficiency, reduce cyber risk, and save money in the long run.
I recently advised a service provider on how to address issues with their client management processes. My recommendation was to build an interim solution to replace 13 spreadsheets currently used for tracking clients and case notes. There’s a recently implemented client management system in place, which raises the question, "Why don’t we just migrate these teams onto the client management system?"
Unfortunately, there are significant issues with the new system. It’s unclear if it will remain in use, partly because it doesn’t meet some basic requirements. While the software can capture case notes, it requires separate notes for each service. However, in-home care workers often provide multiple services in one visit and prefer to record a single case note.
Moreover, the system doesn’t accommodate the typical service provider’s information capture needs. Multiple teams deliver services under various funding programs and contracts, and the system’s complexity hampers efficiency for workers and team leaders. For example, making notes at the end of a client session should take a few clicks, but here it involves over 100.
We’ve explored multiple workarounds, but these add extra burdens and reduce organisational efficiency. For instance, we could record services and compliance report flags in an unstructured notes field, since the product can’t capture configurable fields for each program. This approach would create a massive amount of downstream work every month to clean data, reconcile, and work around the system’s inadequacies, likely requiring more effort than the current spreadsheet solution.
There is an ongoing activity to reassess this product’s basic fit. In this context, the next obvious question is, "Why don’t we just wait for a strategic solution?" This question hints at outdated assumptions from the last millennium that haven’t been valid for almost 30 years. The core assumption is: "It’s too expensive to build something and then throw it away."
In the 60s, 70s, and 80s, this belief was generally true, making it a reasonable motto to follow instead of constantly assessing cost-benefit. However, the IT industry has changed fundamentally since then, and this motto is no longer appropriate. It is now costing us a huge amount of money in wasted human and organisational capacity.
Firstly, the pain of an ill-fitting solution is often neither clearly considered nor quantified and documented. Because we cling to that outdated motto, we don’t even think about it. This leads to hidden costs that compound over time. These costs are very real as they reduce our capacity to deliver services, but we simply choose to ignore them. In this situation, there are two serious business costs to consider: risk and actual financial cost.
Client information for 13 programs and their respective teams is currently held in spreadsheets stored on a shared drive. This is common in the industry but poses significant risks. Spreadsheets with sensitive data are an easy target for anyone who breaches the building’s physical security or compromises an employee’s credentials via phishing. Many of these spreadsheets also contain medical case notes, making the situation incredibly fragile.
There have been several breaches in the charity sector in recent years, and such organisations are increasingly seen as "soft targets" due to practices like this. Small charities often lack the same controls around bank account access and invoice payment that are common in larger organisations. Consequently, with a couple of days of work, fraudsters, including hackers, can walk away with hundreds of thousands of dollars.
In some ways, the cost side of this business problem is more tangible. My most conservative estimate suggests we waste about two FTEs on the current spreadsheet solution. Various business processes, from first client contact through assessment, onboarding, and the first visit, involve multiple people working around spreadsheets and rekeying information. Most of these people are managers, not workers. Given the salary differential, we are effectively wasting 3 FTEs of client services every day. That’s a lot of care!
Every day working around a spreadsheet-based solution should be burning a hole in our psyche. But because we don’t spend enough time quantifying this, we pootle along, oblivious. We’ve established that there’s a significant cost associated with the status quo and, therefore, a substantial business benefit in fixing it now. The next question is, "But doesn’t it cost loads to build a solution?"
In reality, this question is rarely asked. It’s another assumption based on data from the 70s and 80s. When we ask this question in the context of using cloud V2 technologies, the answer is often surprising to the uninitiated. We are not proposing to build a core system here, just a simple user experience, essentially replacing a couple of spreadsheets. It requires just a few pages (client, case note) and somewhere to store some data. We’re talking about 2 to 4 weeks of time and between $20k and $40k if a small dev shop like mine does it for you. It would likely be even less if you have an internal developer.
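To underline how little we’re proposing to build, here’s essentially the entire data layer for such an interim solution, sketched with Python’s built-in SQLite. Table and column names are illustrative; the two pages sit on top of these two tables.

```python
import sqlite3

db = sqlite3.connect("interim.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS client (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    program TEXT NOT NULL              -- one of the 13 funding programs
);
CREATE TABLE IF NOT EXISTS case_note (
    id        INTEGER PRIMARY KEY,
    client_id INTEGER NOT NULL REFERENCES client(id),
    visited   TEXT NOT NULL,           -- visit date, ISO format
    note      TEXT NOT NULL            -- one note per visit, however many services
);
""")
db.execute("INSERT INTO client (name, program) VALUES (?, ?)", ("Jane Doe", "In-Home Care"))
db.commit()
```

Wrap a couple of secure web pages around this and the 13 spreadsheets are gone, along with the shared-drive exposure that makes them such a soft target.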
Even with the most conservative benefit estimate of $120k per year (through increased capacity by lowering admin waste), and using the higher of the two cost estimates, we’re looking at a project that pays for itself within four months (40/120 gives us one-third of a year). That’s a fast payback considering most programs these days have a two-year payback period. So, the final question is, "Isn’t it a waste of $40k if we end up throwing it away?"
Most likely, no, it is not a waste. Firstly, it’s unlikely that we’ll resolve the questions around the core system in less than four months. Even if we only consider the numbers, as long as it takes four months or more, it makes perfect sense to take control of our destiny and improve staff experience while dramatically lowering the cyber risk of having spreadsheets scattered across the organisation. Additionally, there’s a very real possibility that the system will be judged as not fit for purpose. In this event, the timelines would extend markedly, increasing the value of an interim solution.
Even if the product is retained and fully adopted, having an interim user experience that works for all programs and teams will be invaluable in shaping the requirements we share with the vendor. In this case, I would recommend using the vendor’s software as the system of record. This would allow us to replug the interim solution into the retained product, effectively repurposing most of our spend on the interim solution.
So, what I’m advocating here is to stop using the "buy, don’t build" mantra of the last millennium. It’s no longer applicable and distracts us from the real goals: increasing capacity and reducing cyber risk. Instead of relying on this outdated decision-making process, we should start every investigation with the question, "What is the justification for NOT fixing this problem right now?"
[New essay needed: “Building Board Support for Value-Based Governance”]
[New essay needed: “Managing Government Funders Under Value-Based Governance”]
Part 5: Future Directions
9. New Technologies and Approaches
The Cost of Custom Digital Solutions is Plummeting
Custom digital solutions are now over 90% cheaper to deliver than 20 years ago, with small teams and faster timelines.
New efficiencies make smaller projects feasible, but traditional assessment processes are now often more costly than the projects themselves.
Organisations need streamlined approval methods to truly benefit from the low cost and speed of modern project delivery.
In 2004, delivering a significant digital project required a budget of at least $2.5 million. A typical team consisted of 12 or more people, and projects would take 12 months to complete. A large portion of that budget went to infrastructure, as we had to set up our own data centres, supply computers, networks, and configure everything from scratch.
By 2010, with the advent of Cloud V1.0 technology, we could deliver the same scope of projects for around $1.5 million, typically within a six-month delivery phase. This reduction was possible because cloud vendors began handling much of the computing and networking infrastructure, freeing us from the need to order, supply, and install hardware ourselves.
By 2016, Cloud V2.0 technology had further reduced costs. Projects of similar scope could now be completed for approximately $450,000, with a team of six people in just four months. The need for infrastructure provisioning had been eliminated, dramatically reducing both resource requirements and the lead time associated with setup.
Over the past year, I’ve observed the start of yet another drop in costs, driven by a new generation of young developers harnessing AI tools to accelerate project delivery. Today, I’m starting to see projects like those above delivered by teams of just three or four people in two to three months, with budgets around $180,000.
It’s important to note that these figures haven’t been adjusted for inflation. In real terms, the cost difference over this 20-year period is even more pronounced: today’s costs are over 90% lower than they were two decades ago.
This trend is undeniably good news, but there are a couple of nuances worth noting.
Firstly, projects that would have been unfeasible 20 years ago due to high costs are now almost no-brainers. Today, we can achieve the same business benefit at a fraction of the cost, especially for smaller projects. In other words, building integrations and connected user experiences makes far more sense now than it did 20 years ago.
The second nuance is that to fully leverage this cost reduction, we need to rethink how we assess and prepare for these projects. In many cases, the overhead of assessing a project costs more than delivering it. This issue is particularly pronounced with smaller projects, where the cost and time of assessments, approvals, and partner management can outweigh the project delivery costs.
There’s little value in applying a traditional 12-month assessment, funding, and management process to a project that can now be delivered for $100,000 in a couple of months.
Personally, I’m excited about this trend, as it promises to lead to more niche software offerings in sectors like human services.
Will AI-Powered Developers Leapfrog Low Code?
Low code solves small problems; AI-assisted developers tackle the full picture.
Young, AI-trained developers deliver 90% solutions vs. low code’s 10%-20% effectiveness.
The future of business apps may favor expertise amplified by AI over simplicity-driven platforms.
Some organisations have indeed built hundreds, or even thousands, of citizen-developed applications. However, it’s still unclear if these apps have delivered a net positive impact. I’ve personally seen a few citizen-built apps that made life easier for small admin teams, outperforming spreadsheets for specific tasks. Even so, in the examples I’ve observed, these apps often left significant business challenges unresolved, particularly around system integration and user experience.
My current thesis is that a human-centred approach to solving business process bottlenecks requires some level of training. Interestingly, I don’t think this training is particularly technical; it’s actually independent of coding skills. However, it’s the kind of training that’s often included in coding qualifications, like a computer science degree. I believe a subset of these skills is essential for addressing process bottlenecks with technology, whether through low code or traditional coding methods.
Recently, I’ve been following the journey of five young people who became productive (paid) developers in under six months from their first introduction to code. Their rapid progress has been powered by the latest AI tools, combined with expert training and guidance from experienced developers.
I’ll write more about the journey these apprentices took another time, but the experience has sparked a question in my mind: which is better, low code or young AI-assisted developers? Based on what I’ve seen so far, I’m leaning strongly towards young AI-assisted developers. The solutions they build tend to address 90% of the business problem, compared to just 10% to 20% for low code, which often struggles with integration and user experience limitations.
Over the next 12 months, I’ll be watching closely to see whether the marketing machines of major low code vendors pick up speed or if these tools begin to fade into the background.
What’s your experience with low code platforms or AI-assisted developers? Have you seen either approach succeed (or fail) in your organisation?
[New essay needed: “The Future of Value-Based Governance”]
10. Security and Risk
Are Charities Immune to Cyberattack?
Charities are increasingly becoming prime targets for cyberattacks due to their perceived vulnerability, as evidenced by high-profile breaches like the one involving Pareto Phone. With methods ranging from insider recruitment to exploiting software vulnerabilities, hackers are taking advantage of charities’ weak defences to steal sensitive data and funds, highlighting the urgent need for stronger security measures and thorough vetting of third-party vendors.
When I first moved into the charity sector, I assumed it was barren ground from the perspective of hackers and fraudsters. Speaking with charity leaders and friends in the cyber industry has disabused me of this belief, though.
Certainly, there have been some quite public breaches in the past year. First among these is the Pareto Phone breach, which resulted in the exposure of sensitive data for around 50,000 donors across up to 70 organisations. The breach was orchestrated by the LockBit ransomware group. The specifics of how the breach occurred point to a few potential methods commonly used by LockBit.
Notably, LockBit is known to recruit insiders within organisations, promising significant financial rewards for providing access to networks through credentials like RDP, VPN, or corporate email. This method involves insiders either sharing login credentials or running malicious software provided by the attackers, allowing remote access to the network. Additionally, LockBit and similar ransomware groups often use phishing attacks and exploit vulnerabilities in software or network configurations to gain initial access. Once inside, they can move laterally across the network, exfiltrating data before deploying ransomware to encrypt the organisation’s files.
In the case of Pareto Phone, the breach included data from a single machine’s D: drive that held extensive legacy data, indicating potential lapses in data security and retention policies. This suggests that the attackers either exploited a specific vulnerability in the system or utilised insider information to access this particular machine. It makes sense that hackers and fraudsters target concentration points like this one. By breaching one organisation, they breached many.
How Breaches Actually Happen
Security breaches often start with deceptive emails/sites, not high-tech hacking.
Fraudsters exploit human error and unpatched vulnerabilities to gain access.
Awareness and education are the best defences against these attacks.
I asked my go-to cybersecurity expert today whether the popular view of a security breach, as depicted in the movies, is accurate, and I found the answer interesting enough to share. The popular view involves some ultra-genius hacker in a dark room with multiple monitors running arcane commands to overcome the technical defences of the target organisation.
Mark Belfanti, who has a history as Chief Information Security Officer (CISO) for large corporations and headed security for the NBN network, now leads the cybersecurity practice at ThunderLabs. Mark explained that the more common approach to initially gaining access to someone’s infrastructure involves some clever email writing and a simple website. All of this takes a couple of days to set up, max.
First, the fraudsters set up a website to look like one of the company’s internal systems—often the HR system or payroll. Once that’s done, they send emails to as many of the employees as they can find. The email is made to look like it is coming from someone in authority within the company. Commonly, the email will notify the recipient that HR requires them to log in and recheck their times to process payroll. But instead of taking them to the internal payroll system, the link in the email directs them to the new (copy) website, controlled by the fraudsters. Even the most vigilant employees can have a momentary lapse of concentration and click on the link. It only takes one.
When the employee thinks they are typing their username and password into an internal system, they are actually entering their credentials into a fraudulent website, which stores the details. This process is known as “phishing.” The term is a play on “fishing” because the cybercriminals are throwing out thousands of hooks, hoping for one of us (the “fish”) to take the bait.
As a side note, the “ph” comes from “phreaking” (a combination of “freaking” and “phone”), which is where hacking originated in the 70s and 80s when people exploited security holes in the telephone system to get free access to national and international calls. This naturally transitioned into computers when the digital age hit.
The fraudsters then use these details to sign in for themselves. Once they breach one system, they begin gaining more and more privileges and accessing additional internal systems. Although this work is closer to the popular view of “hacking,” the reality is they are just exploiting “known vulnerabilities” in various systems to see if any have been left unpatched.
This is where the importance of “patching” comes in. The fraudsters are not identifying these vulnerabilities themselves (this is the work of real hackers); all they are doing is checking to see if the target organisation has patched against them. Many of the vulnerabilities exploited today were identified and published over a decade ago—it’s just that someone forgot to patch them.
By exploiting these vulnerabilities, fraudsters can escalate their privileges beyond those of the employee whose credentials they’ve stolen. This process is called “privilege escalation.” In many cases, it’s relatively straightforward to gain access to shared drives and even back-end databases with sensitive data. Once they have access to this information, the fraudsters have several options, including selling the data, holding the organisation to ransom, or using it for further attacks—either on the same organisation or others.
The only practical defence against this type of attack is education. Once you’re aware of what to look for, the habit of checking the source of suspicious emails quickly forms. The best example I’ve seen of this education was at a previous client, where the internal team would regularly and randomly send out typical phishing emails to employees, followed by a lighthearted educational video for those who clicked on the links. Some of these emails were crafted more cleverly than the real phishing ones I see every day.
Digital - the First Step in Your Cyber & Data Journeys
Charities can’t tackle cyber risks or data ambitions without first securing their data through robust digital solutions. Start by fixing core systems to replace spreadsheets and shared drives, freeing up staff capacity and paving the way for broader digital, data, and cyber initiatives.
There is a tendency in the charity sector to treat data, digital, and cyber as separate initiatives. For organisations that have data and digital covered, this approach is appropriate. However, for most small and mid-tier charities, addressing cyber risks or data ambitions is nearly impossible without first ensuring data is securely stored in core systems, not in spreadsheets and shared drives.
This is a digital challenge because much of the data is exposed due to shortcomings in core systems or the lack of effective digital solutions around those systems. When consulting for these organisations, it is usually impossible to suggest addressing the top three cyber risks or establishing an impact reporting framework without first strengthening the systems and filling the digital gaps.
In practice, the first step towards digital is a triple whammy. Not only do we open up internal capacity by making staff more efficient, but we also fix the dependencies required to address our cyber and data ambitions. The good news is that these first steps are typically small initiatives aimed at removing spreadsheets, either by completing abandoned system implementations or addressing shortcomings in core systems by building thin, joined-up experiences around those systems. Each of these initiatives typically takes less than a month.
New Essays Needed:
- The Hidden Costs of Traditional Governance in Charities
- What is Value-Based Governance?
- Why Value-Based Governance Matters More in the Impact Sector
- Building Board Support for Value-Based Governance
- Managing Government Funders Under Value-Based Governance
- The Future of Value-Based Governance