12 April 2021

The Tech Industry Has A Futures Problem

For an industry so focussed on inventing the future, you'd imagine tech companies would spend more time thinking about the impact of their products. However, rather than imagining rich and nuanced futures, the tech industry seems hamstrung by incrementalism. Fortunately, there is a solution.

Start-up innovation often begins with a single big idea, before dividing it into user stories, feature teams and two-week sprints. Trying to visualise the future product is seen as "big design up front"—a form of heresy for many modern Agilistas. However, without a clear picture of what you're building, it's almost impossible to predict what problems may arise. Instead we're encouraged to learn about the future not by thinking, imagining or researching the outcomes, but through launching products and learning from the results.

Trying to visualise the future product is seen as "big design up front"—a form of heresy for many modern Agilistas.

This may sound good on paper. Especially if the thing you're trying to learn is how to bend your sign-up graph into that much-vaunted hockey-stick shape. However, barely any thought is given to the externalities you create along the way. How, for instance, might encouraging large-scale, frictionless and often anonymous communication contribute to disinformation and online abuse? Or how might encouraging a home-stay revolution affect house prices, displace key workers and contribute to a rise in anti-social behaviour?

How We Deliver Projects May Be Part Of The Problem

A lot of Silicon Valley insiders will claim that it's easy to understand these problems in retrospect, but almost impossible to predict what will happen in advance. While this is true to an extent—hindsight is a gift most product teams don't have—it's much easier to blame the complexity of the problem if you've put minimal effort into understanding it in the first place.

It's easy to see why this happens. If the drumbeat of progress forces the team ever forward, there’s never any time to understand the implications of your actions. Resources are often limited and the backlog gets ever bigger, so what's the point of thinking about stuff which may never happen, and almost certainly won't help us hit our targets, when we've got a raft of new features to ship?

This is arguably fine when you're a small start-up serving early adopters, and the potential harm you can do is relatively constrained. However, the product decisions you make in the early days get baked into the product, making them much harder to unpick as you grow. Now you're no longer just serving a small number of customers who mostly look and sound like you, but whole markets. Groups you once saw as rounding errors become entire constituencies—often justifiably vocal ones—and things start to get tricky.

The product decisions you make in the early days get baked into the product, making them much harder to unpick as you grow

One of the (many) reasons governmental bodies and public institutions are seen as being slow and ineffective is that they (understandably) focus on harm mitigation over speed and impact. There will be ethics boards. There will be internal governance processes. And there will be a strong cultural focus on following the letter of the law. As a result, these sorts of organisations hire people whose sole job it is to think deeply about the potential ramifications of their decisions.

The Rise of Ethical Debt

In a move fast and break things culture, a lot of this goes out of the window. Ethical considerations become another form of technical debt. We'll launch the product now, and deal with the accessibility issues, abuse vectors or algorithmic bias later. This approach undoubtedly allows rapid value creation, which in the world of tech is almost always seen to outweigh the small amount of harm you may cause along the way.

In a move fast and break things culture... ethical considerations become another form of technical debt.

Sadly, the "fixing it later" part rarely happens without some sort of public outrage, because solving these problems rarely impacts a team's OKRs, and gets in the way of a nicely curated roadmap. Instead you launch a new feature and largely forget about it, until some sort of public outcry happens and you divert resources to addressing it. One of the big problems with this approach is that it takes time for these outcries to happen; they're always a lagging rather than a leading indicator.

It's worth pointing out that there are always unintended consequences: the unknown unknowns that you probably could never have considered without launching the service. However, I suspect these are fewer and further between than the tech companies would have you believe. So the trick is to get ahead of these issues before they become public, or even better, before they create harm.

Ethicists to the Rescue?

There’s a small but growing drive for tech companies to hire ethicists, and I think this is laudable. This often positions ethics as as an education problem. That people are making unethical decisions due to a lack of knowledge. While this may happen, I'm not sure this is primarily an information problem.

It also positions ethics as a governance problem, which I think is closer to the truth. Ethicists inside tech companies will often work to create a set of standards and procedures for staff to follow. This has the added benefit of casting blame if something goes wrong: this person on this team failed to follow our governance policy, rather than anything more deep-rooted.

One of the challenges of hiring ethicists is that it's a lot of work and responsibility to put on a couple of people's shoulders. It also has the habit of outsourcing the problem, and can be seen as ethics washing at times. So while this is a good start, there's another capability missing from most digital teams — that of futurists.

The Growing Importance of Future Thinking

Now I know a lot of tech companies have futurists as part of their robotics and AI programme. This is largely because the things they want to build can't actually be built yet—or, if they can, there's no real demand—so most corporate futurism becomes a mix of intellectual R&D and marketing.

While this is all well and good, I'd like to see more futures thinking make its way out of the R&D labs and cutting-edge technology, and into the daily practice of design teams. Designers are great at thinking through future consequences, and have a whole range of tools at their disposal (the futures cone in my header image is just one of them). However, without the time, resources and governance procedures, these tools are rarely deployed in a systematic and meaningful way. Instead they rely on individual designers trying to sneak them into the process, who run the risk of being labelled anti-agile, anti-lean, or just plain difficult for diverting attention away from targets. Like most things in corporate life, it comes down to misaligned incentives.

Designers are great at thinking through future consequences, and have a whole range of tools at their disposal

As such, I'd like to see more design and product teams build out and normalise design futures capabilities. This would involve making certain futures thinking practices part of the typical design process. So in the same way that teams will refuse to launch a feature until it's gone through QA, I'd like to see teams refuse to ship a feature until they've spent some time thinking about how it could be used, misused or cause harm, and put some mitigation strategies in place. If you identify these issues as problems, and give them owners, they become much harder to ignore.
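To make the QA analogy concrete, here's a minimal sketch of what such a gate might look like if it were wired into a build pipeline. Everything here is an assumption for illustration: the harms-review.md file, its risk/owner/mitigation fields and the check_harms_review script are hypothetical conventions a team might adopt, not an established tool.

```python
#!/usr/bin/env python3
"""A minimal sketch of a pre-launch "futures gate", analogous to a QA
check. Assumes each feature ships with a hypothetical harms-review.md
file listing identified risks, each with a named owner and a
mitigation. The file name and fields are illustrative, not a standard."""

import sys
from pathlib import Path

REQUIRED_FIELDS = ("risk:", "owner:", "mitigation:")


def check_harms_review(feature_dir: str) -> bool:
    """Return True only if the feature has a completed harms review."""
    review = Path(feature_dir) / "harms-review.md"
    if not review.exists():
        print(f"FAIL: {review} missing - no harms review on file")
        return False
    text = review.read_text().lower()
    missing = [field for field in REQUIRED_FIELDS if field not in text]
    if missing:
        print(f"FAIL: {review} lacks fields: {', '.join(missing)}")
        return False
    print(f"OK: {review} names risks, owners and mitigations")
    return True


if __name__ == "__main__":
    # e.g. python check_harms_review.py features/new-sign-up-flow
    sys.exit(0 if check_harms_review(sys.argv[1]) else 1)
```

Wired into CI, a missing or half-finished review would block the release the same way a failing test does. The script itself is trivial; the point is making the review, and its named owners, part of the definition of done.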

As well as distributing futures skills amongst feature teams, I think it also makes sense to develop specialist capabilities: teams whose sole job is to imagine the next and future generations of products—and the problems they may cause. Some of the larger tech companies are already doing this, but I think it's going to become increasingly important to teams of all sizes. If you're in the business of inventing the future, having people dedicated to thinking about the future seems like an obvious choice.

If you're in the business of inventing the future, having people dedicated to thinking about the future seems like an obvious choice.

Footnote

While this trend may happen on its own, we as a society can give it a push by putting legislation in place that makes tech companies take some ownership of the decisions they make and the externalities they inadvertently create. We've done this with accessibility legislation and data privacy, and there are moves afoot in California to do something similar around the use of dark patterns. There's a long history of industries ignoring the externalities they create—like pollution—until legislation comes along. There's also a long history of sectors starting to self-govern—news and advertising to name just two—in order to avoid much stricter legislation. While flawed, Facebook's independent Oversight Board is an interesting start.