There’s a debate amongst economists as to whether new technology can restore economic growth, or whether the low-hanging fruit (a growing working-age population, spare arable land and the basic utilities and services of the early twentieth century) has already been eaten.
- Certainly the demographics are now against Western nations, and we aren’t starting from a position of economic strength.
Those economists who believe growth has slowed because of a lack of technological innovation generally believe that new technological progress will restore economic prosperity.
- This may or may not happen, but let’s start by looking at how technology progresses.
Technological change tends to follow an S-curve.
- After the initial invention, progress is slow, but gradually more people use the technology and there is a period of rapid innovation.
- When most people have adopted the technology, progress slows again and people wait for a new, disruptive (“game-changing”) technology to come along.
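The S-curve described above is usually modelled with the logistic function. As a minimal sketch (the midpoint and steepness values below are illustrative assumptions, not fitted to any real technology):

```python
import math

def adoption(t, midpoint=10.0, steepness=0.8):
    """Logistic S-curve: fraction of the population using a technology at time t.
    midpoint = the year of fastest uptake; steepness controls how rapid that
    middle phase is. Both values here are invented for illustration."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Slow start, rapid middle, slow saturation:
for t in (0, 5, 10, 15, 20):
    print(f"year {t:2d}: {adoption(t):.0%} adoption")
```

Adoption sits near zero at first, passes 50% at the midpoint, then flattens out near 100% – the “waiting for the next disruption” phase.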
Planes are a good example.
- The first flight was in 1903, but it was the First World War that really spurred progress.
- In the inter-war years rapid progress continued, so that by the Second World War, the limits of performance in propeller aircraft were reached.
- At the close of the war, an entirely new technology – jet engines – arrived to disrupt the existing status quo.
On this analysis, the reason that planes in 2016 are quite similar to the planes of the 1960s is that no new technology has arrived to disrupt and replace jets.
- It could be that this new technology is just around the corner, or it could be that the physical laws that govern powered flight mean that there isn’t another technology to come.
What makes information technology different is the multiplier effect it has on other technologies.
- The human genome is a good recent example – sequencing the genome would have been impossible until recently because of the sheer amount of data that needs to be processed.
Another area of massive change has been oil and gas exploration.
- All of the known reserves from 1980 have been consumed, but computers and 3-D seismic imaging have helped us discover so much more that there is still more left than we started with.
Simulations, models and computer-aided design are used everywhere today.
- IT makes R&D much easier in every area of technology, and so should make breakthroughs into new S-curves more common.
In fact, IT is now so embedded into the fabric of daily life that in some contexts it makes more sense to think of it as a utility, like electricity.
From a distance, it might also appear that IT is different because Moore’s Law has enabled it to stay in the “rapid development” phase for much longer than other technologies.
- In fact this is an illusion – Moore’s Law is in reality a staircase of overlapping S-curves, each reflecting a new way of preserving the relentless development of computing power and value.
For some reason – not least the enormous financial rewards for successful innovators, and hence the enormous R&D spend – IT has been able to produce a new disruptive technology to improve chips whenever one was needed.
Within a few years, physical limits will be reached on how small the transistors can be made.
- Even then, new materials or new architectures may extend the staircase for a while.
Parallel computing – where many processors, or many cheap networked computers, work simultaneously on a common problem – is a promising avenue.
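As a minimal sketch of the idea (the four-worker pool and the toy `slow_square` function are my own invented example, not something from the text): a problem is split into chunks, the chunks are handed to workers that run at the same time, and the partial results are combined.

```python
from multiprocessing import Pool

def slow_square(n):
    """Stand-in for an expensive computation on one chunk of a problem."""
    return n * n

if __name__ == "__main__":
    numbers = range(10)
    # Four worker processes each take a share of the chunks and run in parallel.
    with Pool(processes=4) as pool:
        results = pool.map(slow_square, numbers)
    print(sum(results))  # prints 285 - same answer, computed concurrently
```

The catch, of course, is that not every problem splits into independent chunks this neatly – which is why parallelism extends the staircase rather than guaranteeing it.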
New designs that produce massive leaps forward in performance are also conceivable.
- Many people point to the human brain as a concrete example where the elegance of the design overcomes the pedestrian nature – and slow speed – of the hardware.
But there is another side to IT – software.
In Rise of the Robots, Martin Ford quotes an analysis by Martin Grötschel of the Zuse Institute in Berlin.
- A production planning problem that would have taken 82 years to solve in 1988 could be solved in 2003 within one minute.
- This is an improvement by a factor of 43 million.
- Faster computer hardware accounted for a factor of around 1,000, which means that improved software algorithms accounted for a further 43,000-fold increase in speed.
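The split between the two factors is a single line of arithmetic: the overall speed-up is the product of the hardware and software gains.

```python
overall = 43_000_000   # total speed-up quoted by Grötschel, 82 years -> 1 minute
hardware = 1_000       # rough gain from faster processors over the period
software = overall / hardware
print(f"software algorithms accounted for a factor of {software:,.0f}")
# → software algorithms accounted for a factor of 43,000
```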
This is however by no means typical.
The software used most commonly by the most people – office “productivity” suites – has squandered the gains in hardware performance on ever more bloated “user-friendly” interfaces.
- If the software and hardware companies were more closely connected, the cynic in me would suspect them of deliberately driving faster hardware upgrade cycles (historically, few companies have provided both halves of the equation, though this is beginning to change).
Alongside this waste has been a parallel failure to automate routine tasks so that people can focus on where they best add value.
The real way in which IT is different is that it represents not the automation of humanity’s physical characteristics – strength and labour – but our cognitive abilities.
- IT has the appearance of – and is moving towards the substance of – intelligence.
Machines don’t yet think, but they do to a limited extent reason and make decisions.
The more sophisticated versions can solve simple problems.
I think it’s a mistake to see this as black and white – there’s a continuum of cognitive ability in machines, starting right down at a kettle’s ability to turn itself off when the water boils.
- This is in fact a famous debate in robotics – Google “Is a Toaster a Robot” – which we’ll look at in a future post.
Human economic progress comes largely from “the division of labour” – we each specialise in something we’re reasonably good at, and rely on others to do everything else for us.
- But the more we specialise, the more easily we can be replaced.
Computers / machines / robots are quickly getting better at routine and predictable work, and will soon be better at it than people are.
- That’s where things get interesting.
The end game of the division of labour is comparative advantage.
- Each person or country should do the work that they do best (or, as economists would have it, the work at which they are least bad).
So here in the UK we might make TV programmes or jet engines, while in China they might make sinks and toilets, and assemble electronics (in fact China is already too expensive for a lot of this work, which has moved to places like Vietnam – but bear with me for the purposes of the analogy).
Or at the individual level, I might become a brain surgeon, but I would need someone else to fix my car and mow my lawn, or perhaps cook my dinner.
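The logic of comparative advantage is easy to work through with numbers (the hours below are invented purely for illustration, not real trade data): each side should specialise where its opportunity cost – what it gives up to make one more unit – is lowest.

```python
# Hours each producer needs per unit of output (invented numbers for illustration).
hours = {
    "UK":    {"jet engine": 100, "sink": 10},
    "China": {"jet engine": 300, "sink": 15},
}

def opportunity_cost(country, good, other):
    """How many units of `other` are forgone to make one unit of `good`."""
    return hours[country][good] / hours[country][other]

uk_cost = opportunity_cost("UK", "jet engine", "sink")        # 100/10 = 10 sinks
china_cost = opportunity_cost("China", "jet engine", "sink")  # 300/15 = 20 sinks

# The UK gives up fewer sinks per jet engine, so it specialises in jet engines,
# even though in this toy example it is absolutely faster at *both* goods.
print("UK specialises in jet engines:", uk_cost < china_cost)
```

The striking part is the last comment: even a country (or person) that is better at everything still gains by specialising and trading – which is exactly the logic that machines substituting for people starts to undermine.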
This doesn’t hold true any more when I can substitute a machine for any task I don’t have time to do myself.
- And things get even worse when I can replicate the relevant parts of my personality – my preferences, my “style” if you will – in software, to guide the machine into doing things just the way I like.
In a limited sense, robots and AI will allow people to be in two places at once, doing two things they want done, and in the way they want them done.
And that’s just the selfish angle.
- Let’s look at it from the point of view of society.
Elite institutions – hospitals, universities, law firms, orchestras – employ elite individuals.
- The same principle applies in factories and offices, but we’re less used to thinking of elite workers in those contexts.
These are the people whose services everybody would love to have access to.
- But not everyone can have access – there isn’t enough of the elite person to go around.
What happens when the elite can train an army of machines to copy them and their elite ways?
- Who will want to use a non-elite surgeon once scarcity (and hence cost) has been removed?
- Who will want a lecture from a non-elite professor?
- Or a meal from a non-elite chef?
We’re some way from this future, but that’s the direction we’re headed.
Until next time.