It’s 2016, extreme poverty is on the verge of disappearing from the earth, private companies are planning manned flights to Mars, you’re probably reading this on a portable device that’s millions of times more powerful than the computers used for the first lunar mission, and there are two ideas I keep hearing over and over again:

  1. Robots are taking our jobs; and
  2. We’re living in an age of economic stagnation.

What’s going on here? It sounds like two opposite versions of old sci-fi dystopias both coming true. Why are both of these memes making such a strong impression? Which one, if either, is true? And what does this have to do with the convoluted theory of finance I’ve been detailing over the last several posts? All that and more, but first a little background:

A Brief History of the Human Race

Strange as it may seem to us now, planning for the future is a relatively recent phenomenon. Prior to the advent of farming roughly ten thousand years ago, most of the humans that ever existed had little need to make plans out beyond the next day or so. Our hunter-gatherer ancestors, just like all the animals that had ever existed, ate what they found or killed that day, and owned little more than what they could carry with them. In the beginning, there was no concept of savings or insurance, higher education or retirement, schedules and deadlines, work-time or play-time. The concept of time itself was fuzzy; certainly people understood that they would inevitably grow older, and hoped they would earn the respect of their tribe as they did, find a mate and have children, and watch them grow up and have children of their own. But there was no specific order of operations that would link the present to the future. People mostly just did the same thing they’d done the day before, and hoped that no mischievous spirits thwarted their efforts.

Things changed with the agricultural revolution. Farming required hard work for a future reward. Like the Ant and the Grasshopper, those who were industrious and planned ahead were successful, while those who didn’t perished.

The favorite bedtime story of financial advisors

This was a Malthusian world. For most of history the number of births roughly equaled the number of deaths, and the vast majority of people were farmers who worked long, hard hours to earn just enough to keep themselves fed. In his book A Farewell to Alms, economic historian Gregory Clark shows that in this harsh era those who were the most future-oriented had the greatest odds of surviving and leaving behind children and grandchildren. In England, estate records going back to the Middle Ages show that men with larger estates had more surviving descendants at the time of their death; thrift and saving were encouraged by natural selection. Clark theorizes that over centuries this selection process eventually led to capital accumulation that reached a tipping point with the industrial revolution.

With the industrial revolution mankind did the impossible and escaped the Malthusian trap. For the first time in history, population exploded, unconstrained by famine or pestilence.

And wealth grew even faster than population. Once mired seemingly endlessly in poverty and drudgery, the average person for the first time saw their living conditions improve, steadily and dramatically. Below is real per-capita GDP over the last thousand years.


At the heart of the industrial revolution was the breaking of the bottleneck on capital formation. There had been technological innovation between the death of Socrates and the European Enlightenment, but tools still generally had to be hand-made by skilled artisans, seriously limiting the rate at which wealth could be created. Whatever economic progress was made was quickly outstripped by population growth, leaving the average person no better off as a result. The average 18th century Englishman was not materially wealthier than the average citizen of the Athenian demos. With the advent of steam power and the modern factory (and the commercial and legal institutions that supported them), however, people figured out how to make machines that make other machines. This opened up a hitherto unrealized source of recursive self-improvement, raising our economic growth rate beyond what population growth could keep up with. The difference was a growth in per-capita incomes – our standard of living – that we have enjoyed ever since.

Robots!

The radical improvement in our tools has obviously changed the way we work and the sort of work we do. On the eve of the industrial revolution, more than half of the people in the US and Western Europe worked on farms. Today the figure is 2% or less. Entire industries and professions have been created and destroyed in the interim, and this state of perpetual change has always been disruptive. Technological progress necessarily entails making certain kinds of jobs obsolete, creating (at least temporarily) losers made worse off by the forward march of civilization. The most famous historical example of this was the Luddites, a group of skilled textile weavers in early 19th century England displaced by new power looms operated by less-skilled, lower-paid laborers. The Luddites organized as a mob and destroyed the factory machinery they saw threatening their livelihoods before being quashed by the British government.

Today the word “Luddite” is mostly used as a pejorative against anyone who expresses a view that’s skeptical of the benefits of technology. The view that is universally held by economists and most educated people today goes like this: sure, technology can displace workers in the short term, but it always results in more jobs being created doing something else. Today many of the jobs people do didn’t even exist 100 years ago, and this increased specialization makes us all better off. For over two hundred years, at every industry-shaking technological innovation there have been people warning that it would cause social upheaval, widespread unemployment, oppressive corporate monopolies, the rich getting richer and the poor getting poorer, and for over two hundred years they have been wrong, and the economists have been right. In every corner of the world where industrialization has reached, people are on average wealthier, healthier, happier, better educated, and work fewer hours than their ancestors of just a generation or two before.

But is this an ironclad law of economics or just a contingent feature of the present industrial era? Most of the advances in our capital have thus far been about automating dull, repetitive, and physically demanding tasks, freeing up human skill to work on more mentally challenging problems. Where once there was a fleet of laborers on an assembly line mindlessly cranking out widgets (as in the famous Charlie Chaplin scene), now there are just a few highly skilled technicians who oversee gigantic factory robots assembling widgets at ten times the speed. But some of the more recent developments are of a different flavor. Advances in artificial intelligence are increasingly being deployed in such white-collar industries as medicine, law, and finance. Might this time be different?

Economically, the question is about whether capital is a complement to labor (like peanut butter and jelly) or a substitute for it (like peanut butter and steak). Since the industrial revolution, machines have complemented humans, making them more productive, resulting in ever-increasing wages. But today’s complement can be tomorrow’s substitute. During the early part of the industrial revolution, for example, the population of horses significantly increased alongside the human population, as horses complemented the efforts of humans, especially in transportation. But once the automobile became cost-effective the economic impetus for horses disappeared, and their population collapsed over the following decades. In 1915 there were an estimated 26 million horses in America. By 1960 that number was down to 3 million.

Could computers make humans themselves obsolete? For years futurists have speculated about the prospect of a technological singularity, a term used for the rapid and fantastic technological and social changes that many believe will occur with the advent of human-level artificial intelligence. (If you are unfamiliar with the concept of the singularity, I highly recommend you read this excellent introduction to the idea.)

Once computers or robots can be made that are as intelligent as humans in every sense of the word, they could be produced to perform every conceivable job that humans do. This has profound economic implications. In his recent book The Age of Em, economist and polymath Robin Hanson explores the social and economic consequences of widespread artificial intelligence, focusing in particular on the scenario of human brain emulations, or “ems.” That is, humans who have had their brains scanned and “uploaded” into a computer, living thereafter in a virtual reality or robot avatar. Hanson thinks that the technology for ems is the easier problem and will arrive before “traditional” AI that is coded from scratch, although the broad economic picture is much the same either way.[1]

What does the economy look like in the age of em? The breakthrough of the industrial revolution, as mentioned above, was the enabling of rapid capital formation. We might say the limiting factor in today’s economy then is “labor formation.” That is, while we can today quickly produce new machines, it takes us much longer to produce new humans, especially skilled ones that show up to work on time. This is the fundamental reason why wages have risen in the industrial era. The breakthrough of the singularity will be the removal of this constraint. A key feature of an emulated human (or AI more generally) is that he or she can be copied or sped up. Hanson imagines a world in which there are billions or trillions of copied ems, each running at thousands or even millions of times the speed of an ordinary human.

Such a world would be able to grow much, much faster than today’s. Hanson estimates that a hundredfold increase in the economic growth rate is not at all unreasonable, which would work out to an economy that doubles in size every month or so (the transition from the agricultural to industrial era witnessed a speedup of a similar magnitude).
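To see the arithmetic behind that doubling time, here is a back-of-the-envelope sketch of my own, using round numbers rather than Hanson’s exact estimates:

```python
import math

# Back-of-the-envelope sketch (round numbers, not Hanson's exact figures):
# today's world economy doubles roughly every 15 years.
current_doubling_years = 15
current_growth_rate = math.log(2) / current_doubling_years  # continuous rate, ~4.6%/yr

# Hanson's scenario: the growth rate speeds up roughly a hundredfold.
em_growth_rate = 100 * current_growth_rate
em_doubling_years = math.log(2) / em_growth_rate

print(f"Em-economy doubling time: {em_doubling_years * 12:.1f} months")  # ~1.8 months
```

Under these assumptions the economy doubles roughly every two months, the same ballpark as Hanson’s “month or so.”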

Critically, however, this is not a rich society on average. Just as technological progress prior to the industrial revolution merely enabled a larger population that ate up all the gains and left the average person no better off, the ability to copy ems will lead to a similar equilibrium. As long as the relevant technologies are widely available in a competitive market (as is the case with most information technology today) then any em who receives a premium wage for their work should eventually be undercut by a copy that charges less. In equilibrium, we return to the Malthusian trap in which wages are driven down to subsistence level – the cost of running the necessary hardware and software. Suffice it to say, that level is a good deal lower than the cost of running a human – food, shelter, and clothing cost more than it takes to keep a computer turned on. In a world of cheap artificial intelligence, biological humans will be unable to earn a living.

This may sound like a bad way for things to turn out, and many people have reacted with shock and horror to Hanson’s vision of the future. But there is a particularly shiny silver lining. Triple digit annual GDP growth implies similarly high rates of interest and returns on assets. The singularity will bring with it the greatest bull market of all time; real assets such as stocks, commodities, and real estate will in aggregate increase in value hundreds or thousands of times over as the demand for capital skyrockets. Though biological humans will be unable to find work in the em economy, they will own valuable assets that contribute to the hyper-fast world they’ll find themselves in. Much like the elderly today live off the proceeds of capital they have accumulated through their working life, the human race will essentially “retire” after the singularity.

The greatest bull market ever will be an extremely uneven one, however. Our current era of industrial capitalism is marked by creative destruction, where industries come and go over time with the changing of the technological opportunity set. The em economy will be creative destruction on steroids. Commodities in aggregate will become hugely more valuable as ems build much more physical stuff (think giant city-sized computers), but which ones depends on the specific technologies used. The massive demand for energy could send the price of oil to the moon, or oil may become marginalized as ems rely more on solar and nuclear energy. Copper and gold may become vital electrical components, or ems may rely more on fiber optics or other techniques for transmitting data. Similarly with real estate: the land used for em cities could become greatly more valuable, while real estate in cities primarily supporting human labor (e.g. manufacturing hubs) could become near worthless. In the stock market, some corporations will see their equity value quickly rise to the equivalent of hundreds of trillions of dollars or more, while many others swiftly go bankrupt, all depending on how their services or supplies plug into the em economy (or don’t).

Bonds with fixed nominal payments issued before the singularity would quickly become worthless in this scenario. As interest rates rise into the hundreds of percent, a bond issued in today’s world of single-digit interest rates becomes the financial equivalent of pennies dropped on the sidewalk. Cash balances in a checking/savings account could in principle start earning a meteoric yield, but this presumes a stable banking sector, which may or may not be a reasonable assumption. The singularity could be massively destabilizing to our current major institutions. Emulation or AI technologies are likely to emerge first as a geographically localized phenomenon, in a major technology hub such as, for example, the San Francisco Bay Area. The people of other areas will participate in the booming economy by virtue of the highly valuable capital resources they own, but other parts of the world may be completely locked out. Nations such as Japan, for example, which are rich in high-skilled labor but poor in natural resources, may have little to offer the nascent em economy and could see their currencies collapse and their economies go bankrupt practically overnight as their working populations find their wages rapidly falling below subsistence levels. Avoiding such extreme disparities in outcomes will depend on the degree to which governments, central banks, the private financial sector, and individual investors anticipate and prepare for the coming singularity. More than ever, diversification is essential.

Hanson’s depiction of trillions of ems living at subsistence wages and spending almost all their lives working strikes many as dystopian, but there is reason to be extremely optimistic about this strange new world. The shock one has at imagining the Age of Em is in part due to Hanson’s peculiar decision to write mostly about the average em living in this world, and not about the average person who lives through the transition between these two eras. The distinction is important. As the technology is most likely to originate in a modern liberal democracy, it seems plausible that uploading one’s mind will be completely voluntary. The realities of competition make it almost inevitable that the majority of ems will be copies working for subsistence wages, but Hanson points out that these are likely to be what are essentially cheerful workaholics who choose to adopt this sort of life. If the predominant technology is traditional hard-coded AI instead of emulations, the reality of the Malthusian condition plausibly becomes even less worrisome. The point is that many, if not the vast majority, of the humans living at the time will be wealthy beyond our present comprehension and will be able to choose the kind of life they wish, whether in virtual reality or meatspace.

Individuals hoping to live through and thrive in the singularity would do well to build a globally diversified, equity-dominant portfolio of investments while they can still earn a wage that’s worth something. While one can hope that our society has enough foresight to offer protection to people who enter this period without any capital assets to their name, relying on the government or charity for help is rarely a prudent financial decision if one can avoid it. Like it or not, even if the singularity is a rising tide that lifts all boats, it will almost certainly bring us an even more unequal world in which the rich have greater access to previously unimaginable dimensions of possibility that only computation can provide, such as multiple instantiations running in parallel (imagine being able to literally be in more than one place at a time). The degree to which one can enjoy (and share with others) the wonders our robot descendants bring us will depend in large part on the amount of capital one has going into the age of em.

…and on the other hand…

Readers who do not keep up with the futurism prominent among Bay Area techies may think I have gone off the rails. How soon could such a fantastical sci-fi scenario come true, if ever? People who talk about this subject seem to have predictions that vary anywhere from “any year now” to “maybe in a few centuries.” Hanson himself stays fairly neutral with a prediction of “within a century or so.” The most influential and controversial booster of the singularity is surely Ray Kurzweil, whose books The Singularity Is Near and The Age of Spiritual Machines were bestsellers. In the latter, written in 1999, he predicted that by 2019 a $1,000 computer would have roughly the processing power of the human mind, and that the software would be about as good a decade after that. This seems a bit overoptimistic at this point, but it is not totally out of line with the estimates of other experts. In a 2013 survey of hundreds of researchers in machine learning, the median respondent said there was a 50% likelihood of human-level AI occurring by 2040, and a 90% likelihood of it happening by 2075.

I am not an expert in computer science, neuroscience, or any other kind of science related to the technical challenges relevant to artificial intelligence. But I am a financial professional, and I can say that there is no sign of the singularity anywhere in the market.

As I mentioned above, a robust prediction one can make conditional on the singularity is that GDP growth would massively accelerate, taking interest rates with it. If the singularity were going to happen within a few decades, long-term interest rates should already be rising in anticipation of the event. This is exactly the opposite of what we see today.
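The link between growth and real interest rates is not spelled out in this post, but a standard way to formalize it (my addition, from textbook growth theory) is the Ramsey rule derived from the consumption Euler equation:

```latex
% Ramsey rule (textbook result, added here for illustration):
% r      : real interest rate
% \rho   : rate of pure time preference
% \theta : inverse of the intertemporal elasticity of substitution
% g      : expected growth rate of consumption
r = \rho + \theta g
```

With θ anywhere near one, a hundredfold jump in expected growth g would push real rates into the hundreds of percent, so even a modest probability of a near-term singularity should be visible in today’s long-dated forward rates.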

Interest rates remain low decades out into the future

As I wrote about last time, long-term interest rates around the world are at all-time lows, with many countries now seeing unprecedented negative yields on long-term government debt. Similarly, while equity valuations in the US are rather high, almost everywhere else around the world they are quite low, especially in major commodity-exporting countries. Commodities in particular have fallen tremendously in value over the last five years.

Gold price in US dollars

All these signs point to a world that is in fact slowing down, not speeding up. Now, it is true that exponential technologies like AI have a tendency to “sneak up on us,” and it may not be reasonable to expect financial markets to price in such a radical transformation of society decades out in the future. But even our recent advances in information technology, incredible as they are, have not done much to improve our economy. Indeed, though it pains me to say it, our economic growth rate seems to be stuck in slow motion (there’s always cryonics). Below I plot trailing 10-year annualized real per-capita GDP growth in the US.

Trailing 10-year annualized real per-capita GDP growth in the US

Since the 1940s, when the US started officially tracking it, per capita GDP, averaged over the ups and downs of the business cycle, has grown pretty steadily at a little over 2% a year, but around the new millennium it began falling and hasn’t looked back since. Based on the latest figures, our standard of living has increased by an average of only about half a percent per year over the last ten years. We see the same pattern throughout the developed world today. Many emerging market countries are still posting high rates of economic growth as they catch up with the Americas, Europes, and Japans of the world, but as they approach the technological frontier they too tend to slow down to more modest rates.
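For readers who want to see the mechanics behind the chart, the plotted statistic is just a rolling compound annual growth rate. Here is a minimal sketch with a made-up series (the real chart uses official US data):

```python
# Minimal sketch of the statistic plotted above: trailing 10-year annualized
# growth of real per-capita GDP. The series below is made up for illustration;
# the actual chart uses official US data.
def trailing_annualized_growth(series, years=10):
    """Trailing `years`-year compound annual growth rates of an annual series."""
    return [(series[i] / series[i - years]) ** (1 / years) - 1
            for i in range(years, len(series))]

# Toy series: a decade of ~2% growth followed by a decade of ~0.5% growth.
gdp_per_capita = [100 * 1.02 ** t for t in range(11)]
gdp_per_capita += [gdp_per_capita[-1] * 1.005 ** t for t in range(1, 11)]

print([round(g, 4) for g in trailing_annualized_growth(gdp_per_capita)])
# The output drifts from 0.02 down toward 0.005 as the slow decade rolls into the window.
```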

At the heart of the slowdown in per capita economic growth is a slowdown in productivity growth. Productivity refers to how much economic output can be produced given a constant amount of inputs: land, labor, and capital. You can increase the size of the economy by throwing more of one of these inputs at it (such as by having more women enter the labor force) but in the long run what makes us richer is how efficiently we use these inputs to make the products and services we value. The root cause of the productivity slowdown we’re in the middle of is a hot topic in economics right now and economists offer many different explanations. Some of them include:

  1. Demographics: as people get richer, they have fewer kids. This is one of the most robust findings in social science. Societies that are far advanced in the demographic transition – namely the rich developed countries – are increasingly finding themselves with an ageing population. Research shows that basic cognitive abilities such as working memory begin declining in individuals no later than age 30, and peak productivity in most occupations occurs before the age of 50. Most scientific achievements are made by individuals in their 20s or early 30s. Most developed countries now have a population whose median age is over 40 (the US is relatively young at 36), making for a smaller proportion of people likely to make productivity-enhancing innovations. A study released by the RAND Corporation this year finds that a 10% increase in the fraction of the population over age 60 reduces the per capita GDP growth rate by 5.5%.
  2. Regulatory burden: looking at measures such as the number of pages in the Federal Register or the proportion of workers with an occupational license, the scope of economic activity subject to regulatory oversight seems to grow with each passing year. Comparing relatively less regulated and more regulated industries, and looking at the comparative performance of OECD countries, increasing regulation seems to hamper productivity growth and deregulation to improve it.
  3. Culture of risk aversion: compared to times past, Americans today are less likely to move to another state, switch jobs, or start a new company. A statistic we often show clients is that the percentage of Americans who own stock has been trending down for years and is currently around an all-time low. This increase in risk aversion could be in part the result of other factors like regulation, but the fact that we are richer today than our ancestors were may simply mean that we feel less need to take risks in order to get ahead.

The ultimate cause of the current slowdown is likely a combination of these and other factors, but the demographic issue warrants special attention. In addition to the headwind an ageing population creates for productivity, a slower-growing population makes for a slower-growing overall economy, as the following graphic demonstrates.

Breakdown of output growth

Fewer workers means lower economic output overall. This is important for investors because key financial variables such as bond yields and per-share dividend growth are more closely linked to aggregate than to per-capita economic growth. Geanakoplos, Magill, and Quinzii (2004) use an economic model to show that demographic changes could have a predictable effect on stock market returns (and, furthermore, that this is not a violation of the efficient market hypothesis), while Arnott and Chaves (2012) find strong evidence around the world for the impact of the age composition of the population on GDP growth, bond returns, and stock returns. In particular, they find a strong negative relationship between the share of the population at retirement age and its stock returns.

I wrote previously about how institutional investors seem to be greatly overestimating the returns they’ll be able to get on their stock and bond portfolios. This is a problem with far-reaching consequences. Public pensions in the US, for example, discount their liabilities by the rate of the expected return on their assets, a number that they get to make up themselves. Using more conservative discount rates as recommended by the Actuarial Standards Board, the unfunded liabilities of America’s state and local government pension plans amount to over $5 trillion, which taxpayers are on the hook for (a quick numerical sketch of this discount-rate effect follows the list below). According to the Social Security Administration’s own numbers, the Social Security Trust Fund will be depleted by 2034 (sooner if bond yields and economic growth rates stay as low as they currently are). That means that, barring some large unforeseen boost in productivity, sometime within the next 18 years one or a combination of three things will have to happen. In order of my estimation of likelihood:

  1. The age at which people can start claiming Social Security benefits will increase
  2. Social Security benefits will be decreased (including through means testing)
  3. Taxes will be increased
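To make the discount-rate point above concrete, here is a small sketch of how the reported size of a pension liability depends on the assumed rate; the payment stream and rates below are hypothetical, chosen only for illustration:

```python
# Present value of a fixed stream of pension payments under different discount rates.
# All numbers are hypothetical; the point is the sensitivity to the rate chosen.
def present_value(annual_payment, years, discount_rate):
    return sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

payment, years = 1_000_000, 30  # e.g., $1M per year promised for 30 years

for rate in (0.075, 0.04):  # typical assumed asset return vs. a more conservative rate
    print(f"Liability at {rate:.1%}: ${present_value(payment, years, rate):,.0f}")
# Roughly $11.8M at 7.5% vs. roughly $17.3M at 4.0%: the same promises look
# about half again as large under the more conservative rate.
```

That sensitivity is the mechanism behind the $5 trillion figure: the promises don’t change, only the rate used to value them.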

In sum, over the next few decades Americans (and people around the developed world) should expect to see lower returns on their investments, haircuts applied to their pension benefits (to the extent they have any), and lower payments from their government safety nets. In other words, the best financial advice for people preparing for an economic stagnation is much the same as for preparing for a technological singularity: build a globally diversified, equity-dominant portfolio of investments while one is still able to work, using a focus on valuation and perhaps leverage in order to enhance returns. We are hardly the first to point out that the present environment requires people to save more. A recent white paper from the investment firm AQR, for example, finds that a 2% reduction in expected returns from historical norms roughly doubles the amount of savings needed to fund the same level of spending in retirement.
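As a rough check on the spirit of that claim, here is a sketch of the arithmetic under assumptions of my own (not AQR’s): 30 years of saving, 25 years of retirement spending, and constant real returns:

```python
# How much must be saved each year to fund a given level of real retirement spending?
# Assumptions are hypothetical: save for 30 years, spend for 25, constant real returns.
def required_annual_savings(spending, save_years, spend_years, r):
    # Nest egg needed at retirement: present value of the retirement spending stream.
    nest_egg = sum(spending / (1 + r) ** t for t in range(1, spend_years + 1))
    # Future value at retirement of $1 saved at the end of each working year.
    fv_per_dollar_saved = sum((1 + r) ** t for t in range(save_years))
    return nest_egg / fv_per_dollar_saved

spending = 50_000  # hypothetical real spending per year in retirement
for r in (0.05, 0.03):  # historical-ish real return vs. a return 2% lower
    print(f"At {r:.0%} real returns: save about ${required_annual_savings(spending, 30, 25, r):,.0f} per year")
```

In this toy example the required contribution rises from roughly $10,600 to $18,300 a year, not quite a doubling but the same order of sensitivity AQR describes.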

Little more than a hundred years ago, financial planning as we think of it today was mostly irrelevant to the average person. Most people simply worked until they died. The modern world has brought to our lives great wealth but also great complexity. Whatever comes of the 21st century, this trend is likely to continue. Like the proverbial ant preparing for winter, those who plan for whatever contingencies the future might hold stand to benefit the most.

 

 

[1] Some of my readers who are familiar with these sorts of scenarios may argue that AI risk related to a “hard takeoff” scenario is the most relevant concern with these technologies. Like Hanson, I consider the existential risk from AI to be quite remote, but in any case such a discussion is beyond the scope of this post.

 

Disclosures: This post is solely for informational purposes. Past performance is no guarantee of future returns. Investing involves risk and possible loss of principal capital. No advice may be rendered by RHS Financial, LLC unless a client service agreement is in place. Please contact us at your earliest convenience with any questions regarding the content of this post. For actual results that are compared to an index, all material facts relevant to the comparison are disclosed herein and reflect the deduction of advisory fees, brokerage and other commissions and any other expenses paid by RHS Financial, LLC’s clients. An index is a hypothetical portfolio of securities representing a particular market or a segment of it used as indicator of the change in the securities market. Indexes are unmanaged, do not incur fees and expenses and cannot be invested in directly.