001: Chill, Robots Won’t Take All Our Jobs – Really?!?

In the past year or two, automation, the degree to which AI innovation will displace workers, and what to do about it have become incredibly hot topics of conversation. While much has been written on these topics, the signal-to-noise ratio is unfortunately quite low, and the arguments are frequently not well supported by appropriate evidence. I recently read an article in Wired by James Surowiecki titled “Chill: Robots Won’t Take All Our Jobs”[1] that seemed to me to contain some good examples of weakly supported and/or generally bad arguments, so I thought I would walk through the article and discuss some of the places that I think could use some work.

If you haven’t read it already, take a minute and hop over to Wired and give it a quick read. The basic argument is that people are worried automation will take away our jobs and result in some degree of permanent unemployment, but that these concerns are not justified by the data. Surowiecki says that right now there is both a general fear of job displacement caused by automation and concern over secular stagnation, and that these are contradictory. His bottom line is that either automation will not displace workers at a rate that causes major problems, or, if it does, the resulting economic growth and productivity gains will at least offset the problems caused by the displacement. Who knows, maybe he’s right, but the case he makes is definitely not persuasive to me.

Argument #1:

“Imagine you’re an economist back on the ground, and a panic-stricken software engineer is warning that his creations are about to plow everyone straight into a world without work. Just as surely, there are a couple of statistical instruments you know to consult right away to see if this prediction checks out. If automation were, in fact, transforming the US economy, two things would be true: Aggregate productivity would be rising sharply, and jobs would be harder to come by than in the past.”

Aggregate productivity and employment data are lagging indicators: they only tell you about shifts in the economy after they have taken place. By the time productivity data shoots up because automation has taken our jobs, automation has taken our jobs. Leading indicators, by contrast, are often based on survey data, where the views of market participants are taken as an indication of the future path of some factor. In the example above, the economist listening to the software engineer is effectively looking at survey data with N=1.

Argument #2:

“productivity gains over the past decade have been, by historical standards, dismally low.… Low productivity growth does fly in the face of the story we tell about amazing technological progress.”

While it does seem contradictory on the surface, the value in analyzing historical macroeconomic data comes less from the datapoints themselves than from the underlying story that explains shifts over time.[2]

There are a couple of observations I would put forward regarding current and historical productivity trends. Productivity spiked in the post-war period as the US became the dominant superpower, in 1960–1980 as women joined the workforce, and in 1981–2007 as the adoption of basic technologies and computing started to pay dividends. The 1981–2007 period saw the largest productivity ramp, thanks to the combination of the low base set by the 1980–81 recession and the subsequent adoption of technology.

So why the weak productivity numbers since 2007? We had a huge recession, and businesses retrenched. Most companies have chosen to take advantage of the current low-rate environment not to invest in capital and labor development, but instead to buy back shares – which provides a much more immediate and reliable benefit to management teams whose compensation packages are tied to share price performance.

But what about all the cool new apps and stuff that we have now that we didn’t have prior to 2007? Weren’t these things supposed to make us super productive and change the way we live and work? Cal Newport recently wrote an amazing book called ‘Deep Work’ in which he makes the case that all this ‘technological innovation’ is actually having the opposite effect of what we had hoped for and expected, and there is good evidence to support the case that recent technological progress has actually hampered productivity. However, the technology we are seeing now with automation and machine learning, which really started to take off around 2014, is a completely different type of technology. So while low productivity growth runs contrary to what we expected from recent innovation, AI/ML is fundamentally different in many ways. In addition, while there have been huge strides in the field, there still is not deep and wide adoption in the corporate sector. In the years to come, this will change as the technology continues to improve and more high-level and diverse use-cases are addressed.

Argument #3:

“wage increases are meager by historical standards, but they’re rising faster than inflation and faster than productivity. That’s something that wouldn’t be happening if human workers were on the fast track to obsolescence.”

[Chart 1: Real hourly compensation growth, nonfarm business sector (FRED)[3]]

[Chart 2: Labor productivity growth vs. real hourly compensation growth[4]]

It’s not completely clear that this is the case. In the first chart, from FRED, you can see that real wage growth has been more or less flat, bumping around above and below zero. And the second chart shows that since 2007, real hourly compensation growth has been lower than labor productivity growth.
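For anyone who wants to check the data themselves, here is a rough sketch of how charts like the two above could be reproduced from FRED using pandas_datareader. COMPRNFB is the real hourly compensation series from footnote [3]; OPHNFB (output per hour, nonfarm business) is my assumption for a comparable productivity series and may not be the exact series behind the second chart.

```python
# Sketch: pull the compensation and productivity series from FRED and plot
# year-over-year growth. Assumes pandas_datareader and matplotlib are installed.
# OPHNFB as the productivity series is my assumption, not the article's choice.
import pandas_datareader.data as web
import matplotlib.pyplot as plt

df = web.DataReader(["COMPRNFB", "OPHNFB"], "fred",
                    start="1990-01-01", end="2017-06-30")

# Both series are quarterly indexes, so year-over-year growth is the
# percent change over four quarters.
yoy = df.pct_change(4) * 100

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
yoy["COMPRNFB"].plot(ax=ax1, title="Real hourly compensation growth, YoY %")
ax1.axhline(0, linewidth=0.5)
yoy.plot(ax=ax2, title="Productivity vs. real compensation growth, YoY %")
plt.tight_layout()
plt.show()
```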

Still, let’s assume these charts don’t exist. If humans were being displaced, it’s not clear that wage growth above productivity growth would contradict this. What would really matter is what the remaining workers in the economy were doing. If lower value-add workers were displaced, and the only people left with jobs were relatively higher skilled, then average wages would definitely increase, possibly by a lot. For example, if half the workforce earns $20/hr and half earns $40/hr, displacing the $20/hr half lifts the average wage from $30 to $40 without anyone getting a raise. Either way, not a very persuasive argument.

Argument #4:

“recent paper by Robert Atkinson and John Wu of the Information Technology and Innovation Foundation, … if automation were truly remaking the job market, you’d also expect to see a lot of what economists call job churn … Occupational churn in the United States is now at historic lows…. Churn since 2000—an era that saw the mainstreaming of the internet and the advent of AI—has been just 38 percent of the level of churn between 1950 and 2000.”

Employee churn rate is defined as “(Number of employees resigned during the month / Average number of employees during the month) x 100, where Average number of employees during the month = (Total number of employees at the start of the month + Total number of employees at the end of the month) / 2”.[5] Atkinson & Wu provide a link to the supporting data for their calculation of historical churn rate, and it looks like there could be a problem with their methodology. The calculation is based on employment by occupation from historical census data – that is, a snapshot of comparable employment every 10 years. They sum up the absolute value of the change by occupation and divide it by the baseline aggregate employment level to get the churn rate. This approach is akin to taking a ‘balance-sheet only’ approach to an ‘income statement and balance-sheet’ calculation.

Let’s consider an example. Say that in 2000 we have an economy with 100MM people employed – 50MM in industry A and 50MM in industry B. By 2010, the distribution has shifted to 75MM in industry A and 25MM in industry B (keeping total employment at 100MM for simplicity). Based on Atkinson’s calculation, this would imply a 50% churn rate: 25MM gained in industry A plus 25MM lost in industry B, over 100MM total employed in the base year. That’s all well and good – as long as no one changed jobs more than once. What if those 25MM switched every year, or every six months? The census data would not change, but the churn picture would be drastically different.
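To make the methodology concrete, here is a minimal sketch of the snapshot-based churn calculation as I read Atkinson & Wu’s approach, applied to the toy numbers above. The function name is mine; this is an illustration, not their actual code.

```python
# Sketch of the snapshot-based churn calculation, as I read Atkinson & Wu's
# methodology: sum of absolute employment changes by occupation, divided by
# base-year total employment. Numbers are from the toy example above (in MM).
def snapshot_churn(base, later):
    """Churn rate from two census-style snapshots of employment by occupation."""
    total_base = sum(base.values())
    gross_change = sum(abs(later[occ] - base[occ]) for occ in base)
    return gross_change / total_base

base_2000 = {"A": 50, "B": 50}
snap_2010 = {"A": 75, "B": 25}

print(snapshot_churn(base_2000, snap_2010))  # 0.5 -> "50% churn"

# The problem: any switching *between* snapshots is invisible. If the same
# 25MM workers bounced between A and B every year from 2000 to 2010, actual
# job changes would be roughly 10x higher, but the two snapshots -- and
# therefore the measured churn rate -- would be identical.
```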

To be fair to Atkinson & Wu, this is a common methodology. Another way to look at the data might be to track ‘net cycles’ and ‘margin’ – the total number of workers who have left one job and found another, plus or minus the difference in the period aggregates. With that, one could see a much more accurate picture of the US labor market. The difference between that measure and the authors’ is the degree to which it reflects the economy’s ‘inventory turnover rate’.

Argument #5:

“Goldman Sachs just released a report predicting that autonomous cars could ultimately eat away 300,000 driving jobs a year. But that won’t happen, the firm argues, for another 25 years,”

First of all, 300K per year is a LOT of people to reemploy – especially when we are talking about truckers who make $80K per year. Second, Goldman’s timeframe is 2025–2030, which is just 8 years away. They also say 4MM people are employed in driving occupations – about 2% of the working population. Given the progress that has already taken place with self-driving cars and trucks, and that will continue going forward, it’s not unreasonable to expect that a very large proportion of that 4MM will be displaced.
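Some quick back-of-the-envelope arithmetic on those numbers; the 150MM workforce figure is a rough assumption on my part, not from the article.

```python
# Back-of-the-envelope on the Goldman numbers cited above: 4MM driving jobs,
# with displacement running at up to 300K/year once it ramps up.
drivers = 4_000_000           # US driving-occupation employment (per the article)
displaced_per_year = 300_000  # Goldman's annual displacement estimate

years_to_displace_all = drivers / displaced_per_year
print(f"{years_to_displace_all:.1f} years to displace all drivers")  # ~13.3

# Even at that pace, roughly 0.2% of the US workforce would need to find new
# work every single year from this one occupation alone.
workforce = 150_000_000       # assumption: rough size of the US labor force
print(f"{displaced_per_year / workforce:.2%} of the workforce per year")
```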

Argument #6:

“Corporate America, for its part, certainly doesn’t seem to believe in the jobless future.”

“If the rewards of automation were as immense as predicted, companies would be pouring money into new technology. But they’re not.”

My old boss used to love to quote his professor Rudy Dornbusch, saying “things take a much longer time than you think they will take, and then they happen much faster than you would have thought” (slightly altered).[6] I think this is frequently the case with technology, and it is definitely the case with corporate adoption of technology.

Companies have not been making capital investments in a meaningful way since before the recession, as discussed above. This is partly because companies were seriously traumatized by the recession, partly because buying back stock is easier, but also because in many cases it is not clear what profitable investment they should make. AI/ML is already used by many, many companies for a wide range of purposes, but deep and wide adoption in the corporate sector is still a long way off.

I believe the difference between ‘this time’ and previous periods of innovation is the implied IQ of the technology. Once the implied IQ of the technology rises above a certain level, the realm of possibilities for how it can be deployed increases exponentially. Also, once the capability is developed, the marginal cost of an AI ‘worker’ is basically zero. I think this point is still a long way off, but between now and then we will see employment chipped away at the corners as more and more sophisticated AI models and applications are developed.

Argument #7:

“Between 2000 and 2009, 6 million US manufacturing jobs disappeared, and wage growth across the economy stagnated. In that same period, industrial robots were becoming more widespread, the internet seemed to be transforming everything, and AI became really useful for the first time. So it seemed logical to connect these phenomena: Robots had killed the good-­paying manufacturing job, and they were coming for the rest of us next.”

“If you want to know what happened to manufacturing after 2000, the answer is very clearly not automation, it’s China,” Dean Baker says. “We’ve been running massive trade deficits, driven mainly by manufacturing, and we’ve seen a precipitous plunge in the number of manufacturing jobs.”

Looking at manufacturing post-2000 is not very relevant to how automation will progress going forward. AI has been around for quite a while, and has progressed in fits and starts every decade or so. The most recent period of progress started around 2014, and has been gaining some impressive momentum. Whatever robotization or automation was commercially available from 2000-2009 would be unlike any of the methods and practices commonly employed in the field today. So while it’s possible that automation did not play a huge role in the decline in US manufacturing employment, that does not prove that a different, more sophisticated and powerful approach to automation will not have an impact on employment and displacement of workers in the years to come.

Argument #8:

“The peculiar thing about this historical moment is that we’re afraid of two contradictory futures at once. On the one hand, we’re told that robots are coming for our jobs and that their superior productivity will transform industry after industry. If that happens, economic growth will soar and society as a whole will be vastly richer than it is today. But at the same time, we’re told that we’re in an era of secular stagnation, stuck with an economy that’s doomed to slow growth and stagnant wages. In this world, we need to worry about how we’re going to support an aging population and pay for rising health costs, because we’re not going to be much richer in the future than we are today. Both of these futures are possible. But they can’t both come true. Fretting about both the rise of the robots and about secular stagnation doesn’t make any sense. Yet that’s precisely what many intelligent people are doing.”

On the topic of automation and aggregate benefit, it reminds me of a corny joke that economists like to tell. It goes something like, “when asked how he feels, the economist with his head in the oven and feet in the freezer responded ‘about average’”.[7] Arguing that huge productivity gains will allow for sufficient fiscal room to accommodate transitioning and unemployed workers assumes the political will and ability to increase taxes and implement transfer schemes. Income gains of the past 30 years have been unbelievably skewed towards the top 1% of the top 1%, who will likely be the beneficiaries of the current spurt of technological innovation and automation.

On the topic of secular stagnation, these two concepts CAN coexist, but we need to adjust our concept of capital, labor, and employment. I believe we need to start splitting out and tracking ‘human-equivalent’ AI units (HEU). Assume we are able to do this, and build out the dataset. In this case, it would be easy to see how these two concepts can coexist. If HEU are only measured for productivity purposes on the output side, then productivity will soar. If, on the other hand, we record HEU hours worked, productivity will probably decline, but that won’t matter, because we will have the equivalent of a huge global population increase. So we could end up seeing the labor slice of the human employable population in a ‘secular stagnation’ situation, while the output/productivity gains of automation would go to the owners of capital, as has been the case in recent history. ‘Using’ some of the output gains produced by automation to benefit the less well off or unemployed will not be an easy task.
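A toy numerical illustration of the point, with completely made-up numbers: whether measured productivity soars or declines depends entirely on which denominator you use.

```python
# Toy illustration of the HEU accounting point above. Whether measured
# productivity soars or sags depends on whether AI "hours" are counted in
# the denominator. All numbers are invented for illustration.
human_hours = 100.0
output_before = 100.0             # index: productivity = 1.0 per human hour

heu_hours = 100.0                 # AI capacity equal to the human workforce
output_after = 180.0              # assumption: total output nearly doubles

# Convention 1: HEU counted only on the output side (denominator = humans).
print(output_after / human_hours)                 # 1.8 -> productivity "soars"

# Convention 2: HEU hours recorded as hours worked.
print(output_after / (human_hours + heu_hours))   # 0.9 -> productivity "declines"

# Same economy, same output: the robots-take-over and secular-stagnation
# stories are just the two different denominators.
```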

Argument #9:

“A recent study by Accenture, for instance, suggests that the implementation of AI, broadly defined, could lift annual GDP growth in the US by two points (to 4.6 percent). A growth rate like that would make it easy to deal with the cost of things like Social Security and Medicare and the rising price of health care. It would lead to broader wage growth. And while it would complicate the issue of how to divide the economic pie, it’s always easier to divide a growing pie than a shrinking one.”

“The irony of our anxiety about automation is that if the predictions about a robot-dominated future were to come true, a lot of our other economic concerns would vanish.”

Same issue as discussed previously – aggregate growth is nowhere near evenly distributed. Social Security is assessed on ordinary income, of which the very rich have very little. Losses on the part of the displaced will start with the most vulnerable slice of the population and progress towards those more secure.

We are in the very early stages of the current AI/automation boom, which will likely continue for decades to come. Make no mistake, there will be significant challenges to address as these technologies are more widely deployed, and there are good reasons to believe this will be different from previous periods of technology-driven change. It’s easy to say ‘these things always work themselves out’, and while this is true, it’s because people anticipate problems and figure out how to do something about them. We are in such a period today, and convincing ourselves that there will not be painful adjustments to be made in the years to come will only make the problems more difficult to address.

[1] https://www.wired.com/2017/08/robots-will-not-take-your-job/

[2] https://www.bls.gov/opub/btn/volume-6/below-trend-the-us-productivity-slowdown-since-the-great-recession.htm

[3] https://fred.stlouisfed.org/series/COMPRNFB

[4] https://www.bls.gov/opub/btn/volume-6/below-trend-the-us-productivity-slowdown-since-the-great-recession.htm

[5] https://en.wikipedia.org/wiki/Churn_rate

[6] https://en.wikiquote.org/wiki/Rudiger_Dornbusch

[7] http://listserv.linguistlist.org/pipermail/ads-l/2014-January/130494.html
