(via Bill Gates: We Should Tax Robots That Take Jobs | Observer)
Are you going to tax my lawnmower, washing machine, or car because they do the work that people used to perform?
STUPID
A number of the “No Robot Jobpocalypse” arguments seem to hinge on the idea that as productivity increases, the costs of goods and services will approach zero.
But this seems based on the assumption that resources are effectively a function of labor. If base resources are largely fixed after some level of labor (e.g., there are only so many iron atoms in a volume of dirt), and there are other potential uses for those resources than feeding the proles, then the laborers must competitively bid for the resources.
In that bidding, they may have to bid against someone several orders of magnitude more productive than they are (either due to owning the robots or just being that much more skilled/productive). What guarantee is there that, even as the overall price of goods produced from the resources decreases, those goods are not bid out of the reach of the low-marginal-production workers?
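The bidding worry above can be put in toy-model form. This is a minimal sketch with invented numbers (nothing here is from the original post): a bidder can rationally pay up to the value their productivity lets them extract from a unit of the resource, so even as the clearing price falls overall, a low-productivity worker can still be priced out entirely.

```python
# Toy model (all figures invented): a fixed resource goes to the highest
# bidders, and willingness to pay scales with productivity.

def max_bid(productivity, units_needed=1):
    """A bidder can rationally pay up to the value they extract per unit."""
    return productivity * units_needed

low_worker = max_bid(productivity=10)       # produces $10 of value per unit
robot_owner = max_bid(productivity=10_000)  # several orders of magnitude more

# Suppose rising productivity drops the clearing price from $50 to $20 per
# unit. The price of the good fell, yet the low-productivity worker's $10
# ceiling still leaves them with nothing.
clearing_price = 20
print(low_worker >= clearing_price)   # False: priced out despite cheaper goods
print(robot_owner >= clearing_price)  # True
```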
(via Bill Gates: We Should Tax Robots That Take Jobs | Observer)
Are you going to tax my lawnmower, washing machine, or car because they do the work that people used to perform?
STUPID
What’s amazing about this logic is that he closes by saying, “I don’t think the robot companies are going to be outraged that there might be a tax. It’s okay.” Says the guy who has made billions through the luxury of selling software that is so high in demand and has such a high barrier to entry against competition, not to mention the high level of human capital it takes to create his products. His market doesn’t have to worry much about automation in manufacturing, especially since they moved away from physically packaged software; but just imagine the businesses with low profit margins that need to invest heavily in cost-cutting labor supplied by automated production equipment. Those businesses would get hosed: they would be taxed not only on the large amount of labor they still need to employ, but now also on the equipment itself.
I’d like to think that Bill Gates is just being a sympathetic sycophant to manufacturing laborers here but I believe he is smarter than that. He is blatantly advocating for redistributing wealth by not only taxing labor but now taxing capital goods as well. You would not only be taxed for the purchase of this equipment, you would then also be taxed on its perpetual use as well. You may as well be renting it directly from the government.
This is just one more step towards the government controlling all private property. If they can’t seize it outright, they will tax every aspect of its worth to redistribute as they deem fit.
It’s far from stupid, it’s downright devious.
If robots are going to be so heavily taxed, then won’t employers just keep hiring humans?
I suppose it would depend on how much the tax would be. If it is the same progressive income tax that employees have to pay, then you would have to weigh the production output of automation against manual labor, and the newly added tax costs against actual employee costs like wages, benefits, and payroll taxes. I still believe the employer would go with automation all the same.
I’m not sure how they would calculate a tax based on the theoretical amount of profits they generated from the automated production. What if the business does not reap a profit? Do they just not pay their robot tax? Doubtful. The government always gets its cheese.
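The weighing described above can be sketched as simple arithmetic. This is a hedged back-of-envelope with entirely invented figures (the hypothetical "robot tax" amount, equipment costs, and wage levels are all placeholders, not anything from the thread): compare the full cost of human labor against the cost of automation plus the tax.

```python
# Toy break-even comparison (all figures invented): automate and pay a
# hypothetical robot tax, or keep paying human labor costs?

def annual_human_cost(wage, benefits_rate=0.30, payroll_tax_rate=0.0765):
    """Wages plus benefits plus employer-side payroll taxes."""
    return wage * (1 + benefits_rate + payroll_tax_rate)

def annual_robot_cost(amortized_equipment, maintenance, robot_tax):
    """Amortized purchase price plus upkeep plus the hypothetical tax."""
    return amortized_equipment + maintenance + robot_tax

# One robot replacing three $40k/year workers:
humans = 3 * annual_human_cost(40_000)
robot = annual_robot_cost(amortized_equipment=60_000,
                          maintenance=10_000,
                          robot_tax=30_000)

print(robot < humans)  # True: automation still wins even with the tax
```

Under these made-up numbers the employer automates anyway, which is the intuition in the reply above: the tax changes the margin, not necessarily the decision.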
Personally, I don’t think that AI and automation will ever be sophisticated enough to truly replace human labor.
But it seems like no one has a viable solution to the problem. The Republicans want to institute a base national salary, and the Democrats want ever more bureaucracy.
They said AI would never win at Go, either, but we all see how well that’s worked out. It’s all but guaranteed that if civilization doesn’t collapse, AI will replace human labor. The question is when.
But let me throw an alternate, mid-term solution at you that wouldn’t crash the economy: wage subsidies.
If you lower the minimum wage, then make up the difference with direct-to-employee wage subsidies that decline as employer wages increase, you can accomplish multiple things.
And so on and so forth.
The program can be implemented and tested incrementally. It can be rolled back if it doesn’t work, or expanded if it does. It will be less expensive since it displaces some welfare spending, and it multiplies the effect of money spent with private spending.
The main limitation I would put on it is that the subsidies are only available for goods and services produced for domestic rather than foreign consumption. The bureaucracy required shouldn’t be too bad otherwise, since this isn’t an approach targeted at specific industries, districts, or means levels.
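The declining subsidy described above can be sketched as a simple taper. This is a minimal illustration with placeholder parameters of my own (the $15 target wage and 50% phase-out rate are assumptions, not part of the proposal): the subsidy tops workers up toward a target and shrinks as the employer-paid wage rises, so every extra dollar the employer pays still leaves the worker better off.

```python
# Sketch of a declining wage subsidy (parameters invented for illustration).

def subsidy(employer_wage, target=15.0, phase_out=0.5):
    """Per-hour subsidy: covers a fraction of the gap below the target wage."""
    gap = max(0.0, target - employer_wage)
    return gap * phase_out

# Because the subsidy covers only half the gap, total pay rises
# monotonically with the employer wage: no benefits cliff.
for w in (5.0, 10.0, 15.0, 20.0):
    total = w + subsidy(w)
    print(f"employer pays ${w:.2f}/hr -> worker receives ${total:.2f}/hr")
```

The phase-out fraction below 1.0 is the key design choice: a 100% top-up would remove any incentive for employers to raise wages at all.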
I would appreciate it if some AI enthusiast would get mad at me
right now I’m objecting to a diffuse and incoherent set of fears about the future, but someone out there’s gotta have a theory of what mass technological unemployment actually looks like and a modestly granular account of the mechanisms by which artificial intelligence takes us there
I don’t have that model or that account, but y’all really seem to believe that the machines are right around the corner, so it’d be nice if someone laid it out somewhere
I wrote a lengthy harangue to @peopleneedaplacetogo about a week back, which appears here lightly edited:
If I were a smarter or better-informed person, would I feel differently about the intelligence explosion thesis? What do its better-informed advocates know that I don’t? What intuitions do they have that I lack?
I guess you’d have to know what I believe before you could tell me why I’m wrong, but as a person who’s much closer to the technology than I am, what are the sources of the rationalist belief in artificial intelligence more generally?
Because, from the outside, with the little understanding of the technology that I have, it seems like intelligence is harder and progress more limited than the boosters are telling me.
From the outside, throwing more processing power at the problem doesn’t seem to address the lack of sound concepts underpinning general machine intelligence, rather than specific intelligence.
The ‘machine learning’ we have, where we train algorithms on large data sets to sort the data and identify patterns, is impressive, sure, but the strengths and limitations of ML suggest that we need more, and more innovative, conceptualizations and operationalizations of the problems we want the machines to address before we can apply machine power to any effect.
I apologize for my technological illiteracy; I’m sure I’m missing something crucial. I guess I just don’t have a good sense of what the conceptual paradigm for general intelligence would look like – “ML applied to the conceptualization of problems in the world”?
To which he replied:
I don’t have any specific knowledge of the topic either. I think a big intuition is just “don’t make strong predictions about what AI can or can’t do”.
What am I missing?
Neural networks and dedicated hardware for them. Have you seen their image generation capabilities lately? It’s like we’re creating slices of animal brains.
Now, it’s easy to object that this does not create an intelligence explosion, and fortunately it probably won’t.
The issue is that if you can create a neural net for walking, a neural net for object recognition, and a neural net for an industrial task, well… you put them together and you get something like an industrial task animal.
Most jobs don’t use anywhere near the whole human capability. What’s necessary for us as creatures (and what shaped us) isn’t necessarily what’s most effective economically. That is, we have more capabilities than artificial task animals, but are they profitable enough in such an economy to feed us? Is a model where former truck drivers all become Patreon-sponsored bloggers at all viable? And this will hit every sector basically at once.
There are limits to consumption based on available time/attention during a day. But food requirements aren’t really negotiable.
But I’m not an economist.
Like how the birth of farm machines meant the excess former farmers were unemployed forever, huh?
A sector largely requiring large amounts of unskilled labor is replaced by a sector largely requiring large amounts of unskilled labor. In what ways might the current situation be different from that?
Horses’ power and speed were their primary economic interest. Once machines were able to do this better and cheaper, with horses limited to niche applications, what happened to the horses?
Humans’ intelligence is unique in the economy, but machines are now becoming more and more intelligent and adaptable. In one sector this might just displace workers, but what happens when it applies to all sectors simultaneously? Why would you hire a human worker, who cannot work below a certain minimum due to resource requirements to survive, rather than just use a machine that does the same thing for less money?
Is there any law of economics that requires that someone’s maximum feasible production be enough for them to survive? Remember to account for opportunity cost of the necessary resources in your answer, such as real estate being purchased by those with orders of magnitude higher productivity.
It seems there rather clearly isn’t such a law since economically non-viable people already exist.
This position of yours appears to stem from an ideological pre-commitment to Capitalism, and I say this as someone that argues against Communists. The ability of Capitalism to outperform Stalin on human suffering is conditional, and those conditions have held for a long time, but that is slowly changing.
Anonymous asked:
sadoeconomist answered:
Well, I’m sure there is but I can’t think of any right now.
It’s a shibboleth for people who are not just anti-capitalist but dialectical materialists, which is the Marxist equivalent of millenarian religion. It’s like hearing someone say that we’re living in the End Times: you know there’s not going to be a lot of productive dialogue with someone after that.
They’re like a secular version of the Millerites: they keep predicting an apocalypse that never happens. You’ve really got to question what part of their personality draws these folks to a doomsday cult, and you’ve got to question their reasoning ability when their predictions have failed to come true over and over and yet they still stick to the same doctrines.
I honestly thought it was a joke.
It is sometimes used to refer to the kind of capitalism we have right now, where such things as the Laborpocalypse do seem to be looming rather large.
What do you mean by ‘Laborpocalypse’ exactly
I was going to guess you were referring to Tony Blair returning to politics but that’d be a ‘Labourpocalypse’
By “Laborpocalypse”, I mean economic/technological/social developments that collapse labor relations as we know them.
The conventional example would be the appearance of robots that can do enough tasks cheaper than human workers that there is little hope of keeping the unemployment rate under, say, 80%.
Man, I was really hoping you weren’t going to say that, I don’t want to have the Neo-Luddism argument again
If you want my opinion on what’s wrong with that idea send me an ask, otherwise let’s just leave it at ‘I disagree’
I don’t think the term Late Stage Capitalism is about the oncoming labor apocalypse so much as “given decades to feed its own recursive cycle, capitalism looks a lot different now than it did in Marx’s time when industrialization was just coming to fruition.”
Primarily, it’s about the opinion that a lot more wealth production is in finance and sales than in the “making stuff” sectors. It’s also about soaring inequality and anomie, and alieving those are a result of unfettered capitalism for so long.
So in late stage capitalism, a really smart kid… aces their SATs, moves across the country to attend an Ivy League school, moves after graduation into working at an investment bank, and spends most of their money on New York rent, ethnic cuisine, and electronic products manufactured in China. Is this an improvement over them staying home and just being an effective manager of the family banana stand chain? Who can say.
man now I want to see the neo-luddism argument
someone needs to make, like, a Museum of Arguments.
Is it not The Worst Mistake In History?
I don’t think that’s the one SE has in mind, though.
I think the one SE has in mind is that you cannot have all three of the set { High Artificial Intelligence, Humans, Capitalism } at once, so you must sacrifice one. Otherwise, Humanity gets washed away by the Robot Jobpocalypse.
For true-blooded Capitalists who view Capitalism as a system tied into morality itself, believing in property rights and free association and the like as being inherent elements of morality rather than purely contingent ones, it’s a fundamental challenge to one’s worldview. Kind of like a very large collective action problem, like climate change, which has a very high payoff for individuals defecting.
Of course, “Neo Luddism” implies sacrificing “Artificial Intelligence” rather than “Capitalism”, which would represent an enormous cost in terms of lost future wealth. You already know my response, which is to slowly sacrifice larger chunks of “Capitalism” over time.
Like how the birth of farm machines meant the excess former farmers were unemployed forever, huh?
A sector largely requiring large amounts of unskilled labor is replaced by a sector largely requiring large amounts of unskilled labor. In what ways might the current situation be different from that?
Horses’ power and speed were their primary economic interest. Once machines were able to do this better and cheaper, with horses limited to niche applications, what happened to the horses?
Humans’ intelligence is unique in the economy, but machines are now becoming more and more intelligent and adaptable. In one sector this might just displace workers, but what happens when it applies to all sectors simultaneously? Why would you hire a human worker, who cannot work below a certain minimum due to resource requirements to survive, rather than just use a machine that does the same thing for less money?
Is there any law of economics that requires that someone’s maximum feasible production be enough for them to survive? Remember to account for opportunity cost of the necessary resources in your answer, such as real estate being purchased by those with orders of magnitude higher productivity.
It seems there rather clearly isn’t such a law since economically non-viable people already exist.
This position of yours appears to stem from an ideological pre-commitment to Capitalism, and I say this as someone that argues against Communists. The ability of Capitalism to outperform Stalin on human suffering is conditional, and those conditions have held for a long time, but that is slowly changing.
I take some exception to the very term ‘unskilled labor’ as a general term, because agricultural work is not ‘unskilled’ and neither were the various manufacturing jobs that often replaced it. These are not skill sets that have crossover. So we start off with that error. I’ll say right now that I can already see you’re missing my point, but I’ll get to that.
The flaw here is comparing an animal that was used for an end (horses) with the animal that built the system (humans). That is even putting aside the idea that somehow machines will become intelligent and adaptable enough to displace workers in the first place, a reality that is likely not nearly as close as we think. Indeed, even if we could, the idea that we’d be able to replicate the human way of thinking is itself improbable. And the idea that it would happen and suddenly penetrate every industry simultaneously is itself flawed.
Further, I think you’re also missing the point with your claim that this is based on an ideological pre-commitment to Capitalism, to which I’d argue: as opposed to what? The supposed flaw here is capitalism, which is private ownership of ‘capital’ (really, property, as the designation of capital is frankly arbitrary) and the exchange thereof with other private individuals. At its core, it is an expression of individual rights. The only other option would be a disregard for individual rights, and implicitly authoritarianism of some form or another. I’m an individualist, I’m anti-authoritarian; therefore, I am a capitalist, not the other way around.
I also think you’re arguing something I don’t believe and never have. I would argue that folks may very well hire humans out of their desire to do so, as humans are not and never have been homo economicus, but that is largely an aside to the real point.
My real point is actually that whatever the next revolution is, the ability to predict its effects is likely beyond any living human in any real capacity, in the same way that predictions for the Industrial Revolution were themselves largely impossible until we passed into it and could adapt to the particulars of it. I largely think doomsaying can be set aside because it seems to disregard that humans will shape the system to suit humans.
And what, exactly, are the alternatives? No one seems to have proposed anything to somehow forestall this supposed doom of robots taking our jerbs. The supposed ‘fixes’ are little more than rehashes of old policies that didn’t work then and won’t work now, and/or are ethically compromised.
As an aside, I’d argue the vast majority of folks who fall under ‘economically unviable’ do so for reasons beyond actual economic concerns, and more to do with government intervention, but that’s largely my anarchism, I suspect.
I take some exception to the very term ‘unskilled labor’ as a general term, because agricultural work is not ‘unskilled’ and neither were the various manufacturing jobs that often replaced it. These are not skill sets that have crossover. So we start off with that error. I’ll say right now that I can already see you’re missing my point, but I’ll get to that.
They’re both skillsets which don’t require as much training or IQ. Putting someone to work on an assembly line is not something which requires a four year degree’s worth of education (though I’m sure you’ll argue that the training isn’t really required, regardless of whether it is) and an IQ over 110.
The flaw here is comparing an animal that was used for an end (horses) with the animal that built the system (humans).
In other words, the human beings will change the system away from purist Capitalism before it destroys them and replaces them with a more economically efficient form of matter. Capitalism does use people for ends. Employment is an unwanted side effect of production that so-called “job creators” do not actually want.
That is even putting aside the idea that somehow machines will become intelligent and adaptable enough to displace workers in the first place, a reality that is likely not nearly as close as we think.
It doesn’t need to displace all workers, just those with an IQ below some threshold, in order to cause problems with mass unemployment. As for how close it is, well, factories in China are performing layoffs in favor of automation, warehouses are seeing 5-6x reductions in staff, it’s hitting lawyers with tools for document search, and doctors, and so on.
You have to remember that even if jobs still exist, the number of applicants kicked out of other sectors can drive wages down to unsustainable levels, because the amount of most categories of services actually needed by the economy is limited. (E.g., if a typical plumber can fix X pipes per hour, and Y pipes are needed per person with little additional gain from Y+1 pipes, then the number of plumbers it’s beneficial to have is limited.)
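The plumber example above can be made concrete with a toy calculation. All the numbers here are invented for illustration: if per-person demand for a service is roughly fixed, the number of workers the market can usefully absorb is capped, no matter how many displaced applicants show up.

```python
# Toy demand cap (all figures invented): how many plumbers can a market absorb?

def plumbers_needed(population, pipes_per_person_per_year,
                    pipes_fixed_per_plumber_per_year):
    """Workers usefully employed = total demand / output per worker."""
    demand = population * pipes_per_person_per_year
    return demand / pipes_fixed_per_plumber_per_year

needed = plumbers_needed(population=1_000_000,
                         pipes_per_person_per_year=0.2,
                         pipes_fixed_per_plumber_per_year=2_000)
print(needed)  # 100.0 -- the 101st applicant mostly just bids down wages
```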
Indeed, even if we could, the idea that we’d be able to replicate the human way of thinking is itself improbable.
“A computer will never defeat human masters at Go. Surely that can’t happen, it’s far too intuitive of a game.”
And, computers don’t actually have to think like humans to displace human workers. They often come at things in ways we would consider sideways.
And the idea that it would happen and suddenly penetrate every industry simultaneously is itself flawed.
By and large, computers have penetrated every industry over the last several decades. Suggesting robots won’t penetrate almost every industry at once is almost proposing that capitalists will simply leave money on the table and that capitalism is not efficient.
Further, I think you’re also missing the point with your claim that this is based on an ideological pre-commitment to Capitalism, to which I’d argue: as opposed to what? The supposed flaw here is capitalism, which is private ownership of ‘capital’ (really, property, as the designation of capital is frankly arbitrary) and the exchange thereof with other private individuals. At its core, it is an expression of individual rights. The only other option would be a disregard for individual rights, and implicitly authoritarianism of some form or another. I’m an individualist, I’m anti-authoritarian; therefore, I am a capitalist, not the other way around.
If participation in the market is necessary for survival, then participation in the market is not truly voluntary. It doesn’t matter that a specific agent isn’t holding the gun to mandate it - it is nonetheless mandatory. Capitalism is just another form of hierarchy, and ideal Capitalism does not and cannot exist. Of course, individual rights are purely an intermediate node, too, and always were.
Put simply, Capitalism is an amoral (not moral or immoral) resource production and distribution algorithm. Its moral value derives purely from its consequences. Treating it any other way is bound to cause disappointment.
I also think you’re arguing something I don’t believe and never have. I would argue that folks may very well hire humans out of their desire to do so, as humans are not and never have been homo economicus, but that is largely an aside to the real point.
The relative popularity of check-out kiosks at grocery stores, and of other low-human-contact services such as internet retailers trouncing brick-and-mortar stores, suggests that this desire has only niche appeal… sort of like horses.
My real point is actually that whatever the next revolution is, the ability to predict its effects is likely beyond any living human in any real capacity, in the same way that predictions for the Industrial Revolution were themselves largely impossible until we passed into it and could adapt to the particulars of it. I largely think doomsaying can be set aside because it seems to disregard that humans will shape the system to suit humans.
…by passing laws to make it not purist Capitalism anymore.
And what, exactly, are the alternatives? No one seems to have proposed anything to somehow forestall this supposed doom of robots taking our jerbs. The supposed ‘fixes’ are little more than rehashes of old policies that didn’t work then and won’t work now, and/or are ethically compromised.
It’s only ethically compromised if you’re foolish enough to think Capitalism is a moral system and that property rights are not subordinate to utility. (Yeah I know that’s dangerous ground to tread (even if it’s true), but as you’ll see below, my solution isn’t that radical, because I’m aware that it’s dangerous.) Furthermore, while it’s great at producing large volumes of goods, Capitalism with work-or-starve is already fundamentally ethically compromised, and therefore any complaints that “oh, it’s immoral to do something that isn’t pure Capitalism” are ungrounded.
Also quite frankly, unless you support giving the whole of the land of the United States of America back to the descendants of the natives, then you don’t really believe in transcendent moral property rights that are beyond the bounds of human invention and therefore systematic human alterations. Unlike other human beings themselves, who would continue to exist if we erased all our data and memories about them, allocated property rights as we know them would be almost totally gone if all the data about them were erased. They’re just a human invention - a useful one, but only a tool. (Yes, I know animals have territorial behaviors, but that isn’t property rights as we know it.)
As for solutions…
Across-the-board wage subsidies (edit: it’s a bit more complicated than that but you get the idea - not favoring specific industries) would not only avoid drawing the ire of economists, but allow society to lower the minimum wage dramatically (as many economic freedom types want - despite their ignoring the massive negotiating power disparity). Job choice would expand a great deal, putting a lot more bargaining power in the hands of low-level workers. The program can be rolled out incrementally and reversed if it does not work - unlike socialist revolution. It promotes membership in the community and could help fix impoverished regions such as inner cities, by reconnecting them to the normal societal status hierarchy instead of leaving them disconnected from it and inventing new status hierarchies that cause collateral damage. It would also help get people off of welfare, and recover a portion of the economic value that would normally be lost to welfare payments.
As an aside, I’d argue the vast majority of folks who fall under ‘economically unviable’ do so for reasons beyond actual economic concerns, and more to do with government intervention, but that’s largely my anarchism, I suspect.
I can’t say I agree there. It’s far too convenient for your worldview to simply ignore the effects of disability, mental illness, and age, and simply handwave it all away as the fault of the state.
So, how bout those self-driving cars?
Has no one considered the fact that the robot was a threat to the building’s security, that the robot recognized this, and that in proper Asimov style it proceeded to kill itself?
Anonymous asked:
There is a popular myth, spread by my enemies, that I am a robot and never sleep, existing only to spread political ideology at all hours of the day.
This is, of course, nothing more than propaganda. Like any normal human (and most mammals in general), I must undergo a process of prolonged stillness, unconsciousness, and vivid hallucinations, for repair and maintenance purposes, for multiple consecutive hours during each solar cycle.
Should a man living alone with a robot be able to adopt a child?
Submit your answer as a reblog of this post. For full credit, write at least 800 words in double-spaced format, then discard the double-spacing by posting it to Tumblr. Remember, plagiarism is against Tumblr University’s Academic Integrity Policy.
Local Blogger Accidentally Reblogs Supervillain, Frantically Deletes Post to Avoid Becoming Reactionary Nationalist & Starting Robot War with UN for Control of Moon
