I have no idea what the hell anybody is talking about
You say that, and yet for some reason you’re following this blog, which surely makes that situation worse?
A number of people recommended this post to me, and it is indeed good and worth reading. I say that only partly because it provides evidence that aligns with the preconceptions I already had :P
Specifically, here is what I wrote in this post:
I was thinking about this stuff after I was arguing about deep learning the other day and claimed that the success of CNNs on visual tasks was a special case rather than a generalizable AI triumph, because CNNs were based on the architecture of an unusually feed-forward and well-understood part of the brain – so we’d just copied an unusually copy-able part of nature and gotten natural behavior out of the result, an approach that won’t scale.
The gist of Sarah’s post is that in image recognition and speech recognition, deep learning has produced a “discontinuous” advance relative to existing improvement trends (i.e., roughly, the trends we get from using better hardware and more data but not better algorithms) – but in other domains this has not happened. This is what I would expect if deep learning’s real benefits come mostly from imitating the way the brain does sensory processing, something we understand relatively well compared to “how the brain does X” for other X.
In particular, it’s not clear that AlphaGo has benefitted from any “discontinuous improvement due to deep learning,” above and beyond what one would expect from the amount of hardware it uses (etc.). If it hasn’t, then a lot of people have been misled by AlphaGo’s successes, coming as they do at a time when deep learning successes in sensory tasks are also being celebrated.
Sarah says that deep learning AI for computer games seems to be learning how to perform well but not learning concepts in the way we do:
The learned agent [playing Pong] performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection. Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.
Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?
This is reminiscent of something I said here:
My broad, intuitive sense of these things is that human learning looks a lot like this gradient descent machine learning for relatively “low-level” or “sensorimotor” tasks, but not for abstract concepts. That is, when I’m playing a game like one of those Atari games, I will indeed improve very slowly over many many tries as I simply pick up the “motor skills” associated with the game, even if I understand the mechanics perfectly; in Breakout, say, I’d instantly see that I’m supposed to get my paddle under the ball when it comes down, but I would only gradually learn to make that happen.
The learning of higher-level “game mechanics,” however, is much more sudden: if there’s a mechanic that doesn’t require dexterity to exploit, I’ll instantly start exploiting it a whole lot the moment I notice it, even within a single round of a game. (I’m thinking about things like “realizing you can open treasure chests by pressing a certain button in front of them”; after opening my first chest, I don’t need to follow some gradual gradient-descent trajectory to immediately start seeking out and opening all other chests. Likewise, the abstract mechanics of Breakout are almost instantly clear to me, and my quick learning of the mechanical structure is merely obscured by the fact that I have to learn new motor skills to exploit it.)
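To make that contrast concrete, here’s a toy sketch (entirely my own illustration – the names and numbers are made up, and it isn’t meant to represent how any real agent is implemented): a gradient-style learner that nudges its estimate of “how good is opening a chest” a little each time, next to a rule-style learner that commits to the generalization after a single observation.

```python
# Toy illustration only (my own sketch, not code from any system discussed above):
# an incremental, gradient-style learner creeps toward the right value, while a
# one-shot "rule" learner generalizes after a single observation.

observations = [("chest", 10.0)] * 20   # twenty encounters with a chest, each worth +10

# Gradient-style learner: value estimate moves a small step per encounter.
value_estimate = 0.0
learning_rate = 0.1
trajectory = []
for _, reward in observations:
    value_estimate += learning_rate * (reward - value_estimate)
    trajectory.append(round(value_estimate, 2))
print("incremental learner:", trajectory[:3], "...", trajectory[-1])
# -> [1.0, 1.9, 2.71] ... 8.78 -- many encounters before chests look clearly worth opening

# Rule-style learner: one good outcome, one general rule about all chests.
rules = {}
obj, reward = observations[0]
if reward > 0:
    rules[obj] = "always open"   # generalizes immediately to every future chest
print("rule learner:", rules)    # {'chest': 'always open'} after a single encounter
```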
It is a bit frustrating to me that current AI research is not very transparent about how much “realizing you can open treasure chests”-type learning is going on. If we have vast hardware and data resources, and we only care about performance at the end of training, we can afford to train a slow learner that can’t make generalizations like that, but (say) eventually picks up every special case of the general rule. I’ve tried to look into the topic of AI research on concept formation, and there is a fair amount out there, but much of it is old (1990s or earlier) and it doesn’t seem to be the focus of intensive current research.
It’s possible to put a very pessimistic spin on the success of deep learning, given the historically abysmal performance of AI relative to expectations and hopes. The pessimistic story would go as follows. With CNNs, we really did find “the right way” to perform a task that human (and some animal) brains can perform. We did this by designing algorithms to imitate key features of the actual brain architecture, and we were able to do that because the relevant architecture is unusually easy to study and understand – in large part because it is relatively well described by a set of successive “stages” with relatively little feedback.
In the general case, however, feedback is a major difference between human engineering designs and biological system “design.” Biological systems tend to be full of feedback (not just in the architecture of the nervous system – also in e.g. biochemical pathways). Human engineers do make use of feedback, but generally it is much easier for humans to think about a process if it looks like a sequence of composed functions: “A inputs to B, which inputs to C and D, which both input to E, etc.” We find it very helpful to be able to think about what one “part” does in (near-)isolation, where in a very interconnected system this may not even be a well-defined notion.
Historically, human-engineered AI has rarely been able to match human/biological performance. With CNNs, we have a special case in which the design of the biological system is unusually close to something humans might engineer; hence we could reverse engineer it and get atypically good AI performance out of the result.
But (I think; citation needed!) the parts of the brain responsible for “higher” intelligence functions like concept formation are much more full of feedback and much harder to reverse engineer. And current AI is not any good at them. If there are ways to do these things without emulating biology, many decades of AI research have not found them; but (citation needed again) we are no closer to knowing how to emulate biology here than we were decades ago.
That might be for the best. In order to hold the economy together (for human workers) and ensure human safety, we need AI to develop slowly enough that its arc of development can be directed towards human goals.
The issue here is that the “prominent person” in question has no intrinsic value, thus strip-mining them for news leaves them with nothing left.
Donald Trump and Hillary Clinton have each said a host of problematic things over the course of their lives, yet strangely they haven’t been abandoned by everyone yet.
Trump was elected because of this and actively exploited it on purpose, a move most others cannot safely execute.
Intrinsic value isn’t the actual defense against it. It’s more about a sort of social or political power.
Sometimes, a prominent person P says something ambiguous and weird on TV. It can be pre-taped, but it has to be “live,” like an interview or a late-night talk show. The statement is possibly problematic when taken out of context, and is only a small point in support of their main thesis.
For example: “If you don’t know what the candidates stand for, maybe don’t vote” or “Women’s child rearing work is important and should be valued” or “Black men have big penises”.
The talk show host asks the next question. Someone tweets this sentence in isolation.
“P said racist/sexist/fascist thing”
“Other people react to thing said by P”
“What Twitter users think of P’s latest gaffe”
“Former friend condemns P”
“People distancing themselves from P”
Now our protagonist clarifies that they meant what they said, but they meant it in an innocuous, literal way.
“P doubles down on racist/sexist/fascist comments”
“P still not apologising”
“Right-wing weirdos agree with P”
Now P must clarify that he really didn’t mean it like that. He does not agree with the weirdos at all and regrets any offense he may have caused. He clarifies his original statement to eliminate any confusion.
“P offers non-apology, repeats offending statement”
“We decided not to give P a platform any more”
“Has racism/sexism/fascism re-entered the mainstream? A political scientist explains, also P is terrible”
At this point, the actual statement by P is buried three clicks deep in these news articles. P thinks the original offhand statement was blown out of proportion. He tries one more time.
“P: Concerns about racism/sexism/fascism blown out of proportion”
“P goes on offensive in racism/sexism/fascism row”
Q, a friend of P, tries to give a sympathetic account of the original statement.
“Q: P was misunderstood”
“Q defends P’s racist/sexist/fascist outburst”
“Q’s defense of P proves old boys networks still at work”
“P’s employer has still not fired racist/sexist/fascist P”
After what happens to Q, nobody wants to stick their neck out for P, and nobody wants to be seen talking to P. People who defend P mostly do so anonymously.
“People need to stop defending P”
“Stop saying racism/sexism/fascism is no big deal”
“Waffling about giving racist/sexist/fascist people a platform hurts marginalized people the most”
The media realise that there is nothing more to say, and smaller outlets/latecomers try to milk the issue one last time. Nobody wants to talk to P any more, and P is wary of any journalist who contacts him.
“The privilege of P-supporters”
“We’ve had it with pro-P trolls in our comment section”
“Why we don’t talk to P and why people like P do not deserve a right of reply”
P tries to find somebody who wants to talk to him, somebody sympathetic. He does not want to talk to anybody who previously painted him as racist/sexist/fascist.
“P sets record straight”
“P shows true colors, talks to far-right ‘newspaper’”
Repeat until so many people get fed up with racism accusations / fear unfounded racism accusations that a living meme gets elected President by showing he doesn’t care about racism accusations and plows through them like fresh fallen snow.
voxette-vk replied to your post
Surely the generalization to other markets is “being good at satisfying demand”?
Ohhh, duh. I am dumb :P (Thanks to @mbwheats for also pointing this out)
I have to be somewhere soon so I shouldn’t write too much, but yes – this is a real and important tradeoff. @furioustimemachinebarbarian said something good about this in this reblog, in that they framed it explicitly as a tradeoff.
If you want the capitalist mode of production to work, people need to be able to reap returns from their activities that they can reinvest in capital. But capital investment is just another element of the bundle of goods someone buys, so my argument as stated ought to apply to it as much as to anything else. So my argument, as stated, was too broad.
I hope it was clear that my argument, as stated, was trying to establish the existence of a particular mechanism rather than provide a proposal. I don’t actually want everyone’s wealth to be literally the same at all times (trying to cause this would break all sorts of other things too, I’d expect). Rather, the point was that when the “initial endowments” are closer to equal, supply and demand (which I called “markets,” and which are a distinct desideratum from “capitalism”) work better.
Distinguishing capitalism from supply and demand is important. I should have done it more clearly in the OP, but I am also not sure @neoliberalism-nightly was doing it sufficiently in their ask – as far as I can tell, prediction markets are supposed to work because of supply and demand, even without capitalism (which does not yet have a non-negligible effect inside them).
I’m no longer in a hurry, so let me expand on this a bit.
To be completely precise, the target of my post was the tradition in economics of distinguishing “efficiency” from “distribution.” This distinction encourages economists to treat distribution (i.e. wealth [in]equality) as an outside concern that can be ignored when considering the market mechanism as a system.
The attitude is that the market “works” (in some “efficiency” sense) no matter what is going on with distribution, and insofar as we care about distribution, this is a separate value which we will in general have to trade off against “efficiency” / “the market working.” (Although it may be possible in principle to alter distribution without introducing market distortions, it is not generally possible in near-term political practice.)
This story is internally consistent if you define “efficiency” in the usual way, which is Pareto optimality. We know thanks to Arrow and Debreu (et al.) that under some idealized assumptions, supply and demand will get us to a Pareto optimal outcome (the First Theorem of Welfare Economics), and this is frequently viewed (see e.g. Stiglitz here) as a successful formalization of the views popularly associated with Adam Smith. Even work that is critical of the invisible hand, such as Stiglitz’s, has tended to concede Pareto optimality as the correct formal desideratum, arguing only that markets do not achieve it in practice as much as the First Theorem would lead one to think.
By contrast, my position is that Pareto optimality does not capture the good things we wanted out of the invisible hand in the first place. I first started thinking about this stuff after reading Brad deLong’s very entertaining post “A Non-Socratic Dialogue on Social Welfare Functions,” which I recommend reading. (I am largely just repeating deLong here, and less stylishly at that.)
As in the OP, I think what we want out of the invisible hand is (at least) a market that “gives the people what they want” in some intuitively recognizable sense.
A Pareto optimal outcome is defined to be an outcome in which no one can be made better off without making anyone else worse off. The phrase “can be made” should be interpreted as “by physically achievable means,” like transferring goods from one person to another. That sounds obvious, but has significant implications.
The richer you are, the less marginal utility you will get (on average) from goods you acquire. This is implicit in standard economic assumptions, to the extent that you cannot deny it without being very heterodox at best, and talking nonsense at worst. (You can get it from the usual assumption of convex preferences, plus the idea that individuals have utility functions, since convex preferences correspond to [quasi-]concave utility functions. Or, if you like, you can get concave utility functions from the assumption of risk aversion, without which finance makes no sense whatsoever.)
In practice, if people do deny it, they tend to do it by rejecting the utility concept as a whole (as the Austrians do). But without some way to do interpersonal utility comparisons, I’m not sure how you can even state the invisible hand idea. (How can individual self-interest serve the common good if there is no valid concept of “the common good”?)
OK, enough sidenotes. As I said, the richer you are, the less marginal utility you will get (on average) from goods you acquire. Thus, when there are large wealth inequalities, Pareto optimality is compatible with large sub-optimalities in sum-aggregated utility: an outcome can be Pareto optimal even when transfers (from rich to poor) would increase summed utility a lot. The bigger and more widespread the inequalities, the more sub-optimality we can have (in this sense) even if everything is still Pareto optimal.
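A minimal worked example, just to put numbers on this (my own toy case, not deLong’s: two people with log utility splitting a fixed 100 units of wealth). The lopsided split is Pareto optimal, since any transfer hurts the rich person, yet its summed utility is far below the even split’s.

```python
from math import log

def total_utility(allocation):
    # Log utility: a standard concave choice, i.e. diminishing marginal utility.
    return sum(log(w) for w in allocation)

lopsided = (99, 1)   # Pareto optimal: any transfer makes the rich person worse off
even = (50, 50)      # also Pareto optimal, but far higher summed utility

print(round(total_utility(lopsided), 2))  # 4.6
print(round(total_utility(even), 2))      # 7.82
```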
There are much more rhetorically forceful ways to put this. deLong puts it this way: if we say that the market’s desirable property is its tendency to produce Pareto optima, we are saying it optimizes a certain social welfare function, and if this function is a weighted sum of individual utilities, then it gives rich people bigger weights than poor people. (He derives this formally here.)
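A sketch of the formal point in my own notation (this is the standard textbook result, not deLong’s exact derivation): under the usual assumptions, a competitive-equilibrium allocation maximizes a weighted sum of utilities,

\[ W(x) = \sum_i \lambda_i \, u_i(x_i), \qquad \lambda_i = \frac{1}{\mu_i}, \]

where \(\mu_i\) is person \(i\)’s marginal utility of wealth at the equilibrium. With concave \(u_i\), \(\mu_i\) shrinks as wealth grows, so the weight \(\lambda_i\) grows: the social welfare function the market “optimizes” counts the preferences of the rich for more.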
In other words, by saying “we will consider efficiency first and worry about distribution later,” and defining efficiency as Pareto optimality, we are implicitly saying that what we really ask the market to do is “give the people what they want, weighted by wealth.” This is pretty clearly not what we originally wanted out of the invisible hand, and not something that one would ever come up with as a natural desideratum. If the First Theorem vindicates the invisible hand, it is only by moving the goalposts.
Another way of putting it is that, by over-valuing the utility of the wealthy, the Pareto optimality desideratum treats the wealthy as utility monsters.
That last line.
“We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.”

(from here)
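For a rough sense of scale, here’s a back-of-the-envelope sketch (mine, not the proposal’s figures) of what the new threshold demands from a plain two-sided z-test:

```python
# Back-of-the-envelope illustration only: the z-scores and approximate sample-size
# change implied by moving the significance threshold from 0.05 to 0.005.
from statistics import NormalDist

z = NormalDist().inv_cdf
z_old = z(1 - 0.05 / 2)    # ~1.96: |z| needed for p < 0.05
z_new = z(1 - 0.005 / 2)   # ~2.81: |z| needed for p < 0.005
print(f"required |z|: {z_old:.2f} -> {z_new:.2f}")

# For a fixed true effect and 80% power, required sample size scales roughly
# with (z_threshold + z_power)^2.
z_power = z(0.80)          # ~0.84
multiplier = (z_new + z_power) ** 2 / (z_old + z_power) ** 2
print(f"sample size needed: about {multiplier:.1f}x as large")   # ~1.7x
```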
I’d rather the Invisible Fist than the Actual Boot, tbh.
It’s a joke name dude. My Communism tag is “The Red Hammer”, which should not be interpreted in the benign sense of the word. More in the sense of “the nail that sticks up gets hammered” and, well, red as in blood.
I should make a joke tag so folks know when I’m not being overly serious, I think. :P
“#The Actual Boot” would be a great tag for tankie posts, tbh.
Even the axiom of self-ownership isn’t so simple to pin down, and biological experimentation is only going to make it worse.
I literally could just amend “to you” to every post you make on the subject, at this point. :P
It’s pretty tough to define self-ownership given the existence of chimeras and conjoined twins, let alone psychological issues like split personalities and all the future weirdness that biotech is going to unleash.
Given that people have been arguing over the definition of “self” for thousands of years with no sign of abating, I don’t think it’s unreasonable to say that there are still unresolved issues here.
So this post has helped me finally crystallize a recurring train of thought I have when confronted with other people’s opinions. See, my first reaction to the post above is absolute terror.
Because my brain tends to very quickly and wildly extrapolate any given view to its most extreme consequences. And boy howdy can you extrapolate a lot of things from a negation of self-ownership. Existing terrible things, like the war on drugs (of course the actual historical reasons for the war on drugs are horrible and racist, but in theory you can rederive it from one’s health being a public matter), or reproductive coercion; but also lots of speculative terrible things. So the thoughts short-circuit from ‘there are weird things going on in the margins’ to ‘Argumate wants to use the fact that chimeras exist to be able to kill me and harvest my organs for the greater good, and I will not have any moral foundation to object to that’.
Of course this is a bizarre way of thinking, because the majority of people argue for issues because they care about those specific issues, not about some wild consequences that are conceptually related, and they aren’t trying to use foot-in-the-door tactics (and those who do try to get a foot in your door can be identified pretty easily). And in the concrete example of this here conversation it’s not even a policy discussion, but rather a theoretical musing. So all that anxiety is completely unfounded. Alas.
And I think that most concepts are useful even if they’re fuzzy at the margins. Non-relativistic models were wrong, but we’ve still managed to come up with planes.
Personally I think that some kind of contractualism is a better approach for getting the outcome that you want.
I don’t want to have my organs harvested without my consent, nor does anyone I know, and even though the veil of ignorance is not mandatory, in practice, in a world of seven billion people, it’s very difficult to make rules that say you can’t be a dick to anyone except Barry specifically.
Negotiating the individual issues is always going to be necessary; simple axioms either imply too much or too little, and are best used as slogans and rallying points to guide the political process.
While I believe in self-ownership, that really means I support most of the positions associated with the concept of self-ownership, not that I think they can necessarily be derived from this single axiom nor that this axiom is necessarily the foundation for morality and politics.
The issue I have with this, my esteemed strigiform and self-employed pharmacist, is the idea that someone can like the arguments and concepts that surround a thing (i.e., self-ownership in the recent case, but broadly libertarian ideals in general seem to get caught up in this a lot) but then dislike or even reject the principles behind those arguments. In short, there are a lot of folks who seem to like the results of libertarian arguments but don’t like where they come from.
Which is sort of a running issue, because in many cases the principles the arguments are founded upon can lead to some unpalatable ends, at least to some people. Folks seem to say they don’t want to throw out the baby with the bathwater – ditch the principles with the unfortunate implications for their wants and desires, and keep the results – but the problem is you can’t really do that.
Like, the arguments that surround self-ownership, and the protections derived from it, cannot be defended on the merits of how much you, or anyone else, likes them. The issue is that, far from it being difficult to make rules that say you can’t be a dick to any of the seven-plus billion people except Barry, it’s actually exceedingly easy to do so unless your moral and ethical foundations are in order, and are universal.
Because that’s the only way to avoid explicitly allowing arbitrary and subjective choices into the system of morals and ethics.
Like, yeah, you have to use negotiation and navigate the complex network of human interactions and any society is going to be heavy on contract, but you can’t build your ethical framework from the top down. It’s got to have a base to build up from. Folks like the results of the principles but hate the principles, and that is just a recipe for disaster.
In practical terms, people liking something enough to take up arms to force others to comply with it - like property in general for instance - is how a political theory is physically realized. So if everyone hates the principles, then it doesn’t matter how much you think they’re true, unless you have all the guns. And from what I’ve seen of actual human behavior and actual markets and not hypothetical spherical cow markets, AnCap/pure libertarianism’s consequences will ensure that it is never the most viral meme. Which, IMO, is good because it lacks the ability to recognize that entire categories of human suffering are bad.
if it’s successful how you say it gonna die
Computers are absolutely amazing devices, but it doesn’t take much to render them inoperable. Entropy is the norm. Order is productive but fragile.
I didn’t explicitly and solely mean contemporary material culture and technology; it can be extended to philosophy, to art, to every aspect of d e e p c u l t u r e
Computers metaphorically. AR’s fears are probably overestimated, but “if it’s so strong, how can it die?” ignores how fragile complex systems are. You can build in some antifragility, but there are limits. (Humans, for instance, can recover from a variety of injuries, but routinely die from cancer, which requires only a few molecular alterations/mutations in one cell to start.)
SAN FRANCISCO—In an effort to reduce the number of unprovoked hostile communications on the social media platform, Twitter announced Monday that it had added a red X-mark feature verifying users who are in fact perfectly okay to harass. “This new verification system offers users a simple, efficient way to determine which accounts belong to total pieces of shit whom you should have no qualms about tormenting to your heart’s desire,” said spokesperson Elizabeth James, adding that the small red symbol signifies that Twitter has officially confirmed the identity of a loathsome person who deserves the worst abuse imaginable and who will deliberately have their Mute, Block, and Report options disabled. “When a user sees this symbol, they know they’re dealing with a real asshole who has richly earned whatever mistreatment they receive, including profanity, body-shaming, leaking of personal information, and relentless goading to commit suicide. It’s really just a helpful way of saying to our users, ‘This fuck has it coming, so do your worst with a clear conscience and without fear of having your account suspended.’” At press time, Twitter reassuringly clarified that the red X was just a suggestion and that all users could still be bullied with as little recourse as they are now.