argumate

the idea that Milo can’t be racist because he’s Jewish or can’t have regressive gender politics because he’s gay is just strange, I mean isn’t that the exact kind of identity politics weirdness that he and his acolytes decry?

or is it supposed to function as a tu quoque to his opponents?

I mean it just seems like the correct response to allegations of racism or sexism or whatever would be to own it and stick to his guns.

mitigatedchaos

It may be operating on the theory that if this brute categorization is weaponized and shoved in the Left’s face enough, they’ll get the idea that Venn Diagram Intersectionality is dumb and cuts the corners off of people to get them to fit in the overlapping circles of identity prescribed for them.

If that is the intent, it won’t actually work, though.  We’re talking about the same political movement that pushed identity politics so much that white nationalism is starting to come back from the dead.

politics
nostalgebraist

A number of people recommended this post to me, and it is indeed good and worth reading.  I say that only partly because it provides evidence that aligns with the preconceptions I already had :P

Specifically, here is what I wrote in this post:

I was thinking about this stuff after I was arguing about deep learning the other day and claimed that the success of CNNs on visual tasks was a special case rather than a generalizable AI triumph, because CNNs were based on the architecture of an unusually feed-forward and well-understood part of the brain – so we’d just copied an unusually copy-able part of nature and gotten natural behavior out of the result, an approach that won’t scale

The gist of Sarah’s post is that in image recognition and speech recognition, deep learning has produced a “discontinuous” advance relative to existing improvement trends (i.e., roughly, the trends we get from using better hardware and more data but not better algorithms) – but in other domains this has not happened.  This is what I would expect if deep learning’s real benefits come mostly from imitating the way the brain does sensory processing, something we understand relatively well compared to “how the brain does X” for other X.

In particular, it’s not clear that AlphaGo has benefitted from any “discontinuous improvement due to deep learning,” above and beyond what one would expect from the amount of hardware it uses (etc.)  If it hasn’t, then a lot of people have been misled by AlphaGo’s successes, coming as they do at a time when deep learning successes in sensory tasks are also being celebrated.

Sarah says that deep learning AI for computer games seems to be learning how to perform well but not learning concepts in the way we do:

The learned agent [playing Pong] performs much better than the hard-coded agent, but moves more jerkily and “randomly” and doesn’t know the law of reflection.  Similarly, the reports of AlphaGo producing “unusual” Go moves are consistent with an agent that can do pattern-recognition over a broader space than humans can, but which doesn’t find the “laws” or “regularities” that humans do.

Perhaps, contrary to the stereotype that contrasts “mechanical” with “outside-the-box” thinking, reinforcement learners can “think outside the box” but can’t find the box?

This is reminiscent of something I said here:

My broad, intuitive sense of these things is that human learning looks a lot like this gradient descent machine learning for relatively “low-level” or “sensorimotor” tasks, but not for abstract concepts.  That is, when I’m playing a game like one of those Atari games, I will indeed improve very slowly over many many tries as I simply pick up the “motor skills” associated with the game, even if I understand the mechanics perfectly; in Breakout, say, I’d instantly see that I’m supposed to get my paddle under the ball when it comes down, but I would only gradually learn to make that happen.

The learning of higher-level “game mechanics,” however, is much more sudden: if there’s a mechanic that doesn’t require dexterity to exploit, I’ll instantly start exploiting it a whole lot the moment I notice it, even within a single round of a game.  (I’m thinking about things like “realizing you can open treasure chests by pressing a certain button in front of them”; after opening my first chest, I don’t need to follow some gradual gradient-descent trajectory to immediately start seeking out and opening all other chests.  Likewise, the abstract mechanics of Breakout are almost instantly clear to me, and my quick learning of the mechanical structure is merely obscured by the fact that I have to learn new motor skills to exploit it.)

It is a bit frustrating to me that current AI research is not very transparent about how much “realizing you can open treasure chests”-type learning is going on.  If we have vast hardware and data resources, and we only care about performance at the end of training, we can afford to train a slow learner that can’t make generalizations like that, but (say) eventually picks up every special case of the general rule.  I’ve tried to look into the topic of AI research on concept formation, and there is a lot out there about it, but a lot of it is old (like, 1990s or older) and it doesn’t seem to be the focus of intensive current research.
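
The two learning styles contrasted above can be caricatured in a toy sketch (entirely illustrative; the environment, parameters, and the “rule” are made up for the example, not drawn from any real RL system): a gradient learner improves a continuous parameter by many small nudges, while a “concept learner” adopts a discrete rule the first time it observes the mechanic and applies it everywhere afterward.

```python
# Toy contrast (illustrative only): incremental "motor skill" learning via
# gradient descent vs. one-shot "concept" learning via a discrete rule.

def gradient_learner(trials=200, lr=0.05):
    """Nudges a continuous parameter toward a target over many trials."""
    param, target = 0.0, 1.0
    errors = []
    for _ in range(trials):
        error = param - target
        errors.append(abs(error))
        param -= lr * 2 * error      # gradient step on squared error
    return errors                    # shrinks gradually, never all at once

def concept_learner(observations):
    """Adopts the rule after a single observation, then applies it everywhere."""
    knows_rule = False
    actions = []
    for obs in observations:
        if obs == "chest_opened_with_button":
            knows_rule = True        # one observation suffices
        actions.append("press_button" if knows_rule else "wander")
    return actions

errors = gradient_learner()
actions = concept_learner(
    ["explore", "chest_opened_with_button", "see_chest", "see_chest"]
)
print(errors[0], "->", errors[-1])   # e.g. 1.0 -> a tiny residual error
print(actions)                       # ['wander', 'press_button', 'press_button', 'press_button']
```

The point of the sketch is only the shape of the two curves: one error trajectory that decays smoothly over hundreds of trials, versus one behavior that flips in a single step.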


It’s possible to put a very pessimistic spin on the success of deep learning, given the historically abysmal performance of AI relative to expectations and hopes.  The pessimistic story would go as follows.  With CNNs, we really did find “the right way” to perform a task that human (and some animal) brains can perform.  We did this by designing algorithms to imitate key features of the actual brain architecture, and we were able to do that because the relevant architecture is unusually easy to study and understand – in large part because it is relatively well described by a set of successive “stages” with relatively little feedback.

In the general case, however, feedback is a major difference between human engineering designs and biological system “design.”  Biological systems tend to be full of feedback (not just in the architecture of the nervous system – also in e.g. biochemical pathways).  Human engineers do make use of feedback, but generally it is much easier for humans to think about a process if it looks like a sequence of composed functions: “A inputs to B, which inputs to C and D, which both input to E, etc.”  We find it very helpful to be able to think about what one “part” does in (near-)isolation, where in a very interconnected system this may not even be a well-defined notion.

Historically, human-engineered AI has rarely been able to match human/biological performance.  With CNNs, we have a special case in which the design of the biological system is unusually close to something humans might engineer; hence we could reverse engineer it and get atypically good AI performance out of the result.

But (I think; citation needed!) the parts of the brain responsible for “higher” intelligence functions like concept formation are much more full of feedback and much harder to reverse engineer.  And current AI is not any good at them.  If there are ways to do these things without emulating biology, many decades of AI research have not found them; but (citation needed again) we are no closer to knowing how to emulate biology here than we were decades ago.

mitigatedchaos

That might be for the best.  In order to hold the economy together (for human workers) and ensure human safety, we need AI to develop slowly enough that its arc of development can be directed towards human goals.

ai
argumate

oh god, I just realised the extent of the parallels between slut and cuck, and what that implies for our future.

reclaiming the slurs.

“I’m a Cuck” t-shirts.

Earnest YouTube videos of guys giving slam poetry expositions on why it’s actually good to be a cuck, and only neanderthals claim otherwise.

CuckWalk.

is this something we can prevent from happening, like some kind of back to the future deal where we change the timeline??

mitigatedchaos

How does that silly NRx saying go, “Cthulhu always swims to the left”?

I intend to roll my eyes at it and continue to not be associated with Feminism.

gender politics
ranma-official

susies-studyblr

This is so gross

ranma-official

I don’t see a problem with children earning money - the problem kicks in with opinions like this. Expectations shift to assume that if you don’t start earning money as early as possible, you are lazy. (See: the concept of “putting your foot in early”).

Forcing workers to compete with each other rather than oppose the bosses.

fed-detector

If you’ve made $10,000 recycling cans you either have people helping you and bringing you their cans or you’re working an insane amount of hours doing it. If he’d earned a couple hundred dollars doing this that would be different. But this is serious time consuming labor this kid must be doing. It’s not kid stuff. And since education ought to be free he shouldn’t have to save up money for it anyway.
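
For a sense of scale, a back-of-the-envelope check (every figure here is hypothetical; real container deposits vary by state, roughly $0.05 to $0.10 where they exist at all):

```python
# Rough sanity check on "$10,000 from recycling cans" (all figures hypothetical).

deposit_per_can = 0.05          # assume a nickel-deposit state
goal_dollars = 10_000

cans_needed = goal_dollars / deposit_per_can
print(int(cans_needed))         # 200000 cans

cans_per_hour = 200             # optimistic collection rate, assumed
hours_needed = cans_needed / cans_per_hour
print(hours_needed)             # 1000.0 hours
```

Even under these generous assumptions, that is a thousand hours of labor, which is the point being made above.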

ranma-official

The worst I’ve ever seen this attitude go is when people argued that it’s easy to accumulate money quickly, citing the Harvard experiment with students who were given $5 and two hours. As you may remember, the winners forgo the initial sum entirely and instead 1) make and sell reservations for restaurants at peak hour or 2) sell their five-minute presentation time as ad space to a company, making $600.

This makes some revealing assumptions about the average person who has $5.

Source: weavemama
discoursedrome

oktavia-von-gwwcendorff

If fossil fuels are so great, surely somebody should be able to make money off them without externalizing ~90% of the costs to nonconsenting others and still getting big subsidies from the state.

If one compares market solar to collectivist coal, of course market solar ends up looking worse than it actually is, because market solar isn’t taking everyone else’s money at gunpoint (or at smokestackpoint via hospital bills, disabilities etc.).

mugasofer

In what sense are negative externalities “collectivist” or “taking everybody’s money at gunpoint”? Because they wouldn’t exist in ancap utopia, because in ancap utopia all problems would be fixed?

Calling things with unaddressed negative externalities “collectivist” sounds like some kind of psyop to trick libertarian capitalists into accidentally becoming socialists. I mean, I’m happy to see capitalists acknowledging the seriousness of externalities, but trying to roll them into a capitalist economic model takes you to weird places.

Externalities tend by their nature to be subtle and off-book: they’re very hard to quantify or even identify, and companies and NGOs expend considerable resources on further obfuscating them. So you might go 20 years under a policy before you have even a crude measure of its externalities, and even then, getting that information is so costly that the crude measure will be heavily influenced by the interests of whatever group first chooses to bear that cost. And then, what? How do you actually price externalities from air pollution and climate change into carbon? As far as I can tell, you can’t except via a carbon tax (which will almost surely not price it “correctly” since it’s imposed by political fiat). Which might not sound like a dealbreaker for a capitalist, but the problem with handling broad externalities this way is that there are so many of them.
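
As a minimal sketch of what “pricing in” an externality via a tax even means (a textbook Pigouvian-tax toy, not a claim about real carbon-tax design; every number below is hypothetical), the political-fiat problem shows up as the width of the damage-estimate band:

```python
# Toy Pigouvian tax (all numbers hypothetical): a corrective tax adds some
# fraction of the estimated external cost back onto the market price.

def taxed_price(market_price, estimated_external_cost, internalized_fraction=1.0):
    """Price after a tax internalizing part of the externality."""
    return market_price + internalized_fraction * estimated_external_cost

coal_price = 50.0                        # $/ton, hypothetical
damage_low, damage_high = 20.0, 120.0    # wide, contested uncertainty band

print(taxed_price(coal_price, damage_low))     # 70.0
print(taxed_price(coal_price, damage_high))    # 170.0
# The "correct" tax sits somewhere in a 6x-wide band; political fiat picks the point.
```

The mechanism is trivial; the dispute described above is entirely about where in that band the estimate lands, and who gets to decide.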

Like, okay, one thing I like to go on about is that small neighbourhood stores have major positive externalities on their neighbourhoods and broader communities, and the move toward big-box stores is one of countless ways in which companies improved margins by declining to provide those externalities. Thus big-box stores have an “unfair” advantage, which eventually leads to a world where no one can afford to provide those classic benefits and all the social structures dependent on them collapse. Moreover, economic monoculture (in this case, The Only Store Is the Wal-Mart) is itself a negative externality, since it reduces the ability to weather shocks. If you really want to get serious about accounting for subtle and diffuse externalities in a capitalist model, you end up with massive interventionism and forced wealth redistribution through taxation and subsidy pretty fast, at which point you’re not really letting “the market” do things in the first place.

It seems like at some point you just have to give up and accept that the market will price things not just in accordance with their real value but also according to the ease of pricing them and of slotting those prices into a transactional model. There are countless cases where things that are very valuable are handled poorly by the market simply because they have issues with that second criterion. As long as costs and benefits vary in legibility, profit-seeking will optimize for illegible costs and legible benefits at the expense of other varieties, irrespective of their true importance.

mitigatedchaos

I mean, you say that about a Capitalist model, but any model is going to have difficulties effectively finding, evaluating, and pricing externalities. …even models that insist on “not using prices”.

Source: xhxhxhx capitalism communism
argumate
beldaran

Argh, I’m super bored and ornery today.

Hey internet, I just mixed Indian curry with kimchi and put it over Japanese rice. Quick, someone come yell at me about cultural appropriation.

argumate

What about Japanese curry! They picked it up from the British, who picked it up from India, so it’s been through multiple layers of cultural reinterpretation.

The Imperial Navy made it a standard dish, just like the Royal Navy, and apparently the maritime defence force still eat curry for Friday meals even today!

But we must go deeper… while the Japanese were occupying Korea, they introduced curry there. Now you can get Korean Japanese British Indian curry, a hybrid dish whose existence depends on the most brutal imperialism of the 19th and 20th centuries!

Coming up next: why the US navy drinks coffee and the British navy drinks rum, a sordid tale of slavery and exploitation. Yay, history!

Source: beldaran
bambamramfan

What can you see

bambamramfan

Wrapping up this weekend of posts talking about the future, I’d like to ask a question.

A few weeks ago someone pointed out that the anti-SJW crowd is so cavalier towards Trump because they can’t really imagine our country ending up anywhere to the right of where we currently are. There might be some policy change in the tax rate, but fundamentally, campus activists will be arguing for corporations and the US government to stop oppressing numerous identities, and corps and the gov will condescendingly humor them.

Which sounded accurate (especially as rightists would argue I was wrong about things, but not actually deny that particular mindset), but it raised a broader question - can any of us imagine a significantly different future than now?

Think about the strangeness of today’s situation. Thirty, forty years ago, we were still debating about what the future will be: communist, fascist, capitalist, whatever. Today, nobody even debates these issues. We all silently accept global [liberal democratic capitalism] is here to stay. On the other hand, we are obsessed with cosmic catastrophes: the whole life on earth disintegrating, because of some virus, because of an asteroid hitting the earth, and so on. So the paradox is, that it’s much easier to imagine the end of all life on earth than a much more modest radical change in [liberal democratic capitalism]. - Zizek

So what does everyone reading this think things will look like ten, twenty, or thirty years from now? Yes we can joke a lot about potential disaster scenarios (apocalypse, Big Brother, fast takeoff, the Social Justice Internationale) but uh, seriously, what do you think that will look like? What would living in it be like?

Do you think a fascist takeover no-for-real is likely? Will there be an underground? What will happen to the internet? Will we go backwards on racial justice and if so in what ways?

On the other side, does anyone think the forces of progressivism can win? Not just keep their head above water, but actually establish enough equality to make racism and sexism less pressing issues? What the hell does that look like?

Or even if you’re a techno-utopian who thinks some of these life-changing developments (immortality, super AI, brain upload) will happen within 30 years, what will that uptake practically look like? Will everyone in the world get it on day 1? If not, how will it be distributed? How long will it take before more than the 10% richest people in the world benefit from it? 50%? Everyone? In the interim, what does a world with radically powerful technologies in the hands of only a few look like to you?

I want to be more imaginative, and have at least some idea of what a medium-term future is that isn’t just more “Democrats and Republicans fight without progress and young people whine about it on the internet.” But do we even have the capability to take it seriously?

mitigatedchaos

If you like, I could brainstorm some more exotic alternate futures.

In practical terms, however, I think the tech won’t be addressed until it is closer and looms in the public imagination. Same with lots of other issues.

bambamramfan

Sorry if I was unclear (to @wirehead-wannabe too.) I mean what do you really think is a likely possibility for the 10-30 year timeframe.

A lot of decisions to pursue the normal career path don’t really make sense if you think within 20 years the world will look like a crapsack. And if you think fascism is really coming, writing easily searchable criticism of it is also a bad idea (likewise if you really believed the Left was going to be sending enemies to the gulag). Investing in retirement vehicles or long term assets would be absurd.

But most people don’t, and they act as if they are preparing for a life of perpetual liberal capitalism. I guess I can only think of survivalists or MIRI not fitting that, but I’m sure there are plenty of other groups putting their money where their mouths are.

What are non-liberal futures you see that you think might really happen, enough to consider life choices around?

mitigatedchaos

Actually, as a combination of technological developments, lack of resource shortages so far, and the election of Trump, my estimates of the risks of global nuclear war and total collapse have gone down, even though my estimates of necessity of geoengineering have gone up.  I was wanting to increase my level of survivability, and I still do to a degree, but less so now.

Which is basically the opposite of the Left’s reaction to him.  But I’m a Nationalist (though I did not vote for Trump), and Trump’s election felt unreal - the Establishment was freaking out about him, even on the Right.  So that meant it actually is possible to break out of the Establishment and its goals, possibly even lower the amount of unnecessary war, maybe.

If he is successful, Trump may shift the Republicans into a sort of Populist party that cares less about wedge social issues and less about raw exploitation and exporting all of the nation’s intellectual property/capital for short-term gains now.  We’re seeing movement on the H1B issue, which was something big business desperately wanted, so while my estimate of environmental risks has gone down, my estimate of indestructible corporate oligarchy has also gone down (even as it went up for most leftists).

Don’t discount the possibility that Trump will be somewhat successful.  His immigration plan is going to tighten the labor market, and non-citizen immigrants don’t get to vote.  He also isn’t fundamentally committed to hard right capitalist policies economically.

The most probable non-liberal outcome for the United States is a military coup after some combination of factors.  Leftists lack the power to conduct a violent overthrow of the government, and the power of the US military is immense.  I don’t think it would be a civil war.  I also wouldn’t expect the coup forces to be hard right or to be sympathetic to hard racism - rather to just continue to let racial problems go unresolved.  Likewise, they wouldn’t be Communist, but probably some kind of Capitalist economic Nationalists with some eventual level of corruption.  The coup would lower economic output, but probably not wipe out all savings.

(Edit: I think the coup forces might target Muslims, but I think other groups such as Hindus and Buddhists would be left alone.  (People worry about Islam spreading and undermining all of society, but only worry about Hindus as competition for jobs, which is way less pressure.)  One group at risk is Chinese immigrants, depending on the actions of the PRC and whether it becomes far more dangerous than it currently is, instead of starting to succumb to the problems it has allowed to build up.  Gays probably wouldn’t be pushed on too hard since they’re already in the military, and might even be used as a justification to exclude Muslims.  There are even some transgender veterans, though they would be at higher risk.)

It’s also possible that California may leave the Union for real - the difference in values is increasing.  If they do, I think they’ll be let go, and maybe the rest of the West coast will follow them.  In this case, the US will start to be split into multiple countries and will shift right politically without the heavy blue weight of California.

politics