I would appreciate it if some AI enthusiast would get mad at me
right now I’m objecting to a diffuse and incoherent set of fears about the future, but someone out there’s gotta have a theory of what mass technological unemployment actually looks like and a modestly granular account of the mechanisms by which artificial intelligence takes us there
I don’t have that model or that account, but y’all really seem to believe that the machines are right around the corner, so it’d be nice if someone laid it out somewhere
I wrote a lengthy harangue to @peopleneedaplacetogo about a week back, which appears here lightly edited:
If I were a smarter or better-informed person, would I feel differently about the intelligence explosion thesis? What do its better-informed advocates know that I don’t? What intuitions do they have that I lack?
I guess you’d have to know what I believe before you could tell me why I’m wrong, but you’re much closer to the technology than I am, so: what are the sources of the rationalist belief in artificial intelligence more generally?
Because, from the outside, with the little understanding of the technology that I have, it seems like intelligence is harder and progress more limited than the boosters are telling me.
From the outside, throwing more processing power at the problem doesn’t seem to address the lack of sound concepts underpinning general machine intelligence, as opposed to specific, task-bound intelligence.
The ‘machine learning’ we have, where we train algorithms on large data sets to sort the data and identify patterns, is impressive, sure, but the strengths and limitations of ML suggest that we need more, and more innovative, conceptualizations and operationalizations of the problems we want the machines to address before we can apply machine power to any effect.
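For what it’s worth, the kind of pattern-finding I’m describing can be sketched in a few lines: you hand an algorithm labeled examples, it summarizes them, and it sorts new data by resemblance. This is a toy nearest-centroid classifier on made-up numbers (my illustration, not anything from the actual discussion), and the point is exactly the limitation I mean: someone first had to conceptualize the problem as features and labels before any machine power could be applied.

```python
# Toy nearest-centroid classifier: "learning" as averaging labeled
# examples, then sorting new data by closeness. All data is invented.

def fit_centroids(examples):
    """Average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Training data: (features, label) pairs the algorithm learns from.
training = [([1.0, 1.0], "small"), ([1.2, 0.8], "small"),
            ([9.0, 9.5], "large"), ([8.8, 9.1], "large")]
model = fit_centroids(training)
print(predict(model, [1.1, 0.9]))  # → small
print(predict(model, [9.2, 9.0]))  # → large
```

Everything hard happened before the first line ran: deciding what counts as a feature, a label, a distance. That framing step is the part nobody has automated.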
I apologize for my technological illiteracy; I’m sure I’m missing something crucial. I guess I just don’t have a good sense of what the conceptual paradigm for general intelligence would look like – “ML applied to the conceptualization of problems in the world”?
To which he replied:
I don’t have any specific knowledge of the topic either. I think a big intuition is just “don’t make strong predictions about what AI can or can’t do”.
What am I missing?
Neural networks and dedicated hardware for them. Have you seen their image generation capabilities lately? It’s like we’re creating slices of animal brains.
Now, it’s easy to object that this does not create an intelligence explosion, and fortunately it probably won’t.
The issue is that if you can create a neural net for walking, a neural net for object recognition, and a neural net for an industrial task, well… you put them together and you get something like an industrial task animal.
Most jobs don’t use anywhere near the whole range of human capability. What’s necessary for us as creatures (and what shaped us) isn’t necessarily what’s most effective economically. That is, we have more capabilities than artificial task animals do, but are those capabilities profitable enough in such an economy to feed us? Is a model where former truck drivers all become Patreon-sponsored bloggers at all viable? And this will hit every sector basically at once.
There are limits to consumption based on available time/attention during a day. But food requirements aren’t really negotiable.
But I’m not an economist.