remedialaction

Anonymous asked:

I think it's sort of a mistake to try to come up with a "real" definition of private property. It's not a physical truth about the universe we can discover if we only try hard enough; it's an agreement we can make amongst ourselves. I mean, there are better and worse ways of defining it, but the goal should be "useful" (like, for social/legal purposes, such that it's fairly clear to everyone what IS considered theirs) rather than "philosophically airtight".

argumate answered:

Yes. Even if you do come up with a definitive proof of something you still have the problem of some geezer with a shotgun ignoring all of your logic.

mitigatedchaos

I mean, I think this is a bit of a drift from the core, and the fact that we’re dropping some elements speaks a bit to the weakness of the counter-arguments. I’m still really curious how the potentially dangerous AI has anything to do with the ideology; that one still sort of boggles me, but alright.

No, it is the core, or very close to it.  The core of your ideology is about this (or very close to it), and most of the chains of logic I consider absurd spread from there.  Arguments about the desirable amount of Capitalism are different from arguments about whether Capitalism, in a radical sense, is the fundamental morality.

I mean, this is sort of a weak counter to the idea that only one physical object can be located in any physical space at any time: the idea that ‘well MAYBE we’ll be able to have two minds in one place, so…’

Your position is absolute.  The development of technology which undermines it weakens it.  Based on extrapolations of current trends, there is nothing about this which seems fundamentally physically impossible, just honestly not worth spending the money to develop directly.  (I mean, I can’t see much of a market for that particular use of the technology.)  The capability to do something like that would probably be developed for some other use, like just thinking faster.

The other thing is that “only one object can occupy one space” doesn’t actually prove any moral binding.

There is an implicit admission here in the sense that this supposed entity and the supposed victim are both seen as independent beings. 

Irrelevant - this does not prove ownership/property exists.

I am curious: if the entity could hijack a victim entirely, and thus receive the full range of input from the senses and the like, would they cease to be the original being and become something new, or would we still identify the victim as existing?

My opinion, personally, is that a partial forced mind overwrite is some fraction of murder.  However, my grounds for opposing this are Consequentialist.

It logically follows that only you control you because you are, in fact, a distinct entity. You are. It is intrinsic to our existence. It is indeed a necessary component for us to exist. Indeed, one could say it is axiomatic.

Except with partial integration, this could cease to be the case.  It could even be voluntary, with thoughts coming and going from a group of people that willingly altered themselves in this manner, in a state where it isn’t clear whether they are, say, three agents, one meta-agent, or a mix between them.

That would form a sort of indistinct mind/agent, which, now that it comes to mind, is a philosophical issue I’ll have to examine in more detail when I have more energy.

Anyhow, the fact that drugs can alter your behavior means someone else can partially control you, so this isn’t really true.  If drugs couldn’t alter behavior, this would be more believable.

Why? I never said your consciousness was not linked to the biological; I merely said it was not the central nervous system. That actions can be (and obviously are) linked to the biological (because… I mean, of course they are?) does not change anything. This is entirely a red herring argument; it changes nothing.

Because you seem to have this idea of absolute control, which would be necessary for absolute morality, but it rather clearly doesn’t exist in the real world.

Additionally, it suggests the viability of technologies that break the “you” barrier.  It would be very, very strange if it were physically impossible (not just absurdly difficult) to inject thoughts.

Because it establishes first that ownership as a property exists, and the consequences thereof.

I think we need your definition of “ownership” here to work from.

Because ownership necessarily is linked to the attribution of consequences of actions. 

Only if your moral system is about judging the moral value of agents rather than the value of the actions themselves.  A Utilitarian can rank worlds based on differences that aren’t even the result of the actions of an agent.  Bad things don’t stop being bad if they aren’t attributed to an agent.

Also, neither of these produces outside property.

To go on the offensive for once, I have to turn this back around and invoke something you seemed to avoid in other places.

I may withhold answering some of these questions until you supply your definition of property, as I requested at the bottom of my post.  (Though I can’t remember if that was an edit, so maybe you started replying before it showed up.)

“With that in mind, let’s get your definition of “property”, seeing as “property” as it exists in the real world exists only insomuch as it is enforced, and is violated constantly.”

I question where, for you, morality comes from.

Essentially, the route starts from the fact that I know I exist and have positive and negative experiences, and I experience the positive experiences as desirable.  This is direct knowledge that applies regardless of whether this is the top level of reality or not.

There is a minimum level of complexity for any system mapping certain inputs to certain outputs.  Below that level of complexity, the mapping becomes imperfect.  Since I must be a person in order to exhibit my level of complexity and human behavior, this strongly suggests that other humans are either people, or puppets of an entity that is one person of great size pretending to be a lot of smaller people.  The odds of a simulation successfully mimicking this decline rapidly as the sapience in the simulation decreases.  In the limit, this applies even if the entire world were pre-computed, since that requires actual people to have existed and interacted on my exact path before.

So other people exist, like me, and experience value, like me.  Now, since there is no reason to believe my value is exceptionally special, it follows that either their value is also valuable, or my value is not valuable.  Since I experience my value as valuable, the former makes sense.  

And also, if you are consequentialist, then you have essentially no choice but to conclude that my position is the only tenable one, inasmuch as it (in the broad sense of self-ownership) is the only basis on which we can establish universal, inalienable human rights.

I don’t need universal, inalienable human rights.  Human rights to me are in the same class as property itself.  Useful, but not true.

You have cited some sort of innate value to humanity as a collective that can be weighed against the individual, which would (as I pointed out) inevitably mean you could weigh those values against each other. Yet to do so would mean that, all else being equal, it is always preferable to kill one person if it saves the lives of two others.

In theory, yes.  In practice, humans trying to kill one person to save two often get it wrong due to incomplete information, irrationality, or just being too gung ho about violence.  Often people claim to be in favor of ‘the greater good’, but are only really using it as a cover for de facto Egoism.  So, a very high level of skepticism is required as we get farther along the spectrum from “let’s make a few small changes to the tax code” to “we need to kill large numbers of people”.  (If you look at my policy positions, I tend to favor incremental, reversible changes, rolled out in ways that can be empirically tested, and object to things like Communist revolutions or anything that involves large groups of people getting excited about killing other people.)

Even though rights theory isn’t true, it’s useful as a method of correcting for these biases in human behavior.  It might even be better for some people to believe in it even though it isn’t true, since it isn’t exceptionally harmful.  In fact, there is a risk that Utilitarianism/Consequentialism is knowledge that should be reserved for only an elite few, even if it’s true.

The thing to remember about the Trolley Problem, though, is that with high probability (5/7), you are one of the ones tied to the track the trolley is already going down, rather than the lever operator (1/7), or the guy on the other track (1/7).

 This is without getting into granular attempts to codify what it is that makes any given person valuable. And this of course also runs into the flaw of subjectivity; value is not an objective measure, and so any system based around supposed ‘value’ is also subjective. 

The algorithm of a perfect Utilitarianism (which has not been developed yet - and there’s no reason to think that all the details of the true morality will necessarily be easy to discover; the details of math aren’t either) is independent of the agents themselves.  (It just outputs a null if there aren’t any agents at any future point in the timeline, since in that case, nothing matters.)
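A minimal sketch of that structure, purely illustrative: the world representation, welfare numbers, and function names here are assumptions made up for the example, not the actual (undeveloped) algorithm.

```python
# Illustrative sketch only: rank candidate worlds by total welfare, and
# output a null (None) when no agents exist at any future point in the
# timeline, since in that case nothing matters.

def evaluate_world(world):
    """world: a list of agents, each a dict with a 'welfare' number (assumed shape)."""
    if not world:          # no agents anywhere in the timeline
        return None        # nothing matters; output a null
    return sum(agent["welfare"] for agent in world)

def rank_worlds(worlds):
    """Order candidate worlds from best to worst by total welfare."""
    scored = [(evaluate_world(w), w) for w in worlds]
    scored = [(s, w) for s, w in scored if s is not None]  # drop agent-free worlds
    return [w for s, w in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```

The only point the sketch is meant to make is that the evaluation takes the world as a whole as its input; it does not care which agent, if any, performed which action.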

What you value is not necessarily what any other person values, and why you value a thing may differ widely from why another person values that same thing, even if both of you technically value it the same, without even getting into the further granularity of how much something is valued.

Because it is subjective, it is therefore also impossible to actually build any sort of system around it that isn’t fundamentally just an arbitrary diktat of a given person or persons imposing their subjective preferences on others.

Is this an argument that people will lie about whether they like things?  Because we’re already operating on imperfect information all the time, and most minds won’t be designed to constantly lie about what they like if they won’t be punished for being honest about it.

People can lie about property, too, and do frequently.

Source: argumate philo