mitigatedchaos

No, you did not explain why it ‘does not make sense’; you posited some objections that I answered. You have not ‘violated its understanding of unified agents,’ given that I answered your supposed objections each time. You have NOT pointed out how it does not logically follow, you’ve merely said it doesn’t. The fact that minds ‘precede’ ownership isn’t even relevant, and so on.

Well, let’s see your derivation for self-ownership, then.  Explain how it logically follows. 

A moral system is about ‘good’ things happening, and ‘bad’ things not happening, but this is an issue of your definition of what is good, and what is bad, not of the moral system. The issue is that you are focusing on an outcome and thus essentially disregarding the very existence of rules,

The only reason to have rules is because of outcomes.  

but that is because things happen that you do not like, and you define that as bad, ignoring even the concept that the act itself may also be bad. But that is tied to your view of value as some actual meaningful thing, because it allows you, if accepted, to attempt to weigh out what is ‘valued’ and thus measure things against each other.

The universe which is best is the universe which is best for all observers with which it interacts (more or less; various intra-Utilitarian arguments apply), and this is good even if no one particular person makes the change.
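For concreteness, the simplest total-utilitarian way to write that ranking down is something like the following (a sketch only; the symbols $u_i$ and $U$ are illustrative, and the ‘various intra-Utilitarian arguments’ are largely about whether this is the right aggregation):

$$U(A) = \sum_{i \in \text{observers}} u_i(A), \qquad A \text{ is better than } B \iff U(A) > U(B)$$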

The reason that this is important is that things like starvation, death, and so on aren’t bad because someone caused them on purpose; they’re bad regardless of whether someone caused them on purpose.

Suppose there are AnCap Communists.  They all sign a contract and build their commune.  Commune fails, famine follows, people die, but all the AnCap rules were obeyed.  It makes sense to say that they should not have done that.  AnCap lacks the moral language to criticize it.

I mean, suppose some guy called Bob is the one who talked them into it, and he knew this was going to happen, that it was going to fail, and that some of them were going to die.  He talked them into it anyway.  But, since the AnCap rules were met, officially he didn’t do anything wrong, and we have no standing to criticize him.  Here, again, we see the lack of a moral framework to say “tricking people into starting failed communes is wrong”.

So as long as Bob the Villain follows the rules, he can cause as much damage as he wants, harm an arbitrary number of people, and not have this considered morally bad.

But that requires value to be measurable that way, and it simply isn’t. You aren’t avoiding ‘bad’ things in the scenarios you proposed; you are just choosing what you consider a ‘less bad’ thing.

We know that differences in the level of value exist because we don’t experience one constant set point at all times.  And since you’re the one constantly going on about how it doesn’t matter how practical AnCap is, in this case it isn’t actually required that any particular observer within the system be able to observe all value, merely that that value exists.

Additionally, since any relation is either deterministic, probabilistic, or random/arbitrary, consider the whole business of having thoughts: to have those thoughts, there has to be some sort of process by which they were created, if causality is real and we aren’t Boltzmann Brains.  This suggests minds are a kind of system even if exotic supernatural entities such as souls exist, or this isn’t the top-level reality.  Value being a thing of the mind, this suggests that value is, in principle, possible for an omniscient observer to determine.

That we cannot know it in others with certainty is not a disproof of Consequentialist ethical theory as a class, and if it were, it would be possible to mount an attack on AnCap by assuming everyone else is a P-zombie or some other kind of non-person entity and therefore entirely devoid of rights of any kind.

That isn’t about control, though; it’s a surrendering of control.

Look, if you’re going to give me bull about how “oh you really only think this because you fear loss of control”, well I don’t appreciate that, and I’m going to shove it right back at you.

Consequentialist systems have fixed rules, just at a higher level of abstraction than your AnCap system’s.  It’s more like a function where the desires/states/etc. of the various agents are inputs.  The function is fixed; the variables are not.

In this sense it’s more dynamic, like a market (or how a market is supposed to be), where there are rules but what is valued at any particular moment is a product of the values and interactions of the agents within the system.
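A minimal sketch of that ‘fixed function, variable inputs’ picture, in Python (the summed-valuation rule, the Agent class, and the numbers are all stand-ins for illustration, not a claim about which aggregation rule is the correct one):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    # How much this agent values each outcome, on its own scale.
    valuations: dict[str, float]

def best_outcome(agents: list[Agent], outcomes: list[str]) -> str:
    """Fixed rule: pick the outcome with the highest summed valuation.

    The rule (this code) never changes; only the agents' valuations,
    the inputs, do, so the answer can change from case to case.
    """
    return max(outcomes, key=lambda o: sum(a.valuations.get(o, 0.0) for a in agents))

# Same function, different inputs, different answer.
print(best_outcome([Agent("A", {"x": 3, "y": 1}), Agent("B", {"x": 1, "y": 2})], ["x", "y"]))  # -> "x"
print(best_outcome([Agent("A", {"x": 0, "y": 1}), Agent("B", {"x": 1, "y": 2})], ["x", "y"]))  # -> "y"
```

Run it twice with different valuations and the answer flips from "x" to "y", while best_outcome itself never changes; that is the sense in which the rules are fixed but what is valued at any moment is dynamic.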

You’re telling me I need to let go and trust the market, even though it might not value my life or autonomy, or might sacrifice me for some other goal, supposing the rules are followed.

Maybe you need to let go and trust the utility, even though it might sacrifice you, or not value your life or autonomy, supposing the rules are followed.  Maybe you’re afraid of giving up that control.

Now, I don’t actually have a specific, fixed Consequentialist system on hand to give you (though I was seeking one back when I was operating better, years ago), but you didn’t come to your system instantaneously, and there’s no reason to expect that the development or discovery of the One True Moral System will be easy.  After all, physics isn’t easy.

It’s not about wanting a system that is fixed; it’s that the only way for there to be a system is for it to be fixed.  If it can be redefined, then it’s not really morality,

So, see above: fixed function, variable inputs.  The answer changes, but the code doesn’t change.  And it makes sense that the answer should change, since the best world, the best outcome, depends on who it’s for, even if it’s measurable.

It’s not about wanting a system that way; it’s about that system being the only logical answer to the question of morality.

Again, I’m throwing your accusation of “you just believe that because of emotions” back at you.  Why did you seek it out, why does this answer appear satisfying to you?  It’s about wanting it that way.  And of course, I dispute that it is even a logical answer, but we’ll see about that later.

It isn’t about being ‘valued’ either, and attempting to project this control concept doesn’t make sense. It isn’t about me

Yeah, it is.  It’s all about you, your self-ownership, your property, your certainty of consent violations being wrong.  Not trusting the dynamic system the way you trust markets.  Not giving up that control.

If you didn’t like it, you shouldn’t have brought it up.

That’s your system, simultaneously trying to be derived from the individual in terms of ‘experiencing value’ and yet denying the individual actual moral agency in any objective way.

What’s the point of vengeance?  Retribution?

You’re arguing for applying moral agency to individuals so that you can justify their suffering to yourself.  So you can excuse it.  Erase it from the register of things that matter.  Much like the religious love to assign infinite moral liability to finite human beings in order to justify the fires of Hell they’ve been told of.

But the suffering isn’t actually good in itself.  It may be a practical necessity, or a natural consequence, or a learning experience, but it isn’t actually good inherently.

And if we’re talking about doomed frameworks, yours is the one that means that consent really doesn’t matter (given it’s subjective), that even life really doesn’t matter (because it may be better to kill some people if it has a net benefit for others), and any number of other things.

I ain’t done workin’ on it yet, buddy, but if you ain’t gonna trade one life for ten billion, I don’t think you’re treating life as if it really matters.  You’re encountering a hang in the system where it says one life and ten billion lives are both worth infinity.
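Spelled out as arithmetic (a sketch, assuming each life is assigned literally infinite value on the extended real line):

$$1 \cdot \infty = \infty = 10^{10} \cdot \infty,$$

so the question of which side is worth more has no answer, and any trade-off reasoning stalls right there.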

I have ideas about consent and agent modification, among other things, but I haven’t been properly philosophical for years.