remedialaction
mitigatedchaos

We’ve already had that talk? The short form is that we recognize ourselves as thinking entities, i.e. “I think, therefore I am,” as you’ve repeatedly stated. What follows is a recognition that our actions have effects upon the world, and those can be directly, due to being directed by our thinking minds, attributed consequentially to us. We ‘own’ the results of our actions; we are responsible for them. From this flow such necessary moral precepts as the illegitimacy of initiating force against another thinking actor, and the necessary fact that because we are responsible for the results of our actions, we also own them, and that includes actions that mix with other material goods.

What follows is a recognition that our actions have effects upon the world, and those can be directly, due to being directed by our thinking minds, attributed consequentially to us. We ‘own’ the results of our actions; we are responsible for them.

This requires a kind of internal unity of agents/minds that I’ve already established does not exist.  You want absolute moral liability, but people do not have absolute control over their minds and never did, which is why brain injuries, drugs, and mental illness can alter behavior.

For your position to make sense, the effectiveness of drugs such as Ritalin would have to be impossible. It shouldn’t be feasible to change the alignment between someone’s will and its execution through biochemical means if their will is absolute and unified.

And if will isn’t absolute, if it’s subject to all the limitations and complications of life in a physical body in a physical world, then the binding liability that results (if we even accept it) is far, far lower.

Because of this lack of perfect unity, if we took your proposition seriously, it should be possible to charge someone’s executive functioning capability with a crime (or at least assign it moral liability) independently of the other subcomponents of their mind.

Some sort of unified theory of limited moral binding, based on limitations of execution, limitations of information, the default will, and the targeting of subcomponents of mind, does not, I think, move toward AnCap, but toward some new class of moral theory that has yet to be born. That is the first new or valuable thing I think I’ve actually gotten out of these discussions with you.

…though it is not entirely without precedent; it just hasn’t been formalized into a total system. See the typical handling of such limitations in many courtrooms, and in many laws.

Source: mitigatedchaos