

Re: [projectvrm] Ethics, Autonomous Cars and VRM


  • From: Devon M T Loffreto < >
  • To: John Havens < >
  • Cc: " " < >
  • Subject: Re: [projectvrm] Ethics, Autonomous Cars and VRM
  • Date: Tue, 10 Mar 2015 11:25:49 -0400

This points at the root Sovereignty idea... where does accountability for root structural decisions lie when we can observe paths of decision making? And should our social-engineering constructs, which intentionally manipulate liability protection, be the basis of values in an automated-administration context?

An employee sits at a desk monitoring drone activity over a foreign country... a yes/no decision structure exists for dropping payload with multiple downstream consequences... "war" targets versus "civilian" targets... National versus personal accountability structures... where does human sovereignty exist in the administrative process?

Self-driving cars... Google codes human sovereignty out of existence... or does it? Are the engineers, managers, shareholders, and board accountable for the decisions of a vehicle in its automated administration of data-based decisions? Do humans re-engineer liability constructs to further protect human sovereignty in an automated admin context?

If there is no human sovereignty, what accountability to humanity exists in an AI system? Define "humanity".

When we use an "Administration First" model of human participation in any/every aspect of the "system"... and we connect that system through every facet of our management of resources, both human and otherwise, when and how does Human independence ever exist/matter/have power of authority? Where is your own personal on/off switch?

The enslavement of people to the administration system, as data under management, counteracts Human Sovereignty as defined in any blood context familiar to people. We make laws to render the enslavement of people by people illegal, but seem to take no notice that our enslavement by an administration system that requires data capture for legal participation is destructive of Human freedom also.

Granted... we are in new terrain.

But... let's not pretend the writing is not on the wall, and W2's are a major problem in propagating effective communication.

Devon

On Tue, Mar 10, 2015 at 10:45 AM, John Havens < > wrote:
I don't know if any of you had the chance to read Jason Millar's excellent piece in WIRED last September called, You Should Have a Say in Your Robot Car's Code of Ethics.  It's a great introduction to some of the ethical and legal ramifications of self-driving cars, but also poses a fantastic idea I think could be applied to VRM.  Or I should say I know it's already being applied within the VRM community, but autonomous cars/AI may provide a more crystallized way for the average individual to understand why making decisions around their identity and personal data is so important.

In the article, Millar poses an ethical dilemma known as the "Tunnel Problem" - you're in a self-driving car heading toward a narrow tunnel, and a child runs in front of your car.  You don't have time to swerve, so you either kill the child or sacrifice your own life.  This decision has already been programmed by Google into their vehicles.  Is it "fair" or "right" that Google's programmers should make this type of ethical decision, which may run contrary to the driver/owner's wishes?  Here's an excellent idea Millar suggests in his piece in regards to this issue:

(snip)

For starters, we could choose to consider a manufacturer’s failure to obtain informed consent from a user, in situations involving deep moral commitments, a kind of product defect. Just as a doctor would be liable for failing to seek a patient’s informed consent before proceeding with a medical treatment, so too could we consider manufacturers liable for failing to reasonably respect user’s explicit moral preferences in the design of autonomous cars and other technologies. This approach would add considerably to the complexity of design. Then again, nobody said engineering robots was supposed to be simple.

(snip)

I wonder if VRM and personal identity issues could be presented in a similar context as a way of educating individuals about why it's so critical that they protect their data?  Note, of course, this also implies individuals codify their values or ethics within an IoT or other data-friendly framework so their decisions can be made known in the context of these consent situations.  That's a big challenge, but one this community has already begun tackling with aplomb.
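To make the "codify your values" idea concrete, here is a minimal sketch of what a machine-readable ethics/consent profile might look like. Everything here is invented for illustration — the field names, situation labels, and lookup function are assumptions, not any existing VRM or IoT standard:

```python
# Hypothetical sketch: a person's explicit moral preferences encoded as a
# machine-readable profile that an autonomous system could consult before
# acting. All field names and values are invented for illustration.

ethics_profile = {
    "owner": "example-user",
    "version": 1,
    "preferences": [
        {
            "situation": "unavoidable_collision",   # e.g. Millar's Tunnel Problem
            "choice": "prioritize_pedestrian",      # the user's stated preference
            "requires_informed_consent": True,
        },
        {
            "situation": "personal_data_sharing",
            "choice": "deny_by_default",
            "requires_informed_consent": True,
        },
    ],
}

def lookup_preference(profile, situation):
    """Return the user's stated choice for a situation, or None if unstated.

    A None result would signal that the system must fall back to an explicit
    consent request rather than applying a manufacturer's silent default --
    the "informed consent as product requirement" idea from Millar's piece."""
    for pref in profile["preferences"]:
        if pref["situation"] == situation:
            return pref["choice"]
    return None

print(lookup_preference(ethics_profile, "unavoidable_collision"))
print(lookup_preference(ethics_profile, "emergency_braking"))
```

The point of the sketch is only the shape of the contract: the user states preferences up front, and the absence of a stated preference obligates the system to ask rather than decide.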

Thoughts?  A lot of my work is trying to discover ethical frameworks people could utilize to essentially write their own "code" that could be recognizable in real/virtual landscapes.

Thanks.








Archive powered by MHonArc 2.6.19.