Thanks. Thoughts? A lot of my work is trying to discover ethical frameworks people could use to essentially write their own "code," one that could be recognized in real and virtual landscapes.

I don't know if any of you had the chance to read Jason Millar's excellent piece in WIRED last September, "You Should Have a Say in Your Robot Car's Code of Ethics." It's a great introduction to some of the ethical and legal ramifications of self-driving cars, but it also poses a fantastic idea I think could be applied to VRM. Or I should say, I know it's already being applied within the VRM community, but autonomous cars and AI may provide a more crystallized way for the average individual to understand why making decisions around their identity and personal data is so important.

In the article, Millar poses an ethical dilemma known as the "Tunnel Problem": you're in a self-driving car heading toward a narrow tunnel when a child runs in front of your car. You don't have time to swerve, so the car either kills the child or sacrifices your life. This decision has already been programmed by Google into their vehicles. Is it "fair" or "right" that Google's programmers should make this type of ethical decision, one that may run contrary to the driver/owner's wishes? Here's an excellent idea Millar suggests in his piece in regard to this issue:

(snip)
For starters, we could choose to consider a manufacturer’s failure to obtain informed consent from a user, in situations involving deep moral commitments, a kind of product defect. Just as a doctor would be liable for failing to seek a patient’s informed consent before proceeding with a medical treatment, so too could we consider manufacturers liable for failing to reasonably respect user’s explicit moral preferences in the design of autonomous cars and other technologies. This approach would add considerably to the complexity of design. Then again, nobody said engineering robots was supposed to be simple.

(snip)

I wonder if VRM and personal identity issues could be presented in a similar context as a way of educating individuals about why it's so critical to protect their data. Note, of course, this also implies individuals codify their values or ethics within an IoT or other data-friendly framework so their decisions can be made known in these consent situations. That's a big challenge, but one this community has already begun tackling with aplomb. A rough sketch of what that codification might look like follows below.
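To make that concrete, here is a minimal, purely hypothetical Python sketch of codifying a user's moral preferences in a machine-readable form. Everything in it (the EthicsProfile class, the "tunnel_problem" key, the choice labels) is invented for illustration; the only point it demonstrates is Millar's framing, where a device must query the user's recorded preferences and treat a missing consent entry as a defect rather than silently applying a manufacturer default.

    # Hypothetical sketch: a user-authored, machine-readable ethics profile.
    from dataclasses import dataclass, field

    @dataclass
    class EthicsProfile:
        """Record of a user's explicit moral preferences and consents."""
        owner: str
        preferences: dict = field(default_factory=dict)  # dilemma -> choice
        consented: set = field(default_factory=set)      # dilemmas reviewed

        def record_consent(self, dilemma: str, choice: str) -> None:
            """The user reviews a dilemma and states a preference:
            this is the informed-consent step."""
            self.preferences[dilemma] = choice
            self.consented.add(dilemma)

        def decision_for(self, dilemma: str) -> str:
            """Return the user's choice; fail loudly if consent was
            never obtained, mirroring the 'product defect' framing."""
            if dilemma not in self.consented:
                raise PermissionError(
                    f"No informed consent on record for {dilemma!r}")
            return self.preferences[dilemma]

    # Usage: the manufacturer's default never silently overrides the owner.
    profile = EthicsProfile(owner="alice")
    profile.record_consent("tunnel_problem", "protect_pedestrian")
    print(profile.decision_for("tunnel_problem"))  # -> protect_pedestrian

The design choice worth noticing is that the absence of consent raises an error instead of falling back to a default: under Millar's analogy, proceeding anyway is the defect.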
--John C. Havens
" target="_blank">
www.johnchavens.com
917-597-3323
@johnchavens
Author, Hacking Happiness
Founder/Executive Director, The H(app)athon Project
Newsletter, Happiness and Emerging Technology Research