I don't know if any of you had the chance to read Jason Millar's excellent piece in WIRED last September, "You Should Have a Say in Your Robot Car's Code of Ethics." It's a great introduction to some of the ethical and legal ramifications of self-driving cars, and it also poses a fantastic idea I think could be applied to VRM. Or I should say I know it's already being applied within the VRM community, but autonomous cars and AI may provide a more crystallized way for the average individual to understand why making decisions about their identity and personal data is so important.
In the article, Millar poses an ethical dilemma known as the "Tunnel Problem": you're in a self-driving car approaching a narrow tunnel when a child runs in front of your car. You don't have time to brake, so the car must either continue and kill the child, or swerve into the tunnel wall and sacrifice your life. This decision has, in effect, already been programmed by Google into its vehicles. Is it "fair" or "right" that Google's programmers should make this type of ethical decision, one that may run contrary to the driver's or owner's wishes? Here's an excellent idea Millar suggests in his piece regarding this issue:
I wonder if VRM and personal identity issues could be presented in a similar context as a way of educating individuals about why it's so critical they protect their data. Note, of course, that this also implies individuals codify their values or ethics within an IoT or other data-friendly framework, so their decisions can be made known in these consent situations. That's a big challenge, but one this community has already begun taking on with aplomb.
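To make the idea of "codifying your values" concrete, here is a very rough sketch in Python of what a machine-readable personal policy might look like. To be clear, this is purely hypothetical: the class, field names, and `permits` method are illustrative assumptions of mine, not part of any existing VRM or IoT standard. The point is simply that a service would have to consult the owner's stated preferences before using personal data, rather than deciding on the owner's behalf.

```python
# Hypothetical sketch of a machine-readable "personal policy" profile.
# None of these names come from an existing VRM or IoT specification;
# they are illustrative only.

from dataclasses import dataclass, field

@dataclass
class PersonalPolicy:
    """A person's codified preferences for data sharing and consent."""
    share_location: bool = False
    share_purchase_history: bool = False
    allowed_purposes: set = field(default_factory=set)  # e.g. {"navigation"}

    def permits(self, data_type: str, purpose: str) -> bool:
        """True only if the owner opted in to sharing this data type
        AND declared the requesting purpose acceptable."""
        opted_in = getattr(self, f"share_{data_type}", False)
        return opted_in and purpose in self.allowed_purposes

# A service (say, a car's navigation system) must ask the policy first.
policy = PersonalPolicy(share_location=True,
                        allowed_purposes={"navigation"})
print(policy.permits("location", "navigation"))        # True
print(policy.permits("location", "advertising"))       # False
print(policy.permits("purchase_history", "navigation"))  # False
```

However it's ultimately expressed, the design choice matters more than the syntax: the individual's values live in a profile the individual controls, and services query it for consent instead of baking their own answer into the code.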