

Re: [projectvrm] VRM tool characteristics


  • From: Doc Searls < >
  • To: Serge Ravet < >
  • Cc: Project VRM < >
  • Subject: Re: [projectvrm] VRM tool characteristics
  • Date: Fri, 17 Jun 2011 12:49:29 -0400

Thanks, Serge. More below.

On Jun 17, 2011, at 11:01 AM, Serge Ravet wrote:

> Dear VRM colleagues,
>
> I've been following with interest the VRM discussions and I can't help but
> feel that I'm witness to a conversation between creatures living in a 2
> dimensional space trying to explain what "thinking out of the box" is. And
> the project of regulation designed by the European Commission doesn't even
> attempt to "think out of the line." It is based on good intentions (e.g.
> the "right to be forgotten") and a complete inability to take even the
> very first step to make it a reality (there is already a regulation
> relative to the right to "rectify" personal data, but the whole system
> would simply crumble if even a limited number of individuals seriously
> tried to enforce it). We don't need yet another piece of legislation but

This is VRM:

> we ought to equip citizens with the means to put an end to digital slavery.

Yes.

> It's not a piece of legislation regulating relationships between digital
> Masters and digital Slaves that will make the difference. We just want to
> be free!

Making ourselves free, and equipping that freedom, is what VRM is about.

> I will focus my argument on one point: personal data protection.
>
> Joe wrote earlier:
>> In a VRM system...
>> 1. your data is private
>> 2. your data is yours
>> 3. only fourth parties get only temporary access to shares of your data
>
> In the real world
> 1. your data is shared (even intimate thoughts are shared at some point,
> sometimes involuntarily -- a Freudian slip)
> 2. our data is ours (within multilayered circles of trust)
> 3. unless one makes lobotomy a normal practice, you can't avoid leakage of
> personal information beyond circles of trust ("did you know that John did
> so and so?"), and you can't oblige someone to forget something (Fahrenheit
> 451!).
>
> The introduction of technology in the discussion on privacy and identity,
> far from being conducive to enlightenment, is creating a reified vision of
> identity as a set of attributes isolated from the rest of the world by high
> and thick privacy walls, controlled by user-defined policies. Moreover, all
> identity technologies have solely focused on the 'identification *of*',
> being oblivious to the 'identification *to*' issue. Identity technologies
> have not been developed in relation to the construction of one's identity
> but to implement authentication and authorisation processes, and these
> processes are now the foundation on which we construct our reflection on
> social interaction on the web (and VRM...). It is the story of the tail
> wagging the dog.

Well put.

> Data, like identity, is social. Any bit of data about myself is necessarily
> shared with other people and organisations. Starting from that premise,
> couldn't we imagine an architecture where all data is public while
> reinforcing privacy? Wouldn't it be wonderful if VRM services and
> applications were able to access a whole bunch of data without having to
> ask for permission, without forcing users to define fine-grained access
> policies for every bit of data by and about themselves (with XACML?
> Ontologies?)?
>
> It is possible and it isn't rocket science! At first iteration, we don't
> even need personal data stores nor "policy enforcement points" and all the
> paraphernalia of identity and access management.
>
> The current view of the interaction process in a trusted environment is:
> 1) store data in a PDS (what prevents a hacker or a crooked admin from
> accessing my data in a PDS hosted by a PDS provider?)
> 2) ask users to define access policies (booooring!)
> 3) establish complicated protocols to exchange data (requires ID providers
> that can eventually be hacked just as well)

"Identity Provider" as a term has always turned me off.

> 4) track, audit, punish, reward, etc.
>
> The approach I suggest is to fully separate data, metadata and services to
> eliminate the need for any direct connection between users and service
> providers. Full anonymity and strong identity can coexist.
>
> The process goes like this:
> 1) I put all my metadata in a public place; when I do so, I receive a
> handle to read statistics and messages placed in a mailbox associated to
> that handle. The only person that can make a connection between those
> metadata and me is me, unless I share the handle with someone else. If by
> accident I give it to someone I shouldn't have, I simply reset the handle
> to receive a new one.
>
> 2) if a VRM application wants to sell me a product or service based on my
> metadata, they just have to leave a message in a space associated to my
> handle, and I'll collect it when I want --a daemon process would do that
> for me. This message is also public and can be harvested by other services
> --competitors, watchdogs, etc.
>
> So, if I made public data on my sexual orientations and practices, I could
> be discovered by a search engine without fear of being identified. I would
> know that I've been contacted by using the handle generated at the time of
> the creation of my profile (this would give a whole new meaning to 'love
> handles'!).
>
> With this simple architecture (akin to a wiki of metadata!), there is no
> need for identity providers, policy enforcement points, or trust protocols.
> We can build a fully fledged VRM system. It is not perfect, but it could
> work. Privacy is reinforced by making metadata public! --through
> disconnecting data, metadata and identification... It would create the
> conditions for massive, anonymous, meaningful interaction.
>
> Of course, I could split my metadata across several spaces and have
> multiple handles. Some of my handles could even be created by others, like
> health authorities, employers or vendors. I would be the only one able to
> make the connection between all those handles. We could even imagine an
> even finer granularity of handles --and there is no limit to the number of
> handles: we could create more than there are particles in the universe.
>
> In such a system, there is no need for any kind of identity provider. One's
> identity can be computed in real time based on the aggregation of metadata
> collected from one's profiles. It is possible to have strong, multiple
> social identities (identification to) without being identifiable
> (identification of).
>
> One missing element in this simple architecture is the ability to manage
> trust and reputation. As everything is public and anonymous, how can we
> tell the difference between someone pretending to have a diploma and
> someone who actually has one -- while remaining 100% anonymous? How can I
> tell a legitimate business or client from an illegitimate one?
>
> In order to achieve that goal, we could simply use an existing piece of
> software: it's called a proxy. If every entity interacting on the Internet
> did so through a proxy or agent (personal or organisational), we could then
> implement global policies enforcing basic social behaviour, like not
> spamming: in order to send a message to 1 million proxies, we could have a
> rule such as the requirement to be in existence for more than 3 years. So
> if a new bio-engineering company discovers a new molecule and, through
> metadata harvesting, finds out that there are potentially one million
> people that could benefit from it, it would not be allowed to send a
> message to more than 100 (unknown) people and in order to reach the million
> found (but not identified!), it would have to go through a patient
> association, a national health service or some trusted service.
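The broadcast rule above amounts to a tiny global policy function. A sketch, using the illustrative numbers from the paragraph (3 years, 100 recipients, 1 million) rather than anything specified:

```python
def max_recipients(proxy_age_years):
    """Hypothetical rule of the 'society of agents': young proxies may only
    contact a capped audience. Thresholds are the email's examples."""
    return 1_000_000 if proxy_age_years >= 3 else 100

def can_broadcast(proxy_age_years, n_recipients):
    """True if a proxy of this age may message this many other proxies
    directly; otherwise it must route through a trusted intermediary."""
    return n_recipients <= max_recipients(proxy_age_years)
```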
>
> One person could have multiple proxies, share metadata across them, and
> transfer them from one to another, giving individuals the ability to
> reinvent themselves. In order to prove to an employer that one has a
> diploma through an anonymous 'job finding proxy' (ethnic discrimination is
> not an anecdotal employment practice in France), the proxy of the employer
> simply checks with the proxy of the University that the statement is
> genuine -- e.g. the university proxy simply checks whether the diploma
> database contains the ID of the proxy of the applicant, so there is no
> need to have names.
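That diploma check can be sketched in a few lines, assuming the university proxy keeps a set of holder proxy IDs per diploma (class and method names are hypothetical):

```python
class UniversityProxy:
    """Answers diploma membership queries by proxy ID only: a yes/no
    crosses the wire, never a name."""

    def __init__(self):
        self._diplomas = {}  # diploma name -> set of holder proxy IDs

    def award(self, diploma, holder_proxy_id):
        """Record that the (anonymous) proxy ID holds this diploma."""
        self._diplomas.setdefault(diploma, set()).add(holder_proxy_id)

    def verify(self, diploma, holder_proxy_id):
        """Called by an employer's proxy; reveals nothing but membership."""
        return holder_proxy_id in self._diplomas.get(diploma, set())
```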
>
> While those agents/proxies would not require any kind of identity provider
> to exist (they join "the society of agents," just as a new name server
> joins the network of Domain Name Servers), they would have a strong
> identity: date of creation, social networks, history of activities, etc.
> It is this 'strong anonymous identity' that would create the base for
> massive, anonymous, meaningful and trusted interaction.
>
> Creating this "society of agents" (that are little more than glorified
> proxies) on top of a massive "wiki of personal metadata" would add the
> possibility to indicate the level of trustworthiness of metadata without
> having to reveal the source, nor the identity of the target: when a piece
> of data is written in the public space, it is automatically associated with
> a trust indicator based on who is writing it, so it is not the same when I
> write that I'm diabetic as when a health authority does.
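A sketch of that trust indicator. The writer classes and trust levels below are illustrative assumptions; the email only requires that the indicator depend on who writes, not on who is written about:

```python
# Hypothetical mapping from writer class to trust level.
TRUST_BY_WRITER = {
    "health_authority": "verified",
    "employer": "attested",
    "self": "self-asserted",
}

def write_statement(space, subject_handle, statement, writer_class):
    """Append a statement to the public space, stamped with a trust
    indicator derived from the writer's class. The subject stays an
    anonymous handle; no name is ever stored."""
    entry = {
        "subject": subject_handle,
        "statement": statement,
        "trust": TRUST_BY_WRITER.get(writer_class, "unknown"),
    }
    space.append(entry)
    return entry
```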
>
> The two layers (without and with agents/proxies) could co-exist and grow in
> parallel. But for this to exist, we need to free our personal data.
>
> Why not start today?
>
> Cheers
>
> Serge

Not enough time to go through all this, but I'm interested in hearing from
the rest of you on it.

Doc

>
>
> On 17 juin 2011, at 09:08, Luk Vervenne wrote:
>
>> Joe,
>>
>> Sure. Content-wise we don’t disagree.
>> I meant for ‘access control’ (parent) to include ‘usage control’ (child).
>> But we might as well state that ‘control’ has two subtypes.
>>
>> On the other hand, the EC data directive talks about “protection &
>> control” of personal data.
>> One could argue that ‘usage control’ refers more to ‘protection’ and
>> ‘access control’ more to ‘control’.
>> Anyway, we need both, and the EC data directive has had both included
>> since 1995.
>>
>> This year the European Commission will propose a review of the 1995 Data
>> Protection Directive (95/46/EC).
>> This will result in a new general legal framework for the protection and
>> control of personal data adapted to the Internet age, covering data
>> processing operations in all sectors and policies of the EU.
>>
>> This legal framework is envisioned to include the following topics:
>>
>> 1. Personal data management by users, requiring policy makers to
>> shift their focus
>> 2. Personal data processing by social networks must go hand
>> in hand with the necessary respect for personal data
>> 3. Strengthen individuals' rights by giving them a high level of
>> protection and control over their own data and about how and by whom their
>> data is collected and processed
>> 4. "Right to be forgotten": the right to have your data fully
>> removed when it is no longer needed for the purposes for which it was
>> collected (i.e. when deleting a profile on a social networking site, the
>> service provider can be relied upon to remove personal data completely).
>> 5. Users’ right to know
>> a. how your Internet use is being monitored for the purposes of
>> behavioural advertising.
>> b. when online retailers use previously viewed web sites as a basis
>> to make product suggestions.
>> c. how to access, rectify or delete your data. Exercise these rights
>> for free and without constraints.
>> d. when your personal data has been unlawfully accessed, altered or
>> destroyed by unauthorised persons. (Obligation to notify personal data
>> breaches beyond the currently covered telecommunications sector will be
>> extended to other areas, such as the financial industry)
>> 6. Data controllers are to implement effective policies to ensure
>> compliance with the EU data protection rules, such as :
>> a. appointing Data Protection Officers
>> b. carrying out Privacy Impact Assessments
>> c. applying a “Privacy by Design” approach
>> 7. Review of the 2006 Data Retention Directive (2006/24/EC),
>> concerning the type and amount of data necessary for security reasons and
>> whether the length of time that authorities can hold data is appropriate.
>> 8. Tighten current procedures for international data transfers,
>> including the so-called "adequacy procedure”, which verifies that a third
>> country ensures an "adequate" level of protection of personal data.
>>
>> Regards,
>>
>> Luk Vervenne
>> CEO
>> Synergetics NV/SA
>> Terlinckstraat 75 | 2600 Antwerp | Belgium
>> T(+32)3/239.58.13 | F(+32)3/239.59.88
>> M(+32)478.64.23.46 | VAT BE 0455.690.261
>> www.synergetics.be |
>>
>>
>> ---------------------------------------------
>> Disclaimer:
>> This email and any files transmitted with it are confidential and intended
>> solely for the use of the individual or entity to whom they are addressed.
>> If you have received this email in error please notify the system manager.
>> Please note that any views or opinions presented in this email are solely
>> those of the author and do not necessarily represent those of the company.
>> The integrity and security of this message cannot be guaranteed on the
>> Internet.
>>
>> Van: Joe Andrieu
>> [mailto: ]
>>
>> Verzonden: vrijdag 17 juni 2011 0:44
>> Aan:
>>
>> CC: 'Gon Zifroni'; 'Project VRM'
>> Onderwerp: Re: [projectvrm] VRM tool characteristics
>>
>> Luk,
>>
>> I've got to say Access Control is insufficient. It's not just about
>> controlling who gets to see your data, it's about what they are allowed to
>> do with it. Note that usage control--within a proper contractual or
>> regulatory framework--also addresses data already out there.
>>
>> -j
>>
>> Joe Andrieu
>>
>> +1 (805) 705-8651
>>
>> On 6/16/2011 2:53 PM, Luk Vervenne wrote:
>> 1 and 2 can be compressed (without losing meaning) into: you have full
>> access control over your data.
>> While doing so, you also avoid the data ownership issue. You don’t own
>> many of ‘your’ data elements, but you do control who gets to see them.
>>
>>
>> Luk Vervenne
>> CEO
>> Synergetics NV/SA
>> Terlinckstraat 75 | 2600 Antwerp | Belgium
>> T(+32)3/239.58.13 | F(+32)3/239.59.88
>> M(+32)478.64.23.46 | VAT BE 0455.690.261
>> www.synergetics.be |
>>
>>
>>
>> Van: Gon Zifroni
>> [mailto: ]
>>
>> Verzonden: donderdag 16 juni 2011 23:19
>> Aan: Project VRM
>> Onderwerp: Re: [projectvrm] VRM tool characteristics
>>
>> Devon hi,
>>
>> Yes and no, it seems to me like a potential leak.
>>
>> Since it is a construction built on trust, if you decide to trust a second
>> (vendor) or third (platform, right?) party with the same privileges as the
>> fourth party then yes, but you clearly entrust it with your data. Even if
>> it is granular typically you'll have repeat interactions (subsequent or at
>> a later time).
>>
>> i.e. By trusting the second or third party for that role of managing your
>> identity (who you are) and data (what I do, who I know, where I am, where
>> I go, what I like, what I buy, what I want, etc) you open up to tracking
>> and profiling based on repeat exchanges (not just transactions I believe).
>>
>> How did you see it though? I was also thinking of the PGP architecture.
>>
>> Gon
>>
>> On 16 Jun 2011, at 11:07, Devon Loffreto wrote:
>>
>>
>>
>> I'll submit an edit:
>> First part #7 = 4th parties can be first, second and third parties, but
>> can only authenticate one role per transaction.
>>
>> Devon Loffreto
>>
>>
>> On Thu, Jun 16, 2011 at 9:18 AM, Gon Zifroni
>> < >
>> wrote:
>> Hi list, I've been following silently for the last year and took part in
>> IIW 11 last year.
>>
>> I'm not sure if I got everything right with the terminology, but from what
>> I can synthesize it seems to me we're talking about a system like so:
>>
>> In a VRM system...
>> 1. your data is private
>> 2. your data is yours
>> 3. only fourth parties get only temporary access to shares of your data
>> 4. third and second parties never get access to your data, the second
>> trusts the third and the third trusts the fourth.
>> 5. fourth parties of your choosing share your data for you
>> 6. only fourth parties can be polled on your behalf
>> 7. fourth parties can not be third parties too
>>
>> Let me flesh this out a little bit further:
>> 1. TOS, your data is your legal private property
>> 2. You are the only one who has complete access to all of your data. Even
>> if it is in the cloud, you are the only one authorized full access at any
>> given time.
>> 3. Only fourth parties are allowed to get and index only portions of your
>> data, and you can set for how long that data is retained.
>> 4. They can index it along with other people's data so they can be queried
>> by third and second parties. The query is not a query for data but a query
>> for matching people. The fourth party only returns to third parties the
>> number of matching people not their identity nor data about them. Second
>> parties can connect with first parties via the current fourth party.
>> 5. In terms of data storage and indexing, it is a federated system like
>> email, whereby you can choose your fourth party and have several for
>> different kinds of data if you choose, just like people have several
>> email accounts.
>> 6. see 4.
>> 7. Fourth parties cannot make use of your data.
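Rules 3, 4 and 7 above can be sketched as a fourth party that indexes deposited shares of data but answers second- and third-party queries with nothing more than a count. Names and structure are my assumptions, not an agreed design:

```python
class FourthParty:
    """Indexes the attribute shares first parties deposit, but the query
    interface exposed to second/third parties returns only how many
    people match -- never identities or the data itself."""

    def __init__(self):
        self._index = {}  # person id -> the attributes that person shared

    def deposit(self, person_id, shared_attrs):
        """A first party shares a (possibly partial) set of attributes."""
        self._index[person_id] = dict(shared_attrs)

    def match_count(self, predicate):
        """The only query second/third parties get: a count of matches."""
        return sum(1 for attrs in self._index.values() if predicate(attrs))
```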
>>
>> I'm not sure if this is exactly the logic, but I thought, given the Google
>> Wallet discussion (I think it'd be a mistake to let it aggregate, index
>> and know about all of your transactions, see 7), that it is a good moment
>> to zoom in and draft an architecture that by its nature keeps data private
>> while maintaining a certain level of flexibility and performance.
>> Disclosure: my background is in industrial design and architecture
>> (housing). I moved to SF to start an LBS with a group of engineers.
>>
>> I'm sure this can be further compressed into 3 or 4 basic rules that
>> qualify any VRM system.
>>
>> Gon
>>
>> On 16 Jun 2011, at 03:29, Katherine Warman Kern wrote:
>>
>>
>>
>> +1
>>
>> Katherine Warman Kern
>> 203.918.2617
>>
>> On Jun 15, 2011, at 4:36 PM, Doc Searls
>> < >
>> wrote:
>>
>> Thanks!
>>
>> I had meant #4 to cover that, in the sense that "managing" one's data
>> would include understanding it; but maybe that's not the case. Gotta think
>> about it....
>>
>> Doc
>>
>> On Jun 15, 2011, at 3:44 PM, Jamie Smith wrote:
>>
>>
>>
>> Thanks Doc, this is a great start.
>>
>> Would you say that number 4 ('help customers manage') would include tools
>> to analyse your own data?
>>
>> Such tools might help you identify your own behavioural or commercial
>> trends (for example by finding patterns in your travel expenses or your
>> weekly shopping), and in doing so would help you better a) express intent
>> (#3) and b) engage (#4).
>>
>> I suspect that such VRM tools would not necessarily have to have this
>> characteristic, but if they did, then I'd want it to be a separate and
>> distinct characteristic from 'help customers manage' - perhaps along the
>> lines of:
>>
>> 6. VRM tools help customers better understand their own data. This is
>> helping the customer discover and expose new value in their own data sets,
>> on their terms and for their own benefit.
>>
>> Keen to hear your views.
>>
>> Jamie
>>
>> On 15 June 2011 19:07, Doc Searls
>> < >
>> wrote:
>> @jamiedsmith tweeted a pointer to Alex Bogusky's New Conscious Consumer
>> Bill of Rights...
>>
>> https://twitter.com/#!/jamiedsmith/status/80903314803396608
>>
>> http://alexbogusky.posterous.com/the-new-consumer-bill-of-rights
>>
>> ... adding "needs more symmetry of power for consumers though".
>>
>> Rather than critique or seek to improve Alex's Bill, I thought I'd post
>> something we've needed for awhile: a list of characteristics shared by VRM
>> tools. I did that here:
>>
>> http://blogs.law.harvard.edu/vrm/?p=872
>>
>> Here they are:
>>
>> 1. VRM tools are personal. As with hammers, wallets and mobile phones,
>> people use them as individuals. They are social only in secondary ways.
>> 2. VRM tools help customers express intent. These include preferences,
>> policies, terms and means of engagement, permissions, requests and
>> anything else that’s possible in a free market (i.e. the open marketplace
>> surrounding any one vendor’s silo or walled garden for “managing” captive
>> customers).
>> 3. VRM tools help customers engage. This can be with each other, or with
>> any organization, including (and especially) its CRM system.
>> 4. VRM tools help customers manage. This includes both their own data and
>> systems and their relationships with other entities, and their systems.
>> 5. VRM tools are substitutable. This means no vendor of VRM tools can
>> lock users in.
>> Suggestions and improvements welcome.
>>
>> Doc
>>
>>
>
> Serge Ravet
>
> Free our Data Now! Support the Internet of Subjects Manifesto!
>
> Join us at ePIC 2011, the 9th International ePortfolio and Identity
> Conference
> 11-13 July 2011, London -- www.epforum.eu
> ----------------------------------------
> tel +33 3 8643 1343
> mob +33 6 0768 6727
> Skype szerge
> www.iosf.org www.eife-l.org
> ----------------------------------------
>
>
>
>



