

RE: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy


  • From: "T.Rob" < >
  • To: "'Adrian Gropper'" < >, "'Devon M T Loffreto'" < >
  • Cc: "'Drummond Reed'" < >, "'Oren Samari'" < >, "'ProjectVRM list'" < >
  • Subject: RE: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy
  • Date: Fri, 12 Apr 2013 23:15:58 -0400
  • Authentication-results: mailspamprotection.com; auth=pass smtp.auth=184.154.226.8

Sorry to revive an old thread; I’m deeply queued at the moment. 

 

Question: what exactly is a “secure email address”?  In an informal setting such as a list server or some blog posts, I’m as guilty as anyone of using the word “secure” as if by itself it meant anything.  On a professional engagement I insist my customers define actual requirements in terms of the security service to be provided (authenticity, integrity, privacy, etc.), the threat to be mitigated (for example, protecting against data exfiltration via theft of a laptop versus via remote compromise of a server in a datacenter requires VERY different controls), the audit and forensic capabilities, and so forth.

 

Second question: what is meant by “associates”?  That the key is bound to an X.509 cert where the Distinguished Name is the email address?  That the owner authenticates in person to register the key with their email address?  Don’t answer these, by the way.  I don’t want to derail this thread down that rabbit hole but am just pointing out the framework within which that discussion would proceed.

 

The answer to the question of whether this 2-step process is adequate depends on the definitions of “associates” and of “secure”.  At a very high level it’s possible to answer “yes”, but there is a plethora of underlying assumptions required to arrive at that answer.  Assuming ALL of them are fulfilled it’s possible, but whether it’s practical requires driving out all the assumptions, identifying threats, controls and mitigations, then looking for a point where the cost/benefit/risk balance tips to the positive.
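For concreteness, the 2-step process in question might be sketched with OpenSSL roughly as follows. The address, filenames, and key parameters are illustrative only, and the sketch says nothing about how securely the private key is stored or how the eventual DNS publication is protected:

```shell
# Step 1: create a key-pair; the private key stays on the personal server.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem

# Step 2: bind the public key to an email address in a self-signed cert
# (here via the emailAddress component of the Distinguished Name).
openssl req -new -x509 -key private.pem -days 365 \
    -subj "/emailAddress=alice@example.com" -out cert.pem

# Inspect the claimed binding:
openssl x509 -in cert.pem -noout -subject
```

Posting cert.pem “via DNS” could then mean something like a CERT record in the owner’s zone, which only helps if that record can itself be trusted, and that is exactly where the assumptions start piling up.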

 

One example… is that DNS or DNSSEC?  Because there are weaponized tools that can hack enough servers in the DNS network that one must treat it as broken and hostile in the design of a security architecture.  One might stipulate that access always originates from within a closed network where DNS servers are known to be patched but that severely limits participation.  One might provide tools to check the patch status of local DNS servers but that approach is both brittle and incomplete.  One might specify DNSSEC but there’s very little of that available.  Finally, one might stipulate that the system must work despite broken DNS and that implies requirements for additional mitigating controls somewhere else – such as in the “associates” phase.
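As one rough way to probe that assumption: a validating resolver sets the “ad” (authenticated data) flag on answers from DNSSEC-signed zones. A minimal check, assuming the dig utility from BIND and using an illustrative domain:

```shell
# Ask the configured resolver for a record with DNSSEC data and look
# for the "ad" (authenticated data) flag in the response header.
# (example.com is illustrative; any DNSSEC-signed zone would do.)
dig +dnssec example.com SOA | grep -q 'flags:.* ad' \
  && echo "resolver validates DNSSEC" \
  || echo "no DNSSEC validation from this resolver"
```

Even a passing check only says the resolver claims to validate; the path between the client and that resolver remains unauthenticated, which is one reason treating DNS as broken and hostile is the conservative default.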

 

See where I’m going with this?  The benefits of scale allow for the CA to invest heavily in technical and human process controls over the entire key lifecycle.  This includes the cost of rigorous process controls and technology that provide for secure management of the key servers and revocation servers.  Normally we think of the CA as a signer but they are also a registry and revocation provider and these are as important to the system as provisioning.  The questions below must assume that *someone* is providing the human process rigor, registry and revocation services.  If it’s a CA we know it isn’t great but if it’s not a CA we should assume it’s worse and again look at whether additional mitigating controls are required.

 

Short answer – the question as posed cannot be answered with any degree of confidence.  If you started with the question of “what constitutes digital identity” and worked backward, you’d almost certainly end up with the elements mentioned in the question: a GUID, a (hopefully cryptographically strong) binding of the GUID to a real-world entity, and a registry component.  However, it doesn’t follow that these will be the only elements of the solution.
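The first two of those elements can be sketched in a few OpenSSL commands, assuming an illustrative file layout; note that the third element, the registry (and its revocation machinery), is precisely the part this sketch omits:

```shell
# Element 1: a GUID -- here simply 128 random bits, hex-encoded.
guid="$(openssl rand -hex 16)"

# A key-pair standing in for the real-world entity.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem

# Element 2: a cryptographic binding of the GUID to that key-pair,
# made by signing the GUID with the private key.
printf '%s' "$guid" | openssl dgst -sha256 -sign private.pem -out guid.sig

# Anyone holding the public key can verify the binding:
openssl pkey -in private.pem -pubout -out public.pem
printf '%s' "$guid" | openssl dgst -sha256 -verify public.pem -signature guid.sig
```

The signature proves only that some key holder asserted the GUID; binding the key to a real-world entity, and keeping that binding current, is where the registry and human-process rigor come back in.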

 

-- T.Rob

 

 

From: [mailto: ] On Behalf Of Adrian Gropper
Sent: Sunday, March 31, 2013 4:20 PM
To: Devon M T Loffreto
Cc: Drummond Reed; Adrian Gropper; Oren Samari; ProjectVRM list
Subject: Re: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy

 

Devon,

 

Would you consider the following 2-step process an example of sovereignty as you see it:

 

1- A person creates a key-pair and saves the private key in a personal server,

 

2- The person associates the public key with a secure email address and posts the cert via DNS?

 

Adrian



On Sunday, March 31, 2013, Devon M T Loffreto wrote:

I am interested in those questions Adrian.

 

I like what Drummond is saying here... specific to the "overall context of ownership and personal boundaries... THEN... relationship context..."

 

But I do not think it is an artificial dichotomy. And I think this is very material to all of our perspectives here in this conversation.

 

If we map personal sovereignty as a Human birth event, I believe that the dichotomy looks like an XY axis from the point of view of the sovereign Individual. Within the gift of a sovereign Human birth, personal security is a function of our root relationships which guarantee control of our personal freedoms as Individuals. In literal terms, the very American notion of personal sovereignty is to be guaranteed by a root "Guardian" relationship until an age of accountability wherein it is the root asset of an Individual's life to control the context of their choices. The whole process is a gift passed from one generation to the next... one that is "endowed by our Creator"... upon each of us equally... in Law. The problem is that we are currently mapping this process wrong, usurping personal sovereignty and replacing it with a form of administered sovereignty that has each of us recast as social liabilities rather than personal assets by default. It's literally un-American from the perspective of our founding intent... and ironically, it doesn't even work in Britain anymore.  

 

Personally sovereign data structures change the flow of accountability. Society is not formed and maintained in the King's/Federal center, it is originated and empowered at its democratic edges.

 

DNA, currency, root identity, market structure, governing Rights, etc... each of these are provided integrity in the same way. If we break that formula, there is no other. 

 

It's the ghost in the machine... 1+1 must = 3... with predictable regularity we have "faith" in.

 

Devon

 

 

 

On Sun, Mar 31, 2013 at 1:28 PM, Drummond Reed < > wrote:

On Sun, Mar 31, 2013 at 8:34 AM, Adrian Gropper < > wrote:

Thanks Oren. 

 

Is VRM ignoring the importance of context as described in Acquisti's work by focusing on relationship instead of ownership?

 

Health data again offers the most extreme example. We are made to feel like our health data belongs to the institution, and typically we're asked to pay if we want a copy. Meanwhile, in the US we are all paying an extra $3,000 per year for healthcare we wouldn't want if we were better informed. (IOM $750 B / 250 M people = $3,000) Even our doctors are feeling the impact of institutional data practices: http://thehealthcareblog.com/blog/2013/03/26/dear-hipaa-its-time/ Even more, doctors are universally held hostage by their IT vendors if they try to switch systems. 

 

On the other hand, Chris and Sean brought the http://idcubed.org work on personal data stores to our attention a few days ago. 

 

Is VRM ceding too much ground to the cloud? A focus on relationship rather than ownership or personal boundaries could be the wrong context.

 

Adrian, that seems like an artificial dichotomy. IMHO the whole idea of personal clouds and personal channels is to establish an overall context of ownership and personal boundaries, and THEN establish that personal data sharing takes place in the context of each relationship (or group of relationships) the individual has.

 

=Drummond 

 

 

 



On Sunday, March 31, 2013, Oren Samari wrote:

- My first contribution, but I really thought the group would enjoy the article below. Forgive me if you've already seen this.


http://www.nytimes.com/2013/03/31/technology/web-privacy-and-how-consumers-let-down-their-guard.html?hpw&_r=0

Interesting article on privacy experiments in the context of individual behavior conducted by Alessandro Acquisti, a behavioral economist at Carnegie Mellon University. I encourage all to read, but some key takeaways for me were:

  • Context matters >> "If we have something — in this case, ownership of our purchase data — we are more likely to value it. If we don’t have it at the outset, we aren’t likely to pay extra to acquire it. Context matters."
  • Irrational Behavior >> "The results revealed the imperfection of human reasoning. Those who were offered the least control over who would see their answers seemed most reluctant to reveal themselves: among them, only 15 percent answered all 10 questions. Those who were asked for consent were nearly twice as likely to answer all questions. And among



--
Adrian Gropper MD




Archive powered by MHonArc 2.6.19.