

Re: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy


Chronological Thread 
  • From: Devon M T Loffreto < >
  • To: ProjectVRM list < >
  • Cc: Adrian Gropper < >, Drummond Reed < >, "T.Rob" < >
  • Subject: Re: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy
  • Date: Sat, 13 Apr 2013 14:42:55 -0400

One other aspect that does not get talked about much is the function of *currency* as a medium of expressing personal faith in the system that gives portability to our sovereign structure. Transactional currency is ripe for disruption... and that may be an emergent phenomenon already in process. It opens up a whole new spectrum of consideration for "security". It also very much involves the question "what is currency?"... especially in a global, decentralized, edge-driven, digital context...

Trust and reputation gaming are generally just that... Drummond places the Trust Framework conversation in a context that I remain open to... and I recently read a paper that Scott David put into circulation, from MIT and U of W, that talked of "data leverage" as a result of this approach, dependent on this trust framing as contractual reciprocity. (I don't have that in a linkable format... anyone?)

The challenge that remains is where does this structural shift start for "me"? And once that is being approached by enough of us as Individuals... what is the nature of opportunity exchange that is enabled that does not exist currently?

VRM is good at conceptualizing... but leverage and opportunity are almost non-existent in this community. That trend will need to reverse if solutions are to move into practical view.

Devon




On Sat, Apr 13, 2013 at 1:03 PM, T.Rob < > wrote:

We may be driving this toward a multi-layer trust system.  At one layer, we actually talk about the implementation details behind “associates” and “secure email address” and open up details of the security architecture to review, pen testing and white-box scanning.  At another layer, we provide the general public a way to verify that the review and testing are performed, drilling down into the details if desired, but in most cases looking for the assurance of a trust seal or reputation ranking.

 

That said, I’m not entirely sure trust seals are workable in the current implementation.  Over at Tresorit, their privacy and cookie policy links to Zendesk’s privacy and cookie policy, which displays the TRUSTe seal.  While Tresorit doesn’t actually claim TRUSTe qualifications, the wording “our cookie page” and the click path could easily mislead site members into thinking Tresorit qualified for TRUSTe status.  But that’s a bit of a reach.  What really bugs me about TRUSTe and a couple of others is the number of sites I’ve run into that completely fail the OWASP password management and authentication guidelines and still qualify for the seal.  Some combination of a central trust seal and a crowdsourced, reputation-based ranking of that seal on a per-site basis might improve the situation.

 

If this were the case, then your conditions might be expanded to three, which include:

 

1-      A person creates a key-pair and saves the private key in a personal server,

2-      The person associates the public key with a secure email address and posts the cert via DNS,

3-      The provisioning operates within an ecosystem which provides ongoing assurance of the security architecture.
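For concreteness, here is a minimal sketch of the DNS side of step 2. It assumes a SMIMEA-style record (the approach later standardized as RFC 8162), where the DNS owner name is derived from a hash of the email address's local-part; the address below is a hypothetical placeholder.

```python
import hashlib

def smimea_owner_name(email: str) -> str:
    """DNS owner name for an S/MIME cert record, SMIMEA-style:
    SHA2-256 of the local-part, truncated to 28 octets and
    hex-encoded, under the _smimecert label of the domain."""
    local, domain = email.split("@", 1)
    digest = hashlib.sha256(local.encode("utf-8")).digest()[:28]
    return f"{digest.hex()}._smimecert.{domain}"

# The certificate itself would be published in the record data
# found at this name; retrieval is just a DNS lookup.
print(smimea_owner_name("alice@example.com"))
```

One side effect of hashing the local-part rather than publishing it directly is that the zone does not become a plain-text directory of mailboxes, though this offers only limited resistance to address harvesting.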

 

I’d happily conclude that discussion with an enthusiastic “yes!” 

 

Then there would be a rigorous process conducted among specialists to provide the assurances alluded to in the high-level description.

 

-- T.Rob

 

 

From: [mailto: ] On Behalf Of Adrian Gropper
Sent: Saturday, April 13, 2013 12:38 PM
To: T.Rob
Cc: Devon M T Loffreto; Drummond Reed; Oren Samari; ProjectVRM list


Subject: Re: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy

 

{ T.Rob's answer draws heavily from the thread Patient ID Redux on http://lists.pde.cc/lists/arc/personal-clouds/2013-03/msg00241.html }

 

I appreciate T.Rob's reviving this thread and his thoughtful reply. His point about how a CA can add value reminds us of the ongoing tension between centralized and p2p architectures.

 

I'm hoping to do better than conclude this thread with: "it depends". VRM is a very diverse endeavor and, even within the subset of VRM as applied to healthcare, there ought to be some principles that can inform our conversation.

 

Although I understand that security professionals prefer to deal with specific cases and clear assumptions, to what extent can we come up with a Digital ID Bill of Rights or otherwise develop a consensus around:

  • how do we help the muggle understand the tradeoff between absolute privacy and reputation?
  • how do we design systems that do not promote excessively coercive or hidden surveillance?
  • when systems do require strong identity (e.g.: a no-fly list or prescription drug monitoring program) how do these relate to the less coercive routine system?
  • where are service layering and substitutability absolutely required to promote market forces as an alternative to regulation?
  • what is the minimum of regulation required (from the dept of commerce or health in the US and the EU) to support our VRM vision?

Adrian

 

On Fri, Apr 12, 2013 at 11:15 PM, T.Rob < > wrote:

Sorry to revive an old thread; I’m deeply queued at the moment. 

 

Question: what exactly is a “secure email address”?  In an informal setting such as a list server or some blog posts, I’m as guilty as anyone of using the word “secure” as if it meant anything.  On a professional engagement I insist my customers define actual requirements in terms of the security service to be provided (authenticity, integrity, privacy, etc.), the threat to be mitigated (for example protection against data exfiltration by theft of a laptop versus by remote compromise of a server in a datacenter require VERY different controls), the audit and forensic capabilities, and so forth.

 

Second question: what is meant by “associates”?  That the key is bound to an X.509 cert where the Distinguished Name is the email address?  That the owner authenticates in person to register the key with their email address?  Don’t answer these, by the way.  I don’t want to derail this thread down that rabbit hole but am just pointing out the framework within which that discussion would proceed.

 

The answer to the question of whether this 2-step process is adequate depends on the definitions of “associates” and of “secure”.  At a very high level it’s possible to answer “yes” but there’s a plethora of underlying assumptions required to arrive at that answer.  Assuming ALL of them are fulfilled it’s possible, but whether it’s practical requires driving out all the assumptions, identifying threats, controls and mitigations, then looking for a point where the cost/benefit/risk balance tips to the positive.

 

One example… is that DNS or DNSSEC?  Because there are weaponized tools that can hack enough servers in the DNS network that one must treat it as broken and hostile in the design of a security architecture.  One might stipulate that access always originates from within a closed network where DNS servers are known to be patched but that severely limits participation.  One might provide tools to check the patch status of local DNS servers but that approach is both brittle and incomplete.  One might specify DNSSEC but there’s very little of that available.  Finally, one might stipulate that the system must work despite broken DNS and that implies requirements for additional mitigating controls somewhere else – such as in the “associates” phase.
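The last option — designing for broken DNS — can be made concrete with a toy key-pinning check. This is only an illustrative sketch: the pinned fingerprint and certificate bytes are hypothetical stand-ins for values that would be captured out-of-band during the “associates” phase.

```python
import hashlib

# Fingerprints recorded out-of-band during the "associates" phase
# (hypothetical values, for illustration only).
PINNED = {
    "alice@example.com": hashlib.sha256(b"alice-der-bytes").hexdigest(),
}

def cert_matches_pin(email: str, cert_der: bytes) -> bool:
    """Accept a certificate only if its SHA-256 digest matches the
    locally pinned value, regardless of what DNS served up."""
    pin = PINNED.get(email)
    return pin is not None and hashlib.sha256(cert_der).hexdigest() == pin

# A cert substituted by a hostile DNS path fails the pin check.
print(cert_matches_pin("alice@example.com", b"tampered-bytes"))  # False
```

The design choice here is exactly the trade-off in the paragraph above: the mitigating control moves the trust burden out of DNS and into the rigor of the association step.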

 

See where I’m going with this?  The benefits of scale allow for the CA to invest heavily in technical and human process controls over the entire key lifecycle.  This includes the cost of rigorous process controls and technology that provide for secure management of the key servers and revocation servers.  Normally we think of the CA as a signer but they are also a registry and revocation provider and these are as important to the system as provisioning.  The questions below must assume that *someone* is providing the human process rigor, registry and revocation services.  If it’s a CA we know it isn’t great but if it’s not a CA we should assume it’s worse and again look at whether additional mitigating controls are required.
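To make the registry-and-revocation point concrete, a toy model of the CA's non-signing duties might look like the following. This is a sketch under stated assumptions, not an implementation; the serial numbers and lifetimes are hypothetical.

```python
import time

# Toy registry/revocation service: a cert is trustworthy only if it
# is registered, unexpired, and not revoked — i.e. the CA's job is
# much more than signing once at provisioning time.
registry: dict[str, dict] = {}

def register(serial: str, lifetime_s: float) -> None:
    registry[serial] = {"expires": time.time() + lifetime_s, "revoked": False}

def revoke(serial: str) -> None:
    if serial in registry:
        registry[serial]["revoked"] = True

def is_valid(serial: str) -> bool:
    entry = registry.get(serial)
    return bool(entry) and not entry["revoked"] and time.time() < entry["expires"]

register("serial-1", 3600)
revoke("serial-1")
print(is_valid("serial-1"))  # revoked, so False
```

Whoever fills this role — CA or not — must keep the registry available and the revocation path trustworthy for the whole key lifecycle, which is where the "human process rigor" cost lands.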

 

Short answer – the question as posed cannot be answered with any degree of confidence.  If you started with the question of “what constitutes digital identity” and worked backward, you’d almost certainly end up with the elements mentioned in the question:  a GUID, a (hopefully cryptographically strong) binding of the GUID to a real-world entity, and a registry component.  However, it doesn’t follow that these will be the only elements of the solution.

 

-- T.Rob

 

 

From: [mailto: ] On Behalf Of Adrian Gropper
Sent: Sunday, March 31, 2013 4:20 PM
To: Devon M T Loffreto
Cc: Drummond Reed; Adrian Gropper; Oren Samari; ProjectVRM list
Subject: Re: [projectvrm] NY Times: Letting Down Our Guard With Web Privacy

 

Devon,

 

Would you consider the following 2-step process an example of sovereignty as you see it:

 

1- A person creates a key-pair and saves the private key in a personal server,

            

2- The person associates the public key with a secure email address and posts the cert via DNS?

 

Adrian



On Sunday, March 31, 2013, Devon M T Loffreto wrote:

I am interested in those questions Adrian.

 

I like what Drummond is saying here... specific to the "overall context of ownership and personal boundaries... THEN... relationship context..."

 

But I do not think it is an artificial dichotomy. And I think this is very material to all of our perspectives here in this conversation.

 

If we map personal sovereignty as a Human birth event, I believe that the dichotomy looks like an XY axis from the point of view of the sovereign Individual. Within the gift of a sovereign Human birth, personal security is a function of our root relationships, which guarantee control of our personal freedoms as Individuals. In literal terms, the very American notion of personal sovereignty is to be guaranteed by a root "Guardian" relationship until an age of accountability, wherein it becomes the root asset of an Individual's life to control the context of their choices. The whole process is a gift passed from one generation to the next... one that is "endowed by our Creator"... upon each of us equally... in Law. The problem is that we are mapping this process wrong currently, usurping personal sovereignty and replacing it with a form of administered sovereignty that has each of us recast as social liabilities rather than personal assets by default. It's literally un-American from the perspective of our founding intent... and ironically, it doesn't even work in Britain anymore.  

 

Personally sovereign data structures change the flow of accountability. Society is not formed and maintained in the King's/Federal center, it is originated and empowered at its democratic edges.

 

DNA, currency, root identity, market structure, governing Rights, etc... each of these are provided integrity in the same way. If we break that formula, there is no other. 

 

It's the ghost in the machine... 1+1 must equal 3... with a predictable regularity we have "faith" in.

 

Devon

 

 

 

On Sun, Mar 31, 2013 at 1:28 PM, Drummond Reed < > wrote:

On Sun, Mar 31, 2013 at 8:34 AM, Adrian Gropper < > wrote:

Thanks Oren. 

 

Is VRM ignoring the importance of context as described in Acquisti's work by focusing on relationship instead of ownership?

 

Health data again offers the most extreme example. People are made to feel like their health data belongs to the institution, and they're typically asked to pay if they request a copy. Meanwhile, in the US we are all paying an extra $3,000 per year for healthcare we wouldn't want if we were better informed. (IOM $750 B / 250 M people = $3,000) Even our doctors are feeling the impact of institutional data practices: http://thehealthcareblog.com/blog/2013/03/26/dear-hipaa-its-time/ What's more, doctors are universally held hostage by their IT vendors if they try to switch systems. 

 

On the other hand, Chris and Sean brought the http://idcubed.org work on personal data stores to our attention a few days ago. 

 

Is VRM ceding too much ground to the cloud? A focus on relationship rather than ownership or personal boundaries could be the wrong context.

 

Adrian, that seems like an artificial dichotomy. IMHO the whole idea of personal clouds and personal channels is to establish an overall context of ownership and personal boundaries, and THEN establish that personal data sharing takes place in the context of each relationship (or group of relationships) the individual has.

 

=Drummond 

 

 

 



On Sunday, March 31, 2013, Oren Samari wrote:

My first contribution, but I really thought the group would enjoy the article below. Forgive me if you've already seen this.


http://www.nytimes.com/2013/03/31/technology/web-privacy-and-how-consumers-let-down-their-guard.html?hpw&_r=0

Interesting article on privacy experiments in the context of individual behavior conducted by Alessandro Acquisti, a behavioral economist at Carnegie Mellon University. I encourage all to read, but some key takeaways for me were:

  • Context matters >> "If we have something — in this case, ownership of our purchase data — we are more likely to value it. If we don’t have it at the outset, we aren’t likely to pay extra to acquire it. Context matters."
  • Irrational Behavior >> "The results revealed the imperfection of human reasoning. Those who were offered the least control over who would see their answers seemed most reluctant to reveal themselves: among them, only 15 percent answered all 10 questions. Those who were asked for consent were nearly twice as likely to answer all questions. And among



--
Adrian Gropper MD



 

--
Adrian Gropper MD




