

RE: [projectvrm] Chris Savage's paper on Privacy


  • From: Christopher Savage < >
  • To: Devon Loffreto < >, Drummond Reed < >
  • Cc: Phil Windley < >, Doc Searls < >, Rob van Eijk < >, Guy Higgins < >, Adrian Gropper < >, "Elizabeth M. Renieris" < >, katherine < >, ProjectVRM list < >
  • Subject: RE: [projectvrm] Chris Savage's paper on Privacy
  • Date: Wed, 20 Feb 2019 14:01:48 +0000
  • Accept-language: en-US

Devon,

 

There’s a legal-philosophical tension, it seems to me, between the notion of “self-sovereignty” and the notion of “law.” 

 

How people behave in a given culture/civilization is normally determined in real time by internalized cultural norms that cause people violating them to feel icky or guilty or ashamed or afraid, so they don’t.

 

A given legal jurisdiction – a city, a state, a country – will write down some of those norms as legal obligations (“Thou shalt not kill” becomes elaborate rules about homicide, distinctions between 1st degree murder, 2nd degree murder, manslaughter, justifiable homicide, the civil tort of wrongful death, etc.). And they will also write down stuff that either isn’t really a direct cultural norm, or that is so detailed nobody could remember it.  Examples would be building codes and environmental regulations. You can trace those to norms (exercise for the reader), but the specifics do require choices to be made and the results written down.

 

Both law and norms, though, arise within a society and a culture – that is, a group of people who, collectively, assert and, when needed, enforce, their idea of “the rules” on folks around them.  The interactions among culture, law, politics, etc., are pretty complicated, but to a first approximation, Hobbes had it right: Government is Leviathan. Follow the rules or be punished, up to and including being killed if your rule violation is bad enough.  A slightly more self-sovereign way of looking at things came from Rousseau’s Social Contract, which hypothesized free, independent, monadic savages rationally concluding that they’re better off sacrificing their autonomy for the benefits of living in society.  That, of course, is ahistorical – we evolved in social groups and, indeed, it is our very social nature that has allowed us to conquer the world.  See E.O. Wilson, The Social Conquest of Earth.  But one can interpret Rousseau’s just-so story  as articulating the cumulative impact of several million years of evolutionary pressure – the early hominids who tried to go it alone (if there were any) died out. They were killed by other hominids (working in tribal bands!) or by natural predators, or couldn’t hunt and gather as effectively as those in groups and starved to death. And even the ones who lived didn’t have social skills (by hypothesis), so they couldn’t get dates and therefore couldn’t reproduce.  The result is that the only baby hominids who ended up in the line of Homo Sapiens were those who “got” (in a deep genetic way) not just the value of being social beings, but at least the rudiments of how to do it. (The details of how to do it vary by culture, which you learn by being born into, or at least living for a while within, that culture.)

 

So there’s a constant tension between our nature as separate individuals and our membership in whatever social groups we are part of. The point of conceiving individuals as having rights (as against society/the state, and/or as against other individuals) is trying to give some structure to the tension/balance between us-as-individual and us-as-member-of-society. But once one undertakes to strike that balance, any notion of full “self-sovereignty” seems to me to go by the wayside.

 

And, I’m not sure sorting this out is helped much by notions like, “give people as much freedom as possible,” because what is considered “possible” depends on the objective. Is it, “… as much freedom as possible, consistent with maximizing everybody’s wealth and happiness”? Is it, “… as much freedom as possible consistent with other people’s freedom”? Is it something else? 

 

But no matter the criterion, once those pesky other people enter the equation, no one individual can really be “sovereign” (other than, perhaps, over his or her own body and own private thoughts, alone).  And because “law” is just a formalization of a subset of cultural rules, “law” is never going to reflect or respect any strong notion of “self-sovereignty.”  Law is about articulating some of the rules that folks in the given state/the society are supposed to follow or, putting it more nastily, law is about when the state/society will bring the hammer down on individuals who break the rules.

 

Again, I’m very new to the notion of self-sovereign identity, so it’s highly likely I’m missing something obvious.   If so please tell me.

 

Thanks,

 

Chris S.

 

From: Devon Loffreto < >
Sent: Tuesday, February 19, 2019 10:15 AM
To: =Drummond Reed < >
Cc: Phil Windley < >; Doc Searls < >; Rob van Eijk < >; Guy Higgins < >; Adrian Gropper < >; Elizabeth M. Renieris < >; katherine < >; ProjectVRM list < >
Subject: Re: [projectvrm] Chris Savage's paper on Privacy

 

Let's go deeper, evaluating the precedent of self-Sovereign human authority in establishing Rights, both human and civil in nature, and their connection to ID + signatory force events.

 

John Hancock -- is there a historical record of the method used to validate this ID + signatory on the Declaration of Independence, the foundational legal document upon which Constitutional Rights exist? As an Englishman under British Sovereign Law, it's likely this ID + signatory was established within that legal jurisdiction. Under what type of authority can its use instantiating a legal document outside of its Sovereign jurisdiction be validated? Common sense would suggest that the authority is self-Sovereign, belonging to the human known under British Sovereign Law as 'John Hancock', validated by a "web of trust" of co-signatories with similarly structured ID origins.
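The "web of trust" of co-signatories idea can be sketched in miniature. This is a hypothetical toy model, not the historical process or a real PKI: per-signer HMAC secrets stand in for signature keys, and the quorum rule and all names (e.g. `web_of_trust_valid`) are invented for illustration.

```python
import hmac
import hashlib

# Toy model: a "signature" is an HMAC over the document text, and an
# identity claim stands when enough known co-signatories independently
# sign the same document -- a minimal "web of trust" quorum.

def sign(secret: bytes, document: bytes) -> str:
    return hmac.new(secret, document, hashlib.sha256).hexdigest()

def verify(secret: bytes, document: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(secret, document), signature)

def web_of_trust_valid(document, signatures, keys, quorum):
    """Accept the document if at least `quorum` known co-signers check out."""
    good = sum(1 for name, sig in signatures.items()
               if name in keys and verify(keys[name], document, sig))
    return good >= quorum

doc = b"Declaration"
keys = {"hancock": b"k1", "adams": b"k2", "franklin": b"k3"}
sigs = {name: sign(k, doc) for name, k in keys.items()}
print(web_of_trust_valid(doc, sigs, keys, quorum=2))  # True
```

The point the sketch makes is structural: no single authority validates the signature; validity emerges from mutually recognized co-signers.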

 

Does this mean that criminality is required to re-establish the precedent of Human Rights in dissolving Sovereign Law allegiances, backed by force/guns of course? Or is simply stating the self-evident truths and reasons compelling the dissolution of these allegiances by ID + signatory force all that is required? What does the law say? What precedents does the law interpret?

 

How does John Hancock give signatory force to a non-Sovereign document using a Sovereign Law ID + signatory... what stands that document up beyond innate force of common sense applied to Human Rights and self-Sovereign human authority?

 

In the opening of the document, shouldn't the text be less poetic and more structurally accurate, reflecting the successful completion of "Declaring Independence" and the forming of a new Sovereign Constitutional Law upon its foundation, so that all people affected, or to be affected for generations to come, will have an accurate relationship to the documents? Such that:

 

"We the signatories in time, hold these truths to be self-evident, that all mankind is created equal, that they are endowed by their Creator with certain unalienable Human Rights, that among these are Life, Liberty, and the self-Sovereign pursuit of Happiness. That to secure these Rights, Governments are instituted among mankind, deriving their just powers from the direct consent of the Governed."

 

Holding history in contempt for its denial of Human Rights to all people who have not had the direct opportunity to give their signatory consent to this declaration and the ensuing Constitutional authority would seem to need remedy, including direct admission that excluding women and non-white men from the definition of "men" in the document has caused irreparable harm that can only be fixed by direct edit of the founding document, and by corresponding changes to the nature of Constitutional authority that self-Sovereign Human Rights for all mankind compel.

 

The problem with modern lawyers is that they do not speak truth to power...they administer precedent upon broken foundations delivered by long history. It is not possible to arrive at a functional model of Human Rights and American Civil Rights without a direct edit of the foundational documents, in order to accurately inherit their intent from one generation to the next.

 

Without direct consent of the governed by signatory force bound to ID under Sovereign Law, recursively applied to each Individual in each generation, the result of these documents is simply more of the same kind of "taxation without representation" problems we are living through today. Privacy/Surveillance/ID theft and sale of vital records data by the Government is a tax without direct representation caused by an erroneous implementation of Sovereign Law and an omission of Human Rights from the observable truth of living on this planet.

 

Any lawyers working on that.. or just persisting within the flawed space as legal administrators?

 

Devon

 

On Mon, Feb 18, 2019 at 4:41 PM Devon Loffreto < > wrote:

The thing about the "Legal Frameworks for Humanity in the Digital Age" concept is that it touches on the root reality at play in calling SSI "Self-Sovereign Identity"... spawning a legal framework with direct human authority has precedent; it's just that "We" thinkers don't like to discuss its actual structure and meaning. You simply cannot get to self-Sovereign outcomes without acknowledging the origin of the authority donated to all socio-economic systems... the source of civil authority is self-Sovereign, or the Constitutional framework of today does not exist... let alone one that respects Individuals as primary representatives of force and integrity in our own Governing system and its downstream derivatives.

 

SSI and self-Sovereign human authority cannot be separated as a structural objective. It's not possible without losing meaning.

 

SSI can only be as successful as it is a representation of self-Sovereign human authority expressed as ID.
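A minimal sketch of that idea, under stated assumptions: the "did:toy" method below is invented for illustration, and a hash of random bytes stands in for a real public key. Actual SSI systems (e.g. W3C DIDs) bind the identifier to asymmetric key material, but the property shown is the same: the identifier is derived from self-generated material, so no registrar has to issue it.

```python
import os
import hashlib

# Hypothetical "did:toy" identifiers: derived from self-generated key
# material, so the identifier/key binding is self-certifying -- anyone
# can recompute it, and no authority is needed to issue or validate it.

def create_identifier(public_key: bytes) -> str:
    return "did:toy:" + hashlib.sha256(public_key).hexdigest()[:16]

def self_certifies(identifier: str, public_key: bytes) -> bool:
    # Verification is pure recomputation: no registry lookup involved.
    return create_identifier(public_key) == identifier

pub = os.urandom(32)             # stand-in for a self-generated public key
did = create_identifier(pub)
print(self_certifies(did, pub))  # True
```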

 

The origin of SSI is the origin of a "Legal Framework for Humanity in the Digital Age"... because unless Human Rights have real structural meaning, they, like privacy, are pretend notions.

 

Legislation and Trust are != 

 

The loss in fidelity caused by real systems propagating pretend Rights that have no structural meaning is doing irreparable harm to civil Society, and to Humanity... still. Facebook is evil... still. ToS Agreements should get lawyers disbarred... they teach people to disregard the law... still. Meanwhile, it's the people who are confused. Life is getting better by the numbers... read the data... but remember to interpret well: in the 1st world, a McDonald's diet will kill you in 90 days, or severely impair your health if it is your exclusive source of nutrients... and in the 3rd world, peanut butter keeps babies alive. Everything in context... You.

 

Devon

 

On Mon, Feb 18, 2019 at 4:20 PM Drummond Reed < > wrote:

Phil, this is fabulous stuff IMHO. It jibes with what Elizabeth points out in her submission for Rebooting the Web of Trust called Legal Frameworks for Humanity in the Digital Age. If we (as in, the community of us who really care about this) develop a legal framework for a privacy commons—and help give it some real foundational strength with SSI technology—then the prospects for Internet privacy suddenly look much brighter than they have in years.

 

On Mon, Feb 18, 2019 at 12:59 PM Phil Windley < > wrote:

I think separating consent from a relationship manager that has a consistent interface is a mistake. 

 

Kim Cameron’s Seventh law says:

 

7. Consistent Experience Across Contexts

 

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies. 

 

We need tools that not only give us the consistent experience, but also scale our intentions (as Doc points out). These tools have to keep a record for us that we can control and manage. 

 

Savage’s paper was the second time this week I’ve seen the word “commons” used in conjunction with privacy. Before I saw Savage’s paper, I read this article by John Evans on TechCrunch entitled “Privacy is a commons”:

 

 

Evans makes the comparison between privacy and voting:

 

Privacy is like voting. An individual’s privacy, like an individual’s vote, is usually largely irrelevant to anyone but themselves … but the accumulation of individual privacy or lack thereof, like the accumulation of individual votes, is enormously consequential.

 

He concludes:

 

What I am saying is that selling privacy cheaply isn’t any better for society than letting it be seized without any compensation. In fact, if privacy commoditization leads to a more rapid degradation of the commons, it’s actually worse. Similarly, again, individual votes are essentially never that important … but would you think it OK for a company to purchase citizens’ voting rights for $20 per person per month? If we need to defend privacy as a commons — and we do — then we can’t start thinking of it as an individual asset to be sold to surveillance capitalists. It, and we, are more important than that.

 

I think this is a powerful argument because it makes a case for societal interest in individual privacy. 

 



On Feb 17, 2019, at 8:27 AM, Doc Searls < > wrote:

 

As long as each of us has to consent to as many different notices as the number of sites we visit, and has no way to scale our intentions across whole markets (as we do with plenty of protocols, so it's not impossible), the problem remains the same: notice & consent, in which we are always the subordinate second parties, is a structural fail that no innovation by the standing industry can repair. That includes CMPs.

 

Here's the one that came up first in my search for CMP on Google: https://www.consentmanager.net/  And here's how that company defines its category:

 

A CMP or Consent Management Provider collects the necessary consent from the user. Therefore each new user will see a "Consent Layer" - a notification on the website asking the user to accept or deny the usage of its data. The CMP can then pass the consent information to advertisers, networks, analytics tools and other vendors who want to use the data.
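The flow that quoted definition describes, collect consent once behind a "consent layer" and then pass it downstream, can be sketched as below. All names are illustrative assumptions, not the consentmanager.net API.

```python
from dataclasses import dataclass, field

# Sketch of the quoted CMP flow: the consent layer records the user's
# per-purpose choices, and a vendor may use the data only for purposes
# the user actually accepted.

@dataclass
class ConsentRecord:
    user_id: str
    accepted: set = field(default_factory=set)  # purposes the user allowed

def show_consent_layer(user_id, purposes, user_accepts):
    """Record which of the requested purposes the user accepts."""
    return ConsentRecord(user_id, {p for p in purposes if user_accepts(p)})

def pass_to_vendor(record, purpose):
    """Downstream check: was this purpose consented to?"""
    return purpose in record.accepted

rec = show_consent_layer("u1", ["analytics", "ads"], lambda p: p == "analytics")
print(pass_to_vendor(rec, "analytics"))  # True
print(pass_to_vendor(rec, "ads"))        # False
```

Note that in this model the consent record is held by the CMP, not by the user, which is exactly the asymmetry the rest of the thread pushes back on.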

 

That "consent layer" is a surface of toxic shit on the commercial Web. And on the noncommercial Web too, since entities there tend to hire lawyers no less frozen in fear of GDPR enforcement than lawyers working for commercial firms.

 

When I look at typical CMP solutions, I see card tricks straight from the magician's handbook. And who retains the records of who consented to what? Yours is a zillion cookies, buried deep in your browser, nearly all made to obscure human comprehension.

 

We need to be able to protect and project our privacy personally, as first parties, with all the entities we meet on the Web, and do it at scale, just as we do in the natural world. Nothing less will do.

 

Privacy is one of the "what's" that's "in it for me" (WIIFM) when we wear clothes. Or when we shut the door to a bathroom stall. Yes, another person might be able to reach inside our clothes, or open a stall door to watch us, but they wouldn't, because we have well-established norms (and some laws) that say doing such things is flat-out wrong. Those norms are based on personal tech meant to protect and project privacy.

 

That it is normal today for sites we visit to reach inside our clothes online (which our browsers are, or should be), and to open our private doors (which their tracking beacons do), is no excuse. Hell, our browsers should be set NOT to reveal shit about us. Not where we've been, not even what computer or browser we're using. Nothing that can leave a fingerprint. (For more on that, see why Apple is dropping the Do Not Track setting in Safari.)

 

The longer I dig into this topic, the more convinced I am that we need to zero-base privacy around the real world model of how it's worked for dozens of millennia.

 

Doc



On Feb 17, 2019, at 3:26 AM, Rob van Eijk < > wrote:

 

The WIIFM aspect has played an important role in the consent model which we see in the GDPR.

 

WIIFM translates to a fair value proposition to which one can consent, or not.

 

The requirements for valid consent are more strict under the GDPR, but also under the cookie laws in Europe which refer directly to consent as stipulated in the GDPR.

 

The element of real choice, to accept the proposition or not, is IMHO at the heart of, e.g., the cookie wall discussion currently.

 

The discussion is not just an academic one.

 

In online advertising, innovation takes place via, e.g., Consent Management Providers (CMPs).

 

In my view, CMPs are in a position to control the tag/script sequences.

 

Therefore, CMPs could take an ethical stand toward their clients and make consent fair.

 

My personal observation is that - over all - the defaults required for fair consent are shifting.

 

But bad actors may still abuse innovation to propagate consent unfairly through the ad-tech supply chain in an unprecedented way.

 

Rob

 

 

 

-----Original message-----
From: Guy Higgins
Sent: Saturday, February 16, 2019, 11:22 pm
To: Adrian Gropper; Elizabeth Maria
Cc: Doc Searls; katherine; ProjectVRM list
Subject: Re: [projectvrm] Chris Savage's paper on Privacy

+1 

 

I think that Adrian has hit the nail on the head.  We need to help people understand the "What’s In It for Me” (WIIFM) aspect.  Once people appreciate how they are personally impacted, a significant number of people will begin to act and influence market behavior.  That kind of influence is important because laws and regulations change at glacial speed while technology and people’s response to technology change at orbital speeds.

 

Guy

 

From: Adrian Gropper < >
Date: Saturday, February 16, 2019 at 14:02
To: Elizabeth Maria < >
Cc: Doc Searls < >, katherine < >, ProjectVRM list < >
Subject: Re: [projectvrm] Chris Savage's paper on Privacy

 

It's the same as the business model for education. People have to be educated to shun surveillance capitalism for the sake of democracy and our children's future. People have to be given the opportunity to connect to the internet without censorship or rent seeking infrastructure providers. When will people start to be embarrassed to be using Facebook?

 

Adrian

 

(In transit but I have to jump in). 

 

I think some laws/regulations are aimed at this - it’s at the heart of data portability and access rights as well as data protection by design and default requirements. The problem is how long it takes those laws and their impact to be known/felt.

 

Also, very few lawyers have the incentive to argue for more individual agency, since they're not representing individuals or common-interest groups; the most potent legal resources are going to the commercial interests that make lawyers the most money. It's beyond frustrating to me. We need more lawyers looking out for the public interest. But even here in DC, the vast majority of knowledgeable lawyers I know (who get it) are working for policy shops/think tanks that take money from big tech. There is so much capture.

 

It’s also circular. I love Katherine’s point about rewarding those who are doing the right thing/doing good. When that becomes a good business model, it will attract more legal resources.  
 

Sent from my iPhone

 

 

+1 Katherine.

 

What stands out to me is the mention of information asymmetry in Savage's conclusion. Fixing this asymmetry will require technological agency and regulations that support or even require technological agency. I have not read the full Savage paper but I have yet to see any legal or regulatory scholar address what seems obvious to me and those of us involved in the self-sovereign movement. Without technological agency, information symmetry is just something that others (public or private) grant us when it suits them.

 

Where is the learned description of the technological commons for individual humans?

 

It's our job to describe and equip that. In fact, I need to be speaking about it at the Ostrom Workshop next Fall (as will Brett), and it'll be great if we have a lot to talk about.

 

Doc

 

 

Adrian

 

When business income goes up, government revenues go up. That’s why no one cares about the costs of the means to that end.

 

Some businesses' incomes go up while they maintain high employee retention, invest in personal development, build bottom-up communication channels, etc. - they improve their human capital.

 

Others with growing income have high employee suicide rates, divorce rates, drug and alcohol addiction - their human capital isn’t improving, it is burned. 

 

Costs of burning human capital are often borne by society and ultimately the government. But governments don’t measure these costs or see any difference between the income of a company that burns human capital and the income of one that grows human capital.

 

Companies that exploit personal data to make money claim they are doing the same thing as any other business that adds value to commodities.

 

The government doesn’t see the difference between turning a cocoa bean into chocolate and turning data into a “target.”

 

But there is a big difference.  We’re just starting to learn the costs of treating humanity like a commodity to both society and ultimately the government.

 

If accounting standards measured human capital growth and depreciation the way financial capital is measured, governments could tax companies to recover the costs of burning human capital. That would disincentivize companies that exploit personal data to make money, as well as other risks to human capital. It would also give the companies that do the right thing a level playing field, so market forces may work again.

 

Katherine Warman Kern

 

 

Christopher Savage is an alpha telecom attorney in DC whose name at times has been floated as a possible candidate for FCC Chairman. I've known Chris since the middle of the last decade, and have admired his creative, tough, good-humored, sensible and engaging approaches to pretty much everything.

 

A paper he just published through Stanford Law School, Managing the Ambient Trust Commons: The Economics of Online Consumer Information Privacy, footnotes some of my writings and ProjectVRM. It more extensively cites the work of Elinor Ostrom, who was awarded a Nobel Prize in Economics for her groundbreaking work on the commons. Were there no Elinor Ostrom, there might have been no Creative Commons, Customer Commons, or Ostrom Workshop at Indiana University, where at least two of us here (Brett Frischmann and myself) also have involvements.

 

Here's the abstract of Chris' paper:

 

Privacy interests arise from relationships of trust: people share information with those they trust and conceal things from those they don’t. Trust grows when it is respected and diminishes if it is betrayed. Firms in the online ecosystem need consumers to trust them, so the consumers keep coming online, being surveilled, viewing ads, and buying things. But those same entities make money by exploiting consumer trust—using the information they gain to develop individualized profiles that facilitate advertising that gets people to buy things they may not really want or need, at individualized rather than generally available prices. Trust and, thus, privacy, is therefore best viewed as a common-pool resource for the online ecosystem to manage, not as a commodity exchanged in a market between consumers and sellers. The common-pool resource model explains why online entities have incomprehensible privacy policies, why they accept regulation by the Federal Trade Commission, and why they recognize the seriousness of data breaches even as they reject any obligation to compensate consumers when a breach occurs. This model also clarifies the nature of the ongoing economic and political conflict between consumers and online entities about pervasive surveillance and the use of targeted ads. Market-based models, by contrast, do not fit these realities and, as a result, there is no reason to think that “market forces” will optimally equilibrate consumer and seller interests. Some modest regulatory correctives are therefore advisable.

 

And here's the Conclusion:

 

Market forces do not protect consumer privacy interests in the online economic ecosystem. Instead, this ecosystem is best conceived as a commons, in which consumer trust (from which privacy interests arise) is the managed common-pool resource—with online entities, aided by the FTC, acting as the commons managers. The choice of model matters. In the market model of privacy, whatever terms of engagement emerge between consumers and online entities regarding privacy and surveillance come with at least a weak presumption of optimality—that is, that they reflect a fair balancing of consumer and seller interests. In the commons model, however, there is no reason to think that the interests of consumers (whose level of trust is the common-pool resource) are being optimally balanced against those of sellers. Again, the economic objective of the commons managers is not to protect privacy; it is to surveil consumers, and use the data thus gleaned to make it easier to sell things—many of which, of course, consumers want, but others of which they would do better to do without. In this situation, there are some genuine and ongoing conflicts between consumers and the online ecosystem in which consumers increasingly spend time and money, conflicts that we cannot expect market forces to fairly equilibrate or optimize. In light of all this, we should consider some modest public education and regulatory efforts, outlined above. These proposals would begin both to address the information asymmetry problems and to empower consumers to enjoy online content, and transact business online, without having to sacrifice undue amounts of their privacy or their money.

 

The paper is 67 pages and 33,475 words long, so basically it's a book.

 

I look forward to reading and discussing it. If there is interest here, I can also invite Chris in to participate, or at least see if he's game.

 

Doc



--

 

Adrian Gropper MD

PROTECT YOUR FUTURE - RESTORE Health Privacy!
HELP us fight for the right to control personal health data.




 

 


 

--

Devon Loffreto

Founder/ Developer/ Mentor

kidOYO/ OYOclass.com

 


Important: This electronic mail message and any attached files contain information intended for the exclusive use of the party or parties to whom it is addressed and may contain information that is proprietary, privileged, confidential and/or exempt from disclosure under applicable law. If you are not an intended recipient, you are hereby notified that any viewing, copying, disclosure or distribution of this information may be subject to legal restriction or sanction. Please notify the sender, by electronic mail or telephone, of any unintended recipients and delete the original message without making any copies.


 





Archive powered by MHonArc 2.6.19.