Current Berkman People and Projects

Keep track of Berkman-related news and conversations by subscribing to this page using your RSS feed reader. This aggregation of blogs relating to the Berkman Center does not necessarily represent the views of the Berkman Center or Harvard University but is provided as a convenient starting point for those who wish to explore the people and projects in Berkman's orbit. As this is a global exercise, times are in UTC.
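
If you prefer to pull this aggregation programmatically rather than through a feed reader, a minimal sketch using the Python feedparser library is below. The feed URL is a placeholder; substitute whatever feed address this page advertises.

import feedparser

# Placeholder URL: substitute the actual feed address for this page.
FEED_URL = "https://example.org/berkman-planet/atom.xml"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    # Timestamps in the feed are given in UTC, as noted above.
    print(entry.get("published", ""), "|", entry.get("title", ""))
    print("   ", entry.get("link", ""))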

The list of blogs being aggregated here can be found at the bottom of this page.

May 26, 2017

Cyberlaw Clinic - blog
Congratulations, HLS Class of 2017!

This past Wednesday — May 24th, 2017 — marked Class Day at Harvard Law School, which takes place each year one day before the University-wide commencement ceremonies.  It’s one of our favorite days of the year here at the Cyberlaw Clinic, because it gives us the chance to host an annual get-together for graduating Clinic alums and their families and friends.

This year’s celebration came at the end of a fantastic Class Day lineup, with the esteemed Sally Yates offering an inspirational set of remarks, our good friend Mark Wu accepting the 2017 Albert M. Sacks-Paul A. Freund Award for Teaching Excellence, and Cyberlaw Clinic alum (and 2016-17 Berkman Klein Center Fellow) Crystal Nwaneri receiving the David Westfall Memorial Award for Community Leadership.

It also came at the end of a great academic year for us in the Clinic, with the addition of two phenomenal new teacher-practitioners (Jessica Fjeld and Mason Kortz) to the Clinic team; collaborations with a slate of Clinic Advisors that included Nani Jansen Reventlow, Erica Tennyson, and Waide Warner; amicus filings on issues ranging from the right of estate administrators to access a decedent’s emails to the scope of the Massachusetts Anti-SLAPP statute; and some high-profile forays into questions about Internet jurisdiction and the territorial reach of online takedown orders.  All of this happened against the backdrop of our usual array of discrete client advising projects, which had our students addressing everything from the risk of liability for researchers under the Computer Fraud and Abuse Act; to copyright and fair use in the context of archival digitization efforts; to the impacts of global law reform efforts on free expression; to privacy concerns raised by innovative classroom technologies.

The Cyberlaw Clinic team extends its heartfelt congratulations to graduates in the Harvard Law School class of 2017 and wishes them well in all of their future endeavors!

Photos of graduating students Richard Gadsden (HLS JD ’17) and Miranda Means (HLS JD ’17), graduating student Jennifer Luh (HLS JD ’17) and HLS Clinical Professor Susan Crawford, and graduating students Alicia Solow-Niederman (HLS JD ’17) and Nehaa Chaudhari (HLS LLM ’17), courtesy of the Berkman Klein Center for Internet & Society’s intrepid Digital Media Producer, Dan Jones.

by Clinic Staff at May 26, 2017 02:46 PM

Miriam Meckel
The Fear of Thinking Big

The 54 billion euros in additional tax revenue could be used to give citizens relief. Or to make Germany fit for the digital future.

Where has it gone, the courage to take a really big step toward the future for once? We could put Germany in pole position. Make, for once, a political decision of reason in favor of the generations that come after us. For once, not stay stuck at the lowest common denominator at the grand coalition's bargaining table.

According to the calculations of the Arbeitskreis Steuerschätzung, the official tax estimate working group, the state will book 54 billion euros more in tax revenue by 2021 than previously planned. That is the good news. The bad news follows above the line. CSU leader Horst Seehofer promises a "big, powerful tax reform" for the coming legislative period. If we heard from any political side even one big, powerful idea for how to use the money in favor of future prosperity and continued competitiveness, there would at least be a chance for a debate about how Germany wants to think about its future. But in every party, the ambition for such visionary thinking is still hibernating, even in the month of May.

The bottom line: we will once again get Christmas in summer, an election campaign of tax relief plans that revolves around a single question: who promises more? Yet in the rounds of negotiating the next coalition, those promises will be among the gifts that are wrapped and unwrapped under the Christmas tree until only the wrapping paper is left.

It is right to give something back to citizens when public finances turn out better than expected. But there are also problems that cannot be solved simply by trusting that each individual will voluntarily contribute. Germany's miserable progress on digitization is one of them. Some 23,000 business parks alone no longer live up to their name because nothing digital works there. The line that Berlin's politicians are standing on, as the German idiom for being slow on the uptake has it, does not even exist out there.

Why does nobody have the courage to think big for once? Equipping all of Germany with a fast fiber-optic network will cost 80 billion euros. Now is the time to do it. And there is even the chance of a small political windfall along the way. Germany's trade surplus is criticized again and again; an investment program of this size would be a solid argument to hold up against the perpetual grumblers. Finally, it would also be a giant step for education. Germany has no resources other than superbly educated young people, the next entrepreneurs of the digital age.

Chancellor Angela Merkel has, admittedly, rejected the big tax relief plans; 15 billion euros of relief are possible with her. We are still waiting for the ideas for digital growth. Every little investment pot gets a lid. Unfortunately, so does every little head.

wiwo.de

by Miriam Meckel at May 26, 2017 01:35 PM

May 23, 2017

Berkman Center front page
Can We Talk?: An Open Forum on Disability, Technology, and Inclusion

Subtitle

featuring Professors Elizabeth Ellcessor and Meryl Alper with guests

Parent Event

Berkman Klein Luncheon Series

Event Date

May 23 2017 12:00pm to May 23 2017 12:00pm
Thumbnail Image: 
Pictured are Professor Elizabeth Ellcessor and Professor Meryl Alper

Tuesday, May 23 at 12:00 pm
Berkman Center for Internet & Society at Harvard University

Video, audio, and a transcript will be available at this page soon.

This event was co-hosted by the Berkman Klein Center for Internet & Society at Harvard University and the Harvard Law School Dean of Students Office, Accessibility Services.

Can we talk? The question (a favorite prompt of the late comedian Joan Rivers) evokes a feeling of being intimately and sometimes uncomfortably open, frank, and honest, both with others and ourselves. This event, a conversation between Prof. Elizabeth Ellcessor (Indiana University) and Prof. Meryl Alper (Northeastern University, Berkman Klein Center​), points the question at the topic of disability, technology, and inclusion in public and private, and in digital and digitally-mediated spaces. Ryan Budish (Berkman Klein Center) and Dylan Mulvin (Microsoft Research) will serve as discussants.

Can we talk?, with respect to different degrees of potential access (in its social, cultural, and political forms) that new media constrains and affords for individuals with disabilities. Can we talk?, with respect to who does and does not take part in the ongoing research, development, and critique of accessible communication technologies. Can we talk?, with respect to whether or not talking, or its corollary "voice," is an adequate metaphor for conversation, participation, and agency?

Alper and Ellcessor will draw upon their recent respective books, Giving Voice: Mobile Communication, Disability, and Inequality (MIT Press, 2017) and Restricted Access: Media, Disability, and the Politics of Participation (NYU Press, 2016). Both books will be available for purchase and signing.

If you have any questions about arriving at or getting into this event, please do not hesitate to reach out to Carey Andersen at candersen@cyber.law.harvard.edu or at 617-495-7547. Wasserstein Hall, Room 3018 is fully accessible.

About Elizabeth

Elizabeth (Liz) Ellcessor is an assistant professor in the Media School at Indiana University, Bloomington.

Her research focuses on the ways that digital media technologies can both expand and limit people’s access to culture and civil society. Bringing together cultural studies, disability studies, and critical media industry studies, she uses a range of qualitative and historical methods. Focusing on those on the margins–particularly people with disabilities–exposes gaps in mainstream narratives about technological progress, user participation, and engagement with mediated culture.

Additionally, Liz has conducted research on performances of online identity, including social media celebrity, activism, and deception.

Liz teaches a range of courses, from introductory undergraduate courses in media studies to specialized doctoral seminars. Her courses aim to make the familiar strange, providing new details and perspectives with which students can reconsider taken for granted elements of their digitally mediated lives. Additionally, she uses strategies of universal design to make courses accessible for as many students as possible, incorporating captioned content, flexible assignment structures and timelines, and multiple forms of student participation.

Liz is a founding co-chair of the Media, Science, and Technology Studies scholarly interest group of the Society for Cinema and Media Studies.

About Meryl

Meryl Alper is an Assistant Professor of Communication Studies at Northeastern University and a Faculty Associate with the Berkman Klein Center for Internet and Society at Harvard University. Prior to joining the faculty at Northeastern, she earned her doctoral and master’s degrees from the Annenberg School for Communication and Journalism at the University of Southern California. She also holds a bachelor’s degree in Communication Studies and History from Northwestern University, as well as a certificate in Early Childhood Education from UCLA.

Alper’s research explores the social implications of communication technologies for individuals with disabilities, children, and families. In particular, she studies the opportunities and challenges that media and technology provide young people with developmental disabilities and their families in the digital age. She integrates theoretical, empirical, and archival methods in this work and employs a historical, sociological, and critical/cultural perspective.

Alper has worked for over a decade in the children’s media industry. As an undergraduate at Northwestern, she was Lab Assistant Manager in the NSF-funded Children’s Digital Media Center/Digital-Kids Lab and interned in the Education & Research Department at Sesame Workshop in New York. Post graduation, she worked in Los Angeles as a Research Manager for Nick Jr., conducting formative research for the Emmy-nominated educational preschool television series Ni Hao, Kai-lan and The Fresh Beat Band.

Alper is the author of Digital Youth with Disabilities (MIT Press, 2014) and Giving Voice: Mobile Communication, Disability, and Inequality (MIT Press, 2017). Her research has been published in New Media & Society, International Journal of Communication, and IEEE Annals of the History of Computing, among other journals. She has been awarded four Top Paper awards by the International Communication Association for her sole-authored work across multiple ICA divisions. Her research and popular writing have also been featured in a range of venues, including The Guardian, The Atlantic, Motherboard, and Wired.

About Ryan

Ryan Budish is a Senior Researcher at the Berkman Klein Center.  Ryan joined the Berkman Klein Center in 2011 as a Fellow and the Project Director of Herdict.  In his time at Berkman Klein, Ryan has contributed policy and legal analysis to a number of projects and reports, and he has led several significant initiatives relating to Internet censorship, corporate transparency about government surveillance, and multistakeholder governance mechanisms.

 

About Dylan

Dylan Mulvin is a Postdoctoral Researcher at Microsoft Research New England and a member of the Social Media Collective. He joined the collective after completing his PhD at McGill University. Dylan is a historian of technology, media, and computing whose work investigates the design and maintenance of new technologies.  He examines how engineers, scientists, technicians, and bureaucrats make decisions about how to develop shared understandings of the world.

by candersen at May 23, 2017 04:00 PM

May 22, 2017

PRX
We’ve Moved!

We have officially moved our blog presence over to Medium.  Follow us there to get all the latest news about PRX, the Podcast Garage, Radiotopia and much more.  See you there!

The post We’ve Moved! appeared first on PRX.

by Maggie Taylor at May 22, 2017 08:43 PM

MediaBerkman
How to regulate the future of finance
US market regulators offer perspectives on the benefits and risks of the financial technology revolution from distributed ledgers, p2p marketplaces and the use of AI in the financial system. Moderated by Patrick Murck -- Fellow at the Berkman Klein Center for Internet & Society -- the panel discusses the challenge of regulating through disruption and how federal agencies can modernize their approach to keep up with innovation.

John Schindler is an Economist for the Board of Governors of the Federal Reserve System. Jeffrey Bandman is the FinTech Advisor at the U.S. Commodity Futures Trading Commission. Valerie A. Szczepanik is an Assistant Director in the Asset Management Unit of the Division of Enforcement at the U.S. Securities and Exchange Commission (SEC).

More info on this event here: https://cyber.harvard.edu/events/2017/luncheon/05/Fintech

by the Berkman Klein Center at May 22, 2017 05:05 PM

May 19, 2017

Benjamin Mako Hill
Children’s Perspectives on Critical Data Literacies

Last week, we presented a new paper that describes how children are thinking through some of the implications of new forms of data collection and analysis. The presentation was given at the ACM CHI conference in Denver last week and the paper is open access and online.

Over the last couple of years, we’ve worked on a large project to support children in doing — and not just learning about — data science. We built a system, Scratch Community Blocks, that allows the 18 million users of the Scratch online community to write their own computer programs — in Scratch of course — to analyze data about their own learning and social interactions. An example of one of those programs, which finds how many of one’s followers in Scratch are not from the United States, is shown below.
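
That example is a stack of Scratch blocks in the original post; as a rough illustration of the same logic outside Scratch, here is a minimal Python sketch. The get_followers() helper is hypothetical and only stands in for the follower data the real blocks expose.

# Hypothetical helper standing in for the follower data that Scratch
# Community Blocks makes available inside Scratch projects.
def get_followers(username):
    return [
        {"username": "alice", "country": "United States"},
        {"username": "bruno", "country": "Brazil"},
        {"username": "chandra", "country": "India"},
    ]

def count_non_us_followers(username):
    # Count followers whose profile country is not the United States.
    followers = get_followers(username)
    return sum(1 for f in followers if f["country"] != "United States")

print(count_non_us_followers("example_user"))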

Last year, we deployed Scratch Community Blocks to 2,500 active Scratch users who, over a period of several months, used the system to create more than 1,600 projects.

As children used the system, Samantha Hautea, a student in UW’s Communication Leadership program, led a group of us in an online ethnography. We visited the projects children were creating and sharing. We followed the forums where users discussed the blocks. We read comment threads left on projects. We combined Samantha’s detailed field notes with the text of comments and forum posts, with ethnographic interviews of several users, and with notes from two in-person workshops. We used a technique called grounded theory to analyze these data.

What we found surprised us. We expected children to reflect on being challenged by — and hopefully overcoming — the technical parts of doing data science. Although we certainly saw this happen, what emerged much more strongly from our analysis was detailed discussion among children about the social implications of data collection and analysis.

In our analysis, we grouped children’s comments into five major themes that represented what we called “critical data literacies.” These literacies reflect things that children felt were important implications of social media data collection and analysis.

First, children reflected on the way that programmatic access to data — even data that was technically public — introduced privacy concerns. One user described the ability to analyze data as “creepy”, but at the same time, “very cool.” Children expressed concern that programmatic access to data could lead to “stalking” and suggested that the system should ask for permission.

Second, children recognized that data analysis requires skepticism and interpretation. For example, Scratch Community Blocks introduced a bug where the block that returned data about followers included users with disabled accounts. One user, in an interview, described to us how he managed to figure out the inconsistency:

At one point the follower blocks, it said I have slightly more followers than I do. And, that was kind of confusing when I was trying to make the project. […] I pulled up a second [browser] tab and compared the [data from Scratch Community Blocks and the data in my profile].

Third, children discussed the hidden assumptions and decisions that drive the construction of metrics. For example, the number of views received for each project in Scratch is counted using an algorithm that tries to minimize the impact of gaming the system (similar to, for example, Youtube). As children started to build programs with data, they started to uncover and speculate about the decisions behind metrics. For example, they guessed that the view count might only include “unique” views and that view counts may include users who do not have accounts on the website.

Fourth, children building projects with Scratch Community Blocks realized that an algorithm driven by social data may cause certain users to be excluded. For example, a 13-year-old expressed concern that the system could be used to exclude users with few social connections saying:

I love these new Scratch Blocks! However I did notice that they could be used to exclude new Scratchers or Scratchers with not a lot of followers by using a code: like this:
when flag clicked
if then user’s followers < 300
stop all.
I do not think this a big problem as it would be easy to remove this code but I did just want to bring this to your attention in case this not what you would want the blocks to be used for.

Fifth, children were concerned about the possibility that measurement might distort the Scratch community’s values. While giving feedback on the new system, a user expressed concern that by making it easier to measure and compare followers, the system could elevate popularity over creativity, collaboration, and respect as a marker of success in Scratch.

I think this was a great idea! I am just a bit worried that people will make these projects and take it the wrong way, saying that followers are the most important thing in on Scratch.

Kids’ conversations around Scratch Community Blocks are good news for educators who are starting to think about how to engage young learners in thinking critically about the implications of data. Although no kid using Scratch Community Blocks discussed each of the five literacies described above, the themes reflect starting points for educators designing ways to engage kids in thinking critically about data.

Our work shows that if children are given opportunities to actively engage and build with social and behavioral data, they might not only learn how to do data analysis, but also reflect on its implications.

This blog-post and the work that it describes is a collaborative project by Samantha Hautea, Sayamindu Dasgupta, and Benjamin Mako Hill. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson from MIT CSAIL. Financial support came from the US National Science Foundation.

by Benjamin Mako Hill at May 19, 2017 12:51 AM

May 18, 2017

David Weinberger
Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke famously.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say “These million messages are spam, and these million are not,” and ML will take a stab at figuring out what the distinguishing characteristics of spam and not-spam are, perhaps assigning particular words particular weights as indicators, or finding relationships between particular IP addresses, times of day, lengths of messages, etc.
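
As a minimal sketch of the kind of model being described (a toy, not any particular production system), the following trains a bag-of-words spam classifier with scikit-learn and prints the per-word weights it learns; the four-message dataset is made up for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny labeled corpus standing in for the "million messages" of each class.
messages = ["win free money now", "cheap pills online now",
            "meeting moved to noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

model = LogisticRegression()
model.fit(X, labels)

# Each learned coefficient is the weight the model assigns to a word as an
# indicator of spam (positive) or not-spam (negative).
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word}: {weight:+.3f}")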

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O’Neil’s Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it’s going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren’t that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it’s based on data that reflects racial prejudices?[1]

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.

The post Indistinguishable from prejudice appeared first on Joho the Blog.

by davidw at May 18, 2017 09:21 PM

Juan Carlos De Martin
"Università futura" tour (phase 3)
The public presentations of "Università futura" continued in May.

We began on May 2 in Milan, first at the Università Statale (with numerous student representatives) and then at the Politecnico, in the presence of Rector Ferruccio Resta.

On May 9 it was the turn of the University of Turin, for a discussion with colleagues Ugo Pagallo, Massimo Durante, Peppino Ortoleva, Franca Roncarolo, Barbara Gagliardi, and Anna Masera.

On May 15 I was at the Università Federico II in Naples, for a presentation with Rector Gaetano Manfredi (current president of CRUI) and colleagues Giorgio Ventre, Roberto Delle Donne, and Guido Trombetti (former Rector and former President of CRUI).

Finally, I will present "Università futura" at the Salone del Libro in Turin on Thursday, May 18, at 3:30 pm, together with my friend and colleague Massimo Durante.

by Juan Carlos De Martin at May 18, 2017 08:04 AM

May 17, 2017

Justin Reich
Teaching 21st Century Skills Requires More Than Just Technology
To foster the 21st-century skills of communication and collaboration in students requires more than just access to Google docs.

by Beth Holland at May 17, 2017 09:51 PM

Miriam Meckel
The Newcomer and the Incumbent

Emmanuel Macron's victory is a wake-up call for Europe: carrying on as before is no longer an option. Berlin should take that to heart, too.

There are two kinds of war reporters whose lives are in danger: those who are new to the job, and those who have been doing it for a long time. It is much the same in European politics. Emmanuel Macron, the young, economically liberal winner of the French presidential election, deserves the respect and thanks of all those who rightly feared what would have been at stake had the election gone the other way: Europe itself. Sixty years of hard work on political pacification through integration and on economic growth through the single market. Macron is new to the business. From now on he must balance gracefully on lines of compromise he did not draw himself. And he could crash faster than even the ill-disposed might wish. Angela Merkel, the experienced head of government who saved herself through the euro crisis with the help of a living savings book in the form of her finance minister, has been in office a long time. At almost twelve years, too long, some observers believe. In May 2010 she said: "If the euro fails, Europe fails." Seven years later we know that Europe can fail in ways quite different from the euro: through wars in other parts of the world that send millions fleeing westward, through terror, through fatigue fractures in the EU's scaffolding. Also through stubborn nationalism and populist movements that promise citizens the clock can be turned back and the world can be "liberated" from the progress of recent decades.

The newcomer and the incumbent must work with each other. Only via the Franco-German axis will the European engine of growth and integration be restarted. Macron wants to tackle two things: reforms at home, not least so that France can finally meet the stability criteria again, and a fundamental reform of the political leadership of the euro zone. In the medium term, the European Union (EU) must cooperate more closely on defense, security, and migration policy. So far, so good. But as soon as it comes to financing all of this, everyone in Berlin immediately spins into a panic. Less than 24 hours after the election, the Chancellor sent her warning signal toward Paris: no change in German policy. Macron's election victory has created real momentum. It will be hard enough to sustain and build on it. To toast him for the victory only to shoot him publicly in the knee mid-celebration is simply hypocritical. Apparently the federal government lacks real insight and conviction that this chance for a European fresh start will not come around again soon. Europe cannot be taken for granted. It has only just escaped the devil of retro-nationalism by a whisker. Anyone who believes this means "carry on as before" is being complacent. You can decide to do without Europe if you think things would work without it. But you cannot be spared the consequences.

by Miriam Meckel at May 17, 2017 07:03 AM

May 16, 2017

Berkman Center front page
How to regulate the future of finance?

Subtitle

featuring John Schindler from the Federal Reserve, Jeff Bandman from the CFTC, and Valerie Szczepanik from the SEC

Parent Event

Berkman Klein Luncheon Series

Event Date

May 16 2017 12:00pm to May 16 2017 12:00pm
Thumbnail Image: 

Tuesday, May 16 at 12:00 pm
Berkman Center for Internet & Society at Harvard University

US market regulators offer perspectives on the benefits and risks of the financial technology revolution from distributed ledgers, p2p marketplaces and the use of AI in the financial system. Moderated by Patrick Murck -- Fellow at the Berkman Klein Center for Internet & Society -- the panel discusses the challenge of regulating through disruption and how federal agencies can modernize their approach to keep up with innovation.

Patrick Murck is a Fellow at the Berkman Klein Center for Internet & Society.

John Schindler is an Economist for the Board of Governors of the Federal Reserve System.

Jeffrey Bandman is the FinTech Advisor at the U.S. Commodity Futures Trading Commission. 

Valerie A. Szczepanik is an Assistant Director in the Asset Management Unit of the Division of Enforcement at the U.S. Securities and Exchange Commission (SEC). 

Download original audio and video from this event.

Subscribe to the Berkman Klein events podcast to have audio from all our events delivered straight to you!

Photo credit to Zach Copley

by candersen at May 16, 2017 04:00 PM

Zeynep Tufekci
My book, Twitter and Tear Gas, is out! News and Details!

Dear Friends,

My book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, is officially out today, May 16th! It is published by Yale University Press, and it weaves stories with conceptual work. It is both a quasi-historical account of some 21st century mass protests and an engagement with theories of social movements, the public sphere, and technology. I tried to write it with as much narrative structure as possible to make it readable to the broadest audience.

Some news: there will be a free creative commons copy of my book. It will be available as a free PDF download in addition to being sold as a bound book. This is with the hope that anyone who wants to read it can do so without worrying about the cost. However, this also means that I need to ask that a few people who can afford to do so please consider purchasing a copy. This is not just so that Yale University Press can do this for more authors, but also because if it is not sold (at least a little bit!) in the initial few weeks, bookstores will not stock it and online algorithms will show it to fewer people. No sales will mean less visibility, and less incentive for publishers to allow other authors creative commons copies.

Another request I have is that, if you do read it (and especially if you liked it, heh!), please consider leaving a review on Amazon or Goodreads. This is not an attempt to inflate its reviews but a request to help me fight back the inevitable attempts to suppress books like mine that talk about repressive governments and censorship and other hot-button topics. (If you follow me on Twitter, you can see that I constantly engage with negative criticisms and welcome feedback, good or bad). Given my topics, my book is likely to be targeted by a deliberate campaign to suppress its visibility, because the trolls who game algorithms know that books that are negatively rated are shown to fewer people, and also that even when people know that some reviews are just shills, the initial impression means something. If you search for a book on Google, all you see is the number of stars, with no context for whether a good deal of those are one-star reviews that are purposefully malicious and are not by readers of the book. This has happened to other books like this, and I’ve already started seeing a few signs targeting my book. A flood of actual reviews not only fights that off and averages out the “one star ratings” of trolls, it signals that it’s not worth the effort to try to torpedo it this way. (On a side note! What a world!)

I negotiated the creative commons copy with my (wonderful!) publisher Yale University Press because I really wanted to do what I could to share my insights as broadly as I could about social movements and the networked public sphere. If I make a penny more from this book because it sells well by some miracle, I will donate every extra penny to groups supporting refugees, and if I ever meet you in person and you purchased a copy of the book in support, please let me know and I’ll buy the coffee or beer. 😀 This isn’t at all about money for me.

Encouraging more free creative commons copies is especially important for people in developing countries, for whom book delivery and cost is an issue, and I was such a person until I came over to the United States. I was never able to afford or find all the books I wanted to read. This also helps undergraduate and graduate students pay less for books that they need. If some of you buy this book, publishers can feel more empowered to let other authors also provide free copies online. It is not as easy as “just blog your material.” I do a lot of that, but writing a book that is coherent and more readable takes a lot of effort and editing from the publisher, and they can’t just do that for free.

So if you can afford it: please consider purchasing my book. Amazon link is here, and Yale University Press link with other options for purchase is here. The book’s own website, where the creative commons copy lives, can be found here. You can also keep up with what’s next, what else I’m doing and more on my newsletter.

Thank you so much to everyone who has supported me and interacted with me through the years. If you do end up reading the book, please do know that I would love to hear from you. I may need to fight back trolls online, but I truly appreciate feedback and consider it a gift. My deep gratitude also  goes to everyone striving for positive social change who welcomed me into their lives over the years.

The book is, rightfully, dedicated to my wonderful grandmother whose love and devotion, as I say in the dedication, “made everything else possible.”

best,
-zeynep

Some Reviews of Twitter and Tear Gas: The Power and Fragility of Networked Protest

Inside Higher Ed:

If you’re interested in what’s happening in the world today, this book is a fascinating read. Even if you’re not, it’s an unusually informative book about digital platforms usually examined apart from political life. Social interactions in the digital world in the context of political activity is insightfully explored through this wonderfully readable academic study.

Publishers Weekly:

This insightful and analytical account of mass protest in the 21st century focuses on the “intertwined” power and weaknesses of new technologies that can be used to galvanize large numbers of people. … This comprehensive, thought-provoking work makes a valuable contribution to understanding recent political developments and provides a clear path by which grassroots organizers can improve future efforts.

Financial Times:

The author is also insightful on how governments and politicians are moving from censorship, no easy task on social media, to attention-grabbing and misinformation. “Just as attention is under-appreciated as a resource for social movements, distractions and ignorance are under-appreciated as methods of repression through denial of attention,” she writes. Sowing cynicism is a powerful tool against protest: “If everything is in doubt, while the world is run by secret cabals that successfully manipulate everything behind the scenes, why bother?”  …  Twitter and Tear Gas is packed with evidence on how social media has changed social movements, based on rigorous research and placed in historical context.

by zeynep at May 16, 2017 01:32 PM

May 15, 2017

David Weinberger
[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravity lens.” Same for AI: The distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both cardiologist and patient lenses to better to diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help Law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers early relationships among them, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett, is a professor at Univ. New Hampshire Law School and is a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids. We kick them out of schools. E.g., of 49M students in public schools in 20-11, 3.45M were suspended, and 130,000 students were expelled. These disproportionately affect children from marginalized segments.

Get rid of the BS safety justification and the govt ought to be teaching all our children without exception. So, maybe have AI teach them?

Sarah: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]

The post [liveblog][AI] AI and education lightning talks appeared first on Joho the Blog.

by davidw at May 15, 2017 06:26 PM

[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at Berkman Klein and the MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile which is split between people who do and do not watch football. “Humans are inherently biased.” So, our AI systems are likely to be biased. Cognitive science has shown that the participants in their studies tend to be WEIRD: western, educated, industrialized, rich and developed. Also straight and white. He references Kate Crawford‘s “AI’s White Guy Problem.” We need not only diverse teams of developers, but also to think about how data can be more representative. We also need to think about the users. One approach is work on goal centered design.

If we ever get to unbiased AI, Borges‘ statement, “The original is unfaithful to the translation” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data and Society and an anthropology of technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set about to build bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of them were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at the BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine and Minsky in 1966. Then there was an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. who controls the code/frameworks/data
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to only trust to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design that is not extractive but also isn’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.

The post [liveblog][AI] Perspectives on community and AI appeared first on Joho the Blog.

by davidw at May 15, 2017 05:41 PM

[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center‘s and MIT Media Lab‘s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable, all of them…?
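
As an illustration of the trade-off being raised here (a toy example of my own, not Doshi-Velez's method), the sketch below trains a shallow decision tree and a random forest on the same data and, if their test accuracy is comparable, prefers the more interpretable tree.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-3 tree can be read off by a person; a 100-tree forest cannot.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

tree_acc = tree.score(X_test, y_test)
forest_acc = forest.score(X_test, y_test)

# If accuracy is effectively tied, report the interpretable model.
if abs(tree_acc - forest_acc) < 0.02:
    print("Comparable accuracy; prefer the shallow tree (%.3f vs %.3f)" % (tree_acc, forest_acc))
else:
    print("Accuracy differs: tree=%.3f forest=%.3f" % (tree_acc, forest_acc))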

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines and hints that help us solve problems that neither partner could solve on its own. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves“) where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Mathias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!), and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. And, what are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, where that raises questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share them with others.

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representation of humanity is working on these systems. So here’s my question: We find co-design works well: bringing in the affected populations to talk with the system designers?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.

The post [liveblog] AI Advance opening: Jonathan Zittrain and lightning talks appeared first on Joho the Blog.

by davidw at May 15, 2017 03:22 PM

ProjectVRM
CustomerTech


We now have a better name for VRM than VRM: customertech.

Hashtag, #customertech.

We wouldn’t have it without adtech (3+ million results), martech (1.85m), fintech (22+m) and regtech (.6m), all of which became hot stuff in the years since we started ProjectVRM in 2006. Thanks to their popularity, customertech makes full sense of what VRM has always been about.

The term came to us from Iain Henderson, a fellow board member of Customer Commons, in response to my request for help prepping for a talk I was about to give at the Martech conference in San Francisco last Thursday. Among other hunks of good advice, Iain wrote “martech needs customertech.”

That nailed it.

So I vetted customertech in my talk, and it took. The audience in the huge ballroom was attentive and responsive.

The talk wasn’t recorded, but @xBarryLevine in Martech Today wrote up a very nice report on it, titled “MarTech Conference: Doc Searls previews ‘customer tech’,” with the subhead “The marketing writer/researcher has helped set up a ‘Customer Commons’ to provide some of the automated ‘contracts’ between customers and brands.”

One problem we’ve had with VRM as a label is an aversion by VRM developers to using it, even as they participate in VRM gatherings and participate in our mailing list (of about 600 members). It doesn’t matter why.

It does matter that martech likes customertech, and understands it instantly. In conversations afterwards, martech folk spoke about it knowingly, without ever having encountered it before. It was like, “Of course, customertech. Tech the customer has.”

I highly recommend to VRM developers that they take to it as well. I can’t think of anything that will help the cause more.

The word alone should also suggest a symbol or an illustration better than VRM ever did.

This doesn’t mean, by the way, that we are retiring VRM, since Vendor Relationship Management earned its Wikipedia entry (at that link), and is one of the most important things customertech can do.

Meanwhile, a hat tip to Hugh MacLeod of Gapingvoid, for the image above. He drew it for a project we both worked on, way back in ’04.

by Doc Searls at May 15, 2017 03:08 PM

May 14, 2017

Panagiotis Metaxas
Artificial Intelligence, your brain, and other things you cannot trust about politics

A few days ago the Center for Research on Computation and Society organized a workshop with the provocative title “Six Reasons Fake News is the End of the World as we Know It“. I call it provocative because whether “fake news” is a new thing or not has been discussed a lot lately. Not all of us agree on what it is, or how novel it is. Some point out that it is as old as newspapers; others see it as something that mainly appeared last year. Yet others doubt that it is even a phenomenon worth discussing and say that, instead of fake news, we should talk about specific categories such as false news, misinformation, disinformation, and propaganda.

Accepting the challenge, I gave a talk with an equally provocative, I would like to believe, title:  “Artificial Intelligence, your brain, and other things you cannot trust about politics“. You can follow my talk in the video below, but let me give you a list of the “things” that I discussed in the talk:

[Image: the talk's list of what you cannot trust about politics]

I hope you find it interesting and do your own thinking about what we can trust when it comes to politics. Importantly, we need to figure out how to solve the problems of online misinformation and propaganda that seem to be all around us these days.

Or, to learn how to live with them, which is what I think will happen.

by metaxas at May 14, 2017 01:24 AM

May 12, 2017

MediaBerkman
Zeynep Tufekci on Twitter and Tear Gas: The Power and Fragility of Networked Protest
Berkman Klein Faculty Associate Zeynep Tufekci joins us to talk about her new book, Twitter and Tear Gas: The Power and Fragility of Networked Protest. To understand a thwarted Turkish coup, an anti–Wall Street encampment, and a packed Tahrir Square, we must first comprehend the power and the weaknesses of using new technologies to mobilize large numbers of people. An incisive observer, writer, and participant in today’s social movements, Zeynep Tufekci explains in this accessible and compelling book the nuanced trajectories of modern protests—how they form, how they operate differently from past protests, and why they have difficulty persisting in their long-term quests for change. Tufekci speaks from direct experience, combining on-the-ground interviews with insightful analysis. She describes how the internet helped the Zapatista uprisings in Mexico, the necessity of remote Twitter users to organize medical supplies during Arab Spring, the refusal to use bullhorns in the Occupy Movement that started in New York, and the empowering effect of tear gas in Istanbul’s Gezi Park. These details from life inside social movements complete a moving investigation of authority, technology, and culture—and offer essential insights into the future of governance.

About Zeynep

Zeynep Tufekci is an assistant professor at the University of North Carolina, Chapel Hill at the School of Information and Library Science with an affiliate appointment in the Department of Sociology. She is also currently a Fellow at the Berkman Center for Internet and Society at Harvard University. She was previously an assistant professor of sociology at the University of Maryland, Baltimore County. Her research revolves around the interaction between technology and social, cultural and political dynamics. She is particularly interested in collective action and social movements, complex systems, surveillance, privacy, and sociality.

For more info on this event visit: https://cyber.harvard.edu/events/2017/luncheon/05/Tufekci

by the Berkman Klein Center at May 12, 2017 05:27 PM

May 11, 2017

MediaBerkman
Ifeoma Ajunwa on The Quantified Worker
What are the rights of the worker in a society that seems to privilege technological innovation over equality and privacy? How does the law protect worker privacy and dignity given technological advancements that allow for greater surveillance of workers? What can we expect for the future of work; should privacy be treated as merely an economic good that could be exchanged for the benefit of employment? In this talk Berkman Klein fellow Ifeoma Ajunwa looks at how the law and private firms respond to job applicants or employees perceived as “risky,” and the organizational behavior in pursuit of risk reduction by private firms, as well as ethical issues arising from how firms off-set risk to employees. For more info on this event, visit: https://cyber.harvard.edu/events/2017/luncheon/05/Ajunwa

by the Berkman Klein Center at May 11, 2017 05:40 PM

David Weinberger
[liveblog] St. Goodall

I’m in Rome at the National Geographic Science Festival, co-produced by Codice Edizioni, which, not entirely coincidentally, published the Italian version of my book Too Big to Know. Jane Goodall is giving the opening talk to a large audience full of students. I won’t try to capture what she is saying because she is talking without notes, telling her personal story.

She embodies an inquiring mind capable of radically re-framing our ideas simply by looking at the phenomena. We may want to dispute her anthropomorphizing of chimps but it is a truth that needed to be uncovered. For example, she says that when she got to Cambridge to get a graduate degree — even though she had never been to college — she was told that she shouldn’t have given the chimps names. But this, she says, was because at the time science believed humans were unique. Since then genetics has shown how close we are to them, but even before that her field work had shown the psychological and behavioral similarities. So, her re-framing was fecund and, yes, true.

At a conference in America in 1986, every report from Africa was about the decimation of the chimpanzee population and the abuse of chimpanzees in laboratories. “I went to this conference as a scientist, ready to continue my wonderful life, and I left as an activist.” Her Tacare Institute works with and for Africans. For example, local people are equipped with tablets and phones and mark chimp nests, downed trees, and the occasional leopard. (Tacare provides scholarships to keep girls in school, “and some boys too.”)

She makes a totally Dad joke about “the cloud.”

It is a dangerous world, she says. “Our intellects have developed tremendously.” “Isn’t it strange that this most intellectual creature ever is destroying its home.” She calls out the damage done to our climate by our farming of animals. “There are a lot of reasons to avoid eating a lot of meat or any, but that’s one of them.”

There is a disconnect between our beautiful brains and our hearts, she says. Violence, domestic violence, greed…”we don’t think ‘Are we having a happy life?'” She started “Roots and Shoots” in 1991 in Tanzania, and now it’s in 99 countries, from kindergartens through universities. It’s a program for young people. “We do not tell the young people what to do.” They decide what matters to them.

Her reasons for hope: 1. The reaction to Roots and Shoots. 2. Our amazing brains. 3. The resilience of nature. 4. Social media, which, if used right, can be a “tremendous tool for change.” 5. “The indomitable human spirit.” She uses Nelson Mandela as an example, but also refugees making lives in new lands.

“It’s not only humans that have an indomitable spirit.” She shows a brief video of the release of a chimp that left at least some wizened adults in tears:

She stresses making the right ethical choices, a phrase not heard often enough.

If in this audience of 500 students she has not made five new scientists, I’ll be surprised.

The post [liveblog] St. Goodall appeared first on Joho the Blog.

by davidw at May 11, 2017 10:24 AM

May 10, 2017

Miriam Meckel
Reaction and Counter-Reaction

No political leader manages to present progress as an opportunity. Society's spirit has run aground.

Newton's third law breathes an air of self-evidence today. Action and reaction interact: every action produces an equally large reaction that acts back on its originator. That is physics, for a start. But it is also politics, or rather civics. The banality of this law turns nasty very quickly once you drag it out of the mechanics of physical bodies and into the light of current events.

There, quite different forces face each other: those of the liberal order and those defending the past. They fight very audibly and visibly, above all along the front lines of globalization. Britain's decision to leave the EU, long stuck in the illusory world of yesterday; the election of Donald Trump as US president on the strength of economic promises from the dawn of the industrial age; and the rise of right-wing parties in many European countries all show that forces are arming for this political interplay. The counterstrike against a liberalism supposedly driven forward by hyperactivity is imminent. At least, that is what the cadets of political nostalgia wish for.

Perhaps they will carry off an interim victory in this decade. That can happen, as Joseph de Maistre already knew. A member of the Catholic nobility, he was the first to describe reactionary thinking as a counter-strategy to the terror of the French Revolution at the end of the 18th century. He was a representative of the Counter-Enlightenment, which sets in, entirely in Newton's sense, when nostalgia turns from a help into a hindrance.

Psychological research shows that nostalgia is what makes change bearable in the first place. From the memory of what was once good and beautiful comes reassurance: it can be that way again. But politics is not physics, and so the counter-reaction can turn out entirely disproportionate. Then nostalgia becomes an ideology in the fight for yesterday.

At present no leading political figure manages to tell the story of progress through the opportunities that come with it. The spirit of modernization has somehow run aground. The Colombian philosopher Nicolás Gómez Dávila once wrote: "The reactionary is the reaction to the actionary." This sentence by another radical representative of the Counter-Enlightenment is barely understood today, because in German "Aktionär" now means nothing more than shareholder.

Why, then, do shareholders leave it to politics alone to tell a convincing story of progress? As owners of companies, they are the most important protagonists of modernization. Their duty of loyalty extends not only to the company in which they hold a stake; it extends to the greater whole as well. The economy has never yet grown on the reactionary.

by Miriam Meckel at May 10, 2017 01:42 PM

May 09, 2017

Benjamin Mako Hill
Surviving an “Eternal September:” How an Online Community Managed a Surge of Newcomers

Attracting newcomers is among the most widely studied problems in online community research. However, with all the attention paid to the challenge of getting new users, much less research has studied the flip side of that coin: large influxes of newcomers can pose major problems as well!

The most widely known example of problems caused by an influx of newcomers into an online community occurred in Usenet. Every September, new university students connecting to the Internet for the first time would wreak havoc in the Usenet discussion forums. When AOL connected its users to the Usenet in 1994, it disrupted the community for so long that it became widely known as “The September that never ended”.

Our study considered a similar influx in NoSleep—an online community within Reddit where writers share original horror stories and readers comment and vote on them. With strict rules requiring that all members of the community suspend disbelief, NoSleep thrives off the fact that readers experience an immersive storytelling environment. Breaking the rules is as easy as questioning the truth of someone’s story. Socializing newcomers represents a major challenge for NoSleep.

Number of subscribers and moderators on /r/NoSleep over time.

On May 7th, 2014, NoSleep became a “default subreddit”—i.e., every new user to Reddit automatically joined NoSleep. After gradually accumulating roughly 240,000 members from 2010 to 2014, the NoSleep community grew to over 2 million subscribers in a year. That said, NoSleep appeared to largely hold things together. This reflects the major question that motivated our study: How did NoSleep withstand such a massive influx of newcomers without enduring their own Eternal September?

To answer this question, we interviewed a number of NoSleep participants, writers, moderators, and admins. After transcribing, coding, and analyzing the results, we proposed that NoSleep survived because of three inter-connected systems that helped protect the community’s norms and overall immersive environment.

First, there was a strong and organized team of moderators who enforced the rules no matter what. They recruited new moderators knowing the community’s population was going to surge. They utilized a private subreddit for NoSleep’s staff. They were able to socialize and educate new moderators effectively. Although issuing sanctions against community members was often difficult, our interviewees explained that NoSleep’s moderators were deeply committed and largely uncompromising.

That commitment resonates within the second system that protected NoSleep: regulation by normal community members. From our interviews, we found that the participants felt a shared sense of community that motivated them both to socialize newcomers themselves and to report inappropriate comments and downvote people who violated the community’s norms.

Finally, we found that the technological systems protected the community as well. For instance, post-throttling was instituted to limit the frequency at which a writer could post their stories. Additionally, Reddit’s “Automoderator”, a programmable AI bot, was used to issue sanctions against obvious norm violators while running in the background. Participants also pointed to the tools available to them—the report feature and voting system in particular—to explain how easy it was for them to report and regulate the community’s disruptors.
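To make the post-throttling idea concrete, here is a minimal, purely hypothetical Python sketch of a rolling-window rate limit of the kind the participants describe; the window length, the one-post limit, and the function name are invented for illustration and are not NoSleep's actual settings or Reddit's API.

import time
from collections import defaultdict, deque

# Hypothetical post-throttling sketch: each author may publish at most MAX_POSTS
# stories within a rolling WINDOW_SECONDS window. The numbers are illustrative,
# not NoSleep's real limits.
WINDOW_SECONDS = 24 * 60 * 60
MAX_POSTS = 1

_recent_posts = defaultdict(deque)  # author -> timestamps of that author's recent posts

def allow_post(author, now=None):
    """Return True and record the post if the author is under the limit."""
    now = time.time() if now is None else now
    timestamps = _recent_posts[author]
    # Drop timestamps that have aged out of the rolling window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_POSTS:
        return False
    timestamps.append(now)
    return True

# A second story submitted an hour later would be throttled:
print(allow_post("storyteller", now=0))     # True
print(allow_post("storyteller", now=3600))  # False

The point of the sketch is only the shape of the mechanism: a simple, automatic limit that runs in the background so moderators do not have to police posting frequency by hand.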

This blog post was written with Charlie Kiene. The paper and the work this post describes are collaborative work with Charlie Kiene and Andrés Monroy-Hernández. The paper was published in the Proceedings of CHI 2016 and is released as open access so anyone can read the entire paper here. A version of this post was published on the Community Data Science Collective blog.

by Benjamin Mako Hill at May 09, 2017 06:32 PM

Berkman Center front page
Twitter and Tear Gas with Zeynep Tufekci

Subtitle

The Power and Fragility of Networked Protest

Teaser

Join us for this firsthand account and incisive analysis of modern protest, revealing internet-fueled social movements’ greatest strengths and frequent challenges.

Parent Event

Berkman Klein Luncheon Series

Event Date

May 9 2017 12:00pm to May 9 2017 12:00pm

Tuesday, May 9, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University

Berkman Klein Faculty Associate Zeynep Tufekci joins us to talk about her new book, Twitter and Tear Gas: The Power and Fragility of Networked Protest.

To understand a thwarted Turkish coup, an anti–Wall Street encampment, and a packed Tahrir Square, we must first comprehend the power and the weaknesses of using new technologies to mobilize large numbers of people. An incisive observer, writer, and participant in today’s social movements, Zeynep Tufekci explains in this accessible and compelling book the nuanced trajectories of modern protests—how they form, how they operate differently from past protests, and why they have difficulty persisting in their long-term quests for change.

Tufekci speaks from direct experience, combining on-the-ground interviews with insightful analysis. She describes how the internet helped the Zapatista uprisings in Mexico, the necessity of remote Twitter users to organize medical supplies during Arab Spring, the refusal to use bullhorns in the Occupy Movement that started in New York, and the empowering effect of tear gas in Istanbul’s Gezi Park. These details from life inside social movements complete a moving investigation of authority, technology, and culture—and offer essential insights into the future of governance.

About Zeynep

Zeynep Tufekci is an assistant professor at the University of North Carolina, Chapel Hill at the School of Information and Library Science with an affiliate appointment in the Department of Sociology. She is also currently a Fellow at the Berkman Center for Internet and Society at Harvard University. She was previously an assistant professor of sociology at the University of Maryland, Baltimore County. Her research revolves around the interaction between technology and social, cultural and political dynamics. She is particularly interested in collective action and social movements, complex systems, surveillance, privacy, and sociality.

Links

Download original audio and video from this event.

Subscribe to the Berkman Klein events podcast to have audio from all our events delivered straight to you!

by candersen at May 09, 2017 04:00 PM

May 07, 2017

David Weinberger
Predicting the tides based on purposefully false models

Newton showed that the tides are produced by the gravitational pull of the moon and the Sun. But, as a 1914 article in Scientific American pointed out, if you want any degree of accuracy, you have to deal with the fact that “the earth is not a perfect sphere, it isn’t covered with water to a uniform depth, it has many continents and islands and sea passages of peculiar shapes and depths, the earth does not travel about the sun in a circular path, and earth, sun and moon are not always in line. The result is that two tides are rarely the same for the same place twice running, and that tides differ from each other enormously in both times and in amplitude.”

So, we instead built a machine of brass, steel and mahogany. And instead of trying to understand each of the variables, Lord Kelvin postulated “a very respectable number” of fictitious suns and moons in various positions over the earth, moving in unrealistically perfect circular orbits, to account for the known risings and fallings of the tide, averaging readings to remove unpredictable variations caused by weather and “freshets.” Knowing the outcomes, he would nudge a sun or moon’s position, or add a new sun or moon, in order to get the results to conform to what we know to be the actual tidal measurements. If adding sea serpents would have helped, presumably Lord Kelvin would have included them as well.

The first mechanical tide-predicting machines using these heuristics were made in England. In 1881, one was created in the United States that was used by the Coast and Geodetic Survey for twenty-seven years.

Then, in 1914, it was replaced by a 15,000-piece machine that took “account of thirty-seven factors or components of a tide” (I wish I knew what that means) and predicted the tide at any hour. It also printed out the information rather than requiring a human to transcribe it from dials. “Unlike the human brain, this one cannot make a mistake.”

This new model was more accurate, with greater temporal resolution. But it got that way by giving up on predicting the actual tide, which might vary because of the weather. We simply accept the unpredictability of what we shall for the moment call “reality.” That’s how we manage in a world governed by uniform laws operating on unpredictably complex systems.

It is also a model that uses the known major causes of average tides — the gravitational effects of the sun and moon — but that feels fine about fictionalizing the model until it provides realistic results. This makes the model incapable of being interrogated about the actual causes of the tide, although we can tinker with it to correct inaccuracies. In this there is a very rough analogy — and some disanalogies — with some instances of machine learning.
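For readers who want to see the fitting idea in miniature, here is a hypothetical Python sketch of harmonic tide prediction: each fictitious sun or moon becomes a cosine and sine pair at a fixed angular speed, the amplitudes are fitted to observed water levels by least squares, and the fitted sum is then used to predict future tides. The four constituent speeds and all the numbers below are illustrative stand-ins, not the thirty-seven components of the 1914 machine.

import numpy as np

# Hypothetical harmonic-constituent sketch: angular speeds in radians per hour.
# These four values are illustrative stand-ins for real tidal constituents.
SPEEDS = np.array([0.5059, 0.5236, 0.2625, 0.2434])

def fit_constituents(t_hours, observed_levels):
    """Fit mean + sum of A*cos(w t) + B*sin(w t) terms by least squares."""
    columns = [np.ones_like(t_hours)]
    for w in SPEEDS:
        columns.append(np.cos(w * t_hours))
        columns.append(np.sin(w * t_hours))
    design = np.column_stack(columns)
    coefficients, *_ = np.linalg.lstsq(design, observed_levels, rcond=None)
    return coefficients

def predict_tide(t_hours, coefficients):
    """Sum the fitted constituents at new times to predict the water level."""
    level = np.full(t_hours.shape, coefficients[0])
    for i, w in enumerate(SPEEDS):
        a, b = coefficients[1 + 2 * i], coefficients[2 + 2 * i]
        level = level + a * np.cos(w * t_hours) + b * np.sin(w * t_hours)
    return level

# Fit to a week of hourly "observations" (synthetic here), then predict the next day.
t = np.arange(0.0, 24 * 7)
observed = 1.2 * np.cos(0.5059 * t) + 0.4 * np.sin(0.5236 * t) + 0.05 * np.random.randn(t.size)
coefficients = fit_constituents(t, observed)
next_day = np.arange(24.0 * 7, 24.0 * 8)
print(predict_tide(next_day, coefficients)[:5])

The sketch makes the same trade the post describes: the constituents are fictions, but nudging their amplitudes until the output matches past observations is what makes the predictions useful.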

The post Predicting the tides based on purposefully false models appeared first on Joho the Blog.

by davidw at May 07, 2017 03:28 PM

May 06, 2017

Miriam Meckel
Business Feminism?

Group selfie with (clockwise from bottom left) Canada's Foreign Minister Chrystia Freeland, Family Minister Manuela Schwesig, IMF chief Christine Lagarde, Queen Máxima of the Netherlands, Chancellor Angela Merkel, Nicola Leibinger-Kammüller (Trumpf), Ivanka Trump, Anne Finucane (Bank of America), MM

Anyone who defines feminism in terms of economic success supposedly doesn't take equality all that seriously. What nonsense.

It was a moment for the ages: the instant when Chancellor Angela Merkel faltered on the stage of a Berlin hotel in front of hundreds of guests at the W20 summit. The trigger was the question of whether she would describe herself as a feminist. It then took a few rounds of words before the Chancellor and the term edged closer together. It was the beginning of a friendship, laced with remnants of mistrust. At the very least, the Chancellor wanted to take the question home with her, "whether I am a feminist or not."

Every woman and every man must answer this question for themselves. Anyone who stands up for women's equality and self-determination and does not hide that conviction from the world may call themselves a feminist. That some shy away from doing so has little to do with the word and much to do with the pigeonholes into which thoughts are sometimes so readily sorted. Petty sorting exercises, however, have never led very far.

It was right that the session with the Chancellor, IMF chief Christine Lagarde, Ivanka Trump, and others concentrated on women and entrepreneurship. The numbers speak plainly. Only one in ten start-ups in Germany is founded by a woman. Worldwide, 70 percent of companies founded by women are underfunded because their founders lack access to credit and other forms of financing. And if women were finally employed to the same extent as men, global gross national product could grow by 28 trillion dollars by 2025. So if a fund to support women entrepreneurs in developing countries emerges from the W20 session, that is at least a concrete starting point.

Out of that session with the Chancellor, however, a new term quickly arose, one with which we are now apparently supposed to learn to distinguish the good, correct feminism from the false, evil kind. "Business feminism" is the new term of abuse, and it hits everyone who also wants to make women an engine of economic growth. At that point I could get a little angry: how else, please, are we supposed to offer women the same chances and opportunities if not by creating the conditions for entrepreneurship, with everything that belongs to it: education, childcare, capital, and so on? That produces growth, for the economy as a whole and for each individual.

A person grows enormously through their own entrepreneurial work. You do not have to be a feminist to recognize that; being a realist will do. As a realist you also readily see that terms, like people, move with the times. Feminism once began with "my belly belongs to me." We can safely expand its range of meaning: "my company belongs to me" is now part of the repertoire of those who call themselves feminists.

 

by Miriam Meckel at May 06, 2017 04:52 PM

May 04, 2017

Justin Reich
To Measure Change, We Need to Move Beyond Quantitative Research
To measure change in education, we need to move beyond only using quantitative research methods.

by Beth Holland at May 04, 2017 08:05 PM

Center for Research on Computation and Society (Harvard SEAS)
Postdoc Yang Liu Publishes in the 18th ACM Conference on Economics and Computation (EC-17)
May 4, 2017

CRCS Postdoc Yang Liu Publishes in the 18th ACM Conference on Economics and Computation (EC-17)

Yang Liu and Yiling Chen. Machine Learning Aided Peer Prediction. ACM EC 2017, Cambridge, United States.

by Gabriella Fee at May 04, 2017 06:26 PM

May 03, 2017

MediaBerkman
Digital Rights and Online Harassment in the Global South
Nighat Dad discusses the state of freedom of expression, privacy, and online harassment in the global south, with a particular focus on Pakistan, where she is based. Dad is the Executive Director of the Digital Rights Foundation (DRF), a nonprofit that seeks to protect the freedom and security of all people online, with a particular focus on women and human rights defenders. In late 2016, DRF launched a cyber harassment hotline, and Dad will present key findings from a recently released report [LINK: http://digitalrightsfoundation.pk/cyber-harassment-helpline-completes-its-four-months-of-operations/] on the first four months of its operation. The report affords up-to-the-moment insights on significant challenges facing internet users in Pakistan and throughout the region. About Nighat Nighat Dad is the Executive Director of the Digital Rights Foundation, Pakistan. She is an accomplished lawyer and a human rights activist. Nighat is one of the pioneers who have been campaigning for access to an open internet in Pakistan and globally. She has been actively campaigning and engaging at a policy level on issues focusing on Internet Freedom, Women and Technology, Digital Security, and Women’s empowerment. Nighat has been named in TIME's Next Generation Leaders List and has won the Atlantic Council Freedom of Expression Award and the Human Rights Tulip Award for her work in digital rights and freedom. She is also an Affiliate at the Berkman Klein Center for the 2016-2017 year. For more info on this event visit: https://cyber.harvard.edu/events/2017/luncheon/05/Dad

by the Berkman Klein Center at May 03, 2017 05:18 PM

Internet Access as a Basic Service: Inspiration from our Canadian Neighbors
Deemed the modern equivalent of building roads or railways, connecting every person and business to high-speed internet is on the minds of policymakers, advocates, and industry players. Under the leadership of Mr. Jean-Pierre Blais, the Canadian Radio-television and Telecommunications Commission (“CRTC”) ruled in December 2016 that broadband internet access is a basic and vital service, thus ensuring that broadband internet joins the ranks of local phone service. The CRTC’s announced reforms will impact over 2 million Canadian households, especially those in remote and isolated areas. The policy aims to ensure that internet download speeds of 50mbps and upload speeds of 10mbps are available to 90% of Canadian homes and business by 2021. Join the Berkman Klein Center and the HLS Canadian Law Student Association as Mr. Blais speaks about broadband, internet, and the future of connectivity in Canada and around the world. About Jean-Pierre Blais Before joining the CRTC, Mr. Blais was Assistant Secretary of the Treasury Board Secretariat’s Government Operations Sector. In this capacity, he provided advice on the management oversight and corporate governance of various federal departments, agencies and crown corporations. From 2004 to 2011, he was Assistant Deputy Minister of Cultural Affairs at the Department of Canadian Heritage. While there, he created the Task Force on New Technologies to study the impact of the Internet and digital technologies on Canada’s cultural policies. In addition, he served as Director of the Canadian Television Fund. His responsibilities also included cultural trade policy and international policies and treaties, such as the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expression. As the Director of Investment from 2004 to 2011, he reviewed transactions in the cultural sector under the Investment Canada Act and provided advice to the Minister of Canadian Heritage. Mr. Blais also served as Assistant Deputy Minister of International and Intergovernmental Affairs at the Department of Canadian Heritage. He played a pivotal role in the rapid adoption of the UNESCO Anti-Doping Convention and in garnering international support for the World Anti-Doping Agency’s Anti-Doping Code. Moreover, he represented the Government of Canada on the Vancouver 2010 Winter Games Bid Corporation. As the CRTC’s Executive Director of Broadcasting from 1999 to 2002, he notably oversaw the development of a licensing framework for new digital pay and specialty services and led reviews of major ownership transactions. He previously was a member of the Legal Directorate, serving as General Counsel, Broadcasting and Senior Counsel. From 1985 to 1991, Mr. Blais was an attorney with the Montreal-based firm Martineau Walker. Mr. Blais holds a Master of Laws from the University of Melbourne in Australia, as well as a Bachelor of Civil Law and a Bachelor of Common Law from McGill University. He is a member of the Barreau du Québec and the Law Society of Upper Canada. His term ends on June 17, 2017. For more info on this event visit: https://cyber.harvard.edu/events/luncheons/2017/04/Blais

by the Berkman Klein Center at May 03, 2017 01:01 PM

Digital Expungement: Rehabilitation in the Digital Age
The concept of criminal rehabilitation in the digital age is intriguing. How can we ensure proper reintegration into society of individuals with a criminal history that was expunged by the state when their wrongdoings remain widely available through commercial vendors (data brokers) and online sources like mugshot websites, legal research websites, social media platforms, and media archives? What are constitutional and pragmatic challenges to ensure digital rehabilitation? Is there a viable solution to solve this conundrum? About Eldar Eldar Haber is an Associate Professor (Senior Lecturer) at the Faculty of Law, Haifa University and a Faculty Associate at the Berkman-Klein Center for Internet & Society at Harvard University. He earned his Ph.D. from Tel-Aviv University and completed his postdoctoral studies as a fellow at the Berkman-Klein Center. His main research interests consist of various facets of law and technology including cyber law, intellectual property law (focusing mainly on copyright), privacy, civil rights and liberties, and criminal law. His works were published in various flagship law reviews worldwide, including top-specialized law and technology journals of U.S. universities such as Harvard, Yale and Stanford. His works were presented in various workshops and conferences around the globe, and were cited in academic papers, governmental reports, the media, and U.S. Federal courts. For more info on this event visit: https://cyber.harvard.edu/events/luncheons/2017/04/Haber

by the Berkman Klein Center at May 03, 2017 12:50 PM

May 02, 2017

Berkman Center front page
The Quantified Worker

Subtitle

with Berkman Klein Fellow, Ifeoma Ajunwa

Teaser

To apply to Futurecorp, please submit your resume, list of references, and a genetic profile. Once hired, we'll make an appointment for you to receive a sub-dermal tracking microchip.

Parent Event

Berkman Klein Luncheon Series

Event Date

May 2 2017 12:00pm to May 2 2017 12:00pm

Tuesday, May 2, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University

What are the rights of the worker in a society that seems to privilege technological innovation over equality and privacy? How does the law protect worker privacy and dignity given technological advancements that allow for greater surveillance of workers?  What can we expect for the future of work; should privacy be treated as merely an economic good that could be exchanged for the benefit of employment?

About Ifeoma

I am currently a Fellow at the Berkman Klein Center at Harvard for the 2016-2017 year. I will be an Assistant Professor at Cornell University’s Industrial and Labor Relations School (ILR), with affiliations in Sociology and Law, starting in July 2017.

I hold a Ph.D. from the Sociology Department of Columbia University in the City of New York (emphasis on Organizational Theory and Law and Society). My doctoral research on reentry was supported by a grant from the National Science Foundation (NSF).

I am interested in how the law and private firms respond to job applicants or employees perceived as “risky.” I look at the legal parameters for the assessment of such risk and also the organizational behavior in pursuit of risk reduction by private firms. I examine the sociological processes in regards to how such risk is constructed and the discursive ways such risk assessment is deployed in the maintenance of inequality. I also examine ethical issues arising from how firms off-set risk to employees.

My dissertation was an ethnography of a reentry organization that catered to the  formerly incarcerated. In the sum of my published research, I’ve focused on three populations: 1) the formerly incarcerated, 2) carriers of genetic disease, and, 3) workers with perceived unhealthy lifestyles (obesity, smoking, etc.). Thus, my research is at the intersection of organizational theory, management/business law, privacy, health law, and antidiscrimination law.

My most recent article, Limitless Worker Surveillance, with Kate Crawford and Jason Schultz is forthcoming from the California Law Review. The Article has been downloaded more than 2,000 times on SSRN and was endorsed by the NYTimes Editorial Board. In addition to the California Law Review, my articles have been published in the Harvard Business Review, the Fordham Law Review, the Harvard Civil Rights-Civil Liberties Law Review, the Ohio State Law Review, and in the Journal of Law, Medicine, and Ethics, among others.

I have  a book contract with Cambridge University Press for a book (“The Quantified Worker,” forthcoming 2018) that will examine the role of technology in the workplace and its effects on management practices as moderated by employment and privacy laws.

Download original audio and video from this event.

Subscribe to the Berkman Klein events podcast to have audio from all our events delivered straight to you!

by candersen at May 02, 2017 04:00 PM

Panagiotis Metaxas
The Real “Fake News”

The following is a blog post that Eni Mustafaraj has recently published in The Spoke. We reproduce it here with permission.

[Image: fake news post]

Fake news has always been with us, starting with The Great Moon Hoax in 1835. What is different now is the existence of a mass medium, the Web, that allows anyone to financially benefit from it.

Etymologists typically track the change of a word’s meaning over decades, sometimes even over centuries. Currently, however, they find themselves observing a new president and his administration redefine words and phrases on a daily basis. Case in point: “fake news.” One would have to look hard to find an American who hasn’t heard this phrase in recent months. The president loves to apply it as a label to news organizations that he doesn’t agree with.

But right before its most recent incarnation, the phrase “fake news” had a different meaning. It referred to factually incorrect stories appearing on websites with names such as DenverGuardian.com or TrumpVision365.com that mushroomed in the weeks leading up to the 2016 U.S. Presidential Election. One such story—”FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide”—was shared more than a half million times on Facebook, despite being entirely false. The website that published it, DenverGuardian.com, was operated by a man named Jestin Coler, who, when tracked down by persistent NPR reporters after the election, admitted to being a liberal who “enjoyed making a mess of the people that share the content”. He didn’t have any regrets.

Why did fake news flourish before the election? There are too many hypotheses to settle on a single explanation. Economists would explain it in terms of supply and demand. Initially, there were only a few such websites, but their creators noticed that sharing fake news stories on Facebook generated considerable pageviews (the number of visits on the page) for them. Their obvious conclusion: there was a demand for sensational political news from a sizeable portion of the web-browsing public. Because pageviews can be monetized by running Google ads alongside the fake stories, the response was swift: an industry of fake news websites grew quickly to supply fake content and feed the public’s demand. The creators of this content were scattered all over the world. As BuzzFeed reported, a cluster of more than 100 fake news websites was run by individuals in the remote town of Veles, in the Former Yugoslav Republic of Macedonia.

How did the people in Macedonia manage to spread their fake stories on Facebook and earn thousands of dollars in the process? In addition to creating a cluster of fake news websites, they also created fake Facebook accounts that looked like real people and then had these accounts subscribe to real Facebook groups, such as “Hispanics for Trump” or “San Diego Berniecrats”, where conversations about the election were taking place. Every time the fake news websites published a new story, the fictitious accounts would share them in the Facebook groups they had joined. The real people in the groups would then start spreading the fake news article among their Facebook followers, successfully completing the misinformation cycle. These misinformation-spreading techniques were already known to researchers, but not to the public at large. My colleague Takis Metaxas and I discovered and documented one such technique used on Twitter all the way back in the 2010 Massachusetts Senate election between Martha Coakley and Scott Brown.

There is an important takeaway here for all of us: fake news doesn’t become dangerous because it’s created or because it is published; it becomes dangerous when members of the public decide that the news is worth spreading. The most ingenious part of spreading fake news is the step of “infiltrating” groups of people who are most susceptible to the story and will fall for it.  As explained in this news article, the Macedonians tried different political Facebook groups, before finally settling on pro-Trump supporters.

Once “fake news” entered Facebook’s ecosystem, it was easy for people who agreed with the story and were compelled by the clickbait nature of the headlines to spread it organically. Often these stories made it to Facebook’s Trending News list. The top 20 fake news stories about the election received approximately 8.7 million views on Facebook, 1.4 million more views than the top 20 real news stories from 19 of the major news websites (CNN, New York Times, etc.), as an analysis by BuzzFeed News demonstrated. Facebook initially resisted the accusation that its platform had enabled fake news to flourish. However, after weeks of intense pressure from the media and its user base, it introduced a series of changes to its interface to mitigate the impact of fake news. These include involving third-party fact-checkers to assign a “Disputed” label to posts with untrue claims, suppressing posts with such a label (making them less visible and less spreadable) and allowing users to flag stories as fake news.

It’s too early to assess the effect these changes will have on the sharing behavior of Facebook users. In the meantime, the fake news industry is targeting a new audience: the liberal voters. In March, the fake quote “It’s better for our budget if a cancer patient dies more quickly,” attributed to Tom Price, the Secretary of Health and Human Services, appeared on a website titled US Political News, operated by an individual in Kosovo. The story was shared over 80,000 times on Facebook.

Fake news has always been with us, starting with The Great Moon Hoax in 1835. What is different now is the existence of a mass medium, the Web, that allows anyone to monetize content through advertising. Since the cost of producing fake news is negligible, and the monetary rewards substantial, fake news is likely to persist. The journey that fake news takes only begins with its publication. We, the reading public who share these stories, triggered by headlines engineered to make us feel outraged or elated, are the ones who take the news on its journey. Let us all learn to resist such sharing impulses.

by metaxas at May 02, 2017 03:19 AM

May 01, 2017

Center for Research on Computation and Society (Harvard SEAS)
Postdoc Yang Liu Publishes in the 26th International Joint Conference on Artificial Intelligence (IJCAI-17)
May 1, 2017

CRCS Postdoc Yang Liu Publishes in the 26th International Joint Conference on Artificial Intelligence (IJCAI-17):

Yang Liu and Mingyan Liu. Crowd Learning: Improving Online Decision Making Using Crowdsourced Data. IJCAI 2017, Melbourne, Australia.

by Gabriella Fee at May 01, 2017 07:04 PM

Postdoc Nisarg Shah Publishes in the 18th ACM Conference on Economics and Computation (EC-17)
May 1, 2017

CRCS Postdoc Nisarg Shah has two papers published in the 18th ACM Conference on Economics and Computation (EC-17):

Fair Public Decision Making, with Vincent Conitzer and Rupert Freeman

Peer Prediction with Heterogeneous Users, with Arpit Agarwal, Debmalya Mandal, and David C. Parkes

by Gabriella Fee at May 01, 2017 07:01 PM

Postdoc Fei Fang's Dissertation Selected as Runner-Up for IFAAMAS Victor Lesser Distinguished Dissertation Award
May 1, 2017

CRCS Postdoc Fei Fang's dissertation, “Towards Addressing Spatio-Temporal Aspects in Security Games,” was selected as the runner-up for the IFAAMAS Victor Lesser Distinguished Dissertation Award.

by Gabriella Fee at May 01, 2017 06:48 PM

Postdoc Nisarg Shah Awarded 2016 IFAAMAS Victor Lesser Distinguished Dissertation Award
May 1, 2017

CRCS Postdoc Nisarg Shah was awarded the 2016 IFAAMAS Victor Lesser Distinguished Dissertation Award for his dissertation entitled "Optimal Social Decision Making."

Abstract: How can computers help ordinary people make collective decisions about real-life dilemmas, like which restaurant to go to with friends, or even how to divide an inheritance? In this talk, I will present an optimization-driven approach that draws on ideas from AI, theoretical computer science, and economic theory, and illustrate it through my research in computational social…

by Gabriella Fee at May 01, 2017 06:40 PM

Celebration of Computer Science at Harvard in Honor of Harry Lewis

 

On Wednesday, April 19th, CRCS hosted a celebration of computer science at Harvard in honor of Harry Lewis, Gordon McKay Professor of Computer Science and former dean of the College. Friends, family, students, and colleagues gathered to celebrate Professor Lewis' 70th birthday and the news that - in his words - he would "someday be retiring." They spoke of the myriad ways Professor Lewis has enriched their lives through his personal investment in their success, his emphasis on character over knowledge, and his insistence on integrity over popularity. The day-long celebration…

by Gabriella Fee at May 01, 2017 04:57 PM

April 30, 2017

ProjectVRM
Our radical hack on the whole marketplace

In Disruption isn’t the whole VRM story, I visited the Tetrad of Media Effects, from Laws of Media: the New Science, by Marshall and Eric McLuhan. Every new medium (which can be anything from a stone arrowhead to a self-driving car), the McLuhans say, does four things, which they pose as questions that can have multiple answers, and they visualize this way:

[Image: the tetrad of media effects]

The McLuhans also famously explained their work with this encompassing statement: We shape our tools and thereafter they shape us.

This can go for institutions, such as businesses, and whole marketplaces, as well as people. We saw that happen in a big way with contracts of adhesion: those one-sided non-agreements we click on every time we acquire a new login and password, so we can deal with yet another site or service online.

These were named in 1943 by the law professor Friedrich “Fritz” Kessler in his landmark paper, “Contracts of Adhesion: Some Thoughts about Freedom of Contract.” Here is pretty much his whole case, expressed in a tetrad:

[Image: contracts of adhesion, expressed as a tetrad]

Contracts of adhesion were tools that industry shaped, that in turn shaped industry, and that went on to shape the whole marketplace.

But now we have the Internet, which by design gives everyone on it a place to stand, and, like Archimedes with his lever, move the world.

We are now developing that lever, in the form of terms any one of us can assert, as a first party, and the other side, the businesses we deal with, can agree to automatically. Which they’ll do because it’s good for them.
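To illustrate what "agree to automatically" could look like in practice, here is a purely hypothetical Python sketch, not an actual ProjectVRM or Customer Commons specification: the visitor publishes a machine-readable term, and the site's server checks whether its own practices already satisfy it before recording agreement. The term name and every field are invented for illustration.

from dataclasses import dataclass

# Hypothetical machine-readable first-party term and site self-description.
# Names and fields are invented for illustration only.
@dataclass
class PersonalTerm:
    name: str                        # e.g. "no-third-party-tracking"
    allows_first_party_ads: bool
    allows_third_party_tracking: bool

@dataclass
class SitePractices:
    serves_first_party_ads: bool
    uses_third_party_trackers: bool

def site_can_agree(term, practices):
    """Return True if the site's declared practices already satisfy the visitor's term."""
    if practices.uses_third_party_trackers and not term.allows_third_party_tracking:
        return False
    if practices.serves_first_party_ads and not term.allows_first_party_ads:
        return False
    return True

visitor_term = PersonalTerm("no-third-party-tracking", True, False)
site = SitePractices(serves_first_party_ads=True, uses_third_party_trackers=False)
print(site_can_agree(visitor_term, site))  # True, so agreement can be recorded automatically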

I describe our first two terms, both of which have the potential to bring enormous changes, in two similar posts put up elsewhere:

— What if businesses agreed to customers’ terms and conditions? 

— The only way customers come first

And we’ll work some of those terms this week, fittingly, at the Computer History Museum in Silicon Valley, starting tomorrow at VRM Day and then Tuesday through Thursday at the Internet Identity Workshop. I host the former and co-host the latter, our 24th. One is free and the other is cheap for a conference.

Here is what will come of our work:
[Image: personal terms, expressed as a tetrad]

Trust me: nothing you can do is more leveraged than helping make this happen.

See you there.

 

by Doc Searls at April 30, 2017 04:18 PM

April 28, 2017

Berkman Center front page
[TODAY] Digital Rights and Online Harassment in the Global South

Subtitle

featuring Berkman Klein Affiliate, Nighat Dad

Teaser

An inside look at the challenges facing women, human rights defenders, and other internet users in Pakistan, from online harassment to privacy and free expression.

Event Date

May 3 2017 12:00pm to May 3 2017 12:00pm

Wednesday, May 3, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
23 Everett Street, Second Floor Conference Room, Cambridge, MA
RSVP required to attend in person
Event will be live webcast on this page at 12:00 pm

Nighat Dad will speak on the state of freedom of expression, privacy, and online harassment in the global south, with a particular focus on Pakistan, where she is based. Dad is the Executive Director of the Digital Rights Foundation (DRF), a nonprofit that seeks to protect the freedom and security of all people online, with a particular focus on women and human rights defenders.

In late 2016, DRF launched a cyber harassment hotline, and Dad will present key findings from a recently released report [LINK: http://digitalrightsfoundation.pk/cyber-harassment-helpline-completes-its-four-months-of-operations/] on the first four months of its operation. The report affords up-to-the-moment insights on significant challenges facing internet users in Pakistan and throughout the region. 

About Nighat

Nighat Dad is the Executive Director of the Digital Rights Foundation, Pakistan. She is an accomplished lawyer and a human rights activist. Nighat is one of the pioneers who have been campaigning for access to an open internet in Pakistan and globally. She has been actively campaigning and engaging at a policy level on issues focusing on Internet Freedom, Women and Technology, Digital Security, and Women’s empowerment. Nighat has been named in TIME's Next Generation Leaders List and has won the Atlantic Council Freedom of Expression Award and the Human Rights Tulip Award for her work in digital rights and freedom. She is also an Affiliate at the Berkman Klein Center for the 2016-2017 year.

 


by doyolu at April 28, 2017 08:59 PM

MediaBerkman
The International State of Digital Rights, a Conversation with the UN Special Rapporteur
UN Special Rapporteur on the Right to Freedom of Opinion and Expression, David Kaye, is joined in conversation by Nani Jansen Reventlow, a Fellow at the Berkman Klein Center and Adviser to the Cyberlaw Clinic, about his upcoming thematic report on digital access and human rights, as well as the most burning issues regarding free speech online and digital rights including encryption, fake news, online gender-based abuse and the global epidemic of internet censorship. More on this event here: https://cyber.harvard.edu/events/2017/04/DavidKaye

by the Berkman Klein Center at April 28, 2017 04:59 PM

Miriam Meckel
Learning from Germany

Ivanka Trump is regarded as the true First Lady of the USA. She was a guest in Germany even before her father, President Donald Trump. In her first interview with a non-American media outlet, the 35-year-old reveals the intentions behind her visit.

Everyone is talking about the Trumps, but a member of the presidential family can be seen in person in Germany only next week. That is when Ivanka Trump, First Daughter of the USA and probably the most influential adviser to US President Donald Trump, will arrive and, at the invitation of Chancellor Angela Merkel, take part in the W20 summit in Berlin as part of Germany's G20 presidency, as well as visit a Siemens plant in the capital.

Trump, 35, polarizes: she comes across as more modern, more emancipated, and more cosmopolitan than her father. On the other hand, she stood by him during the campaign despite various scandals and now advises him in the White House together with her husband Jared Kushner.

Trump takes a particular interest in economic and vocational-training topics. In her first interview with a non-American media outlet, she spoke with WirtschaftsWoche about the motives behind her visit to Germany.

Here you can find the interview with Ivanka Trump on vocational training in Germany, inventive spirit as a driver of the economy, and her own role as an entrepreneurial role model.

by Miriam Meckel at April 28, 2017 06:53 AM

Justin Reich
Use Design Thinking to Mitigate Bias and Resistance to Change
To thwart resistance to change and mitigate bias, consider design thinking to provide teachers with an opportunity to learn.

by Beth Holland at April 28, 2017 01:46 AM

April 27, 2017

Cyberlaw Clinic - blog
Student Commentary on the Clinic’s Internet Jurisdiction Work

The HLS Clinical and Pro Bono programs blog currently features a post by spring 2017 Cyberlaw Clinic student (and graduating Harvard Law School 3L) Alicia Solow-Niederman.  The piece highlights Alicia’s work this semester with Clinic Assistant Director Vivek Krishnamurthy and our friend and Clinic advisor Nani Jansen Reventlow. Alicia was part of a team that helped to tackle some complex questions about online jurisdiction, preparing a working paper along with student Javier Careaga Franco (LL.M. ’17) entitled “Here, There, or Everywhere?” The paper offers a methodology and taxonomy aimed at clarifying principles to govern the geographic scope of orders to remove online content.

by Clinic Staff at April 27, 2017 07:07 PM

MediaBerkman
Holding Hospitals Hostage: From HIPAA to Ransomware
In 2016, more than a dozen hospitals and healthcare organizations were targeted by ransomware attacks that temporarily blocked crucial access to patient records and hospital systems until administrators agreed to make ransom payments to the perpetrators. Emerging online threats such as ransomware are forcing hospitals and healthcare providers to revisit and re-evaluate the existing patient data protection standards, codified in the Health Insurance Portability and Accountability Act, that have dictated most healthcare security measures for more than two decades. This talk looks at how hospitals are grappling with these new security threats, as well as the ways that the focus on HIPAA compliance has, at times, made it challenging for these institutions to adapt to an emerging threat landscape. About Dr. Wolff Josephine Wolff is an assistant professor in the Public Policy department at RIT and a member of the extended faculty of the Computing Security department. She is a faculty associate at the Harvard Berkman Center for Internet & Society and a fellow at the New America Cybersecurity Initiative. Wolff received her Ph.D. in Engineering Systems Division and M.S. in Technology and Policy from the Massachusetts Institute of Technology, as well as her A.B. in Mathematics from Princeton University. Her research interests include cybersecurity law and policy, defense-in-depth, security incident reporting models, economics of information security, and insurance and liability protection for computer security incidents. She researches cybersecurity policy with an emphasis on the social and political dimensions of defending against security incidents, looking at the intersection of technology, policy, and law for defending computer systems and the ways that technical and non-technical computer security mechanisms can be effectively combined, as well as the ways in which they may backfire. Currently, she is working on a project about a series of cybersecurity incidents over the course of the past decade, tracing their economic and legal aftermath and their impact on the current state of technical, social, and political lines of defense. She writes regularly about cybersecurity for Slate, and her writing has also appeared in The Atlantic, Scientific American, The New Republic, Newsweek, and The New York Times Opinionator blog. For more information on this event visit: https://cyber.harvard.edu/events/digitalhealth/2017/04/Wolff

by the Berkman Klein Center at April 27, 2017 06:02 PM

Berkman Center front page
Digital Health @ Harvard, April 2017 – Holding Hospitals Hostage: From HIPAA to Ransomware

Subtitle

featuring Dr. Josephine Wolff

Teaser

For hospitals and healthcare providers, data protection efforts have long been driven by the Health Insurance Portability and Accountability Act. This talk will look at recent trends in the online threats facing hospitals and consider how effective HIPAA is at addressing these threats, and how it has shaped the state of healthcare data security--for better and for worse.

Parent Event

Digital Health @ Harvard | Brown Bag Lunch Series

Event Date

Apr 27 2017 12:00pm to Apr 27 2017 12:00pm

This is a talk in the monthly Digital Health @ Harvard Brown Bag Lunch Series, which is co-hosted by the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics and the Berkman Klein Center for Internet & Society at Harvard University.

Thursday, April 27, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University
23 Everett Street, Second Floor, Cambridge, MA

In 2016, more than a dozen hospitals and healthcare organizations were targeted by ransomware attacks that temporarily blocked crucial access to patient records and hospital systems until administrators agreed to make ransom payments to the perpetrators. Emerging online threats such as ransomware are forcing hospitals and healthcare providers to revisit and re-evaluate the existing patient data protection standards, codified in the Health Insurance Portability and Accountability Act, that have dictated most healthcare security measures for more than two decades. This talk will look at how hospitals are grappling with these new security threats, as well as the ways that the focus on HIPAA compliance has, at times, made it challenging for these institutions to adapt to an emerging threat landscape.

About Dr. Wolff

Josephine Wolff is an assistant professor in the Public Policy department at RIT and a member of the extended faculty of the Computing Security department. She is a faculty associate at the Harvard Berkman Center for Internet & Society and a fellow at the New America Cybersecurity Initiative.

Wolff received her Ph.D. in Engineering Systems Division and M.S. in Technology and Policy from the Massachusetts Institute of Technology, as well as her A.B. in Mathematics from Princeton University.

Her research interests include cybersecurity law and policy, defense-in-depth, security incident reporting models, economics of information security, and insurance and liability protection for computer security incidents. She researches cybersecurity policy with an emphasis on the social and political dimensions of defending against security incidents, looking at the intersection of technology, policy, and law for defending computer systems and the ways that technical and non-technical computer security mechanisms can be effectively combined, as well as the ways in which they may backfire. Currently, she is working on a project about a series of cybersecurity incidents over the course of the past decade, tracing their economic and legal aftermath and their impact on the current state of technical, social, and political lines of defense. She writes regularly about cybersecurity for Slate, and her writing has also appeared in The Atlantic, Scientific American, The New Republic, Newsweek, and The New York Times Opinionator blog.

Download original audio and video from this event.

Subscribe to the Berkman Klein events podcast to have audio from all our events delivered straight to you!

by ahancock at April 27, 2017 12:00 PM

Harry Lewis
The AS11 staff, then and now
Applied Sciences 11 was the original name for CS50, a course I created in 1981, before Harvard had either courses called "Computer Science" or an undergraduate degree by that name. AS11 wasn't a renaming of Nat Sci 110; it was a whole new enterprise, an attempt to be systematic and scientific about the introduction to the science of computing, more rigorous than Nat Sci 110 and with less of the Santa-suit lecture stunts that I had pulled in Nat Sci 110. It is hard to remember how thinly staffed we were in those days. Not only did I not get a leave term or even summer support to prepare the new course, I actually taught AM108 (now CS121) simultaneously in the fall of 1981. And the previous term hadn't been a light one--I was teaching AM110 (now CS51). (My whole teaching record, and an almost complete list of my TFs, is here.)

The second year I taught the course, the midterm was on Hallowe'en, and I invited the TFs over to my house for dinner after we finished grading. Margo Seltzer--now my colleague two doors down but then an undergraduate--arranged for everyone to show up with tweed jackets, mustaches, and pipes. (I still wear tweed jackets, but the pipe and mustache are long gone.) Here is the group photo, about which I blogged five years ago.
A remarkable number returned for the Celebration of Computer Science on my 70th birthday.
Left to right, Ted Nesson, Lisa Hellerstein, Phillip Stern, Michael Massimilla, HRL, Craig Partridge, Christoph Freytag, Margo Seltzer, Larry Lebowitz, John Thielens, John Ramsdell, Phil Klein. Rony Sebok also showed up, a few minutes too late to make it into the picture, and Larry Denenberg and boo gershun, who didn't make it into the original picture, were also at the event. So 14 of the original 23 came back 35 years after the fact (no more than 22 are still living). Sweet!

Thanks everyone!

by Harry Lewis (noreply@blogger.com) at April 27, 2017 01:59 AM

April 26, 2017

MediaBerkman
A More Perfect Internet: Promoting Digital Civility and Combating Cyber-Violence
This event is co-sponsored by the Human Rights Program at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University. This talk addresses a range of issues relating to digital incivility with an emphasis on cyber-violence. What are the most common negative behaviors online? How are these perceived and experienced by users? What is cyber-violence? Who does it target? What steps can be taken to prevent such behaviors? How should they be addressed once they've occurred? What challenges does the legal system face when dealing with cyber-violence related offenses? Professor Carrillo draws from the Cyber-Violence Project he co-directs at GW Law School to offer responses to these and related questions. About Arturo Arturo J. Carrillo is Professor of Law, Director of the International Human Rights Clinic, and Co-Director of the Global Internet Freedom & Human Rights Project at The George Washington University Law School. Before joining the faculty, Professor Carrillo served as the acting director of the Human Rights Clinic at Columbia Law School, where he was also Lecturer in Law and the Henkin Senior Fellow with Columbia’s Human Rights Institute. Prior to entering the academy in 2000, he worked as a legal advisor in the Human Rights Division of the United Nations Observer Mission to El Salvador (ONUSAL), as well as for non-governmental organizations in his native Colombia, where he also taught international law and human rights. From 2005 to 2010, Professor Carrillo was a senior advisor on human rights to the U.S. Agency for International Development (USAID) in Colombia. Professor Carrillo’s expertise is in public international law; Information and Communication Technologies (ICTs) and human rights, especially Internet freedom; transitional justice; human rights and humanitarian law; and comparative clinical legal education. He is the author of a number of publications in English and Spanish on these topics. His recent article, "Having Your Cake and Eating It Too? Zero-rating, Net Neutrality and International Law," was published by the Stanford Technology Law Review (Fall 2016). As part of his clinical practice, Professor Carrillo has litigated extensively in U.S. courts and before regional human rights tribunals. Professor Carrillo received a BA from Princeton University, a JD from The George Washington University, and an LLM from Columbia University. For more info on this event visit: https://cyber.harvard.edu/node/99846

by the Berkman Klein Center at April 26, 2017 04:19 PM

April 25, 2017

Berkman Center front page
The International State of Digital Rights, a Conversation with the UN Special Rapporteur

Subtitle

David Kaye in conversation with Nani Jansen Reventlow

Teaser

Join the UN Special Rapporteur on the Right to Freedom of Opinion and Expression, David Kaye, in conversation with Berkman Klein Center Fellow, Nani Jansen Reventlow.

Event Date

Apr 25 2017 4:00pm to Apr 25 2017 4:00pm

Tuesday, April 25, 2017 at 4:00 pm
Berkman Klein Center for Internet & Society at Harvard University

This event is co-sponsored by The Human Rights Program at Harvard Law School and the Berkman Klein Center for Internet & Society at Harvard University.

UN Special Rapporteur on the Right to Freedom of Opinion and Expression David Kaye is joined in conversation by Nani Jansen Reventlow, a Fellow at the Berkman Klein Center and Adviser to the Cyberlaw Clinic, to discuss his upcoming thematic report on digital access and human rights, as well as the most pressing issues in free speech online and digital rights, including encryption, fake news, online gender-based abuse, and the global epidemic of internet censorship.

The Special Rapporteur also speaks about his work in both national and international free speech cases, after which the audience asks questions. 

About David Kaye

David Kaye, a clinical professor of law at the University of California, Irvine, is the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, appointed by the UN Human Rights Council in June 2014. His rapporteurship has addressed, among other topics, encryption and anonymity as promoters of freedom of expression, the protection of whistleblowers and journalistic sources, and the roles and responsibilities of private Internet companies. Early in his career he was a lawyer in the U.S. State Department, handling issues such as the applicability of the Geneva Conventions in the wake of the attacks of September 11, 2001. His academic research and writing have focused on accountability for serious human rights abuses, international humanitarian law, and the international law governing use of force. A member of the Council on Foreign Relations and former member of the Executive Council of the American Society of International Law, he has also published essays in such publications as Foreign Affairs, The New York Times, Foreign Policy, JustSecurity and The Los Angeles Times.

About Nani Jansen Reventlow

Nani Jansen Reventlow is an Associate Tenant at Doughty Street Chambers and a 2016-2017 Fellow at the Berkman Klein Center for Internet & Society at Harvard University. She is a recognised international lawyer and expert in human rights litigation, responsible for groundbreaking freedom of expression cases across several national and international jurisdictions.

Between 2011 and 2016, Nani oversaw the litigation practice of the Media Legal Defence Initiative (MLDI) globally, leading or advising on cases before various national and international courts. At the Berkman Klein Center, Nani's work focuses on cross-disciplinary collaboration in litigation that challenges barriers to free speech online. She also acts as an Advisor to the Cyberlaw Clinic.

Links

Download original audio and video from this event.

Subscribe to the Berkman Klein events podcast to have audio from all our events delivered straight to you!

by candersen at April 25, 2017 08:00 PM

Next Gen Podcast Distribution Protocols:

Subtitle

Innovation and Governance in Open Development Initiatives

Teaser

The goals of the symposium include furthering cooperation among the various players in the world of podcast creation and distribution, and considering recommendations on standards, enhancements, extensions, and other methods to support the growth of podcasting as an open and inclusive medium.

Event Date

May 11 2017 8:30am to May 11 2017 5:00pm

Thursday, May 11, 2017, 8:30 am - 5:00 pm
Berkman Klein Center for Internet & Society at Harvard University
Harvard Law School campus, Wasserstein Hall
1585 Massachusetts Avenue, Cambridge, MA​
Milstein East (room 2036, second floor)

Registration is limited. Please sign up here.


Presented by the Berkman Klein Center for Internet & Society at Harvard University and the Tow Center for Digital Journalism at the Columbia Journalism School, in collaboration with the syndicated.media Open Working Group.


On May 11, 2017, the Berkman Klein Center for Internet & Society and Tow Center for Digital Journalism will host and facilitate a symposium, in collaboration with the syndicated.media open working group, to address the process of developing standards that support the distribution of syndicated audio content.  The event will look back at the evolution of the RSS protocol and look ahead to the need for new technical infrastructure to support an expanding podcast distribution landscape.  Participants will have the opportunity to engage in both higher-level policy discussions and technical deep-dives throughout the course of this one-day event.
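
For readers who want context on what those protocols do today: podcast distribution still rests largely on RSS enclosures, where each episode is published as an <item> carrying an <enclosure> element that points at the audio file, and client apps poll the feed for new items. The following is a minimal, illustrative sketch in Python (standard library only; the feed URL is a hypothetical placeholder, not one associated with the symposium or its hosts) of how a client might pull episode titles and audio URLs from such a feed.

# Illustrative sketch only: read a podcast RSS feed and list its episodes.
# The feed URL below is a hypothetical placeholder.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"

def fetch_episodes(url):
    """Return (title, audio_url) pairs for each <item> that has an <enclosure>."""
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()          # RSS is plain XML
    episodes = []
    for item in root.iter("item"):               # one <item> per episode
        title = item.findtext("title", default="(untitled)")
        enclosure = item.find("enclosure")       # reference to the audio file
        if enclosure is not None:
            episodes.append((title, enclosure.get("url")))
    return episodes

if __name__ == "__main__":
    for title, audio_url in fetch_episodes(FEED_URL):
        print(title, "->", audio_url)

The simplicity of that model, a single open XML document that any client can poll, is part of what the symposium's technical breakouts are set to revisit.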

The goals of the symposium include furthering cooperation among the various players in the world of podcast creation and distribution, and considering recommendations on standards, enhancements, extensions, and other methods to support the growth of podcasting as an open and inclusive medium.  It will bring together academic, non-profit, and commercial constituencies to address, among other things:

  • the history of media protocols;

  • promises and pitfalls associated with open development initiatives;

  • rights issues relevant to openly syndicated content;

  • questions of governance and stakeholder engagement; and

  • technical planning and implementation for next generation podcast distribution.

The symposium will mix talks and panels that generally address these issues (curated by the Berkman Klein and Tow Center teams) with opportunities for breakouts that allow deeper dives into technical questions around distribution protocols for podcasts and other forms of serialized media (facilitated by members of the syndicated.media community).

Registration is limited; sign up here.

The symposium will be followed by a separate, two-day “Audio for Good”  event, co-hosted by PRX, RadioPublic, and the HBS Digital Initiative. Applications to participate can be submitted here.


About the Hosts

The Berkman Klein Center for Internet & Society is a research center based at Harvard University.  The Center's mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions.  Berkman Klein is a research center, premised on the observation that what it seeks to learn is not already recorded. The Center's method is to build out into cyberspace, record data, self-study, and share. Its mode is entrepreneurial nonprofit.

The Tow Center for Digital Journalism, established in 2010, provides journalists with the skills and knowledge to lead the future of digital journalism and serves as a research and development center for the profession as a whole. Operating as an institute within Columbia University’s Graduate School of Journalism, the Tow Center is poised to take advantage of a unique combination of factors to foster the development of digital journalism. Its New York location affords access to cutting-edge technologists, a strong culture of journalism and multiple journalism and communication schools, with outstanding universities attached to them. The Tow Center is where technology and journalism meet, and where education and practice meet.

Syndicated.media is a community-driven working group with a mission to ensure that podcasting grows to meet the needs of listeners, creators, producers, publishers, advertisers, and developers, without sacrificing the groundwork that has been established to make it an open and inclusive medium. The goal of the working group is to develop clear and comprehensive standards and best practices. The group now includes more than 100 representatives from a growing number of podcast industry stakeholders, including international participants, and intends to incrementally release updates to existing standards and recommendations for new proposals.

Photo courtesy of Alba Cobra

by candersen at April 25, 2017 04:54 PM

Digital Expungement: Rehabilitation in the Digital Age

Subtitle

with Berkman Klein Faculty Associate, Eldar Haber

Teaser

Can digital technology lead to the extinction of criminal rehabilitation? How should policymakers strike a balance between protecting civil rights and public safety while ensuring the reintegration into society of individuals with expunged criminal history?

Parent Event

Berkman Klein Luncheon Series

Event Date

Apr 25 2017 12:00pm to Apr 25 2017 12:00pm

Tuesday, April 25, 2017 at 12:00 pm
Berkman Klein Center for Internet & Society at Harvard University

The concept of criminal rehabilitation in the digital age is intriguing. How can we ensure the proper reintegration into society of individuals whose criminal history was expunged by the state when their wrongdoings remain widely available through commercial vendors (data brokers) and online sources like mugshot websites, legal research websites, social media platforms, and media archives? What are the constitutional and pragmatic challenges to ensuring digital rehabilitation? Is there a viable solution to this conundrum?

About Eldar

Eldar Haber is an Associate Professor (Senior Lecturer) at the Faculty of Law, Haifa University, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. He earned his Ph.D. from Tel-Aviv University and completed his postdoctoral studies as a fellow at the Berkman Klein Center. His main research interests span various facets of law and technology, including cyber law, intellectual property law (focusing mainly on copyright), privacy, civil rights and liberties, and criminal law. His work has been published in flagship law reviews worldwide, including top specialized law and technology journals at U.S. universities such as Harvard, Yale, and Stanford. His work has been presented at workshops and conferences around the globe and cited in academic papers, governmental reports, the media, and U.S. federal courts.

Download original audio and video from this event.

Subscribe to the Berkman Klein events podcast to have audio from all our events delivered straight to you!

by candersen at April 25, 2017 04:00 PM

Meeting 21st Century Municipal Internet Access Needs

Subtitle

Perspectives from Boston City Hall and Brookline on City and Regional Infrastructure Planning

Teaser

Hosted by Responsive Communities, a project of the Berkman Klein Center for Internet & Society at Harvard University.

Event Date

Apr 25 2017 10:00am to Apr 25 2017 12:30pm

Tuesday, April 25, 2017, 10:00 am - 12:30 pm
Harvard Law School campus, Wasserstein Hall

This event is hosted by Responsive Communities, a project of the Berkman Klein Center for Internet & Society at Harvard University.

At this free public event, Jascha Franklin-Hodge, Boston's CIO, will describe the city's ongoing efforts to foster private-sector competition in providing high-speed wired and wireless Internet access, and Kevin Stokes, CIO of Brookline, will discuss the opportunities and challenges of working across institutional and state-agency boundaries to obtain fiber-optic network access to boost local bandwidth and reduce costs. Municipal and state officials are invited to attend and then participate in a discussion about best practices and opportunities for collaboration. The event will conclude with an audience Q&A and bag lunch.

10:00-10:05: Introductory remarks: Waide Warner and David Talbot, Responsive Communities, Berkman Klein Center

10:05-10:20: Boston's strategy: Jascha Franklin-Hodge, City of Boston

10:20-10:35: Efforts at inter-agency collaboration: Kevin Stokes, Town of Brookline

10:35-11:15: Open discussion between speakers and invited leaders from municipalities and state agencies and authorities

11:15-11:30: Audience Q&A

11:30-12:30: Bag lunch and networking
 

ABOUT THE SPEAKERS

As Boston’s CIO, Jascha Franklin-Hodge works to enhance online service delivery, empower city employees with effective digital tools, and improve access to technology and Internet access service across all Boston neighborhoods. His efforts in Boston include mapping 175 miles of existing city-owned conduit to decrease costs of network deployments, streamlining processes and permitting associated with investment in broadband infrastructure, and ensuring that city infrastructure projects accommodate future network construction. Today five wired and wireless broadband providers serve residents in the city. Franklin-Hodge is now beginning to examine how to prepare for next-generation wireless deployments.

Kevin Stokes has served as CIO for the Town of Brookline and its public schools for 12 years. With municipal and school bandwidth needs rising sharply, Stokes wants wider access to fiber-optic networks and the ability to directly reach wholesale bandwidth available in Boston.  Brookline sits near locations with MBTA and Mass DOT fiber optic lines, as well as hospitals and universities with fiber-optic networks. Stokes, like other municipal CIOs, would like to identify decision-makers and negotiate agreements with public and nonprofit network owners. 

by candersen at April 25, 2017 12:00 PM

April 24, 2017

Cyberlaw Clinic - blog
First Circuit Hears Oral Argument in Unusual Copyright Case

On April 6, 2017, Cyberlaw Clinic students attended oral argument in a First Circuit copyright appeal involving a curious set of facts and legal issues. The case pitted Richard Goren, a Massachusetts attorney, against Xcentric Ventures, LLC, the owner of an online consumer review website known as the Ripoff Report. Goren was upset by a review of his services posted on Ripoff Report by Christian DuPont, the defendant in a prior case where Goren had represented the plaintiff. Goren initially sued Dupont in Massachusetts state court, alleging that Dupont’s review was defamatory. Dupont failed to appear, and thus defaulted. After obtaining a default judgment, Goren requested that Xcentric remove the posting. Xcentric refused, citing the Ripoff Report’s strict “no removal policy.”

Here’s where the dispute gets weird. Upset by Xcentric’s response, Goren obtained amended relief from the same state court that presided over the defamation suit. This amended relief purported to assign Dupont’s copyright in the post to Goren, and to make Goren Dupont’s “attorney-in-fact” to effectuate the transfer. After obtaining a copyright registration, Goren sued Xcentric in federal district court, alleging inter alia that Xcentric had infringed Goren’s newfound proprietary rights as the post’s “owner.”

Goren’s strategy was dubious. He attempted to use copyright law as a backdoor to remedy the alleged defamation. This amounted to a misuse of copyright to censor speech, which is ironic given that copyright law is meant to incentivize the distribution of creative works to the public. Unfortunately, Goren’s strategy is not unprecedented. Similar attempts to use copyright as a means of censorship have been rejected in both the Eleventh and Ninth Circuits. See Katz v. Google Inc., 802 F.3d 1178, 1184 (11th Cir. 2015); Garcia v. Google, Inc., 786 F.3d 733, 736 (9th Cir. 2015) (“[A] weak copyright claim cannot justify censorship in the guise of authorship.”)

Xcentric defended the lawsuit by arguing that the copyright assignment was involuntary, and thus invalid under Section 201(e) of the Copyright Act, which prohibits involuntary transfers of copyright ownership. See 17 U.S.C. § 201(e). Because the transfer resulted from a default judgment in a state defamation suit, Xcentric argued, the Massachusetts state judge lacked authority to grant Goren the relief he had obtained; ownership should never have left Dupont's hands. The district court agreed with Xcentric's view of the law and granted Xcentric summary judgment, holding that Section 201(e) voided the purported transfer of ownership.

Goren appealed the district court’s ruling to the First Circuit, which heard oral argument earlier this month. It seems very likely that the First Circuit will affirm on the Section 201(e) issue. At oral argument, the First Circuit panel appeared to accept the logic of Xcentric’s argument, without questioning its counsel about 201(e). The panel instead focused on a separate attorneys’ fees issue, indicating that it might be leaning toward an affirmance on the merits.

Beyond the copyright claim, Goren also contended that Xcentric's conduct violated state defamation and competition laws. The district court held that Section 230 of the Communications Decency Act gave Xcentric immunity from those claims. Section 230 states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 47 U.S.C. § 230(c)(1). As an online forum for user reviews, the Ripoff Report qualifies as an "interactive computer service" under the CDA, and Dupont, as the post's author, was the sole "information content provider." Given Section 230's immunity provision, the district court held that Xcentric could not be found liable for a post it didn't author. Instead, Goren's only remedy lies against Dupont, against whom he has already obtained a default judgment.

While the law clearly immunizes Xcentric from liability for defamation, the policy questions raised by this case are a bit murkier. The CDA was meant to facilitate the growth of the internet as a medium for communication, by ensuring that websites hosting user-generated content could operate without threat of liability for their users’ acts. Congress determined that the threat of liability against such sites was too onerous, given the obvious financial disincentive that the threat of money damages would provide. However, the CDA also gives sites immunity from equitable relief, which means that they can refuse to remove defamatory posts with impunity. This puts individuals like Goren in a difficult situation: Goren obtained a state court defamation judgment against Dupont, but he lacks any practical recourse to have the post removed from the internet.

While immunity from money damages makes sense, does the same hold true for equitable relief? It isn't obvious that well-tailored equitable relief would pose too stringent a burden on websites hosting user-generated content. While this policy consideration certainly doesn't justify Goren's misuse of copyright law, it might make his strategy a bit more understandable.

Given the law as it stands, the First Circuit will likely affirm the district court's rulings in favor of Xcentric. While there might be room for debate on the CDA policy question, the law is straightforward. The law is similarly clear on the copyright issue: the purported transfer of ownership cannot be sustained given Section 201(e)'s ban on involuntary transfers. Goren's attempt to hijack copyright law might have been clever, but it was legally suspect from the start. For the reasons discussed above, it will ultimately fail.

Leo Angelakos is a 3L at Harvard Law School and a continuing student in the Cyberlaw Clinic during the spring semester 2017.

by Leo Angelakos at April 24, 2017 06:13 PM

John Palfrey
Statement regarding past abuse at Andover

Today, we know that many schools, including Andover, have not always lived up to our commitment to protect students in our care. Over the past year, independent investigators from Sanghavi Law Office have been carrying out a review of all reports of sexual misconduct at our school. We have repeatedly asked community members to share concerns or information they may have with these independent investigators. In August 2016, I sent a public letter to the Andover community about what we knew at that time. Since then, we have received further reports and have referred them all for review to the investigators. On campus, we remain focused on ensuring that we do right by the students we have the privilege to teach today.

Matters related to past teacher misconduct are currently appearing in the press. We take these matters extremely seriously. Our hearts go out to all those who were harmed at our school and at all schools in the past. At Andover, we are committed to learning as much as we can about our school’s past, offering support and acknowledgment for survivors of sexual misconduct, and ensuring the safety and security of all students on our campus today. The harms done to students in the past must never be repeated.


by jgpalfrey at April 24, 2017 06:05 PM

Jeffrey Schnapp
TEDwards

It's easy to poke fun at some of the tics and tropes that have come to define TED over the course of its 32 years of "spreading ideas that matter." But the fact remains that TED has had an enormous impact, and the TED stage is one of the world's leading communications and innovation platforms: now fully global, interconnected with a multiplicity of television, radio, and web-based channels, and followed by audiences that number in the tens of millions.

My robotic sidekick Gita and I joined the community of TED speakers last week in Vancouver, Canada, to talk about the role of human-centered robotics in the future of light mobility: movability, as we like to call it at Piaggio Fast Forward, which is to say, mobility with a playful and functionally meaningful difference. Here's what the ludic Piaggio Dictionary at the end of FuturPiaggio has to say about movability:

Movibilità / (Movability) = a Piaggio core value since its foundation, movability means a full-spectrum approach to the problems associated with human mobility that encompasses ships, aircraft, trains, cars, buses, motorcycles, scooters, mopeds, marine outboards, and even bicycles (from Bianchi, once under Piaggio ownership, to the Piaggio e-Bike).

Here are some photographs of our contribution to session nine of #TED2017 (thank you to the TED staff for these and for the professionalism of the entire production):

by jeffrey at April 24, 2017 02:34 PM

April 21, 2017

Harry Lewis
Birthday stuff
I turned 70 on April 19. I made the decision some time ago to creep toward retirement around now. So I am giving up my role as Director of Undergraduate Studies in CS, a role I have had most years since even before there was a CS undergraduate major. I will be teaching half time for the next two years (I have already blogged about the cool new Classics of Computer Science course I will be teaching). I then have a year of saved sabbatical, so will transition to Research Professor or some such title on July 1, 2020.

To mark the moment, and to celebrate what has happened to the field of CS at Harvard and elsewhere in the years since I started teaching at Harvard in 1974, SEAS put on a big celebration on my birthday. Many of my former students and teaching fellows attended, and there was a terrific program of talks. You can watch all six hours of it if you are a glutton for punishment! Here is the video -- thanks to the CS50 team for producing it and getting it up so quickly. (If you just want to hear what I, Bill Gates, and Mark Zuckerberg had to say, go to about 20 minutes from the end.)

And Harvard Magazine has a nice report on the event. Thanks to everyone, and especially to Margo Seltzer, David Parkes, and Henry Leitner for their roles in putting this together.

We were able to reproduce a facsimile of A 30th Anniversary Family Photo, which I will post when I get it.

In the meantime, here is another classic -- six women computer scientists of the class of 1980 all came back for the celebration. That really means a lot to me! From left to right, Jeanette Hung, Jennifer (Greenspan) Hurwitz, Betty (Ryan) Tylko, Diane (Wasserman) Feldman, HRL, Christine (Ausnit) Hood, and boo gershun. Thanks!


by Harry Lewis (noreply@blogger.com) at April 21, 2017 11:05 PM

Stuart Shieber - The Occasional Pamphlet
WWHD?
Image of Harry Lewis courtesy of Harvard John A. Paulson School of Engineering and Applied Sciences

This past Wednesday, April 19, was a celebration of computer science at Harvard, in honor of the 70th birthday of my undergraduate adviser, faculty colleague, former Dean of Harvard College, baseball aficionado, and personal role model Harry Lewis. The session lasted all day, with talks and reminiscences from many of Harry’s past students, myself included. For those interested, my brief remarks on the topic of “WWHD?” (What Would Harry Do?) can be found in the video of the event.

By the way, the “Slow Down” memo that I quoted from is available from Harry’s website. I recommend it for every future college first-year student.

by Stuart Shieber at April 21, 2017 07:26 PM

Justin Reich
Projects that Learn
Every effort to improve instruction and learning in schools is an opportunity for professional development for educators and school leaders.

by Justin Reich at April 21, 2017 12:43 PM

April 20, 2017

David Weinberger
Mail from Xpeditr

Xpeditr has really overestimated the size of my wine cellar.


The post Mail from Xpeditr appeared first on Joho the Blog.

by davidw at April 20, 2017 10:56 PM

Feeds In This Planet