Class Discussion

From Cyberlaw: Internet Points of Control NYU Course Wiki

Post your comments about today's class here.

Tuesday, March 11

Rough consensus and normative design

  • The discussion today, and in the reading, seemed to describe the designers of proto-Internet and Internet technologies as largely driven by non-normative, technocratic design principles, making choices and decisions based on consensus about how best to solve a particular problem. Part of the problem I see with this is that the lack of strong normative goals in their design is inferred, at least in part, from the lack of public discussion of normative goals on a parallel track to the technical consensus building.
  • I guess I wonder to what extent these developers were just implementing normative, though perhaps unquestioned, goals that they all shared--through shared culture, nationality, training, gender, race, etc. Maybe the lack of an identification layer is a technical decision, but maybe it's also the result of a normative viewpoint that anonymous speech is more important than preventing hate speech, etc. Just because they didn't talk about that doesn't seem to me to say conclusively that the design choices about what the network would afford weren't always a moral or social, in addition to technical, norms-formation process. --Mgalese 18:12, 10 March 2008 (EDT)
  • Don't assume that technocratic design principles are separate from strong normative goals; "rough consensus and running code" is a normative choice about how to make decisions and what values to prioritize. erin 15:05, 11 March 2008 (EDT)
  • Right. I guess that makes sense, since rough consensus is at the very least a statement about the rights of minority viewpoints. It strikes me that our "Eden" story of the internet isn't really a technocratic Eden free of policy, but one of policy choices unexamined and unsurfaced.--Mgalese 15:29, 11 March 2008 (EDT)
  • I am curious how the early internet ethos developed. To some extent the decisionmaking-by-consensus arises from the non-hierarchical relationship of all the participants. Like customary international law, when nobody has the power to enforce rules against everyone else, all you have are norms generated by actual practice. But you are also right that the developers generally came from the same cultural background, which definitely impacted their choices about how to work together. erin 16:26, 11 March 2008 (EDT)
  • Elites and hierarchy: I think it's absolutely true that the "framers" were a self-selected, largely homogeneous group of people, but that's only part of the answer. They could proudly claim to focus entirely on the technology in part because they already shared so many values, but I also think, as Erin said, that they were in fact making affirmative normative policy decisions, and were simply doing so silently because, even among this group, many or most didn't have standing to engage in the debate. I don't believe for a second that it was an egalitarian enterprise. It has been my experience that engineers collaborating for long periods of time tend to stratify into "decision-makers" and "contributors" in very pronounced ways, with a few elites lording over the rest. Granted, they all may start equal, and the stratification usually (though not always) happens in a meritocratic way on the basis of technical skill, but once it happens it can often stifle debate about non-technical matters. Once the elites are established, the only basis upon which contributors tend to be able to challenge them is objective technical arguments. The normative decisions get made, almost in a vacuum, by the elites who are only holding court on technical discussions. But they are being made. I'm painting a darker picture than I think is likely to have happened, but I am really skeptical about all this talk of group ethos, and think it's at least as likely that it is instead rose-tinted rear-view analysis. --Michael Schakow 07:45, 12 March 2008 (EDT)

Thursday, March 13

  • Check out this article, in which Tim Berners-Lee (inventor of the world wide web) talks up the radical new future of the Internet as a semantic web. Tim Berners-Lee: Google could be superseded by the semantic Web In one part, an engineer from Google is quoted discussing the problems they're having engineering a semantic web: "In 2006, Peter Norvig, Google's director of research, noted some challenges to building a semantic Web, such as creating the metadata, agreeing on standards, and gaming the system. 'We deal with millions of Web masters who can't configure a server, can't write HTML. It's hard for them to go to the next step . . . The third problem is one of deception. We deal every day with people who try to rank higher in the results and then try to sell someone Viagra when that's not what they are looking for. With less human oversight with the Semantic Web, we are worried about it being easier to be deceptive.'"
  • Isn't it interesting how now, in trying to re-engineer the web, they feel the need to delve into the topics they so proudly claimed to ignore before, like deception and user technical ignorance. I think this just goes to show that the original design was more a function of the limited number of people who were on the network, and less about enduring design philosophies. --Michael Schakow 22:05, 12 March 2008 (EDT)
  • "Why is software, which is now essential for everyday living, not held to the same standard as cars and children's toys?" This is a question that David Banisar poses in his article, "Save the Net, Sue a Software Maker." I guess my response, which seems too simple an answer for him not to have realized, is that when software crashes, for the most part, people don't DIE. Defective children's toys generally cause a few bodily injuries, sometimes a death or two; and car crashes account for X number of deaths a year (insert readily available statistics I'm too lazy to find here). "But most of those car crashes are caused by user error," Mr. Banisar might reply. And I would reply - so are most computer crashes. If you have an out-of-the-box Dell, you can't run 30 Word files, 15 instances of Firefox, iTunes, Weather Desktop, Instant Messenger, and play World of Warcraft at the same time. Something will crash. The same is true for business computers, where people learn how to open new files but not close old ones. Also - visiting that unprotected porn site while at work may give your company a virus. Bottom line: people are dumb, behind a wheel or behind a keyboard. Lighten up on the manufacturers. Mike M. 08:28, 13 March 2008 (EDT)
    • I think you are mischaracterizing Banisar's point. He doesn't care whether the fault is user error or design that allows for user error, but that the entire burden of correcting it falls on the user. He is making a claim that the way to achieve greater safety is to put liability on the manufacturers regardless, because manufacturers are the cheapest cost avoiders. Not necessarily because the manufacturers are somehow morally responsible for designing products that allow users to misuse them (although he does seem sympathetic to that view). Are we going to discuss these readings in class? --erin 13:31, 13 March 2008 (EDT)
      • Two points in response: A) Even if we shift the liability burden to the software manufacturers, it's hard to say whether the amount of "software safety" would go up. A certain number of bugs and errors can escape even the most stringent testing team, based on sheer numbers. If certain issues would only come up in 1 out of 100,000 uses, and 100 testers test 100, 200, or even 500 times, it's entirely possible the bug will be missed. These bugs are caught, flushed out, and corrected when put to the mass market - the millions of "testers" that comprise the public. Video game developers have already come up with a system of testing that allows tons of people to participate - it's called an "Open Beta" - in which they entice gamers to participate by giving them a free sneak preview of the game a couple of months before its release. I doubt a similar system would work with business apps, because not many people really want to test Microsoft Word 2009 in their spare time: they don't care if the Paper Clip's options include ransom note templates and other helpful features. Therefore, the only way to actually get a fair market of "testers" for these more ordinary applications is to release them on the market, get the complaints, and patch them up. It may not even be possible to do it in-house, pre-release.
      • B) To ask these companies to pull off this sort of testing in house would be prohibitively expensive, would slow down the progress of software development, and could raise the cost of software beyond a Calabresian level; i.e., to where it's more cost-effective to have the security holes than to fix them pre-launch. Would we rather have to wipe our hard drives when we get that nasty worm, or pay $400 for Microsoft Word, $950 for Microsoft Office, $3,500 for Adobe's publishing suite, $10,000 for Adobe Premiere or Final Cut Pro? Those important files can be backed up on a thumb drive for $50; on an external hard drive for about $100-$200; or even on Google's mail servers for free (currently 6.5 GB of free space per account and climbing). So, my argument is that the entire burden SHOULD fall on the user: the consequences of bad coding are not as dire as he makes them out to be (i.e., it doesn't generally KILL people), shifting liability would be too expensive, it would slow down progress, and it's really easy for a user to make the decisions that avoid these problems. Mike M. 14:20, 13 March 2008 (EDT)
      • Basically my response is similar to Mike's, but a different angle: assumption of risk. If there's really a market for flawless software, it will be made. As it is, what people demand is flawed software for cheaper, so that's what's made. People know, when buying software, that it will crash and that there are vulnerabilities. If you really want flawless software, pay more for the corporate version that has better security. You can always buy insurance separately, and if there's a big enough market, companies will themselves guarantee the reliability of software (for a price). --Dsiffert 04:08, 19 March 2008 (EDT)
  • And we haven't talked much about The Law yet, but since we've been spending so much time on Wikipedia I have to mention my favorite anecdote about the problematic intersection of the (hierarchical, authority-based) law with the (polyarchical?) Wikipedia. A friend of mine was writing a brief and needed a citation for some technical issues. She went to Wikipedia, but saw that the article didn't mention what she wanted to use it to support. So she edited it. My friend, being ethical, then found a different source to cite. Someone less scrupulous could easily have written her own authority. --erin 13:31, 13 March 2008 (EDT)

Network Effects of Security Flaws

  • Looking at Picker's paper, I was unconvinced by the numbers he plugs into his model. I suspect, as I guess Mike is implying above, that the cost of "securing" the OS would be higher than almost anyone's actual utility. But I think Picker's model has an even bigger hole. Imagine C3. C3's payoffs look just like C1's - which is to say, C3 doesn't itself suffer from its flaws. But everyone else looks at C3 and sees C3 imposing an externality of $20 on them. (That's a computer that gets botted by a low-resource-consumption worm.) How do we fix this? In Picker's model, the other C's of the world cannot feasibly make direct payments to C3. They may not even be in privity with the OS manufacturer. (Your Linux server gets DDoSed by a botnet.) They don't get the benefit of insurance. They can sue the manufacturer, but many of the types of problems that create these network effects result from end-user actions. So we'd be put in the position of suing Mom and Pops.--Mgalese 16:34, 13 March 2008 (EDT)
    • I think Picker's response is pretty simple: You spotted an externality. The solution isn't to over-regulate and create both deadweight loss and inefficient allocations by mandating that software companies create more security than is demanded. The externality lies with the user who gets worms by not protecting him/herself. According to Coase/Calabresi, you just allocate the right to whichever party faces the lower transaction costs. So, in this case, you hold users liable if they get a worm that hurts others. There's no reason not to be able to sue them. Better yet, the government can centralize the liability right by compiling lists of those whose computers are being used improperly, and just slap down fines. That closes the externality without forcing software companies to give out more security than is efficient. --Dsiffert 04:08, 19 March 2008 (EDT)

Errata

  • In case any of you are interested, Eric von Hippel, whose research into user-generated innovation JZ mentions in Chapter 4 of the book, will be presenting a paper at the Innovation Policy Colloquium on April 8 (in Furman 324 at 4-6pm). --erin 13:31, 13 March 2008 (EDT)


Tuesday, March 25

Barrett v. Rosenthal

  • The Barrett court seems to sidestep a key ambiguity in its discussion of "passive" versus "active" use. The question is not what type of user the re-publisher (defendant) is, but whether the original author is an "information content provider" ("ICP"). 230(c) only exempts users from liability for information provided by another ICP. 230(f)(3) defines ICP as "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service." (emphasis added). Thus it's a plausible reading of the statute that taking materials created offline by someone else and republishing them to the internet (i.e. providing them through the internet) would not be reached by 230. In Barrett, this all turns on whether email is an "interactive computer service," which it pretty clearly is. --erin 09:39, 25 March 2008 (EDT)
  • A new CDA 230 decision just came down, Doe v. Friendfinder. Among other things it holds that a state law claim under right of publicity is *not* barred by CDA 230, at variance with the 9th Circuit. (Right of publicity may qualify as intellectual property -- and therefore may not be displaced.) -Jz 12:50, 28 March 2008 (EDT)
  • Here's a new CDA 230 decision, hot off the presses from the 9th Cir. in the Roommates case.[1]--Mgalese 14:45, 3 April 2008 (EDT)

Chinese Censorship

Here's an interesting take on why we might think that Chinese censorship of the internet isn't going to change, stop, or even trigger widespread protest. [1] --Mgalese 17:18, 28 March 2008 (EDT)

Thursday, March 27

  • Evidently Mark Cuban has officially pronounced Al Gore's invention dead: http://www.dailytech.com/article.aspx?newsid=11287. Maybe he's just jealous of the Google guys. (On the other hand, maybe he's just confusing the internet's impending fate with that of his Mavericks. Ziiiing!) Anyway, I find his remarks about YouTube especially interesting in the wake of our recent discussion of online copyright infringement. -- Jon C.
  • We didn't get around to comparing the broad scope of CDA 230 to the DMCA's more complex immunity scheme. One way to explain the difference is the lack of a defamation-victim lobby akin to the well-funded RIAA/MPAA copyright industry groups, so the service providers' lobbying for immunity didn't meet as much resistance in the non-IP context. Is there any other plausible reason that CDA 230 is so simple while the DMCA imposes so many conditions on its immunity?

Monday, March 31

Tuesday, April 1

  • Grokster: Did Breyer just make a huge mistake? He compares Grokster's 10% legitimate use with Sony's 9% authorized use. But the Sony court held that much of the unauthorized use was indeed legitimate, under fair use. Presumably, this number far exceeded 9%. Does this not destroy Breyer's parallel claim that both can pass the same "significant non-infringing use" test? Counting Sony's unauthorized fair use, Sony's non-infringing use is far more significant than Grokster's, isn't it? --Dsiffert 14:32, 1 April 2008 (EDT)

Monday, April 7

  • This project seemed germane to our discussion today: Global Education & Learning Community. It's essentially an online community and repository where teachers and students can collaborate to produce curriculum and instructional materials, in any subject, that are "open sourced" to the world. Seems like if MIT or other universities are in fact actively trying to disavow ownership of curriculum as delivered by professors, instead of assigning all rights to the professors they ought to "open source" it in some similar way. Michael Schakow 10:53, 10 April 2008 (EDT)
  • If a researcher at a university discovered the cure for cancer, such a cure would most probably be patented for the benefit of either the university or a pharmaceutical company that had funded the research - that is the dismal fate of our society. Or maybe it would be surreptitiously veiled, since cancer is so profitable in treatment and not in cure. Pharmaceutical companies and their recent record are a good example of the predicament of intellectual property: supposedly protecting innovation but in actuality entrenching its opposite. They spend twice as much on marketing and disease mongering - such as "restless legs syndrome," which exercise-starved and sugar-hyped Americans will spend money on - as on life-saving cures. There is profit in treatments - particularly for chronic complications - and not in cures. Certainly not in cures for diseases that afflict Africans and Asians (although maybe more research on malaria will come through mosquito migrations from global warming?). I am sceptical that IP fosters innovation when the argument is proffered by the very people who stand to benefit from the institution of intellectual property. And we know very well that the corporations that benefit have only one legal mandate - to increase the profits of their shareholders. Further, the claim that "free market" economics springs efficiency, innovation, et al. is also extremely questionable. The market is a social space that we have constructed, and property is a legal fiction in that regard. While we learn that "property" is a "private" right, I would like to see how one can have a house without a court - a public institution - accepting the validity of the deed, or of contracts for that matter.

Intellectual property - particularly its infection of academic institutions - is not fostering creativity and innovation but preventing their spread. With education increasingly privatised, even in the antipodean world, many people who may be society's greatest innovators may never have that talent nurtured. And some will be in debt, all the better for a docile work force.

Thursday, April 10

  • Acquiring and using a legitimate password doesn't violate the DMCA's anti-circumvention provisions. I.M.S. Inquiry Management Systems, Ltd. v. Berkshire Information Systems, Inc., 307 F.Supp.2d 521 (SDNY 2004).
  • "We agree that plaintiff's allegations do not evince circumvention as that term is used in the DMCA. Circumvention requires either descrambling, decrypting, avoiding, bypassing, removing, deactivating or impairing a technological measure qua technological measure. In the instant matter, defendant is not said to have avoided or bypassed the deployed technological measure in the measure's gatekeeping capacity. The Amended Complaint never accuses defendant of accessing the e-Basket system without first entering a plaintiff-generated password.
More precisely and accurately, what defendant avoided and bypassed was permission to engage and move through the technological measure from the measure's author. Unlike the CFAA, a cause of action under the DMCA does not accrue upon unauthorized and injurious access alone; rather, the DMCA “targets the circumvention of digital walls guarding copyrighted material.” FN12 Universal Studios, 273 F.3d 429, 443 (emphasis in original).
...
Defendant is alleged to have accessed plaintiff's protected website without plaintiff's authorization. Defendant did not surmount or puncture or evade any technological measure to do so; instead, it used a password intentionally issued by *533 plaintiff to another entity. As an analogy to Universal Studios, the password defendant used to enter plaintiff's webservice was the DVD player, not the DeCSS decryption code, or some alternate avenue of access not sponsored by the copyright owner (like a skeleton key, or neutralizing device). Plaintiff, however, did not authorize defendant to utilize the DVD player. Plaintiff authorized someone else to use the DVD player, and defendant borrowed it without plaintiff's permission. Whatever the impropriety of defendant's conduct, the DMCA and the anti-circumvention provision at issue do not target this sort of activity." --erin 10:45, 10 April 2008 (EDT)
  • Since the humming in class indicated some interest in learning a bit more about the counter-intuitive (but oh-so-cool) way that public key encryption works, I've taken the liberty of writing up a small page on the Wiki that attempts to explain the basic intuition behind Diffie's insight. You can also find a more detailed and technical explanation on Wikipedia --KHickey 17:11, 10 April 2008 (EDT)

"Third, Alice sends to Bob g^a (mod p), and Alice sends g^b (mod p) to Bob. Eve now knows g^a, g^b, and g, but she can't easily figure out either a or b, because of the one-way function problem. This is the rub for Eve."

Is that supposed to read that Bob sends Alice g^b??

  • It was indeed meant to read that Bob sends Alice g^b, and not vice versa. Oops. Subsequently corrected, though this is of perhaps minimal utility considering class is now over. But the wiki still lives, for the moment. --KHickey 00:32, 20 April 2008 (EDT)
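For anyone who wants to see the exchange in action, here is a minimal Python sketch of the Diffie-Hellman idea described above. The tiny prime and generator are illustrative assumptions only; real systems use primes of 2048+ bits and vetted cryptographic libraries:

```python
import random

# Toy public parameters (illustrative only; far too small to be secure).
p = 23  # a public prime modulus
g = 5   # a public generator

# Each party picks a private exponent...
a = random.randrange(2, p - 1)  # Alice's secret
b = random.randrange(2, p - 1)  # Bob's secret

# ...and publishes g^x mod p. Eve sees A, B, g, and p, but recovering
# a or b from them is the discrete-log "one-way function" problem.
A = pow(g, a, p)  # Alice sends this to Bob
B = pow(g, b, p)  # Bob sends this to Alice

# Each side raises the other's public value to its own secret,
# so both arrive at the same shared key g^(a*b) mod p.
alice_key = pow(B, a, p)
bob_key = pow(A, b, p)
assert alice_key == bob_key
```

The trick is that exponentiation commutes: (g^a)^b = (g^b)^a mod p, so Alice and Bob reach the same number without ever transmitting their secrets.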

Thursday, April 17

  • So sad that class is ending this morning; but it's as if South Park knew what was coming. Check out last night's episode, a parody of The Grapes Of Wrath, where the town wakes to a world in which the Internet is no more. Oy! South Park: "Over Logging" (warning: inappropriate material within!) Michael Schakow 08:42, 17 April 2008 (EDT)
  • I think the Shirky reading assigned for today makes a fantastic point in the last section, "Four Things To Design For," but I think it makes it too complicated. Here's the pearl of wisdom I think is lurking there: the level of access, ownership, control and functionality that individual members have to online groups ought to scale inversely with the level of anonymity, or pseudonymity, that member has. Shirky correctly captures two concepts: 1) the solution is not to create all-or-nothing -- completely anonymous or nakedly identified -- access systems for group members, because that loses a huge amount of valuable participation in the middle; and 2) the answer is also not expecting any group to long function well if it is composed of completely anonymous members. Michael Schakow 08:56, 17 April 2008 (EDT)