Class Discussion


Post your comments about today's class here.

Tuesday, March 11

Rough consensus and normative design

  • The discussion today, and in the reading, seemed to describe the designers of proto-Internet and Internet technologies as largely driven by non-normative, technocratic design principles, making choices based on consensus about how best to solve a particular problem. Part of the problem I see with this is that the lack of strong normative goals in their design is inferred, at least in part, from the lack of public discussion of normative goals on a parallel track to the technical consensus building.
  • I guess I wonder to what extent these developers were just implementing normative, though perhaps unquestioned, goals that they all shared--through shared culture, nationality, training, gender, race, etc. Maybe the lack of an identification layer is a technical decision, but maybe it's also the result of a normative viewpoint that anonymous speech is more important than preventing hate speech, etc. Just because they didn't talk about that doesn't seem to me to say conclusively that the design choices about what the network would afford weren't always a moral or social, in addition to technical, norms-formation process. --Mgalese 18:12, 10 March 2008 (EDT)
  • Don't assume that technocratic design principles are separate from strong normative goals; "rough consensus and running code" is a normative choice about how to make decisions and what values to prioritize. erin 15:05, 11 March 2008 (EDT)
  • Right. I guess that makes sense, since rough consensus is at the very least a statement about the rights of minority viewpoints. It strikes me that our "Eden" story of the internet isn't really a technocratic Eden free of policy, but one of policy choices unexamined and unsurfaced. --Mgalese 15:29, 11 March 2008 (EDT)
  • I am curious how the early internet ethos developed. To some extent the decisionmaking-by-consensus arises from the non-hierarchical relationship of all the participants. Like customary international law, when nobody has the power to enforce rules against everyone else, all you have are norms generated by actual practice. But you are also right that the developers generally came from the same cultural background, which definitely impacted their choices about how to work together. erin 16:26, 11 March 2008 (EDT)
  • Elites and hierarchy: I think it's absolutely true that the "framers" were a self-selected, largely homogeneous group of people, but that's only part of the answer. They could proudly claim to focus entirely on the technology in part because they already shared so many values, but I also think, as Erin said, that they were in fact making affirmative normative policy decisions, and were simply doing so silently because, even among this group, many or most didn't have standing to engage in the debate. I don't believe for a second that it was an egalitarian enterprise. It has been my experience that engineers collaborating for long periods of time tend to stratify into "decision-makers" and "contributors" in very pronounced ways, with a few elites lording over the rest. Granted, they all may start equal, and the stratification usually (though not always) happens in a meritocratic way on the basis of technical skill, but once it happens it can often stifle debate about non-technical matters. Once the elites are established, the only basis upon which contributors tend to be able to challenge them is objective technical arguments. The normative decisions get made, almost in a vacuum, by the elites who are only holding court on technical discussions. But they are being made. I'm painting a darker picture than I think is likely to have happened, but I am really skeptical about all this talk of group ethos, and think it's at least as likely that it is instead rose-tinted rear-view analysis. --Michael Schakow 07:45, 12 March 2008 (EDT)

Thursday, March 13

  • Check out this article, in which Tim Berners-Lee (inventor of the World Wide Web) talks up the radical new future of the Internet as a semantic web: "Tim Berners-Lee: Google could be superseded by the semantic Web". In one part, an engineer from Google is quoted discussing the problems they're having engineering a semantic web: "In 2006, Peter Norvig, Google's director of research, noted some challenges to building a semantic Web, such as creating the metadata, agreeing on standards, and gaming the system. 'We deal with millions of Web masters who can't configure a server, can't write HTML. It's hard for them to go to the next step . . . The third problem is one of deception. We deal every day with people who try to rank higher in the results and then try to sell someone Viagra when that's not what they are looking for. With less human oversight with the Semantic Web, we are worried about it being easier to be deceptive.'"
  • Isn't it interesting how now, in trying to re-engineer the web, they feel the need to delve into the topics they so proudly claimed to ignore before, like deception and user technical ignorance. I think this just goes to show the original design was more a function of the limited number of people who were on the network, and less about enduring design philosophies. --Michael Schakow 22:05, 12 March 2008 (EDT)
  • "Why is software, which is now essential for everyday living, not held to the same standard as cars and children's toys?" This is a question that David Banisar writes in his article, "Save the Net, Sue a Software Maker." I guess my response, which seems too simple an answer for him not to have realized, is that when software crashes, for the most part, people don't DIE. Defective children's toys generally cause a few bodily injuries, sometimes a death or two; and car crashes account for X number of deaths a year (insert readily available statistics I'm too lazy to find here). "But most of those car crashes are caused by user error," Mr. Banisar might reply. And I too would reply - as are most of computer crashes. If you have an out of the box Dell, you can't run 30 word files, 15 instances of firefox, iTunes, Weather Desktop, Instant Messenger, and play World of Warcraft at the same time. Something will crash. Similarly true for business computers where people learn how to open new files but not close old ones. Also - visiting that unprotected porn site while at work may give your company a virus. Bottom line: people are dumb, behind a wheel or behind a keyboard. Lighten up on the manufacturers. Mike M. 08:28, 13 March 2008 (EDT)
    • I think you are mischaracterizing Banisar's point. He doesn't care whether the fault is user error or design that allows for user error, but that the entire burden of correcting it falls on the user. He is making a claim that the way to achieve greater safety is to put liability on the manufacturers regardless, because manufacturers are the cheapest cost avoiders. Not necessarily because the manufacturers are somehow morally responsible for designing products that allow users to misuse them (although he does seem sympathetic to that view). Are we going to discuss these readings in class? --erin 13:31, 13 March 2008 (EDT)
      • Two points in response: A) Even if we shift the liability burdens to the software manufacturers, it's hard to say whether the amount of "software safety" would go up. A certain number of bugs and errors can escape even the most stringent testing team, based on sheer numbers: if an issue only comes up in 1 out of 100,000 uses, and 100 testers test 100, 200, or even 500 times each, it's entirely possible the bug will be missed (see the back-of-the-envelope sketch after this list). These bugs are caught, flushed out, and corrected when put to the mass market--the millions of "testers" that comprise the public. Video game developers have already come up with a system of testing that allows for tons of people to participate--it's called an "Open Beta"--in which they entice gamers to participate by giving them a free sneak preview of the game a couple of months before its release. I doubt a similar system would work with business apps, because not many people really want to test Microsoft Word 2009 in their spare time: they don't care if the Paper Clip's options include ransom note templates and other helpful features. Therefore, the only way to actually get a fair market of "testers" in these more normal applications is to release it on the market, get the complaints, and patch it up. It may not even be possible for them to do it in-house, pre-release.
      • B) To ask these companies to pull off this sort of testing in house would be prohibitively expensive, would slow down the progress of software development, and could raise the cost of software beyond a Calabresian level; i.e., the point where it's more cost-effective to have the security holes than to fix them pre-launch. Would we rather have to rewrite our hard drives when we get that nasty worm, or pay $400 for Microsoft Word, $950 for Microsoft Office, $3,500 for Adobe's Publishing Suite, $10,000 for Adobe Premiere or Final Cut Pro? Those important files can be backed up on a thumb drive for $50; on an external hard drive for about $100-$200; or even on Google's mail servers for free (currently 6.5 GB of free space per account and climbing). So, my argument is that the entire burden SHOULD fall on the user because the consequences of bad coding are not as dire as he makes them out to be (i.e., it doesn't generally KILL people), it would be too expensive to do otherwise, it would slow down progress, and it's really easy as a user to make the decisions that avoid these problems. Mike M. 14:20, 13 March 2008 (EDT)
      • Basically my response is similar to Mike's, but a different angle: assumption of risk. If there's really a market for flawless software, it will be made. As it is, what people demand is flawed software for cheaper, so that's what's made. People know, when buying software, that it will crash and that there are vulnerabilities. If you really want flawless software, pay more for the corporate version that has better security. You can always buy insurance separately, and if there's a big enough market, companies will themselves guarantee the reliability of software (for a price). --Dsiffert 04:08, 19 March 2008 (EDT)
  • And we haven't talked much about The Law yet, but since we've been spending so much time on Wikipedia I have to mention my favorite anecdote about the problematic intersection of the (hierarchical, authority-based) law with the (polyarchical?) Wikipedia. A friend of mine was writing a brief and needed a citation for some technical issues. She went to Wikipedia, but saw that the article didn't mention what she wanted to use it to support. So she edited it. My friend, being ethical, then found a different source to cite. Someone less scrupulous could easily have written her own authority. --erin 13:31, 13 March 2008 (EDT)
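
A minimal sketch of the testing arithmetic in Mike M.'s point A) above, using the 1-in-100,000 failure rate and tester counts from his post (the mass-market trial count is my own assumption for illustration). The probability that a bug goes unseen falls off exponentially with the number of uses, which is why a modest public release finds bugs that an in-house team has better-than-even odds of missing.

    # Probability that a rare bug survives testing, per Mike M.'s numbers above.
    # Assumes each use is an independent trial with the same per-use failure rate.
    def prob_bug_missed(failure_rate, trials):
        """Chance the bug never triggers across `trials` independent uses."""
        return (1 - failure_rate) ** trials

    rate = 1.0 / 100000                       # bug appears in 1 of 100,000 uses
    in_house = 100 * 500                      # 100 testers, 500 runs each
    print(prob_bug_missed(rate, in_house))    # ~0.61: more likely than not to slip through

    mass_market = 1000000                     # hypothetical modest public release (assumed)
    print(prob_bug_missed(rate, mass_market)) # ~0.000045: the public all but guarantees a hit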

Network Effects of Security Flaws

  • Looking at Picker's paper, I was unconvinced by the numbers he plugs into his model. I suspect, as I guess Mike is implying above, that the cost of "securing" the OS would be higher than almost anyone's actual utility. But I think Picker's model has an even bigger hole. Imagine C3. C3, when it looks at its own payoffs, looks like C1--which is just to say that, like C1, C3 doesn't suffer from its own flaws. But everyone else looks at C3 and sees C3 imposing an externality on them of $20. (That's a computer that gets botted by a low-resource-consumption worm.) How do we fix this? (A toy version of this arithmetic appears after this list.) In Picker's model, the other C's of the world cannot feasibly make direct payments to C3. They may not even be in privity with the OS manufacturer. (Your Linux server gets DDoSed by a botnet.) They don't get the benefit of insurance. They can sue the manufacturer, but many of the types of problems that create these network effects result from end-user actions. So we'd be put into the position of suing Mom and Pops. --Mgalese 16:34, 13 March 2008 (EDT)
    • I think Picker's response is pretty simple: you spotted an externality. The solution isn't to over-regulate and create both deadweight loss and inefficient allocations by mandating that software companies create more security than is demanded. The externality lies with the user who gets worms by not protecting him/herself. According to Coase/Calabresi, you just allocate the right so as to minimize transaction costs. So, in this case, you hold users liable if they get a worm that hurts others. There's no reason not to be able to sue them. Better yet, the government can centralize the liability right by compiling lists of those whose computers are being used improperly, and just slap down fines. That closes the externality without forcing software manufacturers to give out more security than is efficient. --Dsiffert 04:08, 19 March 2008 (EDT)
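
A toy illustration of the externality Mgalese describes above. Only the $20 figure comes from the post; the hypothetical cost of patching is my own assumption. The point is just that a machine whose worm costs it nothing privately will never pay to patch, even when patching is cheap relative to the harm it exports to the rest of the network.

    # Toy numbers: only the $20 externality comes from the discussion above.
    PRIVATE_COST_TO_C3 = 0   # a low-resource worm doesn't degrade C3's own use
    EXTERNAL_COST = 20       # cost a botted C3 imposes on everyone else (per the post)
    SECURITY_COST = 5        # hypothetical price of patching/hardening (assumed)

    # C3's private calculus: pay $5 to avoid a $0 private loss? Never.
    print(PRIVATE_COST_TO_C3 - SECURITY_COST)                  # -5: privately irrational
    # The social calculus: pay $5 to avoid $20 of harm to others? Always.
    print(PRIVATE_COST_TO_C3 + EXTERNAL_COST - SECURITY_COST)  # 15: socially efficient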

Errata

  • In case any of you are interested, Eric von Hippel, whose research into user-generated innovation JZ mentions in Chapter 4 of the book, will be presenting a paper at the Innovation Policy Colloquium on April 8 (in Furman 224 at 4-6pm). --erin 13:31, 13 March 2008 (EDT)


Tuesday, March 25

Barrett v. Rosenthal

  • The Barrett court seems to sidestep a key ambiguity in its discussion of "passive" versus "active" use. The question is not what type of user the re-publisher (defendant) is, but whether the original author is an "information content provider" ("ICP"). 230(c) only exempts users from liability for information provided by another ICP. 230(f)(3) defines ICP as "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service." (emphasis added). Thus it's a plausible reading of the statute that taking materials created offline by someone else and republishing them to the internet (i.e. providing them through the internet) would not be reached by 230. In Barrett, this all turns on whether email is an "interactive computer service," which it pretty clearly is. --erin 09:39, 25 March 2008 (EDT)
  • A new CDA 230 decision just came down, Doe v. Friendfinder. Among other things it holds that a state law claim under right of publicity is *not* barred by CDA 230, at variance with the 9th Circuit. (Right of publicity may qualify as intellectual property -- and therefore may not be displaced.) -Jz 12:50, 28 March 2008 (EDT)

Thursday, March 27