Paradigms for Studying the Internet

February 4

Before we can even begin exploring the whos, whats, and whys, we need to answer the critical question of how. The phrase "studying the web" could embrace a staggering world of possible routes to explore, even before we begin to examine the web's relationship with society and culture. We need something to guide us through this massive field of (very interesting!) rabbit holes and link the ideas we encounter into a coherent whole. We need some kind of structure to allow us to understand what we are looking at, the same way a chemist thinks in terms of atoms and molecules, or a philosopher in terms of schools of thought.

This class will explore different frameworks for studying the web, which will structure both the discussion and topic matter covered in the course and the methodology you should apply to your assignments. The second hour of class will focus on applying these concepts to Wikipedia and on teeing up the final project: we will discuss the research prompt, talk about some successful projects from prior years, and plot out the deadlines for the rest of the semester.


Readings

Mechanisms of control
The effects of control

Optional Readings


Assignment 1

Assignment 1 is due before next week's class (February 11th). Details of the assignment will be discussed in today's class; see this page for further information. You can submit the assignment here.

Videos Watched in Class

Links

Class Discussion

Please remember to sign your postings by adding four tildes (~~~~) to the end of your contribution. This will automatically add your username and the date/time of your post, like so: Andy 11:49, 8 November 2013 (EST)


Therefore, to maintain order, ensure efficient government, and improve social justice, kings, presidents, and prime ministers must be the chief architects of their countries' internet code. They must be multi-skilled or have the support of a talented and scholarly team.

Ichua 13:39, 31 January 2014 (EST)




It seems a lot more fun to watch than to just read: http://www.youtube.com/watch?v=o7UlYTFKFqY

Ichua 03:30, 2 February 2014 (EST)

Zittrain's talks are always a lot of fun! But we chose the two chapters in order to focus on a few of the specific things we'd like to dive into for this class. His book talk is much more general. Andy 08:46, 2 February 2014 (EST)




The Zittrain chapters give a good overview of how the Internet had developed up to circa 2008, but there have been some significant changes, and possible reversals of the "generative" model, since that time. The increasing role of SaaS platforms, centralized APIs, and operating platforms with a much more pervasive level of control relative to older operating systems (e.g., iOS, Android, and social networking platforms like the Facebook developer platform) has reintroduced an aspect of large, single-point-of-failure, commercially controlled systems. Whereas Cluetrain envisioned a future of "small pieces loosely joined," the Internet of today might be better described as "lots of small pieces largely dependent on a few large, commercially controlled pieces." These few large pieces raise concerns about limiting the potential for innovation, about negotiation with gatekeepers (which, as rightly discussed in the Zittrain chapters, was one of the things that killed innovation on earlier mobile platforms), and about the shifting of business opportunities across the market from creators to platform owners. Will there be another wave of generative platforms that will wear down the current trend toward centralization, and if not, how can we best ensure continuous innovation on the Internet? Jradoff 20:27, 3 February 2014 (EST)


At an event last night Prof. Zittrain mentioned another possible enclosure movement for generativity I hadn't thought of before: many web services are finding themselves at the receiving end of DDoS attacks for one reason or another. As a result, services are moving from their own servers to hosts capable of withstanding such attacks, primarily Amazon Web Services, though there are a few others as well. If all of the Internet moves to just three or four such hosts, that gives those hosts tremendous power to cut off something they may not like. That's a form of "contingent generativity" that could cut off a lot of the social good that both Zittrain and Benkler flag in their articles. Andy 09:28, 4 February 2014 (EST)




Coming off of last week's reading (specifically John Perry Barlow's "A Declaration of the Independence of Cyberspace"), I found danah boyd's essay "White Flight in Networked Publics?" particularly interesting. Even before reading boyd's piece, Barlow's "Declaration" seems hilariously naive in 2014, though I can certainly appreciate the utopian vision it's based on. The idea that the world we exist in (the physical reality described by Orin Kerr) won't intrude on the virtual world of the Internet seems impossible. (Did they really not believe that the best AND worst parts of us would be present?) The role of the Internet in our everyday social lives has, of course, increased exponentially since 1996, so it only makes sense that who we are and how we behave in the physical world will translate to equivalent behavior on the Internet. The ways in which behavior on the Internet affects people in the physical reality of their lives (particularly when it comes to harassment, threatening behavior, etc.) lend a great sense of urgency to figuring out how we should think about the Internet and the law. Jkelly 23:18, 3 February 2014 (EST)

Both boyd and Hargittai use a lot of pre-Internet scholarship in their writings for this course - a nice reminder that new technology does not necessarily mean new approaches to scholarship. But as Benkler notes, it is not that we are simply repeating the 20th century with shinier objects. There is something different about the way that information travels today that changes the ecology of information and cultural production. We can either adopt that change or legislate/architect it away. Andy 10:36, 4 February 2014 (EST)




I agree with Ichua's remarks about the need to somehow maintain order and to do so utilizing a talented and scholarly task force. My question, then, is how this team would be selected or elected. Another potential issue is how to ensure justice in a system where internet code is controlled by one's government or by a sole government official, king, or president. In our reading, Orin Kerr highlights how these conflicting external and internal perspectives on the internet add fuel to the problem of internet law. The internet has two personalities: the vast internal world of cyberspace and the physical network that carries it; striking a balance between the two and incorporating both identities into a legal system continues to evade and frustrate authorities.

In response to Megan Garber's reading on Wikipedia, I find that Wikipedia often does not get the credit or praise it deserves. Admittedly, no online community-built encyclopedia can be foolproof, but the reason Wikipedia has prevailed is its relative reliability. I have used the site extensively and it has provided me with a quick summary of events on a particular debate or issue. Garber's reasons for Wikipedia's success are logical in that familiarity is the cornerstone of many websites' success. The ease of navigating the site and the non-committal method of editing or adding to the work encourage more users to contribute. I would also argue that, beyond the cultural, socio-economic, and racial influences that cause users to migrate from site to site (such as from MySpace to Facebook), the constantly changing platform of Facebook has led many to stray from the site. This is difficult to prove, of course, but when I had a Facebook account I recall many complaints from my peers about all of the changes that kept occurring on the site. It seemed that every week we had to adjust to a new feature or re-learn how to navigate. In keeping with Garber's theory, the "familiarity" factor was diminishing for users, and people tend to resist change, especially on a site they have grown accustomed to.

--AmyAnn0644 04:08, 4 February 2014 (EST)




I was also interested in Megan Garber's point that the authorless structure of Wikipedia lowers the pressure of contributing. It certainly makes sense to me (and, I'm sure, to anyone who has read the comment section of any news article or blog post ever written...) that anonymity can encourage participation. When there's lower pressure to perform and you aren't faced with high stakes when you get involved, it's easier to bring yourself to contribute. This seems to tie in to Zittrain's point about the success of Wikipedia: it developed somewhat un-self-consciously and organically, rather than as a top-down "knowledge project" initiated by large universities. Oversight of the development of new technologies would presumably put a damper on this type of growth at any and all levels. I think this is nicely addressed by Zittrain's point that we're not choosing between technology and non-technology, but between hierarchy and polyarchy.

Jkelly 12:48, 4 February 2014 (EST)




Lawrence Lessig's article focuses on liberty in cyberspace and how various modes of regulation affect that liberty. He focuses on four different ways that the web can be regulated: 1) the law, 2) social norms, 3) the market, and 4) architecture. Lessig tries to get us to think differently, and more critically, about the different mechanisms that can lead to restriction of freedom on the Web.

For instance, with the architecture of the Web, Lessig asserts that the written code of programs can inherently either provide or restrict freedom and access. And when it comes to the law, Lessig points out that "The efficient answer may well be unjust." He gives an example of a law requiring life sentences for stealing car radios.

We would all probably agree that that is overboard and excessive. And, with that absurdity planted in our minds, Lessig then shows how a coder could easily put a restriction in the radio's code that would make stealing the radio less desirable for thieves, which would in turn make such a draconian law of life sentences for car radio thieves unnecessary. This example makes me think about Aaron Swartz, a friend of Lessig's, who took his own life in 2013. Aaron was a prodigy who helped create RSS and Reddit at a young age. He later became what you might call an internet activist, and made enemies in the federal government for some hacking activities. He was eventually charged with multiple felonies by the federal government for downloading millions of articles from JSTOR through MIT's network. Lessig talks about how law and code can either liberate or restrict the Internet. I believe Swartz's case shows how the MIT/JSTOR rules of access restricted information on the Web, and how federal laws were excessive and restricted innovation and liberty for Web users. And lastly, Swartz's case shows how one coder tried to use hacktivism to liberate information on the web. Mikewitwicki 12:58, 4 February 2014 (EST)
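To make Lessig's "code as architecture" point concrete, here is a minimal, purely hypothetical sketch in Python (the class name, method names, and vehicle IDs are invented for illustration and are not taken from Lessig's text) of a car radio whose software refuses to operate outside the vehicle it was paired with. The deterrent lives in the artifact's design rather than in a statute.

    # Hypothetical illustration of regulation through architecture:
    # a radio that only works in the car it was originally paired with.

    class CarRadio:
        def __init__(self, paired_vehicle_id):
            # Recorded once, at installation time.
            self.paired_vehicle_id = paired_vehicle_id

        def power_on(self, current_vehicle_id):
            # The radio simply refuses to function in any other vehicle,
            # so a stolen unit is worthless to the thief.
            return current_vehicle_id == self.paired_vehicle_id

    radio = CarRadio("VIN-1234")
    print(radio.power_on("VIN-1234"))   # True: still in the original car
    print(radio.power_on("VIN-9999"))   # False: reinstalled elsewhere, stays locked

The point is not the particular check but that the constraint is enforced by code, so no court or life-sentence law is needed to change the thief's incentives.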




I found danah boyd's essay "White Flight in Networked Publics?" both interesting and reflective of what I have witnessed. In particular, I thought the observation that "subculturally identified teens appeared more frequently drawn to MySpace while more mainstream teens tended towards Facebook" was especially true. We may pride ourselves on a strong sense of individualism, but remnants of the herd mentality are always present. MySpace simply offers a way to share interests that are different and more "specialized" than Facebook does. I could not help but wonder if the trend is continuing with an exodus from Facebook. From personal observation, I've noticed that usage of Facebook among many 16-22 year olds is dropping. The pages may still be up with random notices, but the real communication and new communities are being centered on Twitter. I'm not sure if this is a spike, a trend, or a progression to escape a mainstream Facebook with parental oversight. What may be of more concern is that Twitter allows the segregation of subcultures and races more easily than previous options. VACYBER 14:09, 4 February 2014 (EST)




I have been following the most recent work of Lawrence Lessig for about a year, so it's exciting to read "Code 2.0" and make connections between that and his work on copyright law, amateur creativity, Creative Commons, etc. By providing some background on the US government's inclination towards "indirect" regulation, Lessig paints a frightening picture of the extent to which the state can control entities for its own benefit. The case of New York v. US focuses on the question of indirection and the states; it disallows the federal government from co-opting the states for its own ends. In effect, the case establishes that the government must take responsibility for its actions and remain transparent in its interactions with the states.

My question, however, is why there isn't such a precedent for indirection and the American people. Rust v. Sullivan is a prime example of the government's indirect regulation of its citizens. By ordering doctors who work in government-funded clinics to discourage the use of abortion as a family planning method, the Reagan administration furthered its aim to reduce the incidence of abortion. The government's lack of transparency in using doctors to discourage their patients from obtaining abortions is most disturbing. A patient has no way to discern the state's motives, which masquerade behind the advice of a medical professional.

A somewhat similar issue occurred (and continues to occur) in the deeding of land prior to 1948. Such deeds prevented the property they covered from being sold to people of a particular race. While these restrictions are no longer enforceable, their remnants are still very much alive in the US today. As Lessig explained, communities remained segregated by "a thousand tiny inconveniences of architecture and zoning… highways without easy crossings were placed between communities… railroad tracks were used to divide." Despite the fact that integration is made difficult by these subtle methods of control, the most troubling part is that it is so very challenging to see the link between the regulation and its consequence. The government's lack of transparency, while a rather ingenious way to accomplish its own goals, is what is so threatening to our liberty. Lessig ends by suggesting that cyberspace is a new terrain in which the government can wield power inconspicuously and endanger our freedom.

Lrsanchez 14:50, 4 February 2014 (EST)




IMPROVING SOCIAL JUSTICE AND ACCELERATING ECONOMIC DEVELOPMENT

Traditionally, colleges and universities limit the number of students admitted into their institutions primarily due to resource constraints. But with the internet, everyone can have access to higher education, regardless of their prior academic failures.

And higher education can even be made almost free! This brings liberty and freedom to the weak and poor. Economic progress can be accelerated. Is this possible? Is this desirable?


WHY THE GOVERNMENT MUST OWN THE COUNTRY'S INTERNET BACKBONE

In the Philippines, the internet backbone is mainly owned and operated by profit-oriented private corporations. Hence, the poor have no access to the internet. With over 40% of the population, or some 40 million Filipinos, in poverty, and with internet infrastructure in most schools grossly inadequate or absent, only the government can remedy the situation by owning a substantial part of the country's internet backbone. Profit opportunities can still exist for corporations if there are two separate internet backbones: one solely for government administration and education, and the other for private entertainment and commerce.

Ichua 15:32, 4 February 2014 (EST)



Wikipedia is offered in other languages, a feature offered almost from its inception. How does Wikipedia get around the challenge where, for example, the English and German pages on the same subject feature different citations, or where one page has more depth than the other? This would make a great deal of knowledge inaccessible to people who don't speak the language. Does monolingualism emerge as a barrier for Wikipedia?

Marissa1989 15:43, 4 February 2014 (EST)




I could not agree more with Ichua. Your point about colleges and universities being somewhat limited due to resource constraints makes me think of tech and educational revolutionaries such as Salman Khan and his YouTube channel. Although his efforts are not mainstream yet, they are a good example of how the internet could bring freedom, social justice, economic improvement, and access to higher education to the weak and poor. The same goes for "edX" and other disruptive technologies that could very well contribute to knowledge economies now and in the future.

cheikhmbacke 15:42, 4 February 2014 (EST)




An introduction to a "dot's" life brings scrutiny to the constructs of regulation through the market, architecture, law, and social norms. As we engage in our conversations about cyberspace, it will be interesting to see which of the four areas outlined will prove to be the most critical, or whether they will all hold equal weight in how we grow as a society online.

Melissaluke 15:51, 4 February 2014 (EST)