Chapter 9: Justice and Development
Toward Adopting Commons-Based Strategies for Development
The mainstream understanding of intellectual property by its dominant policy-making institutions-the Patent Office and U.S. trade representative in the United States, the Commission in the European Union, and the World Intellectual Property Organization (WIPO) and Trade-Related Aspects of Intellectual Property (TRIPS) systems internationally-is that strong protection is good, and stronger protection is better. In development and trade policy, this translates into a belief that the primary mechanism for knowledge transfer and development in a global information economy is for all nations, developing as well as developed, to ratchet up their intellectual property law standards to fit the most protective regimes adopted in the United States and Europe. As a practical political matter, the congruence between the United States and the European Union in this area means that this basic understanding is expressed in the international trade system, in the World Trade Organization (WTO) and its TRIPS agreement, and in international intellectual property treaties, through the WIPO. The next few segments present an alternative view. Intellectual property as an institution is substantially more ambiguous in its effects on information production than the steady drive toward expansive rights would suggest. The full argument is in chapter 2.
Intellectual property is particularly harmful to net information importers. In our present world trade system, these are the poor and middle-income nations. Like all users of information protected by exclusive rights, these nations are required by strong intellectual property rights to pay more than the marginal cost of the information at the time that they buy it. In the standard argument, this is intended to give producers incentives to create information that users want. Given the relative poverty of these countries, however, practically none of the intellectual-property-dependent producers develop products specifically with returns from poor or even middle-income markets in mind. The pharmaceutical industry receives about 5 percent of its global revenues from low- and middle-income countries. That is why we have so little investment in drugs for diseases that affect only those parts of the world. It is why most agricultural research focused on the poorer areas of the world has been public-sector and nonprofit. Under these conditions, the above-marginal-cost prices paid in these poorer countries are purely regressive redistribution. The information, knowledge, and information-embedded goods paid for would have been developed in expectation of rich-world rents alone. The prospect of rents from poorer countries does not affect their development. It affects neither the rate nor the direction of research and development. It simply places some of the rents that pay for technology development in the rich countries on consumers in poor and middle-income countries. The morality of this redistribution from the world's poor to the world's rich has never been confronted or defended in the European or American public spheres. It simply goes unnoticed. When crises in access to information-embedded goods do appear-such as in the HIV/AIDS access-to-medicines crisis-they are seldom tied to this basic institutional choice.
In our trade policies, Americans and Europeans push for ever-stronger protection. We thereby systematically benefit those who own much of the stock of usable human knowledge. We do so at the direct expense of those who need access to knowledge in order to feed themselves and heal their sick.
The practical politics of the international intellectual property and trade regime make it very difficult to reverse the trend toward ever-increasing exclusive property protections. The economic returns to exclusive proprietary rights in information are highly concentrated in the hands of those who own such rights, while the costs are widely diffused across the populations of both the developing and developed world. The basic inefficiency of excessive property protection is hard to grasp against the intuitive, but mistaken, Economics 101 belief that property is good, more property is better, and intellectual property must be the same. The result is that pressures on the governments that represent exporters of intellectual property rights permissions-in particular, the United States and the European Union-come in this area mostly from the owners, and they continuously push for ever-stronger rights. Monopoly is a good thing to have if you can get it. Its value for rent extraction is no smaller for a database- or patent-based company than it is for the dictator's nephew in a banana republic. However, its value to these supplicants does not make it any more efficient or desirable.
The political landscape is, however, gradually beginning to change. Since the turn of the twenty-first century, and particularly in the wake of the urgency with which the HIV/AIDS crisis in Africa has infused the debate over access to medicines, a growing public interest advocacy movement has focused on the intellectual property trade regime. This movement, however, confronts a highly playable system. A victory for developing-world access in one round in the TRIPS context always leaves other places to construct mechanisms for exclusivity. Bilateral trade negotiations are one domain that is beginning to play an important role. In these, the United States or the European Union can force a rice- or cotton-exporting country to concede a commitment to strong intellectual property protection in exchange for favorable treatment of its core export. The intellectual property exporting nations can then go to WIPO and push for new treaties based on the emerging international practice of bilateral agreements. These, in turn, can cycle back and be generalized and enforced through the trade regimes. Another approach is for the exporting nations to change their own laws, and then drive higher standards elsewhere in the name of "harmonization." Because the international trade and intellectual property system can be played and manipulated in these ways, systematic resistance to the expansion of intellectual property laws is difficult.
The promise of the commons-based strategies explored in the remainder of this chapter is that they can be implemented without changes in law-either national or international. They are paths that the emerging networked information economy has opened to individuals, nonprofits, and public-sector organizations that want to act on their own to improve human development in the poorer regions of the world. As with decentralized speech for democratic discourse, and collaborative production by individuals of the information environment they occupy as autonomous agents, here too we begin to see that self-help and cooperative action outside the proprietary system offer an opportunity for those who wish to pursue it. In this case, it is an opportunity to achieve a more just distribution of the world's resources and a set of meaningful improvements in human development. Some of these solutions are "commons-based," in the sense that they rely on free access to existing information that is in the commons, and they facilitate further use and development of that information and those information-embedded goods and tools by releasing their information outputs openly, and managing them as a commons, rather than as property. Some of the solutions are specifically peer-production solutions. We see this most clearly in software, and to some extent in the more radical proposals for scientific publication. I will also explore here the viability of peer-production efforts in agricultural and biomedical innovation, although in those fields, commons-based approaches grafted onto traditional public-sector and nonprofit organizations at present offer the more clearly articulated alternatives.
The software industry offers a baseline case because of the proven large scope for peer production in free software. As in other information-intensive industries, government funding and research have played an enormously important role, and university research provides much of the basic science. However, the relative role of individuals, nonprofits, and nonproprietary market producers is larger in software than in the other sectors. First, two-thirds of revenues derived from software in the United States are from services and do not depend on proprietary exclusion. Like IBM's "Linux-related services" category, for which the company claimed more than two billion dollars of revenue for 2003, these services do not depend on exclusion from the software, but on charging for service relationships.[7] Second, some of the most basic elements of the software environment-like standards and protocols-are developed in nonprofit associations, like the Internet Engineering Task Force or the World Wide Web Consortium. Third, the role of individuals engaged in peer production-the free and open-source software development communities-is very large. Together, these make for an organizational ecology highly conducive to nonproprietary production, whose outputs can be freely usable around the globe. The other sectors have some of these components to varying degrees, and commons-based strategies for development can focus on filling in the missing components and on leveraging the nonproprietary components already in place.
In the context of development, free software has the potential to play two distinct and significant roles. The first is offering low-cost access to high-performing software for developing nations. The second is creating the potential for participation in software markets based on human ability, even without access to a stock of exclusive rights in existing software. At present, there is a movement in both developing economies and the most advanced ones to increase reliance on free software. In the United States, the President's Information Technology Advisory Committee advised the president in 2000 to increase the use of free software in mission-critical applications, citing the high quality and dependability of such systems. To the extent that quality, reliability, and ease of self-customization are consistently better with certain free software products, they are attractive to developing-country governments for the same reasons that they are to the governments of developed countries. In the context of developing nations, the primary additional arguments that have been made include cost, transparency, freedom from reliance on a single foreign source (read, Microsoft), and the potential for local software programmers to learn the program, acquire skills, and thereby easily enter the global market with services and applications for free software.[8] The question of cost, despite the confusion that often arises from the word "free," is not obvious. It depends to some extent on the last hope-that local software developers will become skilled in the free software platforms. The cost of software to any enterprise includes the extent, cost, and efficacy with which the software can be maintained, upgraded, and fixed when errors occur. Free software may or may not involve an up-front charge. Even if it does not, that does not make it cost-free.
However, free software enables an open market in free software servicing, which in turn improves and lowers the cost of servicing the software over time. More important, because the software is open for all to see and because developer communities are often multinational, local developers can come, learn the software, and become relatively low-cost software service providers for their own government. This, in turn, helps realize the low-cost promise over and above the licensing fees avoided. Other arguments in favor of government procurement of free software focus on the value of transparency of software used for public purposes. The basic thrust of these arguments is that free software makes it possible for constituents to monitor the behavior of machines used in government, to make sure that they are designed to do what they are publicly reported to do. The most significant manifestation of this sentiment in the United States is the hitherto unsuccessful but fairly persistent effort to require states to use voting machines that run free software or, at a minimum, software whose source code is open for public inspection. This consideration, if valid, is equally applicable to developing nations. The concern with independence from a single foreign provider, in the case of operating systems, is again not purely a developing-nation concern. Just as the United States required American Marconi to transfer its assets to an American company, RCA, so that it would not be dependent on a foreign provider for a critical infrastructure, other countries may have similar concerns about Microsoft. Again, to the extent that this is a valid concern, it is so for rich nations as much as for poor, with the exceptions of the European Union and Japan, which likely do have bargaining power with Microsoft to a degree that smaller markets do not.
The last and quite distinct potential gain is the possibility of creating a context and an anchor for a free software development sector based on service. This was cited as the primary reason behind Brazil's significant push to use free software in government departments and in the telecenters that the federal government is setting up to provide Internet access to some of its poorer and more remote areas. Software services are a very large industry: in the United States, roughly twice the size of the movie and video industry. Software developers from low- and middle-income countries can participate in the growing free software segment of this market on the strength of their skills alone. Unlike service providers in the proprietary domain, they need not buy licenses to learn and practice their trade. Moreover, if Brazil, China, India, Indonesia, and other major developing countries were to rely heavily on free software, then the "internal market," within the developing world, for free software-related services would become very substantial. Building public-sector demand for these services would be one place to start. Moreover, because free software development is a global phenomenon, free software developers who learn their skills within the developing world would be able to export those skills elsewhere. Just as India's call centers leverage the country's colonial past, with its resulting broad availability of English speakers, so too countries like Brazil can leverage their active free software development communities to provide software services for free software platforms anywhere in the developed and developing worlds. With free software, the developing-world providers can compete as equals. They do not need access to permissions to operate. Their relationships need not replicate the "outsourcing" model so common in proprietary industries, where permission to work on a project is the point of control over the ability to do so.
There will still be branding issues that undoubtedly will affect access to developed markets. However, there will be no baseline constraints of minimal capital necessary to enter the market and try to develop a reputation for reliability. As a development strategy, then, utilization of free software achieves transfer of information-embedded goods for free or at low cost. It also transfers information about the nature of the product and its operation-the source code. Finally, it enables transfer, at least potentially, of opportunities for learning by doing and of opportunities for participating in the global market. These would depend on knowledge of a free software platform that anyone is free to learn, rather than on access to financial capital or intellectual property inventories as preconditions to effective participation.
Scientific publication is a second sector where a nonproprietary strategy can be implemented readily and is already developing to supplant the proprietary model. Here, the existing market structure is quite odd, in a way that likely makes it unstable. Authoring and peer review, the two core value-creating activities, are done by scientists who perform neither task in expectation of royalties or payment. The model of most publications, however, is highly proprietary. A small number of business organizations, like Elsevier Science, control most of the publications. Alongside them, professional associations of scientists also publish their major journals using a proprietary model. Universities, whose scientists need access to the papers, incur substantial cost burdens to pay for the publications as a basic input into their own new work. While the effects of this odd system are heavily felt in universities in rich countries, subscription rates that run into the thousands of dollars per title make access to up-to-date scientific research prohibitive for universities and scientists working in poorer economies. Nonproprietary solutions are already beginning to emerge in this space. They fall into two large clusters.
The first cluster is closer to the traditional peer-review publication model. It uses Internet communications to streamline the editorial and peer-review system, but still depends on a small, salaried editorial staff. Instead of relying on subscription payments, it relies on other forms of payments that do not require charging a price for the outputs. In the case of the purely nonprofit Public Library of Science (PLoS), the sources of revenue combine author payments for publication, philanthropic support, and university memberships. In the case of the for-profit BioMed Central, based in the United Kingdom, it is a combination of author payments, university memberships, and a variety of customized derivative products like subscription-based literature reviews and customized electronic update services. Author payments-fees authors must pay to have their work published-are built into the cost of scientific research and included in grant applications. In other words, they are intended to be publicly funded. Indeed, in 2005, the National Institutes of Health (NIH), the major funding agency for biomedical science in the United States, announced a requirement that all NIH-funded research be made freely available on the Web within twelve months of publication. Both PLoS and BioMed Central have waiver processes for scientists who cannot pay the publication fees. The articles on both systems are available immediately for free on the Internet. The model exists. It works internally and is sustainable as such. What is left in determining the overall weight that these open-access journals will have in the landscape of scientific publication is the relatively conservative nature of universities themselves. The established journals, like Science or Nature, still carry substantially more prestige than the new journals.
As long as this is the case, and as long as hiring and promotion decisions continue to be based on the prestige of the journal in which a scientist's work is published, the ability of the new journals to replace the traditional ones will be curtailed. Some of the established journals, however, are operated by professional associations of scientists. There is an internal tension between the interests of the associations in securing their revenue and the growing interest of scientists in open-access publication. Combined with the apparent economic sustainability of the open-access journals, it seems likely that some of these established journals will shift over to the open-access model. At a minimum, policy interventions like those proposed by the NIH will force traditional publications to adapt their business model by making access free after a few months. The point here, however, is not to predict the overall likely success of open-access journals. It is to combine them with what we have seen happening in software as another example of a reorganization of the components of the industrial structure of an information production system. Individual scientists, government funding agencies, nonprofits and foundations, and nonproprietary commercial business models can create the same good-scientific publication-but without the cost barrier that the old model imposed on access to its fruits. Such a reorientation would significantly improve the access of universities and physicians in developing nations to the most advanced scientific publication.
The second approach to scientific publication more closely parallels free software development and peer production. This is typified by arXiv and the emerging practices of self-archiving or self-publishing. arXiv.org is an online repository of working papers in physics, mathematics, and computer science. It started out focusing on physics, and that is where it has become the sine qua non of publication in some subdisciplines. The archive does not perform review except for technical format compliance. Quality control is maintained by postpublication review and commentary, as well as by hosting updated versions of the papers with explanations (provided by authors) of the changes. It is likely that arXiv.org has become so successful in physics because of the very small and highly specialized nature of the discipline. The universe of potential readers is small, and their capacity to distinguish good arguments from bad is high. The reputation effects of poor publications are likely immediate.
While arXiv offers a single repository, a much broader approach has been the developing practice of self-archiving. Academics post their completed work on their own Web sites and make it available freely. The primary limitation of this mechanism is the absence of an easy, single location where one can search for papers on a topic of concern. And yet we are already seeing the emergence of tagging standards and protocols that allow anyone to search the universe of self-archived materials. Once completed, such a development would in principle render archiving at single points of reference unnecessary. The University of Michigan Digital Library Production Service, for example, has developed a harvesting and search service called OAIster (pronounced like oyster, with the tagline "find the pearls"), which combines the acronym of the Open Archives Initiative with the "ster" ending made popular in reference to peer-to-peer distribution technologies since Napster (Aimster, Grokster, Friendster, and the like). The basic impulse of the Open Archives Initiative is to develop a sufficiently refined set of metadata tags so that the materials of anyone who archives them with OAI-compliant tagging can be searched easily, quickly, and accurately on the Web. In that case, a general Web search becomes a targeted academic search in a "database" of scientific publications. The database, however, is actually a network of self-created, small personal databases that comply with a common tagging and search standard. Again, my point here is not to explore the details of one or another of these approaches. If scientists and other academics adopt this practice of self-archiving coupled with standardized interfaces for global, well-delimited searches, the problem of lack of access to academic publications because of their high cost will be eliminated.
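The mechanics of this idea can be sketched in a few lines. The sketch below is a toy illustration, not the actual OAI-PMH protocol: it uses a handful of simplified Dublin Core-style fields and replaces HTTP harvesting with local lists, and all titles, authors, and URLs in it are hypothetical. What it shows is how a common tagging standard lets a harvester aggregate independent personal archives and search them as if they were one database.

```python
# Toy sketch of OAI-style metadata harvesting (assumed, simplified
# model; the real Open Archives Initiative protocol is OAI-PMH).
from dataclasses import dataclass


@dataclass
class DublinCoreRecord:
    """A minimal, Dublin Core-like metadata record for one paper."""
    title: str
    creator: str
    subject: list       # free-form keyword tags
    identifier: str     # e.g., URL of the self-archived paper


class Harvester:
    """Aggregates records from many independent personal archives."""

    def __init__(self):
        self.records = []

    def harvest(self, archive):
        # In OAI-PMH this would be an HTTP request (verb=ListRecords);
        # here each "archive" is simply a local list of records.
        self.records.extend(archive)

    def search(self, keyword):
        # Search the aggregated index by title or subject tag.
        kw = keyword.lower()
        return [r for r in self.records
                if kw in r.title.lower()
                or any(kw in s.lower() for s in r.subject)]


# Two hypothetical "personal archives", each just a tagged record list.
site_a = [DublinCoreRecord("Quantum Error Correction Codes", "A. Author",
                           ["physics", "quantum computing"],
                           "https://example.edu/~aauthor/qecc.pdf")]
site_b = [DublinCoreRecord("Commons-Based Peer Production", "B. Writer",
                           ["economics", "networks"],
                           "https://example.org/~bwriter/cbpp.pdf")]

h = Harvester()
h.harvest(site_a)
h.harvest(site_b)
hits = h.search("quantum")   # finds the one record tagged "quantum"
```

In the real system, each archive exposes its records over HTTP through OAI-PMH requests such as ListRecords, and the harvester's aggregated index is what a service like OAIster lets users query; the point of the standard is that the harvester never needs to know anything about an archive beyond its compliance with the tagging format.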
Other types of documents, for example, primary- and secondary-education textbooks, are at a much more rudimentary stage in the development of peer-production models. First, it should be recognized that illiteracy and low educational completion in the poorer areas of the world are largely a result of a lack of schoolteachers, of physical infrastructure for classrooms, of demand for children's schooling among parents who are themselves illiterate, and of effectively enforced compulsory education policy. The cost of textbooks is only one component of the problem of cost; the opportunity cost of children's labor is probably the largest factor. Nonetheless, outdated and poor-quality teaching materials are often cited as one limit on the educational achievement of those who do attend school. The costs of books, school fees, uniforms, and stationery can amount to 20-30 percent of a family's income.[9] The component of the problem contributed by the teaching materials may be alleviated by innovative approaches to textbook and education-materials authoring. Chapter 4 already discussed some textbook initiatives. The most successful commons-based textbook-authoring project, which is also the most relevant from the perspective of development, is the South African project Free High School Science Texts (FHSST). The FHSST initiative is more narrowly focused than the broader efforts of Wikibooks or the California initiative, more managed, and more successful. Nonetheless, in three years of substantial effort by a group of dedicated volunteers who administer the project, its product is one physics high school text and advanced drafts of two other science texts. The main constraint on the efficacy of collaborative textbook authoring is that compliance requirements imposed by education ministries demand a great degree of coherence, which constrains the degree of modularity that these text-authoring projects can adopt.
The relatively large-grained contributions required limit the number of contributors, slowing the process. The future of these efforts is therefore likely to be determined by the extent to which their designers are able to find ways to make finer-grained modules without losing the coherence required for primary- and secondary-education texts. Texts at the post-secondary level likely present less of a problem, because of the greater freedom instructors have to select texts. This allows an initiative like MIT's OpenCourseWare to succeed. That initiative provides syllabi, lecture notes, problem sets, and other materials from over 1,100 courses. The basic creators of the materials are paid academics who produce these materials for one of their core professional roles: teaching college- and graduate-level courses. The content is, by and large, a "side effect" of teaching. What is left to be done is to integrate, create easy interfaces and search capabilities, and so forth. The university funds these functions through its own resources and dedicated grant funding. In the context of MIT, then, these functions are performed on a traditional model-a large, well-funded nonprofit provides an important public good through the application of full-time staff aimed at non-wealth-maximizing goals. The critical point here was the radical departure of MIT from the emerging culture of the 1980s and 1990s in American academia. When other universities were thinking of "distance education" in terms of selling access to taped lectures and materials so as to raise new revenue, MIT asked what its basic mandate to advance knowledge and educate students entailed in a networked environment. The answer was to give anyone, anywhere, access to the teaching materials of some of the best minds in the world. As an intervention in the ecology of free knowledge and information and an act of leadership among universities, the MIT initiative was therefore a major event.
As a model for organizational innovation in the domain of information production generally and the creation of educational resources in particular, it was less significant.
Software and academic publication, then, offer the two most advanced examples of commons-based strategies employed in a sector whose outputs are important to development, in ways that improve access to basic information, knowledge, and information-embedded tools. Building on these basic cases, we can begin to see how similar strategies can be employed to create a substantial set of commons-based solutions that could improve the distribution of information germane to human development.