Prediction Markets
Revision as of 00:57, 30 September 2011
Our seminar this week aims to discuss novel and/or controversial applications of prediction markets. Our study will be organized into three parts. First, before class everyone will get acquainted with prediction markets as they now exist and understand the basic economic theory behind their success -- this page and the first required reading (the Wolfers/Zitzewitz article) will cover that background. We also tried a (failed!) experiment with creating our own prediction contracts on intrade.net that inadvertently taught a lesson about how difficult it is to get efficient information markets (more below). Second, we will examine a few proposals for how prediction markets might be used at the frontier. The readings cover Professor Wolfers' crime prediction market, the DARPA project, and the Google Flu Tracker, as well as a series of short ideas from Professor Abramowicz; and class members will also propose their own innovative applications. Professor Wolfers will appear by videoconference to discuss these questions. Third and finally, we will try to figure out what prediction markets can and cannot do well as applied to law or government. At this point, we will consider skeptical arguments, both moral objections and practical/theoretical worries. Ultimately, the class will come to some conclusions about what uses prediction markets might have in the future.
What Prediction Markets Do Now
The most high-profile examples of prediction markets -- the Iowa Electronic Markets and Intrade -- started by focusing primarily on predicting election outcomes and related political and financial events. Now they have expanded to cultural (Oscars) and technological (X Prize) events as well. The status of the commercial markets is uncertain; for example, Tradesports announced recently that it is closing. And questions remain about the legality of prediction markets, whether the CFTC will regulate them, and whether they will be taxed. The basic idea is that prediction markets are more accurate than experts or traditional public opinion polls, because among other advantages they require participants to offer predictions backed by some conviction (money) and because they aggregate knowledge held by many minds. Private companies, like Google and Hewlett-Packard, are thus increasingly employing internal and external prediction markets to better forecast sales or to predict which products are likely to be most popular, among other things.
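The incentive logic behind this accuracy claim can be made concrete with a small sketch. In a standard binary prediction contract that pays $1 if the event occurs, a risk-neutral trader profits in expectation by buying whenever their estimated probability exceeds the current price, so informed trading pushes the price toward the crowd's best probability estimate. (This is a textbook illustration, not a description of any particular exchange's contract terms; the function name is ours.)

```python
def expected_profit_per_contract(q, p):
    """Expected profit from buying one $1-payoff binary contract at price p,
    for a trader who believes the event occurs with probability q."""
    return q * 1.0 - p  # q chance of a $1 payoff, minus the price paid

# A contract trades at 40 cents, but a well-informed trader thinks the
# event is 55% likely. Buying has positive expected value, and that
# buying pressure moves the price up toward 0.55.
print(round(expected_profit_per_contract(0.55, 0.40), 2))  # 0.15

# Conversely, a trader who thinks the event is only 30% likely expects
# to lose by buying at 40 cents, so they sell -- pushing the price down.
print(round(expected_profit_per_contract(0.30, 0.40), 2))  # -0.1
```

Because every trade is backed by money, participants are penalized for overstating weak hunches and rewarded for acting on genuine information, which is the core advantage over a poll.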
Novel Applications of Prediction Markets
More central to our purposes are increasingly prevalent proposals to use prediction markets to help guide law and policymaking. One of the earlier and more controversial applications was DARPA's planned terrorism futures market, which would have helped the Pentagon predict terrorist attacks, assassinations, and the like but was killed following political outcry. Somewhat less controversial are proposals to use prediction markets in areas with less potential for moral hazard. Some scholars have argued that they could be useful for corporate governance, for example as an alternative to disclosure requirements aimed at insider trading and other problems -- the idea is that prediction markets in which corporate insiders participated would provide the desirable information, at lower cost and greater accuracy. Professors Wolfers, Henderson, and Zitzewitz, in one of the required readings, describe how prediction markets might be useful to policymakers allocating resources to combat crime. Michael Abramowicz's recent book Predictocracy offers an extended analysis of the usefulness of prediction markets in law and decisionmaking by courts and legislatures (among other institutions).
Prediction markets offer a lot of promise but of course are far from a perfect tool, perhaps especially as applied to policymaking. Skeptics have raised several concerns. Markets are not very helpful when information is secret. If government changes policy in response to the market results, it might be difficult to identify winning conditions. Trading volume is a recurring concern. If volume is too low, not only may results be inaccurate, but the market might even be subject to manipulation by a few well-endowed investors. Finding a non-ambiguous proposition for betting is more challenging when applied to real-world events, especially if the result cannot be easily quantified. Public mistrust of markets also limits their utility to policymaking, as evidenced by the widespread concerns among politicians about creating moral hazards. Moreover, markets are only as good as the information they receive, and some skeptics are upset to learn that in many cases markets just have the effect of re-articulating conventional wisdom. These and other skeptical concerns need to be addressed before real-world legal or policy markets will be practical.
Justin Wolfers (joining via videoconference) is a tenured Associate Professor of Business and Public Policy at the University of Pennsylvania's Wharton School. He is a leading expert on prediction markets and has published articles on them in, among others, Economica, Science, and the Journal of Economic Perspectives. Professor Wolfers also serves as an Associate Editor of the Journal of Prediction Markets and as an advisor to seven different for-profit and not-for-profit organizations that work on prediction markets.
We aim for the class to proceed as follows:
- Professor Wolfers will lead off with an overview of how prediction markets work, their potential for use by private and public policymakers, and the most cutting-edge or salient issues that prediction market proponents face today
- Professor Fisher will offer a brief response
- We will then turn to questions and class discussion
- Finally, we'll discuss and critique the policy applications that class members themselves came up with
Justin Wolfers and Eric Zitzewitz | 18 Journal of Economic Perspectives 107 (2004) | (19 pages, essential background--only one equation for the math-phobic)
Novel Applications of Prediction Markets
Miguel Helft | New York Times | November 11, 2008 | (1 page, useful diagram)
M. Todd Henderson, Justin Wolfers, and Eric Zitzewitz | Olin Working Paper No. 402 | April 2008 | (5 pages, just read abstract and introduction)
Michael Abramowicz | Excerpts from Predictocracy | A series of short suggestions on novel applications of prediction markets to law and government
Carl Hulse of the NYT and Wire Services | July 2003 | (~6 pages)
Discussion of the DARPA Markets in the Senate | July 28 and 29, 2003 | (2 short comments, mostly for humor value)
Student Note | 122 Harvard Law Review 1217 (2009) | (21 pages, a short but good overview of defenses and criticisms of prediction markets, especially as applied to law)
Michael Abramowicz | Excerpts from Predictocracy | Professor Abramowicz's discussion of some advantages and problems of Prediction Markets
- A brief paper describing the failed DARPA terrorism futures market, by a Professor at the Naval Postgraduate School. Robert Looney, DARPA’s Policy Analysis Market for Intelligence: Outside the Box or Off the Wall?, STRATEGIC INSIGHTS (2003).
- Cass Sunstein's Infotopia
- Robin Hanson's Futarchy proposal
- Cass R. Sunstein, Group Judgments: Statistical Means, Deliberation, and Information Markets, 80 N.Y.U. L. Rev. 962 (2005)
- Michael Abramowicz & M. Todd Henderson, [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=928896 Prediction Markets for Corporate Governance], Notre Dame L. Rev. (2007)
- Chapter 10 of Michael Abramowicz's book Predictocracy
We asked students to participate in two pre-class activities; the tasks and the results are below.
Creating a Market on Intrade.net
Several weeks ago the class started an experiment with its own prediction markets. Here was the plan:
- "Everyone will create their own prediction market contract via intrade.net. Once you set up an account there, you can then create your own contract at http://www.intrade.net/market/create/start.faces. Your contract can be about anything you like (doesn't have to be legal), but it should conclude before our class date, April 27, 2009. The idea is (1) for everyone to get a feel for how prediction markets operate; (2) to see what kinds of contracts get enough volume to be successful and what kinds don't; (3) to see how accurate the predictions are, how they change over time, etc.
- "Once you've signed up and created a contract -- please do so by Friday, March 20 -- you can invite other people in the class to bet on your contract to give it some initial starting volume. (Intrade.net gives everyone $10k in play money upon signing up, so obviously if people want to bet on the wider world's contracts, that's great too). Everyone should list the contracts they've created below to facilitate class participation on intrade (and so that we can generate a variety of different kinds of contracts). At the actual class session on April 27, everyone should come in prepared to briefly discuss what happened with their contract (feel free to create more than one)."
Here were the contracts that the class created:
- Matthew: 2009 Virginia Attorney General Race
- CKennedy: A Million Chinese Charterists
- JG: A major world airline switches the default pilot for its flights over to computer (i.e. perhaps human copilot, but no human pilot) by 2020
From a technological perspective, the first part of the experiment worked. Most students were able to create the markets easily and the diversity of proposals was interesting. But the markets had almost no trading on them and did not yield any useful information. The reasons they failed, though, were revealing about prediction markets -- which is why we somewhat optimistically labeled the experiment a successful failure. Here is why we suspect the markets did not work. First, they were on the play money side of Intrade, Intrade.net. Although some experiments have found play money markets to be as accurate as real money markets, that finding assumes that the play money markets have similar volume. Of course, the play/real money distinction has an impact on how much volume each market has, and the lack of a financial incentive may have proved lethal here. Second, the topics were mostly obscure -- the type of stuff that law students and tech enthusiasts found interesting, but without much appeal to a broader audience -- and this definitely could have impacted trading volume. Third, the user-created area on Intrade.net is a quiet alcove, so it does not get much traffic, period. Fourth, beyond trading volume, the Intrade markets were quirkily designed with a binary input (more likely or less likely) rather than a full probability scale. This reduced traders' ability to express the specificity of their bets, which, as we have discussed, is one primary benefit of prediction markets over voting. Fifth, and relatedly, the binary input, as opposed to a price, did not allow the money (or simulation of money) psychology to work -- in short, there was no way to express investor confidence. So for all these reasons most of the markets got nowhere. In setting up the markets, we focused on making sure everyone understood the technology and could get their experiment started.
But if we had watched our own test markets for a while, we would have realized that the experiment was unlikely to produce any useful information. Now perhaps we could have planned the failure in advance as a teaching tool, but it would have been more interesting to find a way to create markets that actually worked.
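The binary-input problem can be seen by contrast with a price-based design. Below is a minimal sketch of a logarithmic market scoring rule (LMSR) automated market maker -- a standard mechanism in the prediction-market literature, and emphatically not what Intrade.net used; the class name and parameter values are ours. With a price, a confident trader can buy many shares and move the probability a lot, while a hesitant one buys few and moves it a little, which is exactly the confidence signal the binary input threw away.

```python
import math

class BinaryLMSR:
    """Toy LMSR market maker for a yes/no contract (hypothetical sketch)."""

    def __init__(self, b=100.0):
        self.b = b          # liquidity parameter: larger b = prices move less per bet
        self.q_yes = 0.0    # YES shares sold so far
        self.q_no = 0.0     # NO shares sold so far

    def price_yes(self):
        """Current YES price, readable as the market's probability estimate."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def _cost(self):
        # LMSR cost function; trades are priced as differences in this quantity.
        return self.b * math.log(math.exp(self.q_yes / self.b) +
                                 math.exp(self.q_no / self.b))

    def buy_yes(self, shares):
        """Buy YES shares; returns the dollar amount the trader pays."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

m = BinaryLMSR(b=100.0)
print(round(m.price_yes(), 2))  # 0.5 -- market starts undecided
m.buy_yes(50)                   # a confident trader backs YES heavily
print(round(m.price_yes(), 2))  # 0.62 -- the size of the bet moved the price
```

A timid trader buying 5 shares instead of 50 would barely move the price, so the final number encodes aggregate confidence, not just a head count of "more likely" clicks.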
Class-Generated Applications of Prediction Markets for Law and Government
As part of our discussion of novel uses of prediction markets, we hope not only to entertain critical perspectives but also to be constructive. To that end, we're asking everyone in the class to come up with a suggestion for how to use prediction markets in law or government. The suggestion can be subject-specific (e.g., using prediction markets to guide crime policy or to help the DOJ and FTC analyze the risks of proposed mergers) or can cut across subjects and focus on institutions or modes of decisionmaking (e.g., using prediction markets in small claims court adjudication). But it should be specific enough that we can analyze drawbacks or advantages (e.g., not just something like, Congress should use prediction markets to help determine legislative priorities). See the Applications section of http://predictocracy.org/index.html for models and ideas. Please send your (brief) ideas to Elisabeth and Matt by Sunday the 26th at 5 pm; we'll likely discuss some in class and will definitely post them on the site later on.
The Ideas, Grouped in Loose and Overlapping Categories
Predicting Whether Policies Will Be Successful
- My idea is for the government to use prediction markets to gauge public opinion on how likely it is that a given statute, rule, or action would accomplish its purpose. For instance, prior to Congress voting on a bill, a prediction market could assess whether the bill would have a significant positive effect in the real world. The results could inform Congressional debate on whether to pass the bill, modify it, or reject it.
- I'm not a political scientist, but I easily believe that public confidence is a meaningful and desirable quality in any government program, and I can easily believe that confidence-inspiring programs are, all things being equal, better than those that don't inspire confidence. With that in mind, the FDIC, the US Dept. of Treasury, and others could use these markets to predict the confidence-inspiring factor of proposed bailout packages. Among those that roughly tend to distribute moneys in the same ways, we might prefer the ones that the public votes "will fix the financial crisis by the end of YYYY". This concept can broadly be applied to any proposed program. E.g., "If immediately implemented in full, the Pickens plan will improve our economy, national security, and environment in a cost-effective manner by 2012." Granted, I would hardly advise *anybody* to rely on the prediction market to choose between alternative programs except in the extremely narrow case that the programs' expert-predicted outcomes, and their methods of achieving their results, are highly similar.
- U.S. government agencies fund a small galaxy of projects, some of which are overlapping, and many of which never come to fruition. One person I've talked to spent more than 15 years working for contractors for a sub-cabinet agency, and not one of the half-dozen or more projects this person was assigned to was completed. These contractor staff knew which projects were more likely to be completed than others. Therefore we propose a market aimed at predicting which projects will be completed, versus many others that are cancelled after wasted spending.
Deciding Whether Particular Policies or Decisions are Politically Viable
- I wonder if there's a prediction market for nominations to federal government positions, like federal judgeships and cabinet appointments. If the public could make their views known that candidate X has sufficient support from the public, perhaps the current, overly-hostile environment surrounding confirmations could be alleviated somewhat. Of course, the extent to which this is true depends on something I don't quite understand, which is the prediction market's ability to alter the very prediction it is "about." In other words, will the bettors just be betting on whether or not the current, hostile Congress will be able to confirm potential nominee X? If so, that's not very useful. But if the prediction market could alter things somewhat, so that X is more confirmable since members of the general public have more input, then it could be useful.
- Have an internal prediction market limited to Congress and congressional staffers that would sell contracts on the likelihood of passage of particular pieces of legislation, judicial nominations, or other legislative actions. For instance, a market could be put in place for just the Senate and its staffers as to whether Judge Hamilton will be voted out of the judiciary committee within a specified timeframe (e.g. 1 month) and whether he will be confirmed to the 7th circuit within a specified timeframe (e.g. 2 months). In contrast to a public market, such internal markets will likely be more accurate since virtually all the relevant information is held by these few individuals. Such markets would ideally assist legislative leaders in setting legislative agendas by providing information as to whether it is sensible to even schedule a vote on a particular legislative action.
- A second prediction market idea would be to have a market identical to the legislator market proposed above except that it would be limited to members of the political media. Having both markets would enable (1) a comparison of the reliability of media (i.e. how much media market prices differed from legislator market prices), and (2) the direction of causation between media stories and legislator perceptions (i.e. whether media market prices lead legislator market prices or whether legislator market prices lead media market prices). If there were also public nationwide markets, further comparisons could be made as to how far apart public perceptions are from media and political perceptions. The information that can be gleaned from comparing prediction markets covering the same subjects but with different participants could prove quite interesting.
Aggregating or Obtaining Information Useful as an Input in Policymaking
- I think we should use prediction markets to determine the solvency of the various financial institutions for which the government is currently performing stress tests. By utilizing these markets (which can more accurately reflect the binary condition of solvency than can equity markets which include "option values" for scenarios where solvency emerges through some other means), the government can use the markets to evaluate whether these institutions are solvent. Any "stress test" must make a number of assumptions about the future state of the world and financial markets and a liquid market of educated investors would be more effective at making these assumptions than any one body.
- The government should use prediction markets to assign an initial price on carbon dioxide under a cap-and-trade regime. By canvassing the market through a prediction methodology, the government would reduce initial price volatility and mitigate the amount of money that it "leaves on the table" in selling the first batch of credits.
- The current outbreak of swine flu caught many people, especially the Mexican government, off guard. Would it be possible to use prediction markets to help determine when a pandemic is likely to occur? Could they help determine where to devote resources in the most efficient way, and to the areas that need it most? Or is health care fundamentally different from other fields, making government intervention fruitless?
- My proposed use of prediction markets is to fix common misconceptions about crime rates. It has been shown that people usually over-estimate the current crime rate (thus supporting "law and order" policy suggestions). I think we could use prediction markets to ask people and policymakers to predict whether crime rates will increase or decrease in their area. I think it will flush out misconceptions (if such do exist) and will contribute to promoting better-informed policies.
- Maybe we could use prediction markets to predict what the next stage of our energy market will look like? For a question on energy efficiency, we could ask what the most popular/wide-reaching advancement in energy efficiency will be in 5, 10, etc. years, OR which technology will be responsible for cutting the most carbon emissions 5, 10, etc. years from now: hybrids? electric cars and the infrastructure to support them? zero-energy or highly efficient homes? other? For a question on alternative energy, we could bet on which source of energy will be of highest use in 5, 10, etc. years: wind, nuclear, hydroelectric, solar, etc.?
- Prediction markets could be used to predict threats to cyber-infrastructure. The cyberwarfare idea is overhyped, but cybercrime is very real, and often, specialists and hobbyists both can see which technologies are exploitable without actually having a concrete answer as to how to exploit them. If the spread of new technologies were accompanied by a market in which people could make bets as to where the next major virus, worm, or targeted installation were to take place, the public and private sectors could concentrate their research dollars on addressing concerns with that code. The major problem: ensuring accurate reporting of "success" cases in order to properly reward winners: companies who have fallen prey to attack are rarely eager to share that information.
Prediction Markets as a Partial Substitute for Elections
- So here's my use of a prediction market by gov't - as an input for a "middle ground" solution between appointing and electing state judges. As far as I understand it, the basic impetus for electing state judges was that it was thought (god knows why) that the judicial system should better reflect prevailing public sentiment in the state and/or that state judges should be more accountable to the people. Yet as the recent Caperton case shows, there are some problems with judicial elections. My proposition would be to use an appointment system combined with a prediction market, which would give the governor (he's the one that appoints state judges, right?) a way of determining whom the public would like to see in office. If he rejects the public choice, he would have to face them at the next election, giving the public some credible clout (i.e. the prediction market would not be meaningless) in determining the judicial choices. This would follow a standard winner-takes-all market structure.
Deciding Whether to Investigate Internal Government Wrongdoing
- I think there should be markets for investigations of government officials for corruption. I think Ted Stevens provides a good example of this. Despite being ultimately vindicated, I think quite a number of people believed that he was crooked and would have been willing to wager that he'd eventually be investigated. The idea being that there exists wisdom amongst the electorate and contractors about which politicians are ultimately more crooked than others. The main reason why people might not be willing to bet against someone like Stevens is that he's too powerful to be taken down. Yet were Inspectors General and others willing to take advantage of prediction markets, that might change. So a market that said "I believe that charges will be brought against elected official X" would reflect more than anger from party opponents; it would be a way for a distributed understanding amongst the electorate that some official is corrupt to have an impact on information gathering that could lead to a prosecution. It would also make conspiracy theorists pay for their ideas. A market for "I believe Obama was not born in the US and ought to be investigated" would result in a wealth transfer from the crazies to those willing to bet against them.
- Using a normative prediction market to determine the desirability of investigating or prosecuting members of the Bush administration (or any administration) for war crimes. A prediction market might be desirable because it would present a picture of how the average legislator would view such actions, isolating the extent to which they would be perceived as purely ideological.
Prediction Markets for Law Firms and Litigants
- Law firms have a hard time making human resources decisions. Sometimes their summer classes are oversubscribed, sometimes undersubscribed. Sometimes they are too litigation-heavy, sometimes too corporate-heavy. Sometimes a lot of people in a class take a year for a clerkship, other times only a few. What about using prediction markets to help with recruiting decisions? There are many different possibilities. You could think of a law firm opening a prediction market to its summer class to get a sense of how many may end up doing a clerkship, etc. You could have a prediction market open to all of a firm's associates to get a prediction on what the firm's attrition rates will be in the upcoming years. You could have a prediction market open to all law students from top 20 schools to get a sense of whether litigation or corporate law is in vogue, whether students are leaning towards firm jobs or government jobs, etc.
- How about a prediction market for law firm layoffs? The real decision-makers would never participate, but lowly associates probably would, and in aggregate would have a lot of useful predictive information. It would work particularly well for the firm world since its component firms are so readily comparable, but in principle it could apply to any industry.
- Using prediction markets in high-stakes commercial litigation to help parties determine both outcome and expected damages in potential jury trials. This scenario would be limited to big cases, where a critical mass of people who are knowledgeable about the facts of the case (and perhaps who could also be jury members absent this knowledge - which would exclude people with substantive ties to either party) can participate in the market.
Why Prediction Markets Won't Work
- I'm very skeptical of the idea of using prediction markets in law and government. I'm fixated on two main objections: (1) if everyone knows that government decisions will be based on a prediction market, an interested party could and in most cases would simply invest lots of money in his desired outcome and thereby effectively buy policy; and (2) even apart from the issue of interest group influence, I don't see how these markets can be depended upon to generate clear (how to translate a price or a percentage into a yes or no?) or reliable (how to attract informed bettors and exclude uninformed ones?) answers. Working from this perspective, I was not able to come up with any aspect of law or government in which I could imagine use of a prediction market being a good or even feasible idea. I have therefore decided instead to give an example of a bad idea for use of prediction markets in government: FDA approval of pharmaceuticals. I think this example illustrates both of my objections. Drug companies desiring approval would use their considerable financial resources to move the market in their favor, and other potential market participants would lack both the information and the expertise necessary to weigh in in any meaningful way. I have trouble imagining casual bettors reading through pages and pages of witness statements as suggested in the Predictocracy small claims adjudication example, and I find it even less likely that a bettor would be interested in perusing the lengthy and technical records of clinical trials. Though this example is perhaps extreme (USPTO issuance of patents would play out similarly), I think the problems it illuminates are endemic to any attempts at basing law and government decisions on prediction markets.
Class Session Recap
The class got underway with a brief presentation by Professor Wolfers, setting up discussion, and a response by Professor Fisher. After a response to that by Professor Wolfers, we opened up discussion and questions to the class. Below is a summary of some of the major themes that drove discussion.
Empirics and Design
As Professor Wolfers noted, and discussed more fully here, whether and when prediction markets generate accurate forecasts is an empirical question. Their efficacy should be measured against the available alternatives, not against a baseline of perfection. And the markets should be considered and evaluated in a context-specific way. They are unlikely to be very effective at predicting low-probability events, for example, but may be better as a corporate management tool.
Accordingly, many of the toughest questions surrounding prediction markets are questions of design. Though design problems can be resolved, it may be easier or less costly to resolve them when applying prediction markets to some circumstances; it may be difficult or prohibitively costly to resolve them in others. A few of the design problems the class discussed are as follows. First, it may be difficult to create a contract with winning conditions that map onto desired policy outcomes. A betting market about which team will win the World Series is easy to formulate; it may be harder to reduce predictions about crime patterns to a contract, especially because of difficulties of measuring the underlying data or outcomes. Second, the markets may be subject to manipulation by the most interested parties. Third, markets rely on the participation of the uninformed, and so attracting those people is key to success. But since the markets also rely on punishing the uninformed, it becomes harder to recruit participants. Prediction markets may work best where biases that might lead the uninformed to bet are common, but mild.
Finally, we discussed the relevance of the recent failure to regulate economic markets, with attendant disastrous consequences. Is it especially questionable given that experience to increase reliance on market mechanisms like prediction markets or other kinds of futures markets?
Applications to Legal Questions
We intended the class to focus in a rough way on how prediction markets might be useful for government or law. The upshot of much of the discussion was that those uses may be limited. Prediction markets work poorly in situations where information is secret and the information holders have strong reasons to keep it secret, where participants are subject to many biases, and where the existence of the market may itself affect the policy outcome (Professor Fisher likened this last problem to the Heisenberg uncertainty principle). These characteristics may obtain with respect to lots of legal issues. At the same time, however, problems like policy feedback loops may be solved with the right kind of design. Further, to the extent we have no particularly promising solutions to a given problem for government, prediction markets, while flawed, might be comparatively superior.
Tradeoffs With Democratic Values
More generally, even if prediction markets work well, they may sacrifice other characteristics of decisionmaking that are of central value. Professor Michael Abramowicz has proposed using prediction markets as a substitute for legal decisionmaking by judges or juries, but adjudication may be a desirable mode of decisionmaking for non-instrumental reasons. Prediction markets may crowd out public deliberation and other types of decisionmaking that promote community rather than just the aggregation of individual bits of knowledge. At the same time, it may be possible to come up with ways to use prediction markets that retain the benefits of discussion -- the class suggested that the deliberative aspects of making a decision might center on what values or metrics the community wishes to maximize, and the markets would help the community determine whether its policies have reached the desired goal.
A recurring example throughout the class was the possible use of prediction markets as a replacement for academic hiring discussions at faculty meetings. The market would predict, for example, the likelihood that a candidate would become a productive scholar; the decision could then track that or other metrics directly (or, alternatively, the market's results might serve as a useful data point at the faculty meeting). On the one hand, prediction markets are unlikely to capture all the information relevant to faculty hiring, like whether someone would be a good colleague. And the information generated about potential productivity, especially in small fields, might be relatively thin. On the other, prediction markets in such a context may actually help avoid some of the recurrent problems with deliberative systems -- namely, that they tend to marginalize voices who are afraid to speak publicly. That problem may have unwelcome distributional consequences, so in some sense prediction markets may outperform democratic deliberation on its own terms. In the corporate governance context, it was pointed out, prediction markets help give voice to lower-level workers whose ideas or views are crowded out by middle management. Of course, prediction markets may have unwelcome distributional consequences of their own, given participation trends.
Evaluation of the Class
The question we wanted the class to focus on was whether and in what circumstances prediction markets might be useful tools in law and government. We assigned one short paper offering a basic explanation of how prediction markets work generally, but most of the remainder of our readings were less technical -- they described and proposed new applications, suggested circumstances in which prediction markets might fail, and gave a flavor of how policymakers have responded to some proposed uses of prediction markets (especially the DARPA proposal). We faced the following tradeoff in assigning readings and setting up the class. To really understand where prediction markets can work and whether it is possible to design any particular market to avoid fatal flaws, one likely needs a fair bit of background on technical aspects of the markets -- as we discussed in class, much of any discussion of their efficacy must rely on empirical and technical details. But the more background we assigned, the less we could assign on more cutting-edge questions about prediction markets. As a result, some of the Q&A/class discussion parts of the seminar focused on technical issues that might have been resolved with different background reading.
That said, the combined comments of Professor Wolfers and Professor Fisher did a great job of fleshing out some of the answers to our large-scale questions on a theoretical level. Overall, the speakers and the class discussion were lively and engaging, and we felt that we were successful in covering the relevant ground and in moving to new questions in the rare moments when discussion lagged. The topic itself was pretty broad; we might have focused the class more narrowly on analyzing a particular case study or area where prediction markets might be a useful tool. But we think that participants did leave with a solid grasp of how to analyze and think about prediction markets in the future.
As was typical with IIF seminars, the discussion with our guest and questions from the class filled up a lot of time, and the two hours flew by. We did not end up getting to discuss the class's own proposals for applications of prediction markets, or even to discuss very many specific proposals by academics -- we mostly stayed at a theoretical level (with the notable exception of the running example of prediction markets for faculty hiring). This may not have been a bad thing; the students were all prediction market amateurs and so generating our own ideas may have been more useful as a tool for learning than as a tool for making progress on thorny issues related to prediction markets.
Professor Wolfers' engaging opening statement focused on a few big-picture points, rather than detailing how prediction markets work in general, which was what we had wanted (the latter would probably have consumed most of the class period on its own). Professor Fisher's response highlighted two issues: first, that we can identify a series of conditions under which we expect prediction markets to work well and to work poorly; and second, that even when the markets work well, we might be wary of extending them to legal arenas for reasons described above. Professor Wolfers then offered a brief response, and then we turned it over to the class.
This basic framework -- an expert presentation, a response, a brief reply to that, and then class discussion -- worked well. One snag along the way was that we did not have much debate or disagreement on some of the more fundamental issues (perhaps with the exception of the value of non-instrumental modes of decisionmaking). But Professor Zittrain stepped in to liven things up and challenge our guest, and by the end we had more questions and comments than we could get to before time was up.
Class Participation and Use of Technology
With the exception of one minor mid-class glitch, having our guest appear by videoconference worked quite well. We banned computers and did not make use of tools like Twitter or the Berkman Question Tool, but members of the class seemed to have plenty of questions to ask. We may have benefited from being the last class of the semester: students were more comfortable and more active as questioners than they might have been earlier in the semester without an anonymizing tool.
A few months before the class session, we asked students to create their own fake-money prediction market contracts on intrade.net. As described above, this didn't work out because no one bet on the contracts. As a teaching tool, it may not have been that effective either, because it didn't force anyone to actually learn about prediction markets; rather, students just came up with a semi-random topic to bet on. We had also hoped that the experiment might help us draw some conclusions about what kinds of topics are likely to generate interest or prediction market activity. Generating any useful information of an experimental nature was likely an unrealistic goal for a class of amateurs, however.
Our second pre-class experiment was much more successful. Asking people to come up with ideas for prediction market applications required them for the most part to engage with the readings and to think about the circumstances in which the markets might be useful. The exercise generated a lot of creativity (though some students thought the markets were unlikely to be successful at all). One trouble, as noted above, was that we didn't end up having time to discuss the proposals; another is noted in the suggestions section below.
Suggestions for Future Iterations
I. Conceptual Basics First
How prediction markets work may not be intuitive to the non-economists in the room. For example, the idea that, in a market with sufficiently high volume, random bets or bets attempting to rig the market might actually make the market more accurate (by serving as a subsidy and creating a financial incentive for investors with actual knowledge to bet even more) seemed difficult for some students to grasp, at least initially. So before class -- and, most importantly, before any simulations -- students should be briefed on how a simple market like Intrade actually works. Here are some basic principles students should understand before getting into the articles or simulations. (There are undoubtedly many other basic principles relevant to understanding the markets. But these are a few that stood out from the class discussion as perhaps responsible for some misunderstandings.)
- First, prediction markets differ from voting! They allow investors to indicate their confidence level (on the analogy to voting, you might think of it as strength of preference, but for an outcome's likelihood of occurring rather than the desirability of its occurring). In fact, it is important to emphasize (as obvious as this might seem) that a rational investor will be just as willing to bet on outcomes she sees as revolting as on outcomes she sees as favorable.
- Second, prediction markets avoid the bottleneck problem. That is, information does not have to be screened through any layers where decision-makers may have a non-truth-tracking agenda. This is especially important in creating internal markets in organizations. The whole advantage is getting unfiltered access to people "on the ground," which explains (as Prof. Wolfers pointed out in our discussion) why they can be threatening to hierarchy.
- Third, prediction markets are only as good as the information that goes into them. Prof. Wolfers emphasized this point repeatedly. The relevant, comparative question in evaluating prediction markets is always: is this market better than the other means we have available for collecting this information? Often all prediction markets can do is give an unbiased account of the conventional wisdom, but that is still useful! When markets wrongly predicted that Obama would win the New Hampshire primary, it was not a fatal blow to their usefulness. The relevant question was whether they outperformed the polls -- and the answer is that they generally do (see the confidence level explanation above).
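The contrast in the first point -- that markets register confidence while votes only count heads -- can be illustrated with a toy simulation. All the numbers here are hypothetical, and the "market price" is deliberately idealized as the mean of traders' beliefs (as if risk-neutral traders with equal stakes traded until the price matched the pool's average estimate); real price formation is far more complicated.

```python
import random

random.seed(42)

def simulate(n_traders=1000, true_prob=0.7, noise=0.2):
    """Compare a majority vote with a simplified market price.

    Each trader receives a noisy private estimate of the true
    probability of the event, clipped to the [0, 1] interval.
    """
    beliefs = [min(1.0, max(0.0, random.gauss(true_prob, noise)))
               for _ in range(n_traders)]
    # Voting collapses each belief to a binary yes/no, discarding
    # how confident the trader is.
    vote_share = sum(b > 0.5 for b in beliefs) / n_traders
    # The idealized market price preserves the strength of each belief.
    market_price = sum(beliefs) / n_traders
    return vote_share, market_price

vote_share, price = simulate()
print(f"vote share: {vote_share:.2f}, market price: {price:.2f}")
```

Under these assumptions the vote share lands near 0.84 (most traders lean yes, so a head count overstates the event's likelihood), while the market price stays near the true 70% probability. That is the sense in which a market price carries probability information that a vote tally throws away.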
II. Deeper Background on Market Design
In addition to an understanding of how a basic betting market works, the class might have benefited from a more in-depth understanding of prediction market design principles. The readings we assigned on applications generally did not venture down into the trenches. Michael Abramowicz's proposals on predictocracy.org are more thought experiments than fully worked-out proposals. Fearing that we were overloading the reading list, we asked students to look only at the introduction to the paper on how government might use prediction markets to forecast crime, not the details of how the market would be designed. We assigned layman's reactions to and descriptions of the DARPA failure (from the mainstream press and Senate speeches); we might instead have assigned Robin Hanson's paper on Designing Real Terrorism Futures. We may have made the wrong choice along this axis; in any event, it is something for future classes to consider.
III. Better Simulations and Experiments
From a technological perspective, the first part of our experiment -- creating a contract on intrade.net -- went well. Most students were able to create the markets easily and the diversity of proposals was interesting (they can be found above). But the markets didn't generate any betting activity, as we explained above. Here are some possible alternative methods of experimentation.
- We might have familiarized the class with a working market. That is, our first simulation was designed in part to get students used to Intrade, but in practice it introduced them to a part of Intrade that actual traders do not frequent. Most of those active markets cost money, but we might have asked students to propose how they would bet on a live market and then to follow it throughout the semester -- the idea being that experiencing first-hand the ups and downs of being a trader would give a greater sense of what prediction markets can do now and how accurate they are. Failing that, it might have been useful to have a guest who had traded extensively in Intrade describe her strategies and experiences. If the class was lacking one element, it was a grounded view of how the prediction markets that we have now are actually doing, and of their strengths and weaknesses.
- We might have made use of existing academic or corporate experiments. Google opens its internal prediction market results to researchers; and more generally there is a lot of experimentation going on out there by experts. Instead of trying to start from the ground up, we might have seen whether the class could assist or partner with existing academic experts or with corporations already making use of prediction markets. Rather than coming up with random contracts in a vacuum, for example, we might have been more focused -- asking class members to come up with innovative contract ideas that might be helpful for a particular company that was already running an internal market, and seeing whether the company would ask employees to bet on our contracts along with the ones they were already running. This kind of idea would involve a lot of advance planning, of course, but might be fun and the prospect of results useful in a real-world sense might get the class really invested in learning about the markets.
IV. Making Better Use of the Proposed Applications of Prediction Markets
Our second simulation involved the class coming up with its own novel ways to apply prediction markets to law and government. Some students may have come up with their ideas before doing the assigned readings, as the proposals were due a day before the seminar took place. Whatever the reason, though, conceptual basics were an issue for some of the proposals; others were probably unrealistic. And we didn't end up having time to discuss them (though this isn't necessarily a bad thing, depending on the length of the class). We have three suggestions for how to improve the class proposal process.
- First, have some sort of feedback mechanism that creates an incentive to craft realistic proposals -- perhaps some sort of contest, with a prize for the most viable idea. The evaluation could be done by class vote, or by our outside guest -- or of course by prediction market.
- Second, design the pre-class activities so that some short basic reading is done before the proposals are due, rather than giving the students the whole reading list yet having an early deadline for the proposals, which was the unrealistic route we took here.
- Finally, the proposals might actually work better as a post-class activity. After listening to our in-class discussion (particularly the concerns raised by Professor Fisher), students would have been much more likely to come up with well-thought-out ideas within the constraints we discussed.
V. Improving the Discussion and the Topical Focus
Overall, we were really pleased with how the discussion went; we mostly attribute the success to picking a great guest! But future generations might consider the following tweaks:
- In a sort of unplanned fashion, our class discussion ended up taking as its running example the possibility of prediction markets for academic hiring. The example usefully highlighted and made concrete some of the advantages and problems of prediction markets. But sometimes the discussion became sidetracked on the specific difficulties of academic hiring rather than the general principles the example was intended to elucidate. We had available to us some excellent class-suggested ideas for new applications (see the corruption market above, for an example), but we never got to them before class time ran out. For future discussions it would be useful to reserve time for specific applications, so that the interesting but tangential details of working out one type of market do not crowd out the goal of getting a broader sense of how prediction markets might be applied in different areas.
- Alternatively, we might try the opposite tack: going for narrow depth rather than breadth. That is, rather than starting on a meta level and reserving time to discuss applications, we might focus the discussion around a single, well-chosen, law and policy-related example, say DARPA or prediction markets for crime. We'd then try to apply existing theories about situations in which prediction markets work well or poorly to our example, to brainstorm how it might be tweaked, and to figure out whether it might work in the end. The trouble with this suggestion is that our class isn't qualified to do any real design of a potential prediction market, so we would risk losing the breadth of discussion without gaining all that much in depth.