Day 6 Predictions: Difference between revisions

From Cyberlaw: Difficult Issues Winter 2010
Revision as of 00:16, 11 January 2010

Daniel: Our guests will probably discuss at length the challenges that Dispute Finder, like most web-based collaborative tools, runs into when attempting to harness input from virtual crowds. I expect they will talk about Dispute Finder's design difficulties, such as costs and trade-offs (between precision and recall, between user-friendliness and the number and quality of features, etc.). They will most likely also draw on stories from the interviews discussed in the document we received, perhaps to illustrate content-layer problems with measuring the reliability of information sources, users' misunderstandings of or trouble with logical operations, and group biases. I would love to hear their views on the [http://courses.ischool.berkeley.edu/i256/f09/lectures/RobEnnalsGuestLecture.ppt proposed use of Turks] to improve the database of disputed claims and arguments, as well as on the biases of the disputed facts and arguments currently listed by the software.