VRM Research opportunities

== Project Overview ==
=== Objectives ===
Our primary goal is to test one or more basic VRM principles (e.g. benefits of vendor openness, willingness of users to pay for perceived value in the absence of existing payment mechanisms provided by the seller). Results of research efforts will guide the expression of VRM principles and, presumably, drive their adoption.


Additional benefits include bringing together passionate participants around a research project, demonstrating and furthering Berkman research methodologies and software -- and forcing some clarity and learning around testable characteristics of VRM.


=== Testable Principles ===
Generally speaking, VRM's vision is to equip individuals with tools that make them independent leaders, not just captive followers, in their relationships with vendors and other parties on the supply side of markets. VRM is successful when customers see direct benefits from taking control of their relationships, and vendors see alternatives to customer lock-in for gaining loyalty and generating profit.


This vision makes several assumptions, primarily that '''a free customer is more valuable than a captive one.''' Testing this hypothesis (or, more accurately, specific versions and aspects of it) should be our primary goal. The hypothesis raises at least two important questions:


'''What characterizes a free customer?'''
* Able to choose how to relate to a vendor
** Customer relies on tools and data under their control to relate to and manage vendors
** Choose what information to share and when
** Choose how this information can be used (i.e. under what terms), for example:
*** Customer-generated data must be portable
*** Customer-supplied data must be retractable
*** Customer-supplied data can't be used for targeted advertising / marketing messages
*** etc.
** Customer receives a copy of data that is provided or generated as part of doing business, e.g. transaction data
** Full disclosure on how customer-supplied data is being used (privacy policy)
** Options for terminating relationship at will and without penalty


'''What are the potential benefits to a vendor for freeing a customer -- or dealing with customers that are already free?'''
* Decreased cost/hassle of gathering, storing, and managing customer data where the customer is relying on his or her own tools
* Increased attention / visibility to vendor for being open, i.e. being the open alternative in the market
* Increased participation from customers wanting to engage with open businesses
** Both initial willingness and ongoing engagement
* Increased sharing / customer word of mouth (WOM) around open products / services
* Increased volume and quality of customer-supplied data
* Decreased guesswork by the vendor if the customer is telling them exactly what they want when they want it - or at least providing more/better information about themselves
* Increased customer trust / loyalty / goodwill (longer term?)
* Increased external innovation and value being generated around vendor services / data
** e.g. if a vendor opens their transaction data, a 3rd-party service might help customers better manage their electronic receipts
* Development of an ecosystem of value around vendor services, creating the open version of customer lock-in
** e.g. Good services based on open transaction data encourage continued use of the open transaction data provider


'''Open Questions'''
* Similarity to "free culture" arguments, e.g. what are the benefits of CC licensing? Has prior research already been done here?
* What aspects of the benefits above are perceptual vs. technical? How might we measure and test these?


== Specific Research Proposals ==
Present users with a scenario that explores the hypothesis that '''a free customer is worth more than a captive one''' by testing specific behaviors of individuals placed into "free customer" vs. "captive customer" scenarios. The experimental scenarios will use Mechanical Turk and Berkman-developed web and measurement software tools for completing web-based, personal data-gathering scenarios (Doc Searls and Keith Hopper, with guidance from Dave Rand, Jason Callina, Joe Andrieu, Tim Hwang and Rob Faris). External funding, while not required, is being explored, as it would greatly expand the scope of testable treatments and the number of participants while reducing the overall timeframe.
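As a rough illustration of the mechanics only (not a design decision of this proposal), a HIT like the one described could be posted against the Mechanical Turk requester API roughly as in the sketch below. It uses Amazon's current boto3 Python client; the study URL, reward, sample size, and durations are placeholder assumptions.
<pre>
# Minimal sketch: post an "external question" HIT that sends workers to a
# hypothetical study web app. All specific values below are placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; remove endpoint_url for the live marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

EXTERNAL_QUESTION = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://study.example.org/vrm-music</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short questionnaire about your music preferences",
    Description="Answer a few optional questions; takes about 5 minutes.",
    Keywords="survey, music, research",
    Reward="0.10",                        # matches the $0.10 figure in the scenario
    MaxAssignments=200,                   # placeholder sample size
    LifetimeInSeconds=7 * 24 * 3600,      # HIT visible for one week
    AssignmentDurationInSeconds=30 * 60,  # worker has 30 minutes to finish
    Question=EXTERNAL_QUESTION,
)
print("HIT created:", hit["HIT"]["HITId"])
</pre>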


=== Proposed Experimental Scenario ===
This scenario entails an information gathering process, where participants are placed in either a free or captive customer scenario and then subsequently asked to enter a variety of personal information. Different treatments within this experiment will test participants' willingness to engage, exchange information, and offer the experience to their friends.
# Participant selects the Amazon Mechanical Turk HIT and agrees to complete an associated online process in exchange for a small amount of money (e.g. $0.10)
# Participant is randomly assigned to one of two groups, free or captive (a minimal assignment sketch follows this list)
# Both groups are presented with an identical, multi-step information gathering process - specifically, to provide music preference information (e.g. favorite artists and tracks) along with personal non-identifiable demographic information (e.g. sex, age, salary, zip, etc.). All questions/fields are optional.
# At the end of the information gathering process, both groups are informed they have completed the requirements to redeem their earnings. Additional steps taken at this point (e.g. listening, sharing) are not required.
# Upon completion of the entire process, both groups are given the option to share a link to this project with a friend (or on Twitter, Facebook, etc.).
# For the Free Customer Scenario:
## Before information gathering begins, the free group is informed that they will be testing a new user-driven information collection tool that lets individuals gather together their own data to later share if and how they choose to. The information gathering process will involve letting them generate a list of their favorite musical artists along with some other personal information. It will be made clear that the information that is collected will be strictly for their own use and will not be shared without their permission. At the end of the information gathering process, participants are then offered the chance to share their data with a specific vendor (e.g. Last.fm) in exchange for targeted music recommendations based on their favorite artists (this will leverage a music recommendation API, such as from Last.fm). These recommendations can be listened to and specific tracks can be downloaded and purchased (e.g. with the AMT revenue generated from participation).
## Upon completion of all selected actions, the participant is then informed that this study was to test their willingness to provide personal information. They will be asked brief survey questions to determine the depth of their belief that they were in a "free" situation and that the data would not, in fact, be used, shared, or investigated without their permission. Data from participants who did not trust the process might be discarded. Participants will then be asked if they're willing to anonymously share the personal data collected with the research team.
# For the Captive Customer Scenario:
## Before information gathering begins, the captive group is informed that they will be testing a new vendor-driven information collection tool that gathers together individuals' data for a vendor (e.g. Last.fm) for the purpose of generating music recommendations. The information gathering process will be identical to the "free customer" process and involve letting them generate a list of their favorite musical artists along with some other personal information. Upon completion of the information gathering process, relevant data is automatically shared with the vendor (the user is not given a choice), and targeted artist and song recommendations are provided in return (this will leverage a music recommendation API, such as from Last.fm). These recommendations can be listened to and specific tracks can be downloaded and purchased (e.g. with the AMT revenue generated from participation).
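A minimal sketch of how steps 2 and 3 might be handled inside the Berkman-built web tool follows; the function and field names are hypothetical, and the only design points it illustrates are a stable 50/50 split and all-optional answers.
<pre>
import hashlib

TREATMENTS = ("free", "captive")

def assign_treatment(worker_id: str) -> str:
    """Split Turk workers roughly 50/50 into the free and captive groups.

    Hashing the worker ID keeps the assignment stable if the worker reloads
    the task, instead of re-randomizing on every page view.
    """
    digest = hashlib.sha256(worker_id.encode("utf-8")).hexdigest()
    return TREATMENTS[int(digest, 16) % len(TREATMENTS)]

def record_response(worker_id: str, answers: dict) -> dict:
    """Bundle a participant's answers with their treatment.

    Every question is optional, so blank fields are simply dropped; how many
    fields each group chooses to fill in is itself an outcome of interest.
    """
    return {
        "worker_id": worker_id,
        "treatment": assign_treatment(worker_id),
        "answers": {k: v for k, v in answers.items() if v not in ("", None)},
    }

# Example: a worker who skipped the salary question.
print(record_response("A1EXAMPLEWORKER",
                      {"favorite_artist": "Radiohead", "salary": "", "zip": "02138"}))
</pre>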


* Some aspects to test:
** Will free participants be more likely to complete the process than captive ones?
** Will free participants provide more data to more vendors than captive ones?
** What types of data might free participants be more willing to provide?
** Will free participants be more likely to share the experience with their friends?
** Will free participants be more likely to listen and purchase recommended songs?
** How will specifics of the experience and specific wording affect individuals' willingness to participate?
** Are there certain individual characteristics (e.g. age) that predict willingness to participate?
** What is the most effective way to present a free scenario so that it feels free?
** What percentage of participants don't trust online data gathering efforts, and what affects this perception?


* Potential issues / items to resolve:
** Are we really measuring FREE vs. CAPTIVE customer experiences? In other words, is our definition of free and captive so arbitrary or context-specific as to lose experimental merit?
** Is offering targeted music a valuable enough proposition to encourage data sharing, listening and purchasing of music tracks and ultimately, sharing with friends? Is there a way to make this more compelling?
** Can we collect information and disclose its purposes in such a way as to accurately leverage recommendation APIs and not be deceitful, yet still create a clear and compelling delineation between free and captive participation?
** Is the subtle deception involved in this experiment appropriate, presented appropriately, and functionally the best way to structure this experiment?
** How will we control for willingness to purchase/listen if one process potentially alters the quality of the music recommendations? Is this necessary to control for?
** Should we throw out data from "free" customers who don't trust the information gathering process?
** Should captive customers be told what vendor they are sharing the data with - if so, should we test multiple vendors?
** Should we disclose which data is being shared with vendors if not all of it goes into the API request?
** Should other free group options exist, such as the ability to download your entered information in a standard format or share your preferences and recommendations with a friend?
** How will participants be paid so as not to influence whether or not they choose to provide information (or, alternatively, simply skip the process and collect their $0.20)?
** What music recommendation APIs are available, and what types of data do they require to generate quality recommendations (and is this standard)? (A sample API call sketch appears after this list.)
** How might trust issues with the data collector (i.e. Berkman/Harvard) influence outcomes?
** How will the music services themselves (e.g. perceived brand trust and value) affect outcomes (and how might we control for this)?
** What are the experimental disclosure requirements here - especially as they relate to personal information gathering that won't be used to generate music recommendations?
** Can the cash reward be exchanged for a music download (possibly of higher value) to test the effectiveness of the recommendation and, ultimately, whether free participants provide information that results in better vendor offerings?
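On the question of available recommendation APIs: as one example, Last.fm exposes an artist.getSimilar method that returns artists related to a seed artist, which is roughly the kind of call the scenario would make with the data participants enter. The sketch below assumes the Python requests library and a Last.fm API key; the exact method and response shape should be confirmed against the current API documentation.
<pre>
import requests

API_ROOT = "http://ws.audioscrobbler.com/2.0/"

def similar_artists(artist: str, api_key: str, limit: int = 5) -> list:
    """Return names of artists Last.fm considers similar to `artist`."""
    resp = requests.get(
        API_ROOT,
        params={
            "method": "artist.getsimilar",
            "artist": artist,
            "api_key": api_key,
            "format": "json",
            "limit": limit,
        },
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()
    return [a["name"] for a in payload.get("similarartists", {}).get("artist", [])]

# Example (requires a real API key):
# print(similar_artists("Radiohead", api_key="YOUR_LASTFM_KEY"))
</pre>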


=== Additional Scenario Possibilities ===
'''Scenario 2'''
* Assign users to either the role of Vendor or Customer and pair them up. Customers gather music listening preferences and habits about themselves through either a user-driven, open tool and process or through a vendor-driven, choice-free process.
* The results of these processes are shared with their vendor partners who are asked to make a music download recommendation to their customer based on the information shared. The vendor receives a larger reward if the customer selects their recommended download over a (smaller) cash prize.
* This scenario goes beyond demonstrating increased sharing to test the idea that openness has the potential to generate less guesswork and increased sales for the vendor
 
'''Scenario 3'''
* Require AMT participants to use [http://eyebrowse.csail.mit.edu/ Eyebrowse] software to collect browser history data.
* Create two scenarios - one that puts the user in charge of sharing what/how/to whom and another where the data is uploaded to a commercial vendor as part of the HIT.
* Measure willingness of participants to complete the task and subsequently to upload their data for the two scenarios (a sketch of comparing group completion rates appears below)
* (NOTE: Can Eyebrowse allow for non-sharing of data?)
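For any of these scenarios, the headline comparison is whether the free group completes (or shares, or uploads) at a higher rate than the captive group. One way that comparison might be run is sketched below, using invented counts purely for illustration.
<pre>
from scipy.stats import chi2_contingency

# Invented counts for illustration only: rows are treatment groups,
# columns are (completed, abandoned).
table = [
    [162, 38],   # free group
    [141, 59],   # captive group
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"free completion rate:    {162 / 200:.0%}")
print(f"captive completion rate: {141 / 200:.0%}")
print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
</pre>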


== Project Status ==
* Meeting with geeks on 10/29 produced some rough research directions and commitment from Berkman staffers to help execute
* Additional meetings (11/2, 11/3) between Keith Hopper and Jason Callina, and between Keith Hopper and Tim Hwang, to discuss possible scenarios and where to seek additional advice/support
* There are clear benefits to producing research not only for the VRM community but also for the business community. Both Zeo and Personal Black Box (interestingly, both startup orgs) have expressed a strong interest in research that helps clarify and "prove" the benefits of vendors opening up control to the user.
* A specific research proposal is shaping up involving the use of Amazon Mechanical Turk, based on code and data acquisition mechanisms already constructed and tested by Berkman staff for other research projects (the cooperation project). See [http://cyber.law.harvard.edu/projectvrm/VRM_Research_opportunities#Specific_Research_Proposals Specific Research Proposals].


== Sources/Background ==
* [http://www.goodreads.com/book/show/2527900.Nudge_Improving_Decisions_About_Health_Wealth_and_Happiness Nudge: Improving Decisions About Health, Wealth, and Happiness], by Richard H. Thaler and Cass R. Sunstein
* [http://swoopo.com Swoopo.com], "entertainment shopping"

== Notes from Meeting on 10/29 ==
* Before / after comparisons with VRM implementations
* Self-tracking - what sorts of changes occur
* Look at companies that are willing to share info and what changes this brings to them
* Game: profiles of companies (mirror existing companies) - users interacting and measuring
* User research around learning perceptions and actions
** How does behavior change when people are ignorant vs. once they understand
** Increase visibility around what users "give away" + then add-in control
* People's willingness to give data with control vs. without control
* Max - existing research: ranked shopping list based on privacy levels
** Can we use real money? Mechanical Turk?
* Testing what people say vs. what they do
* How do we do the "invention is the mother of necessity" aspect of VRM - can we test something that doesn't exist yet?
* Research Hypotheses:  
** Will people be more willing to yield their information if they control it?
** How will people behave if we give them more control over/with their data?
* Real-world retailer - at checkout, customers are given some choices
** give up information in exchange
* How do we test removal of guesswork?
* Turk experiment with movie recommendations
* Test
** 1st group: You have money to buy some movies (simulated stores)
** 2nd group: Create basket and share with stores
* Education of customers is an important aspect of example
* Problem with secondary markets, e.g. users aggregating data without vendor buy-in
* Last.fm scrobbling test (e.g. what will people do with their data?)
** What would you pay per song? Then collect data and present it to them; will it change what the song is worth?
* Don't forget about selection bias with these types of experiments
 
== Notes from Workshop on 10/13 ==
=== Should be... ===
* testable, concrete, measurable
* of use for Doc in his new book
* appease the Berkman gods with productive research efforts
* relatively easy and completable with volunteers, internal resources and a limited time-frame
* provide businesses with fodder that they need to help make the case internally for opening up user control
** Ben from Zeo needs a list of benefits to openness to bring to his investors: We need to provide that!
** What are the benefits to vendors of VRM?
** What is user data control? Define.
** Relationships -> what are the vendor benefits?
*** eliminate guesswork
 
=== For example... ===
* Talk to organizations who have opened up and have them describe the benefits
** What's measurable here?
* Test the hypothesis: A free customer is worth more than a captive one
** "worth more" means defining customer value and measuring it
** is the customer valuing the vendor more?
** what is free vs. captive?
* before and after giving users their data
* Altimeter Group
** engagementdb.com: report showing that companies most engaged in social media are the most profitable
* case studies (e.g. HBS format)
* backward analysis?
* report on personal informatics
** what data comes out of personal informatics?
* Company experiences with launching APIs?
* interviews, surveys
* What are some open data efforts?
** VRM spotting -> what are some user-driven organizations?
* What are possible frameworks / scenarios for measurement / testing / research
* What are the intention economy principles?
 
=== What is VRMness (user-drivenness?) that might be testable? ===
* individual is the POI
* individual gets a copy of their data
* individual controls use of data
* users initiate
* user contribution is a core value
* user choice
* belongs to user - user owns / controls the system
* service portability / substitutability
