VRM Research opportunities
Latest revision as of 09:42, 18 March 2010
Project Overview
Objectives
Our primary goal is to test one or more basic VRM principles (e.g. benefits of vendor openness, willingness of users to pay for perceived value in the absence of existing payment mechanisms provided by the seller). Results of research efforts will guide the expression of VRM principles, and, presumably, drive their adoption.
Additional benefits include bringing together passionate participants around a research project, demonstrating and furthering Berkman research methodologies and software -- and forcing some clarity and learning around testable characteristics of VRM.
Testable Principles
Generally speaking, VRM's vision is to equip individuals with tools that make them independent leaders and not just captive followers in their relationships with vendors and other parties on the supply side of markets. VRM is successful when customers see direct benefits from taking control of their relationships, and vendors see alternatives to customer lock-in for gaining loyalty and generating profit.
This vision makes several assumptions, chief among them that a free customer is more valuable than a captive one. Testing this hypothesis (or more accurately, specific versions and aspects of this hypothesis) should be our primary goal. This hypothesis raises at least two important questions:
What characterizes a free customer?
- Able to choose how to relate to a vendor
- Customer relies on tools and data under their control to relate to and manage vendors
- Choose what information to share and when
- Choose how this information can be used (i.e. under what terms), for example:
  - Customer-generated data must be portable
  - Customer-supplied data must be retractable
  - Customer-supplied data can't be used for targeted advertising / marketing messages
  - etc.
- Customer receives a copy of data that is provided or generated as part of doing business, e.g. transaction data
- Full disclosure on how customer supplied-data is being used (privacy policy)
- Options for terminating relationship at will and without penalty
What are the potential benefits to a vendor for freeing a customer -- or dealing with customers that are already free?
- Decreased cost/hassle of gathering, storing, and managing customer data where the customer is relying on his or her own tools
- Increased attention / visibility to vendor for being open, i.e. being the open alternative in the market
- Increased participation from customers wanting to engage with open businesses
  - Both initial willingness and ongoing engagement
- Increased sharing / customer WOM around open products / services
- Increased volume and quality of customer-supplied data
- Decreased guesswork by the vendor if the customer is telling them exactly what they want when they want it - or at least more/better information about themselves
- Increased customer trust / loyalty / goodwill (longer term?)
- Increased external innovation and value being generated around vendor services / data
  - e.g. if a vendor opens their transaction data, a 3rd-party service might help customers better manage their electronic receipts
- Development of an ecosystem of value around vendor services, creating the open version of customer lock-in
  - e.g. good services based on open transaction data encourage continued use of the open transaction data provider
Open Questions
- Similarity to "free culture" arguments, e.g. what are the benefits of CC licensing? Has prior research already been done here?
- What aspects of the benefits above are perceptual vs. technical? How might we measure and test these?
Specific Research Proposals
Present users with a scenario that explores the hypothesis that a free customer is worth more than a captive one by testing specific behaviors of individuals placed into "free customer" vs. "captive customer" scenarios. The experimental scenarios will use Amazon Mechanical Turk and Berkman-developed web and measurement software tools for completing web-based, personal data-gathering scenarios. The work will be carried out by Doc Searls and Keith Hopper with guidance from Dave Rand, Jason Callina, Joe Andrieu, Tim Hwang, and Rob Faris. External funding, while not required, is being explored, as it would greatly expand the scope of testable treatments and number of participants while reducing the overall timeframe.
Proposed Experimental Scenario
This scenario entails an information gathering process, where participants are placed in either a free or captive customer scenario and then subsequently asked to enter a variety of personal information. Different treatments within this experiment will test participants' willingness to engage, exchange information, and offer the experience to their friends.
- Participant selects the Amazon HIT and agrees to complete an associated online process in exchange for a small amount of money (e.g. $0.10)
- Participant is randomly assigned to one of two groups (free or captive)
- Both groups are presented with an identical, multi-step information gathering process - specifically, to provide music preference information (e.g. favorite artists and tracks) along with personal non-identifiable demographic information (e.g. sex, age, salary, zip, etc.). All questions/fields are optional.
- At the end of the information gathering process, both groups are informed they have completed the requirements to redeem their earnings. Additional steps taken at this point (e.g. listening, sharing) are not required.
- Upon completion of the entire process, both groups are provided the option to share a link to this project with a friend (or on twitter, facebook, etc).
- For the Free Customer Scenario:
- Before information gathering begins, the free group is informed that they will be testing a new user-driven information collection tool that lets individuals gather together their own data to later share if and how they choose. The information gathering process will involve letting them generate a list of their favorite musical artists along with some other personal information. It will be made clear that the information collected will be strictly for their own use and will not be shared without their permission. At the end of the information gathering process, participants are then offered the chance to share their data with a specific vendor (e.g. Last.fm) in exchange for targeted music recommendations based on their favorite artists (this will leverage a music recommendation API, such as from Last.fm). These recommendations can be listened to and specific tracks can be downloaded and purchased (e.g. with the AMT revenue generated from participation).
- Upon completion of all selected actions, the participant is then informed that the purpose of this study was to test their willingness to provide personal information. They will be asked brief survey questions to determine the depth of their belief that they were in a "free" situation and that the data would not, in fact, be used, shared, or investigated without their permission. Data from participants who did not trust the process might be discarded. Participants will then be asked if they're willing to anonymously share the personal data collected with the research team.
- For the Captive Customer Scenario:
- Before information gathering begins, the captive group is informed that they will be testing a new vendor-driven information collection tool that gathers together individuals' data for a vendor (e.g. Last.fm) for the purpose of generating music recommendations. The information gathering process will be identical to the "free customer" process and involve letting them generate a list of their favorite musical artists along with some other personal information. Upon completion of the information gathering process, relevant data is automatically shared with the vendor (user is not given a choice), and targeted artist and song recommendations are provided in return (this will leverage a music recommendation API, such as from Last.fm). These recommendations can be listened to and specific tracks can be downloaded and purchased (e.g. with the AMT revenue generated from participation).
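The assignment and framing steps above can be sketched in code. This is a hypothetical illustration only, not project software; the treatment labels and intro wording are paraphrased from the two scenarios described above.

```python
import random

# Sketch of the random-assignment step: each Mechanical Turk participant
# is routed to the "free" or the "captive" treatment with equal probability.
TREATMENTS = ("free", "captive")

def assign_treatment(rng=random):
    """Randomly assign a participant to one of the two groups."""
    return rng.choice(TREATMENTS)

def intro_text(treatment):
    """Treatment-specific framing shown before information gathering
    (paraphrased; actual experimental wording would be carefully tested)."""
    if treatment == "free":
        return ("You will test a new user-driven tool: the data you enter is "
                "for your own use and will not be shared without your permission.")
    return ("You will test a new vendor-driven tool: the data you enter will be "
            "shared with the vendor to generate music recommendations.")

# Example: assign a batch of simulated participants
assignments = [assign_treatment() for _ in range(100)]
```

Because all downstream comparisons (completion, sharing, purchasing) hinge on this split, the assignment should happen before any treatment-specific text is shown, and the assigned group should be logged with every subsequent action.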
- Some aspects to test:
- Will free participants be more likely to complete the process than captive ones?
- Will free participants provide more data to more vendors than captive ones?
- What types of data might free participants be more willing to provide?
- Will free participants be more likely to share the experience with their friends?
- Will free participants be more likely to listen and purchase recommended songs?
- How will specifics of the experience and specific wording affect individuals' willingness to participate?
- Are there certain individual characteristics (e.g. age) that predict willingness to participate?
- What is the most effective way to present a free scenario so that it feels free?
- What percentage of participants don't trust online data gathering efforts, and what affects this perception?
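Several of the questions above (completion, data provision, sharing) reduce to comparing a proportion between the free and captive groups. As an illustrative analysis sketch, and with entirely made-up placeholder counts, a two-proportion z-test could be used:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p) for H0: the two rates are equal,
    using the pooled-proportion standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via the error function
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Placeholder example: 80/100 free participants completed vs. 65/100 captive
z, p = two_proportion_z(80, 100, 65, 100)
```

The placeholder counts also hint at the sample-size question: with roughly 100 participants per arm, only fairly large differences in completion rates would reach conventional significance, which bears on the funding discussion above.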
- Potential issues / items to resolve:
- Are we really measuring FREE vs. CAPTIVE customer experiences? In other words, is our definition of free and captive so arbitrary or context-specific as to lose experimental merit?
- Is offering targeted music a valuable enough proposition to encourage data sharing, listening and purchasing of music tracks and ultimately, sharing with friends? Is there a way to make this more compelling?
- Can we collect information and disclose its purposes in such a way as to accurately leverage recommendation APIs and not be deceitful, yet still create a clear and compelling delineation between free and captive participation?
- Is the subtle deception involved in this experiment appropriate, presented appropriately, and functionally the best way to structure this experiment?
- How will we control for willingness to purchase/listen if one process potentially alters the quality of the music recommendations? Is this necessary to control for?
- Should we throw out data from "free" customers who don't trust the information gathering process?
- Should captive customers be told what vendor they are sharing the data with - if so, should we test multiple vendors?
- Should we disclose which data is being shared with vendors if not all of it goes into the API request?
- Should other free group options exist, such as the ability to download your entered information in a standard format or share your preferences and recommendations with a friend?
- How will participants be paid so as not to influence whether or not they choose to provide information (or alternatively, simply skip the process and collect their $0.20)?
- What music recommendation APIs are available, what types of data do they require to generate quality recommendations (and is this standard)?
- How might trust issues with the data collector (i.e. Berkman/Harvard) influence outcomes?
- How will the music services themselves (e.g. perceived brand trust and value) affect outcomes (and how might we control for this)?
- What are the experimental disclosure requirements here, especially as they relate to personal information gathering that won't be used to generate music recommendations?
- Can the cash reward be exchanged for a music download (possibly of higher value) to test the effectiveness of the recommendation and ultimately test whether free participants provide information that results in better vendor offerings?
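On the open question of which recommendation APIs are available: as one example, Last.fm exposes an artist.getSimilar method that takes an artist name, which maps directly onto the "favorite artists" data gathered above. The sketch below only constructs the request URL; the API key is a placeholder, no network call is made, and parameters should be checked against current Last.fm API documentation before use.

```python
from urllib.parse import urlencode

API_ROOT = "https://ws.audioscrobbler.com/2.0/"
API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

def similar_artists_url(artist, limit=10):
    """Build a Last.fm artist.getSimilar request URL for one of a
    participant's favorite artists (illustrative sketch only)."""
    params = {
        "method": "artist.getsimilar",
        "artist": artist,
        "api_key": API_KEY,
        "limit": limit,
        "format": "json",
    }
    return API_ROOT + "?" + urlencode(params)

url = similar_artists_url("Radiohead")
```

Note that this API needs only artist names, which bears on the disclosure question above: the demographic fields (sex, age, salary, zip) would not enter the API request at all, so what data is shared with the vendor must be disclosed separately from what the API technically consumes.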
Additional Scenario Possibilities
Scenario 2
- Assign users to either the role of Vendor or Customer and pair them up. Customers gather music listening preferences and habits about themselves through either a user-driven, open tool and process or through a vendor-driven, choice-free process.
- The results of these processes are shared with their vendor partners who are asked to make a music download recommendation to their customer based on the information shared. The vendor receives a larger reward if the customer selects their recommended download over a (smaller) cash prize.
- This scenario goes beyond demonstrating increased sharing to test the idea that openness has the potential to generate less guesswork and increased sales for the vendor.
Scenario 3
- Require AMT participants to use Eyebrowse software to collect browser history data.
- Create two scenarios - one that puts the user in charge of sharing what/how/to whom and another where the data is uploaded to a commercial vendor as part of the HIT.
- Measure willingness of participants to complete the task and subsequently to upload their data for the two scenarios
- (NOTE: Can Eyebrowse allow for non-sharing of data?)
Project Status
- Meeting with geeks on 10/29 produced some rough research directions and a commitment from Berkman staffers to help execute
- Additional meetings (11/2 with Keith Hopper and Jason Callina; 11/3 with Keith Hopper and Tim Hwang) to discuss possible scenarios and where to seek additional advice/support
- There are clear benefits to producing research not only for the VRM community but also for the business community. Both Zeo and Personal Black Box (interestingly, both startup orgs) have expressed a strong interest in research that helps clarify and "prove" the benefits of vendors opening up control to the user.
- Specific research proposal is shaping up involving the use of Amazon Mechanical Turk and based on code and data acquisition mechanisms already constructed and tested by Berkman staff for other research projects (cooperation project). See Specific Research Proposals.
Sources/Background
- Nudge: Improving Decisions About Health, Wealth, and Happiness, by Richard H. Thaler, Cass R. Sunstein
- Swoop.com, "entertainment shopping"