Talk:ListenLog
==Users Might Object==
Some individuals might perceive the ListenLog as yet another mechanism for aggregating personal attention data that will (or at least could) be used indiscriminately by a third party. There are four aspects of this concern, depending on how the LL concept is presented and implemented:
# Knee-jerk negative reaction to the concept of personal/attention data being monitored. This reaction comes prior to any consideration of why and for whom the data might be used.
# Assumption that whenever personal data is captured, it is captured ultimately on behalf of a vendor, even if users are told otherwise. The default assumption is that data is stored and used by the vendor who controls the current user experience, e.g. whoever distributes the software application or hosts the website. The fear might be that the data is used by an untrustworthy vendor, or that even if explicit permission is given, the data might be used in an unwarranted fashion (e.g. you provide your phone number to a vendor who later uses it to solicit you).
# Concern that even if the data is owned or controlled by users, it might eventually be compromised, e.g. by a malicious third party, oppressive government, legal body, etc. that can tie the data back to the individual. Since we intend to store the data remotely, this might aggravate the concern.
# As a user, why would I choose to do this? Since LL will likely let individuals opt in or opt out, why would someone choose to collect their own data? What's in it for them? There might be built-in resistance without a compelling use case.
What might we do to address these concerns? Here are some suggestions:
* Clear, careful presentation of the concept. This is unlike anything people will be familiar with, and they will likely resist it, misunderstand it, or compare it to historic privacy violations. A primary emphasis on new, previously unavailable user functionality might be a good approach.
* Store the data somewhere agnostic, other than on the application or vendor servers (I recommend Berkman servers).
* Transmit data securely.
* Automatically and by default, store all data fully encrypted (i.e. "locked"). As an alternative to opt-in, users would need to "unlock their data" (decrypting it wholesale) in order to get access to functionality or to share the data externally. Users could lock, unlock, and delete all of their data at will. Users could also opt out of capturing data in the first place.
** Perhaps lock and unlock could apply to different sub-sets of data, e.g. location data.
* Don't build any ability to share data into the first version.
* Focus functional development on a single piece of previously unavailable end-user functionality, e.g. "Search across everything I've ever listened to", to help demonstrate our user-driven intentions.
* Encourage additional audio applications/sites/devices to publicly commit to supporting the ListenLog standard. LL would be significantly more compelling to users if it spanned devices/platforms.
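To make the "locked by default" suggestion above concrete, here is a minimal sketch. The entry fields, the key handling, and the SHA-256 counter-mode keystream are all illustrative assumptions, not part of any ListenLog specification; a real implementation would use an established authenticated cipher (e.g. AES-GCM) and proper key management, with the key held by the user rather than the vendor.

```python
import hashlib
import json
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- a stand-in for a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def lock(entries: list, key: bytes) -> bytes:
    """Encrypt ('lock') a user's listen log wholesale for remote storage."""
    plaintext = json.dumps(entries).encode("utf-8")
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ s for p, s in zip(plaintext, stream))
    return nonce + ciphertext

def unlock(blob: bytes, key: bytes) -> list:
    """Decrypt ('unlock') the log so the user can use or share it."""
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    plaintext = bytes(c ^ s for c, s in zip(ciphertext, stream))
    return json.loads(plaintext.decode("utf-8"))

# The remote store only ever sees the locked blob; the user holds the key.
key = os.urandom(32)
log = [{"title": "Episode 1", "played_at": "2009-02-02T15:31:00Z"}]
blob = lock(log, key)
assert unlock(blob, key) == log
```

Deleting the blob (and the key) would implement "delete all of their data at will"; per-subset locking could be modeled by keeping separate blobs per data category (e.g. location data) under separate keys.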
[[User:Khopper|Khopper]] 15:31, 2 February 2009 (UTC)
Point 2 above is why we formed the Mydex Community Interest Company, a social enterprise; we believe this helps (although does not fully overcome) perceptions that the data is being used by anyone other than the originator. By all means use Mydex in your 'store the data somewhere agnostic' approach if that would be helpful - we can discuss the details. Iain
==Provisional Control Over User Data==
ListenLog is not trying to replicate [http://attentiontrust.org AttentionTrust]. Our goal is not to make privacy demands or to propose selling listening data for a share of ad revenue. ListenLog will provide an aggregated set of activity data across an individual's listening applications for them to use, analyze, and share. Within most media applications, individuals do not have access to their application activity data and cannot use or share it as they choose. We are proposing to change that.
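As a sketch of what "an aggregated set of activity data across an individual's listening applications" might look like: the entry shape and field names below are hypothetical (the real ListenLog schema may differ), but they illustrate merging per-application logs into one user-owned history and the "search across everything I've ever listened to" functionality suggested above.

```python
# Hypothetical entry shape -- the actual ListenLog schema may differ.
def make_entry(app: str, title: str, source_url: str,
               played_at: str, seconds: int) -> dict:
    return {"app": app, "title": title, "source_url": source_url,
            "played_at": played_at, "seconds_listened": seconds}

def aggregate(*app_logs: list) -> list:
    """Merge per-application logs into one user-owned history, newest first."""
    merged = [e for log in app_logs for e in log]
    merged.sort(key=lambda e: e["played_at"], reverse=True)
    return merged

def search(history: list, term: str) -> list:
    """Search across everything the user has ever listened to, by title."""
    t = term.lower()
    return [e for e in history if t in e["title"].lower()]

# Two applications contribute to the same user's history.
phone = [make_entry("PhoneApp", "Berkman Lunch Talk",
                    "http://example.org/a", "2009-02-01T12:00:00Z", 1800)]
web = [make_entry("WebPlayer", "Jazz Hour",
                  "http://example.org/b", "2009-02-03T09:00:00Z", 3600)]
history = aggregate(phone, web)
assert [e["title"] for e in search(history, "jazz")] == ["Jazz Hour"]
```

Because the history belongs to the user, this kind of cross-application search works even when each vendor only ever sees its own slice of the data.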
This does not mean that an application vendor or the media providers who host ListenLog functionality will not also store and use user activity data for their own purposes (yes, they should have a privacy policy clarifying this). In this case, the user does not have control over their data. With virtually every website/online application in existence, the user does not have the ability to move their activity data away, delete it, or dictate how it can be used.
Perhaps the only way we will get vendors to deploy LL is to let them support it in addition to any tracking they already do, rather than instead of it.
Baby steps.
[[User:Khopper|Khopper]] 20:47, 5 March 2009 (UTC)