It is widely acknowledged that the underlying business model of social media companies and Web 2.0 service providers rests on the large revenues they earn by charging advertisers to target users with products and services. These outlets essentially provide a platform to reach hundreds of millions of individuals. Why wouldn’t any for-profit business leverage such tools? That said, are users’ expectations of privacy on these platforms too high? We generally think about online privacy in terms of scope and context, but we also need to recognize that privacy concerns vary from individual to individual.
There is no discounting the fact that individuals need information and tools to better understand and control their perceived privacy risks. Nonetheless, as data science has demonstrated, privacy is not contingent only on factors within our control (e.g., third parties can iteratively learn about an individual from observable facts such as routine, habits, and associates). Given that individuals generally lack knowledge of what data is personal and what is not, should we be focusing more on risk? What is the risk of disclosing certain information? What are the negative consequences of disclosure? And ultimately, shouldn’t we be asking whether there is a purposeful use of data, rather than whether the data is personal? Data collection does have positive applications in healthcare and scientific research, delivery of government services, transportation and logistics, and other domains.
My thinking is that the present ecosystem benefits commercial entities (revenue generation) and governments (intrusive surveillance), while individuals have little to no control over how their personal data is used. There is a need for a paradigm shift towards an arrangement where data collectors and individuals use a permission-based system to determine what data usage is acceptable: an approach where data collectors respect the privacy and interests of the individuals whose data they collect, and where privacy policy, law, and technical standards enable users to exercise greater control – a transparency- and context-based approach built on accountability and trust. Moreover, individuals should also be allowed to share in the revenue earned from the use of their personal data (licensing of personal data sets – a type of “pay-for-share” approach to online data collection).
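To make the permission-based idea concrete, here is a minimal sketch of what a default-deny, context-aware consent check might look like. All names here (`ConsentRegistry`, `Permission`, the categories and purposes) are hypothetical illustrations, not an existing API or standard:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Permission:
    data_category: str   # e.g. "location", "browsing_history"
    purpose: str         # e.g. "ad_targeting", "medical_research"

@dataclass
class ConsentRegistry:
    # Per-user set of (data category, purpose) pairs the user has granted.
    grants: dict = field(default_factory=dict)

    def grant(self, user_id: str, permission: Permission) -> None:
        self.grants.setdefault(user_id, set()).add(permission)

    def revoke(self, user_id: str, permission: Permission) -> None:
        self.grants.get(user_id, set()).discard(permission)

    def is_allowed(self, user_id: str, permission: Permission) -> bool:
        # Default-deny: any use not explicitly granted is refused.
        return permission in self.grants.get(user_id, set())

registry = ConsentRegistry()
alice_ads = Permission("location", "ad_targeting")
registry.grant("alice", alice_ads)

print(registry.is_allowed("alice", alice_ads))                         # True
print(registry.is_allowed("alice", Permission("location", "resale")))  # False
```

The key design point is that permission is scoped to a purpose and context, not granted wholesale, and that revocation is as easy as granting – which is what would let users exercise the greater control argued for above.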
There’s no doubt that a revenue-sharing system for personal data could open a can of worms if not implemented and embedded into practice effectively. That said, what I envision is some type of micropayment system that pays individuals for sharing original content, providing useful information, or documenting their daily interactions. The ecosystem can be built around an inclusive arrangement that addresses the current tensions in the data privacy debate, including securing and protecting user data, developing accountability systems, and agreeing on rules for a permission-based, trusted flow of data across different contexts. Central to this would be the inclusion of individuals, who play an increasingly crucial role as both data subjects and data creators.
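One plausible shape for such a micropayment system is a pro-rata split of a period’s advertising revenue by each user’s contribution score. The sketch below is purely illustrative: the 50% user share, the cent-denominated payouts, and the idea of a single “contribution score” are all assumptions, not a description of any real platform’s mechanism:

```python
def split_revenue(ad_revenue_cents: int,
                  contributions: dict[str, int],
                  user_share: float = 0.5) -> dict[str, int]:
    """Return per-user micropayments (in cents), pro rata by contribution."""
    pool = int(ad_revenue_cents * user_share)   # portion reserved for users
    total = sum(contributions.values())
    if total == 0:
        return {user: 0 for user in contributions}
    # Integer cents; any remainder from rounding stays with the platform.
    return {user: pool * score // total
            for user, score in contributions.items()}

# $100.00 in ad revenue; scores might reflect posts shared, useful
# information provided, or daily interactions documented.
payouts = split_revenue(10_000, {"alice": 30, "bob": 10, "carol": 10})
print(payouts)  # {'alice': 3000, 'bob': 1000, 'carol': 1000}
```

Even this toy version surfaces the “can of worms”: how contribution is scored, who audits the pool, and what fraction users receive are exactly the accountability questions the ecosystem would have to settle.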
A startup called Tsu (https://www.tsu.co/) has launched a social network premised on a similar idea: users distribute and share original content, just as they do on other social networks, but on this platform the majority of the advertising revenue is distributed among the users. However, for this approach to scale, adaptable legislation would have to be developed (i.e., robust enough to be enforceable and flexible enough to accommodate contextual differences). Lawmakers would also have to resist the urge to introduce “one-size-fits-all” legislation that creates unintended outcomes, restricting the open flow of data and discouraging the trusted sharing and use of data to create value.