Online Privacy: What’s Really Going On?
You’ve read the headlines: Company X under investigation for data mining. Social media giants questioned by Congress about privacy concerns. Users report seeing ads mirroring their search histories.
Modern society has a big problem: determining how to protect privacy in digital spaces. It’s no longer possible for most people to live “off the grid,” away from smart devices and technology that are necessary for nearly every aspect of modern life, including work, school, and socialization.
As the COVID-19 pandemic forced more people to plug in, it’s easy to resign yourself to the idea that your data and privacy will inevitably be exploited for profit. On the surface, that may be true; however, there are steps both citizens and companies can take to protect themselves and their users.
That process starts with understanding what’s really going on with online privacy.
How did we get here?
While that’s a loaded question, it’s important to have a basic understanding of how we’ve reached this level of online privacy exploitation. Generally, there are three broad categories to consider when trying to answer it:
1. Third-party data collection
2. Personalized ads
3. Lack of national regulations
Chances are that you’ve heard of all these issues before, especially concerning social media. In March 2019, Mark Zuckerberg wrote an opinion in The Washington Post calling for national privacy regulations. In November 2020, Zuckerberg also testified in front of the U.S. Senate about Facebook’s content moderation and site regulations, among other issues.
Most people even slightly familiar with Facebook’s long history of privacy concerns may be puzzled by Zuckerberg’s call for broader-reaching regulations. In one of the company’s most high-profile scandals, Zuckerberg testified in front of Congress in 2018 after Cambridge Analytica used a quiz app to access data about 50 million Facebook users, data later put to work during the 2016 presidential campaign. Shortly after the scandal, speaking about the tech industry, Zuckerberg told CNN, “I’m not sure we shouldn’t be regulated.”
Since Zuckerberg’s testimony, Twitter, Facebook, Google, and other tech giants have modified their privacy agreements. However, even before Facebook, Twitter, and other tech companies were brought to task during Senate hearings, the government had concerns about digital privacy.
In 2012, the Obama administration published the Consumer Privacy Bill of Rights; however, the hearings and subsequent privacy concerns have proved there’s a need for laws with more teeth than a bill of rights alone. This need is largely the result of social media companies keeping their platforms free while still needing funds to keep up with ever-growing user demands.
User demands and privacy concerns
As the internet becomes more essential for everyday life, users are raising their expectations for speed, accuracy, and access. It costs money to maintain a website as big and vast as Facebook, but according to a survey of 1,000 social media users, 60% of respondents said they would be willing to pay up to $5.29 per month, which is more than the $2.07 per month Facebook would need to average per user to equal its ad revenues. To make this revenue switch, however, users would have to be assured that Facebook would not collect, store, and sell personal data the way it does today. Zuckerberg and other tech leaders are aware of this, too. That’s why, in 2018, Zuckerberg testified to Congress that “there will always be a version of Facebook that is free.”
That raises the question: How can social media continue to be free while still maintaining and adding perks to keep up with the demands of its users? How can Facebook and other sites offer wish lists, saved preferences, customized content, suggestions, and other features that users expect without charging a fee? The answer may look familiar: through a combination of third-party data and personalized ads.
At its most basic level, third-party data is information about a user collected by an entity that has no direct relationship with that user. In practice, this most often means that a site works with one or more other sites to share or obtain data about its users.
Companies most often use third-party data to grow their audience bases or market their products or services to a broader audience. In terms of online privacy, that often means using information about your browsing history and how you interact with certain sites to send you targeted ads.
Because of the risk of data and privacy breaches that can come with using third-party data to create targeted ads, Twitter, Facebook, and other popular sites stopped working with third-party data providers. Both Twitter and Facebook did so after privacy concerns were raised both internally and by platform users.
Facebook stopped using third-party data for ads in 2018. However, as its Help Center explains, that doesn’t mean individual advertisers aren’t still using third-party data to identify and reach targeted audiences. The same is true for Twitter, which stopped using third-party data directly in 2020 after it experienced “an issue,” according to its help center, with the site’s mobile app sharing data from users who clicked on an undisclosed app in 2018.
As of now, there aren’t any federal laws specific to online privacy. However, on multiple occasions, tech leaders like Apple’s Tim Cook have said this lack of regulation has harmed both society as a whole and individuals’ rights to digital privacy. In April 2019, Cook even told Time Magazine, “Technology needs to be regulated. There are now too many examples where the no rails have resulted in a great damage to society.”
While there aren’t national regulations, in December 2020 the Federal Trade Commission announced that it would be launching an inquiry into how giant tech companies like Amazon, TikTok, Twitter, YouTube, and Facebook collect and use data. The commission will also review the companies’ general privacy practices to look for blatant violations of FTC and other guidelines.
The probe comes in the wake of widespread calls for national regulations and a slew of potential and actual privacy breaches among a plethora of internet giants. As of now, though, most regulations with laws attached to them exist only at the state level and vary widely in scope and effectiveness.
Ethical data usage
At a 2018 European Union privacy conference in Brussels, Apple’s Cook called for the government to regulate user rights and privacy, according to Time Magazine. At the conference, Cook said, “We shouldn’t sugarcoat the consequences. This is surveillance and these stockpiles of data serve only to make rich the companies that collect them. This should make us uncomfortable.”
During the conference, Cook also slammed competitors for using data unethically and putting users’ privacy at risk in the process. However, The Atlantic and other critics pointed out that Apple has had its own privacy issues, including its role in permitting Facebook to create the quiz app for iPhones that led to Cambridge Analytica getting access to user data, as mentioned previously.
These criticisms highlight a problem that companies like Apple, Twitter, and Facebook face: how to balance ethical data usage with the demands of users for free and customizable apps, sites, and services. There are myriad opinions and ideas behind how this issue can be solved, such as McKinsey & Company’s proposed corporate data program.
However, from Cook to Zuckerberg, and from McKinsey to The Atlantic, most opinions boil down to this: If the federal government won’t pass national digital privacy policies, companies must hold themselves accountable one way or another to ensure that their users’ data is managed responsibly and ethically.
Corporate accountability for privacy
This broad and difficult-to-attain goal relies on companies recognizing that they can still make a profit if they act responsibly and maintain privacy standards. Companies can start by letting users choose how their data is used and how much of it remains private.
As previously mentioned, Cook has also called out the tech industry for its history of unethical practices. He also updated Apple’s privacy policies and asked the government to create national digital privacy regulations. In 2018, we also saw Zuckerberg testify for the first time about Facebook’s privacy policies. These call-outs are signs that the tech industry may be ready for change, which could include widespread regulations.
User privacy expectations
While tech giants are reckoning with the public distrust that years of unethical data and privacy practices have created, it’s up to users to demand more than just lip service. Data is being collected and shared more than ever before, and internet users must understand how that data is being used and where it’s going.
According to a 2019 Pew Research Center survey, 81% of Americans surveyed said they feel they have “very little or no control” over the data companies collect. The same study found that 97% of Americans reported being asked to read and agree to privacy policies, but only 9% said they always read them. Even more telling, only 8% said they understood privacy policies “a great deal.”
So, what can individual users do to take control of their data? Being apathetic about your digital safety won’t solve the problem. Take steps to educate yourself about digital privacy and what exactly you’re consenting to when you click “accept” without reading policies. More importantly, if you have a question about a company’s policies or disagree with the terms, reach out to the company and ask questions.
When companies are caught using and abusing data, say something about it and make it clear that you demand better from them. Consider whether you would prefer to have customized content or control over your data and tell companies which you’d rather they prioritize. Taking a more active role in guarding your privacy will send the message that the price of their “free” services is too high.