May 3, 2023

How Data Collection Undermines Democracy

Written By: Jebediah Dean

When trained on personal and private data, artificial intelligence can fundamentally discriminate on the basis of race, religion, and sex, and the lack of data privacy in turn enables gerrymandering and targeted political advertisements.

Introduction  

Data is valuable for optimizing and improving consumers' experiences. Amazon uses purchasing data to recommend products to you. Social media platforms like TikTok and Instagram use your data to surface the content most relevant to you. Some organizations, however, are less scrupulous and are willing to sell and share the personal information they gather. What matters, then, is what these companies secretly and non-consensually do with our data (Klosowski). When companies collect your personal data without consent and use it in ways that don't improve the quality of the service they provide you, free and fair elections are undermined. Politicians leverage personal information by targeting certain citizens with political ads to manipulate their votes. Politicians also leverage personal data to redraw district maps based on personal demographics, which often act as a proxy for discrimination.

Personal information, including addresses, current locations, credit card numbers, personal habits, and anything else that could be used to identify an individual (European Commission), should be protected by law. Sadly, some businesses can easily acquire personal data and share, store, and sell it to whomever they choose (AURA). Individuals can't limit how their data is collected, what their data is used for, or who has or owns their personal information (Washington). As a result, private information is disseminated without consent. When data is shared, we are exposed to scammers and identity thieves who pose real financial dangers (AURA).

Data Collection 

Today, gerrymandering and targeted advertisements are examples of how data analysis leads to electoral manipulation. Modern data-analysis practices and artificial intelligence algorithms have enormous computational power and can discriminate covertly. Artificial intelligence is software that uses large sets of human data to make complicated decisions (IBM). When there is no restriction on what data is collected, data-driven discrimination and manipulation become feasible.

The information that businesses collect can be very invasive and doesn't require cookies. Software can trace the movement of your mouse or record how you use technology (Freedman). Other companies track you personally, trace your activity, and record which browsers you use (Hebert). This monitoring crosses into our personal and private lives and should be illegal. The Federal Trade Commission, which publishes consumer advice, states that companies use "techniques to connect your identity to different devices you use to go online… and then tailor ads to you across all your devices" (Hebert). This targeted advertising can be political and used to accomplish political agendas. Additionally, 70% of Americans feel it's difficult to limit how their data is stored, used, and sold without their knowledge (Freedman). The government doesn't require organizations to disclose how they monitor, collect, distribute, or sell your personal information, nor does it limit, or allow the public to limit, how organizations can monitor their private lives (Hebert).

Discrimination in Artificial Intelligence 

Artificial intelligence permits politicians to target and influence voters through tools like gerrymandering and targeted political advertisements. Artificial intelligence is a modern way of analyzing large amounts of data to make political and financial decisions (Volyntseva). To build such a system, a target variable is first chosen (Lawrence). A target variable could be anything, including 'who will vote for which political party'. The next step is to train the algorithm by feeding it large amounts of data associated with the target variable (Lawrence), in our case 'who people might vote for'. The associated data could be anything from who people voted for in past elections to the type of shoes they wear. The trained algorithm then predicts outcomes for new data entries (Lawrence). In our example, it could predict who someone who wears Nike boots will vote for. When we combine the computing abilities of artificial intelligence with the lack of data privacy in our modern world, there is potential for authoritarian rule and discrimination (Ünver, H. Akın). An authoritarian regime could emerge when the analytical processing of personal data is turned into real-world manipulation. These algorithmic processes are foundational to gerrymandering, the criminal justice system, targeted advertising, and other forms of classification that can discriminate based on personal data.
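The three steps above, choosing a target variable, training on associated data, and predicting for new individuals, can be sketched as a toy program. Everything here (the attributes, the voters, and the simple tallying "model") is a hypothetical illustration, not any real campaign tool:

```python
# Toy sketch of the pipeline described above: pick a target variable,
# "train" on labeled personal data, then predict for new individuals.
# All names and data are hypothetical.

from collections import Counter, defaultdict

def train(records, target):
    """Tally, for each (attribute, value) pair, which target label
    it co-occurs with in the training records."""
    tallies = defaultdict(Counter)
    for rec in records:
        label = rec[target]
        for attr, value in rec.items():
            if attr != target:
                tallies[(attr, value)][label] += 1
    return tallies

def predict(tallies, person):
    """Predict the target label by letting each known attribute 'vote'."""
    votes = Counter()
    for attr, value in person.items():
        for label, count in tallies.get((attr, value), Counter()).items():
            votes[label] += count
    return votes.most_common(1)[0][0] if votes else None

# Hypothetical training data: past votes plus seemingly unrelated attributes.
voters = [
    {"vote": "Party A", "zip": "10001", "shoes": "boots"},
    {"vote": "Party A", "zip": "10001", "shoes": "sneakers"},
    {"vote": "Party B", "zip": "90210", "shoes": "boots"},
]
model = train(voters, target="vote")
print(predict(model, {"zip": "10001", "shoes": "boots"}))  # → Party A
```

Even this crude tally shows the essay's point: once the target variable is fixed, any collected attribute, however incidental, becomes fuel for the prediction.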

The criminal justice system is a clear example of how artificial intelligence can discriminate (Winters). In the field of artificial intelligence, it is considered unethical to use race, age, or sex as model inputs. Still, discrimination occurs when models are built on attributes that act as proxies for race, age, and sex. Zip codes and street addresses act as proxies for race because of racial inequalities in America, so an algorithm can reproduce racist outcomes that affect electoral and civic decisions. In the criminal justice system, artificial intelligence is used for predictive gun control, predictive prison paroles, child risk scoring, and refugee admittance (Winters). These systems use personal attributes, like someone's address, to make highly consequential decisions. The data determining, for example, the score a child receives for their predicted likelihood of committing a crime later in life is drawn from whatever personal data can be gathered about them and about children around the world. This discriminates against already disadvantaged groups across the United States without explicitly separating people by sex, race, or religion.
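The proxy effect described above can be made concrete with a small, entirely made-up dataset: race is never used as a feature, yet because zip code correlates with race in the data, a "race-blind" rule keyed on zip code reproduces the racial disparity anyway. The attributes, scores, and threshold below are hypothetical illustrations:

```python
# Hypothetical illustration of a proxy variable. The rule below never
# looks at race, only at zip code, yet its outcomes split along racial
# lines because zip code and race are correlated in the (made-up) data.

people = [
    {"race": "A", "zip": "10001", "risk_score": 8},
    {"race": "A", "zip": "10001", "risk_score": 7},
    {"race": "B", "zip": "90210", "risk_score": 2},
    {"race": "B", "zip": "90210", "risk_score": 3},
]

# A "race-blind" rule: flag anyone from a zip code whose average
# historical score meets an (arbitrary) threshold of 5.
avg_by_zip = {}
for z in {p["zip"] for p in people}:
    scores = [p["risk_score"] for p in people if p["zip"] == z]
    avg_by_zip[z] = sum(scores) / len(scores)

flagged = {z: avg >= 5 for z, avg in avg_by_zip.items()}
# Everyone in zip 10001 (all race A here) is flagged; no one in 90210 is,
# so the outcome is perfectly split by race without race ever being an input.
```

Removing the protected attribute from the inputs does not remove it from the outcome; that is exactly why proxy variables make "we never use race" an insufficient safeguard.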

Conclusion 

The New York Times published a statement on data privacy laws in the United States. In it, they argued for a "floor" of foundational rules limiting organizations' ability to sell or abuse personal information (Klosowski). The statement outlined four basic protections that legislatures should implement: Data Collection and Sharing Rights, Opt-in Consent, Data Minimization, and Nondiscrimination and No Data Discrimination. Data Collection and Sharing Rights would entitle the public to see the data companies hold on them, ask companies to delete any data they have collected, and limit how that data is shared and spread (Klosowski). Opt-in Consent requires customers to opt into data collection rather than having to opt out (Klosowski). Data Minimization states that companies may collect only what they need to provide the service customers are using (Klosowski). Finally, Nondiscrimination and No Data Discrimination states that companies can't discriminate against people who exercise their right to keep their data private, or on the basis of race, sex, or religion (Klosowski).

A company's ability to collect and use our data affects how the United States holds free and fair elections. Artificial intelligence can fundamentally discriminate on the basis of race, religion, and sex. A lack of horizontal accountability permits partisan organizations to produce targeted, one-sided, and biased advertisements, and fosters an environment where information is collected and shared without public consent. The resulting lack of privacy enables gerrymandering and targeted political ads, both detrimental to democracy. It also prevents the public from controlling what counts as private data, which is exposed when it is collected, shared, sold, and used without consent. These harms emerge when personal data is manipulated for political gain or when discrimination undermines equality. Either way, free and fair elections are damaged.
 

Sources 

  1. AURA. “14 Dangers of Identity Theft with Serious Consequences.” Aura, 14 Dec. 2022, https://www.aura.com/learn/dangers-of-identity-theft
  2. European Commission. “What Is Personal Data?” European Commission, https://commission.europa.eu/law/law-topic/data-protection/reform/what-personal-data_en
  3. Freedman, Max. “Businesses Are Collecting Data. How Are They Using It?” Business News Daily, 21 Feb. 2023, https://www.businessnewsdaily.com/10625-businesses-collecting-data.html
  4. Hebert, Amy, et al. “How to Protect Your Privacy Online.” Consumer Advice, 31 Jan. 2022, https://consumer.ftc.gov/articles/how-protect-your-privacy-online
  5. IBM. “What Is Artificial Intelligence (AI)?” IBM, https://www.ibm.com/topics/artificial-intelligence
  6. Klosowski, Thorin. “The State of Consumer Data Privacy Laws in the US (and Why It Matters).” The New York Times, 6 Sep. 2021, https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/
  7. Lawrence, Joe. “How to Develop an AI System in 5 Steps.” BairesDev Blog: Insights on Software Development & Tech Talent, 7 Dec. 2022, https://www.bairesdev.com/blog/how-to-develop-an-ai-system-in-5-steps/
  8. Volyntseva, Yulia. “How Artificial Intelligence Is Used for Data Analytics.” Businesstechweekly.com, 13 July 2022, https://www.businesstechweekly.com/operational-efficiency/data-management/how-artificial-intelligence-is-used-for-data-analytics/
  9. Washington, Lou. “Data Ownership: Who Owns Data, and What Is It Worth?” Cincom Australia Blog, 9 Jan. 2023, https://www.cincom.com/blog/au/transform/data-ownership
  10. Winters, Ben. “AI in the Criminal Justice System.” EPIC, https://epic.org/issues/ai/ai-in-the-criminal-justice-system/


2 Comments

  1. WYNE EI HTWE

    Hello, Dean (if I may call you Dean). I completely agree with the points made in the article about the detrimental effects of non-consensual data collection on democracy and individual privacy. Data is undoubtedly a valuable tool for improving user experiences and providing personalized services. However, when companies misuse this data by secretly collecting and sharing it without consent, it poses serious threats to our democratic processes. Politicians leveraging personal data to manipulate voters and redraw district maps based on demographics not only compromises the integrity of elections but also perpetuates discrimination.
    I wholeheartedly support the call for legal protection of personal information, as highlighted by the European Commission. Our personal data, including sensitive details like addresses and credit card numbers, should be safeguarded by law. The ease with which some businesses can acquire, share, and sell personal data is alarming and should be addressed through stringent regulations.
    I also agree that there is an urgent need for stronger data privacy regulations and greater transparency in data collection practices to protect our democracy and individual rights. Robust data protection regulations that ensure individuals have control over their personal information should be implemented and enforced. These laws should require explicit consent for data collection and impose strict penalties for data misuse. Thank you, Dean. It was an enlightening read!

    Regards,
    Wyne Ei

  2. SOE KO KO AUNG

    I completely agree with your post. The excessive collection of data and breaches of privacy go against the principles of democracy. Of course, nowadays, our private data is integral to our daily lives, and it’s almost impossible to live without the influence of AI technology. Unfortunately, many companies prioritize their interests over consumers’ privacy, lacking ethical considerations and often disregarding rules and regulations. What makes it worse, I think, is that the absence of international laws to prevent privacy breaches only encourages further violations. It’s indeed a betrayal of trust and abuse of power, both in the business world, where the trust between companies and consumers is broken, and in politics, where power is misused for personal gain. The recent case in Hungary, where personal data was allegedly used for political campaigns, is a clear example of this breach of trust and privacy. While international organizations issue statements on such matters, it often feels like mere lip service, as actions are more effective than words. It might be my overthinking, but sometimes it seems that these organizations are pretending to be on the right side by issuing statements while collaborating with those who breach ethics and laws. It’s frustrating that there isn’t enough protection in place for such cases, and those responsible often neglect global issues like these. I think urgent action is necessary in this situation. The more time passes, the more ethics and fundamental values are being violated. Thank you for highlighting the issues of data collection and privacy breaches.
