
        How Are Big Data, AI & Social Media Used To Hack Democracy?

        Bocconi University’s Gaia Rubera is an expert on how big data, AI, and social media influence our opinions and the way we think. But how are they used to hack democracy?

        Can democracy be hacked? That is, can how we think and what we think be influenced without our knowing?

        It’s a question that’s been on the mind of Gaia Rubera, head of the Department of Marketing and Amplifon Chair in Customer Science at Bocconi University, for a while, and one that's become increasingly important in recent years.

        For Gaia, the use of big data, artificial intelligence (AI), and social media to interfere in elections and drive the spread of misinformation has worsened over the last several years and poses a serious threat to society.


        How big data, AI, and social media are used to hack democracy

        There are three main use cases for big data, AI, and social media when it comes to influencing democracy.

        The first is in the mold of Cambridge Analytica, the consulting firm that in the 2010s harvested the personal data of up to 87 million Facebook profiles without users’ consent, to be used primarily for political advertising.

        “Someone collects data, and these data are used to feed a machine learning algorithm that is then used to profile people,” explains Gaia. “Once you identify these people, you’re able to micro target them.”
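The pipeline Gaia describes — collect data, profile people, micro-target them — can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the page names, the affinity score, and the "persuadable" thresholds are invented for illustration and bear no relation to any real system.

```python
# Hypothetical sketch of profiling and micro-targeting from behavioral data.
# All signals and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class Profile:
    user_id: str
    pages_liked: set  # behavioral signal, akin to Facebook page likes


# Pages assumed (for this toy example) to correlate with a political issue
ISSUE_PAGES = {"page_a", "page_b", "page_c"}


def issue_affinity(profile: Profile) -> float:
    """Fraction of a user's likes that overlap the issue-related pages."""
    if not profile.pages_liked:
        return 0.0
    return len(profile.pages_liked & ISSUE_PAGES) / len(profile.pages_liked)


def micro_target(profiles, low=0.2, high=0.6):
    """Select users with moderate affinity: assumed 'persuadable', hence targeted."""
    return [p.user_id for p in profiles if low <= issue_affinity(p) < high]
```

In a real campaign the hand-written affinity score would be replaced by a machine learning model trained on harvested data, but the shape of the operation — score each individual, then single out a segment for tailored advertising — is the same.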

        Gaia is set to discuss the Cambridge Analytica case during a free online Masterclass in Data Science she’s hosting on March 30. She’ll also be discussing the use of social bots in infodemics, and why it’s challenging for current Machine Learning models to detect them.



        The use of bots is the second use case and involves the intentional, mass spread of false information, or disinformation, on platforms like Twitter. Bots are computer programs that act as agents for users and simulate human activity; they’re commonly used to spread disinformation from fake social media accounts.
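Bot-detection models of the kind Gaia mentions typically score accounts on behavioral features such as posting volume and the share of retweeted versus original content. The heuristic below is a toy stand-in for such a model — the feature names and thresholds are invented, not drawn from any published detector:

```python
# Toy bot-scoring heuristic: higher score = more bot-like.
# Feature thresholds are invented for illustration only.
def bot_score(posts_per_day: float, retweet_ratio: float, account_age_days: int) -> float:
    score = 0.0
    if posts_per_day > 50:       # inhuman posting volume
        score += 0.4
    if retweet_ratio > 0.9:      # almost never writes original content
        score += 0.4
    if account_age_days < 30:    # freshly created account
        score += 0.2
    return score


def looks_like_bot(account: dict, threshold: float = 0.6) -> bool:
    return bot_score(**account) >= threshold
```

Real detectors learn such decision boundaries from labeled data rather than hard-coding them — which is partly why, as Gaia notes, current machine learning models still struggle once bots are tuned to mimic human behavior.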

        This was the case during the 2017 French presidential election, explains Gaia, when the ‘MacronLeaks’ targeted presidential candidate Emmanuel Macron. Documents were obtained after the personal and professional mailboxes of several leaders of his movement were hacked, and bots were used to widely circulate that information online alongside falsified documents. 

        The third use case is citizens themselves spreading misinformation, which destroys people’s trust in anything, says Gaia. “Right now, most people don’t know what’s true and what’s not true anymore.

        “It's almost impossible to make decisions because you don't know who's right or who's wrong.”



        This was particularly acute during the Covid pandemic, with the widespread circulation of misinformation about vaccines, the severity of illness, and even the existence of the virus creating an environment in which it was challenging to know what to believe. 

        This is known as an infodemic, which the World Health Organization defines as "too much information, including false or misleading information, in digital and physical environments during a disease outbreak." It's something Gaia believes is getting worse.

        “There are two goals that these misinformation campaigns have,” she says. “One is the target: I want to show that, I don't know, the virus doesn't exist, it was made up—that is the specific goal. 

        “Then there’s the long-term goal, which is people say, ‘okay, now I don't trust the official newspapers anymore because they are providing fake information about the virus, or now I don't trust politicians anymore.’ 

        “You tweak the minds of some people and next time, when there’s the next crisis, you have a bunch of people that have already been inclined to believe fake news. This is a war for our minds and it's been going on for years."


        How to overcome big data, AI, and social media's threat to democracy

        The first solution Gaia proposes is to educate yourself about how the technology is used, but also about the psychology at play in the human brain. There are two psychological elements at work, she adds: nudge theory and tribalism.

        “They push you in one direction little by little and you don't even realize it, so you should start realizing that,” she advises. The other thing they do is divide society into camps, creating and reinforcing a mentality of ‘us versus them’. 

        We then share information that fits our beliefs, without fully knowing whether what we're sharing is fact or fiction. Gaia's advice is to stop sharing automatically, and instead to think about where information has come from, whether it's a reliable source, and whether it's a news story or an opinion piece.

        Another solution she’s heard proposed is the suspension of social networks one week before elections, something she agrees with. That would aim to curtail the spread of misinformation and targeted disinformation campaigns designed to falsely tarnish the reputation of candidates and swing the vote in favor of a particular nominee.

        As citizens, we should be aware of the perils of misinformation and the role big data, AI, and social media play in spreading it. There’s an added layer of action needed by legislators as well, Gaia adds. What laws can be passed to tackle the problem, without overstepping the line into censorship? 

        As a researcher, she also believes academics play an invaluable role by continuing to research and probe social media companies and the ripe environment they create for the spreading of false information.


        How Bocconi teaches its students about big data, AI, and social media

        Although Gaia admits overcoming the challenges posed by big data, AI, and social media requires wider societal change, Bocconi helps by teaching students about the consequences of things like disinformation campaigns and the public spreading misinformation.

        Gaia’s teaching covers digital marketing, big data and AI marketing, social media communication, and customer-centric innovation. 

        At Bocconi, she teaches students how to find and comb data from social networks, mostly Twitter. She also teaches them how machine learning works, with live use cases showing them how misinformation spreads in practice.
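A classroom-style version of the data-combing exercise described above might start with something as simple as counting hashtags across a batch of tweets to see which topics dominate. The snippet below works on hard-coded example strings; actual coursework would pull live posts through the platform's API, but the combing step is the same:

```python
# Minimal sketch of combing social media text: extract and tally hashtags.
# The tweets here are invented examples, not real data.
import re
from collections import Counter


def extract_hashtags(text: str) -> list:
    """Return all hashtags in a tweet, lowercased and without the '#'."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", text)]


def hashtag_counts(tweets) -> Counter:
    """Tally hashtag frequency across a collection of tweets."""
    counts = Counter()
    for tweet in tweets:
        counts.update(extract_hashtags(tweet))
    return counts


tweets = [
    "Breaking: #vaccine story #news",
    "#Vaccine claims spreading fast #misinformation",
    "Fact-check here #news",
]
```

From counts like these, students can start asking the questions Gaia's live use cases pose: which topics spike suddenly, and which accounts are pushing them.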

        Bocconi also has several master’s programs that focus on technology and the management of security risks.

        There’s an MSc in Cyber Risk Strategy & Governance; an MSc in Data Science & Business Analytics; and an MSc in Economics and Management of Innovation and Technology. 



        BB Insights draws on the expertise of world-leading business school professors to cover today's most important business topics.
