
Chinese social media and the 2025 Australian federal election

In any electoral cycle, misinformation and disinformation have the potential to alter the outcome of the democratic process. From January to December 2023, we conducted a research project that collected more than 3,000 items about the 2023 Australian Indigenous Voice Referendum published on the Chinese-language social media platform WeChat. The findings revealed an increasing prevalence of misleading information about the referendum and Indigenous communities, disseminated across Chinese-speaking migrant communities. The transnational, multilingual nature of such social media services has exposed a major gap in the legal and regulatory mechanisms of Australian election law, which has largely confined its attention to English-language services.


In the context of the Voice referendum, misleading information encompassed a spectrum of misinformation, disinformation, online falsehoods and fake news, drawing on racism, conspiracy theories and colonial denialism. Notably, some of the misleading content we investigated contradicted fact-checked information provided by the Australian Electoral Commission and The Guardian. Misleading information circulated rapidly and broadly through short videos on WeChat and Red (a video content creation service with e-commerce capabilities). WeChat is one of the world’s largest social media services, and Red is one of the fastest growing, yet both have bypassed Australian mechanisms that seek to manage electoral information, because those mechanisms focus solely on English-language communication.

This means that non-English-speaking communities in Australia are often not adequately informed when making political decisions. Community members and organisations have voluntarily taken on the responsibility of fact-checking and addressing misleading information among Chinese migrants through debunking videos. Combating misleading information has become a competition for social influence. Yet our research on WeChat and the Voice referendum found that creators of misleading information outperformed community truth-tellers in influencing individual voters.

WeChat bypasses Australian mechanisms that seek to manage electoral information (Adem Ay/Unsplash)

Public resources are not equitably distributed when it comes to combating misinformation and disinformation. Non-English misinformation and disinformation are not explicitly addressed in the Communication Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 (Exposure Draft) unless foreign interference is involved, which is hard to determine among the diasporic, transnational audiences of services such as WeChat.


The challenge of misleading information disseminated through short videos is expected to become even more complex during the upcoming 2025 Australian federal election. Unlike text-based WeChat Official Accounts, which require organisational registration, the barrier to entry for content production on short-video features such as WeChat’s Channels or the Red platform is much lower. Video-capable accounts on both WeChat Channels and Red do not require valid Chinese ID numbers or Chinese business registrations, making them accessible to lay creators and even political campaigners in China or Australia. Short-video accounts have become popular among Chinese migrants as a means to share their opinions and life experiences in Australia, and potentially to monetise their content. Misleading information originating from Chinese diasporas on Chinese social media platforms occupies a regulatory grey area: while formally regulated by both Chinese and Australian bodies, in practice it falls outside the purview of both. And it is likely to continue to flourish during significant political events.

While X (formerly Twitter) features robust political discourse, WeChat and Red are notionally “apolitical” because political discussions on Chinese social media carry an explicit risk of censorship. However, short videos on these platforms take on political dimensions when personal views are blended with news topics relating to Australia. “Likes” and “shares” increase attention towards certain posts and can even amplify explicitly partisan Australian positions.

In this domestic political atmosphere, the rise of short-video features has provided fertile ground for the dissemination of misleading information. Creators of these videos do not adhere to journalistic standards or media ethics, nor do they need to: they do not identify themselves as journalists, and neither does Australia’s regulatory regime. Further, Chinese platform regulations demonstrate leniency in content control, particularly regarding matters unrelated to the domestic concerns of Beijing; the internal affairs of other states seem fair game.

With Australia’s federal election due in less than 18 months, we anticipate that old misleading narratives may resurface, facilitated by new technologies such as generative AI, as has already been observed in the lead-up to the 2024 US elections. With the capacity to translate audio on demand and to create videos featuring not only Australian politicians but also Chinese-Australian community members endorsing them, generative AI presents an unregulated frontier for political discourse. It blurs the line between foreign and domestic concerns, while holding significant implications for national outcomes.

Freedom House generously supported Fan Yang in the analysis and authorship of the initial report.