Published by Global Ground Media
Governments across Asia, including Thailand, Indonesia, Japan, the Philippines, and India, are considering ways to tackle disinformation on social media, particularly as they gear up for crucial elections in 2019.
However, observers note that their strategies and success rates in combating the problem vary significantly. The battle against online disinformation and misinformation is further complicated by the scarcity of fully developed democracies in the region.
Dr Masato Kajimoto, a journalism professor at the University of Hong Kong who co-authored a paper on the topic titled Information Disorder in Asia, explains that the extent to which these countries have legislated against “fake news” is rooted in their particular political climate. He says that Thailand and Indonesia introduced or enforced laws against disinformation which could be abused to silence opposition.
Meanwhile, countries such as Japan and the Philippines have taken a more “hands-off approach” to the problem, Kajimoto summarises; the former because the severity and impact of disinformation on social media are relatively small, and the latter because the government wants to maintain a steady stream of its own online propaganda.
“We should keep making an effort to try to tackle this [problem], but the entire region needs more freedom first,” Kajimoto told Global Ground Media. “There is still not much democracy or complete press freedom in Asia. I feel generally pessimistic about the future fight against disinformation in the region because freedom of speech is always somewhat controlled.”

Criminalising ‘Fake News’

Globally, at least 30 countries have attempted to legislate against online disinformation since 2016, according to a 2018 study by academics at the University of Oxford.
The researchers found that the use of automated bots on social media, designed to influence the outcome of elections, is increasing internationally. They concluded that more action is needed to strengthen national guidelines in various democracies before elections, as there is no use waiting “for national courts to sort out the technicalities of infractions after running an election or referendum. Protecting our democracies now means setting the rules of fair play before voting day, not after.” But in Asia, where democracies are often incomplete, enforcing such rules remains a huge challenge: both dictators and pseudo-dictators would have to adhere to guidelines they are rarely punished for ignoring.
In an email to Global Ground Media, Singaporean academic James Gomez, founder of the Bangkok-based non-profit and think tank Asia Centre, says some member states in the Association of Southeast Asian Nations (ASEAN) have used the rise of online disinformation as an excuse to attack opposition parties. He argues that by establishing task forces or agencies to monitor online discourse, convening select committee hearings and proposing new laws or revisions, governments often try to limit free speech. “The attempts of governments to countermeasure fake news are disproportionate and have created [a] chilling effect on freedom of expression and self-censorship altogether,” he explains.
Gomez cites the example of Malaysia, where an anti-fake news bill was passed in April 2018, just before the general election, which was designed to “shape and manipulate online discussion in favour of Najib Razak’s government during the election period,” he says. “[The bill] contains a broad and vague definition of ‘fake news’ and was passed without comprehensive debate or deliberation.”
“While [ASEAN] governments claim they introduced such measures to address threats of communal violence or public disorder in the run-up [to the elections], or following heightened political activity such as elections, it seems that the real objectives are to discredit the members of the opposition, civil society, manipulate online discussion, or to prevent criticism of corrupt public institutions,” he adds.
As election fever ramps up, tactics to disarm disinformation actors on social media have intensified.


In Thailand, there were fears that increased censorship of social media by the military junta would significantly restrict free speech during the elections held on 24 March 2019. The official results are set to be announced in May, but early exit polls suggest the pro-military Palang Pracharath Party (PPRP) won the largest number of votes, at 8.4 million out of 38 million ballots.
In 2018, for the first time in the history of Thai elections, the Election Commission announced that it would start regulating campaigners’ social media accounts. However, observers say this poses an enormous challenge, not only because of the sheer volume of content being churned out but because it requires the commission to be fair and balanced in its efforts. There are fears it could lead to posts being censored even when they are not seriously misleading or false.
Senior Thai army officials, including Commander-in-Chief General Chalermchai Sitthisart, have expressed concern about the dissemination of disinformation and misinformation in the build-up to the elections. But commentators argue this reflects how the Thai army, which runs the country as a de facto dictatorship, wants to limit political discussion in Thailand.
In 2019, Sudarat Keyuraphan, a leading member of the opposition Pheu Thai Party and a prime ministerial candidate, was the victim of a hoax viral video, local media reported.
The 45-second video allegedly showed her staying silent while a man verbally threatened the King in 2010, in an apparent display of disloyalty. Section 112 of Thailand’s criminal law punishes insults, threats or defamation of the King with up to 15 years in prison, as examined by Global Ground Media in a previous article.
In response, Keyuraphan reported the low-quality video to the police, explaining that it had been digitally manipulated, a point which was later confirmed.
Kajimoto, from the University of Hong Kong, says Thailand has approached the problem of disinformation by “extending the interpretation of existing laws.” He adds, “Thailand certainly has an issue with censorship when it comes to elections.”
Meanwhile, Gomez explains that Thai authorities have used the Computer Crime Act 2007 (CCA) as a deterrent to criticism directed at institutions and public officials, branding them false. “In Article 14(2) [of the Computer Crime Act], it stipulates false information disturbing public order and national security as [a] punishable offence,” he says. “Consequently, people generally practice self-censorship, the same with the media and the press.”


Debunking fake news remains a priority in Indonesia, particularly as the country prepares to go to the polls in April. About half of Indonesia’s population, approximately 130 million people, are active social media users, a figure that was growing at a rate of more than 20 percent annually as of January 2017. One common hoax on social media in Indonesia claims that some election candidates, including President Joko “Jokowi” Widodo, have links to the banned Indonesian Communist Party (PKI).
In January 2018, Jokowi established the National Cyber and Encryption Agency to police disinformation on social media. The authorities have been working with social media platforms to remove and block content they deem harmful, such as hate speech and defamatory content that undermines the president. Suspected members of the Muslim Cyber Army were arrested last year for allegedly spreading such content.
In January 2019, false reports circulated that seven containers of punched ballot papers supporting Jokowi and his running mate had been imported from China. The General Election Commission subsequently filed a report with the police, who confirmed the news was false, but not before an estimated 17,000 users had tweeted about it, the Jakarta Post reported.
Astari Yanuarti, the co-founder of anti-hoax education company Redaksi, told Global Ground Media that local strategies to tackle disinformation include digital literacy education; fact-checking agencies; screening machines for negative content; activating public reporting channels; and law enforcement. But she says the effects of these actions “have not been seen”, and there are calls for more arrests of those concocting online hoax campaigns, and even for censorship of social media platforms in the weeks before a major election.
“If necessary, [we should] close social media access such as Facebook and Twitter until the election is over so that a peaceful and clean election from hoaxes can be achieved,” she said in an email to Global Ground Media. “More stringent arrangements are needed in social media that are accompanied by education so that netizens can use social media wisely and still be able to express opinions responsibly.”
Yanuarti also argues that tech companies have a significant responsibility to tackle the problem. “[They] have an obligation to maintain their platforms free from hoaxes, slander and hate speech,” she says. “[Methods could include] increasing AI’s ability to filter hoax content; changing the algorithm so that it can break up online ‘echo chambers’ and creating a special team to handle certain situations such as elections.”


India is set to hold a general election in April and May 2019, along with Legislative Assembly elections which will be held simultaneously in some states. In January, false information about the election dates began circulating online, forcing the country’s Election Commission to report the posts on social media to the police, the Times of India reported.
India has seen the creation of multiple independent fact-checking agencies in recent years, including AltNews, Internews, DataLeads and Boom Live. Google has also invested heavily in a growing network of journalists trained in fact-checking techniques through a series of boot camps. Meanwhile, the government has suggested it may introduce legislation against fake news but has not yet proposed a bill.
Karen Rebelo, a Mumbai-based senior journalist at fact-checking agency Boom Live, told Global Ground Media that she has generally not felt encouraged by the government’s handling of online disinformation. “They are making noises that they are serious, but when it comes to keeping their own house in order, they are not so effective,” she says. “We see this problem getting worse as we are just months away from a general election. They ask social media companies to do more, but they need to look at themselves. Misinformation will peak before the election and plateau afterwards, but we won’t know the impact of it until afterwards.”
She explains that one of the major hurdles in the fight against disinformation is that media literacy is practically “non-existent” in a country of 1.3 billion people. “We are in a situation where people are coming onto YouTube, and they believe it is always real news,” she says. “Some people’s first interaction with news is with some dodgy site.”


In the Philippines, there is growing awareness of the need to tackle the spread of disinformation ahead of 2019’s midterm elections, due to be held in May. The country remains a hotbed for fake news, in part because it offers a captive audience: Filipinos spend vast amounts of time on social media, an average of 3 hours and 57 minutes daily, according to a 2018 report by UK-based consultancy We Are Social.
However, Kajimoto notes that while the opposition in the Philippines has moved to legislate against fake news, President Rodrigo Duterte and his supporters believe such a law would not pass Congress. Kajimoto says this “hands-off” approach to social media contrasts sharply with how the government treats press freedom generally. “You see attempts to repress journalists, shown by the arrest of Rappler CEO Maria Ressa,” he says. “[The government] is trying to restrict media, and in doing so, restrict their fact-checking efforts as well.”
Philippine politician Manuel Roxas, who lost the battle for the Philippine presidency to Duterte in 2016, has been a particularly high-profile victim of disinformation. In August last year, non-profit fact-checking organisation Vera Files debunked an online news report which wrongly suggested Roxas had called on the public to unite against Duterte after being ambushed by the media. The organisation estimated the article may have reached up to 374,000 people, some of whom were directed to it by a pro-Duterte Facebook page.


Japan is preparing for nationwide local elections in mid-April. Emperor Akihito is also set to abdicate within the same month. Compared to the aforementioned Asian countries, daily social media consumption is generally lower in Japan, studies have found. The country, which has an ageing population, is still dominated by traditional media such as television and newspapers.
Nonetheless, sources told the Japan Times in January that the government plans to introduce online codes of conduct with major US technology companies this year, in an attempt to tackle disinformation. It also plans to continue encouraging Japanese tech companies to regulate their platforms more effectively, the report said. Crucially, the government is reportedly not keen to legislate against disinformation.
Kajimoto explains that online disinformation does not always have the same impact in Japan, because many Japanese opt to trust traditional sources. As evidence, he cites politician Denny Tamaki’s successful bid to become governor of Okinawa in 2018, despite an intense disinformation campaign against him. “Online campaigning came in quite late in Japan,” Kajimoto says. “We still see cases of candidates being attacked online, but people are tending to use more traditional media. Japan has an ageing society, so that affects the dynamic.”

General Trends Across Asia

Although every country in Asia has its own characteristics in this area, broad trends in the spread of disinformation during election periods can be seen across much of the region, as the country analyses above show:

  • increased powers being given to government regulators to punish propagators of disinformation;
  • sporadic police crackdowns on false online actors, like bots or people pushing false agendas backed by the ruling government;
  • intensified government talks with local representatives from social media platforms to discuss ways to tackle disinformation;
  • various attempts, often by conservative media or radical online actors, to discredit political candidates through online smear campaigns;
  • online trolling of smeared candidates by individuals and political groups;
  • an increase in independent, often non-profit, groups serving as online fact-checkers in the build-up to major elections, hoping to educate the public and strengthen media literacy;
  • growing partnerships between independent fact-checkers and social media companies.

How governments in Asia tackle online disinformation before, during and after major elections this year will help set the tone for the political climate in their countries for the coming decade, if not longer.
The evidence suggests that while some genuine efforts are being made to reduce the spread of disinformation for the benefit of citizens, many ruling parties are using crackdowns on disinformation to limit free speech.
The battle to curtail the rise of online disinformation will only begin to produce a beneficial impact once governments in the region move towards establishing genuine democracies, allowing free and fair elections as well as free speech, as Kajimoto states. According to him, “[t]he real issue is that we do not have full democracy here. Those things should go hand-in-hand to tackle disinformation.”

Pressure grows on tech giants to solve the disinformation riddle

Technology companies are also facing increased pressure to act on the issue of disinformation, sometimes more so than governments.
Facebook, which has 2.3 billion active monthly users worldwide, perhaps faces the most international pressure to mitigate disinformation on its platform. Mark Zuckerberg’s company has hired local workers in several Asian countries to review and flag misleading or dangerous content.
But Facebook has faced criticism from moderators, whose mental health has deteriorated as a result of the often violent and sexual content they have to review. It has also been criticised for occasionally over- or under-policing specific content in Asia.
Notably, in November 2018 the platform officially admitted that it had not done enough in Myanmar to counteract the spread of disinformation, namely the incitement of racial violence towards the Rohingya by a prominent extremist group, violence which likely contributed to the deaths of at least 10,000 people.
This admission was a stark reminder of the power of social media in spreading dangerous messages.
Conversely, Gomez, from the Asia Centre, says technology companies such as Facebook are facing significant pushback from some Asian governments over disinformation, sometimes to the point of over-censoring content. “[Their] ultimate goal is to legislate and intimidate technology companies to censor content at the source,” he says. “This is the challenge companies like Google, Facebook and WhatsApp presently face.”
In a key development affecting countries around the world, in January WhatsApp began restricting the forwarding of messages to five people at a time, over fears the platform was being used, either deliberately or inadvertently, to share misinformation.
Previously, individual users could forward messages to up to 20 users or groups at a time. The encrypted messaging service, owned by Facebook, has faced particular criticism for emboldening groups spreading disinformation because its closed nature means it cannot be independently moderated or fact-checked.
The changes were introduced after a trial in India last year, following the spread of messages which led to killings and attempted lynchings, Reuters reported. But the restrictions will likely only serve to slow down, rather than stop, the dissemination of disinformation and misinformation on the platform.

Article by Rachel Blundy.
Editing by Mike Tatarski and Anrike Visser.