When it comes to doing business in online spaces, battling lies and hacks is a common problem.
With so much of life now digital, discerning the difference between the real and the fake online is critically important.
Understanding the full range of strategies cyberbullies and attackers use in the virtual world is a large undertaking, but in a nutshell, there are a few pointers everyone can use to stay safe online.
Firstly, let’s break down some of the common terms around online untruths. There are important distinctions between misinformation, disinformation, and malinformation.
Misinformation is false information not intended to cause harm, often spread by people who believe it to be true. As such, it can be a result of honest misunderstandings or the omission of key details. No intention to deceive is involved, and sometimes, even seemingly credible sources may unknowingly disseminate misleading data.
Disinformation is false information deliberately created and disseminated to cause harm. Unlike misinformation, here, the deceit is deliberate, and the perpetrator has an agenda in mind. They sow disinformation with the intent to manipulate public opinion for personal or political gain. This fabricated content often mimics credible sources to dupe people who are none the wiser.
Also worth noting is malinformation: genuine information shared with the intent of causing harm.
This can range from publishing someone’s private material out of spite to revenge porn, where the shift in context from private to public is the mark of malicious intent. Another case is attaching false details about where and when a genuine photograph was taken in order to mislead the viewer.
The phrase ‘fake news’ is often used as a blanket term, but it fails to differentiate between these various forms of false or misleading content. This widespread misuse of the term undermines public trust in journalism.
Public trust is affected when misleading stories mix with genuine reporting, diluting the impact of well-researched articles aimed at informing the public. When people are bombarded with mis- and disinformation, their faith in reliable sources falters.
So, given the stakes at play, developing strategies to counter online untruths is key.
The bottom line is that we need to promote media literacy. Empowering people to critically evaluate sources for their credibility can involve creating workshops or providing educational resources on detecting fake news and fact-checking techniques. AI-driven educational tools can help improve media literacy by providing users with real-time feedback on the credibility of the content they are viewing. These assistants can offer tips, warnings, and educational content to help users learn how to better identify misleading information.
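As a flavour of what that feedback could look like, here is a minimal sketch in Python; the `literacy_tips` helper and its keyword rules are purely illustrative stand-ins for the trained models a real assistant would use.

```python
def literacy_tips(article_text: str) -> list[str]:
    """Return simple, human-readable prompts a reader can act on.

    The keyword rules below are deliberately naive stand-ins for the
    trained models a real media-literacy assistant would use.
    """
    tips = []
    lowered = article_text.lower()
    # No attributed sources is a common red flag worth prompting on.
    if "according to" not in lowered and '"' not in article_text:
        tips.append("No quoted or attributed sources found. Who is making these claims?")
    # Heavy punctuation often accompanies sensationalised writing.
    if article_text.count("!") > 3:
        tips.append("Lots of exclamation marks can signal sensationalism.")
    # Claims about research should link to the underlying study.
    if "study" in lowered and "http" not in lowered:
        tips.append("A study is mentioned but not linked. Try to find the original.")
    return tips

print(literacy_tips("A new study PROVES the secret cure works!!!! Share now!"))
```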
Fact-checking organisations, news outlets, and social media platforms must collaborate to rapidly identify and debunk false claims. Many platforms have already begun integrating tools like pop-up warnings and flags on suspected disinformation. AI-powered bots can scan vast amounts of content across the internet in real time, using natural language processing (NLP) and machine learning to understand context and verify claims against trusted databases or fact-checking websites. Examples include Full Fact’s automated fact-checking tools.
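To illustrate the matching step only (this is not Full Fact’s system), here is a toy sketch that compares an incoming claim against a small, made-up database of already-checked statements using simple string similarity; production pipelines use NLP models and far larger verified databases.

```python
from difflib import SequenceMatcher

# Illustrative stand-in for a database of verified fact-checks.
FACT_CHECKS = {
    "5g towers spread viruses": "False: no such mechanism exists.",
    "drinking bleach cures infections": "False: bleach is toxic and cures nothing.",
}

def match_claim(claim: str, threshold: float = 0.6):
    """Return the verdict for the closest known claim, if similar enough.

    SequenceMatcher is a crude proxy for the semantic matching a real
    fact-checking pipeline performs with NLP models.
    """
    claim = claim.lower()
    best, best_score = None, 0.0
    for known, verdict in FACT_CHECKS.items():
        score = SequenceMatcher(None, claim, known).ratio()
        if score > best_score:
            best, best_score = verdict, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

print(match_claim("5G towers are spreading a virus"))
```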
News outlets should adhere to a stringent code of conduct when reporting the news, prioritising accuracy and avoiding sensationalism to prevent the spread of misinformation. Journalistic ethics are a hot-button discussion point amid rapid technological advancement, and ongoing education must keep these virtual problems connected to their human consequences. Sentiment analysis can help here, too: biased or emotionally charged language is often a hallmark of fake news, and AI tools can assess the tone and sentiment of articles to help determine whether they are attempting to manipulate reader emotions.
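As a rough sketch of the idea, the snippet below scores how much of a text’s wording comes from a tiny, hand-picked lexicon of emotionally loaded words; real sentiment tools rely on trained models rather than fixed word lists.

```python
import re

# Tiny illustrative lexicon; real sentiment tools score thousands of
# terms learned from labelled data rather than a fixed list.
LOADED_WORDS = {
    "outrageous", "disaster", "shocking", "destroyed",
    "scandal", "evil", "catastrophe", "betrayal",
}

def loaded_language_ratio(text: str) -> float:
    """Fraction of words drawn from the loaded-language lexicon.

    A high ratio suggests a piece may be playing on emotion rather
    than reporting facts; it is a prompt for scrutiny, not a verdict.
    """
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(word in LOADED_WORDS for word in words) / len(words)

print(loaded_language_ratio("A shocking scandal has destroyed public trust."))
```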
Enabling more online accountability is paramount, too. Tech companies can develop measures to hold users accountable for their actions, such as suspending or banning those who repeatedly spread falsehoods. Platforms like WikiTribune employ a combination of community effort and AI to fact-check and verify news stories. The AI aids in initial detection and analysis, while the human element provides nuanced understanding and verification.
With the rise of deepfakes and manipulated media, tools like Adobe’s Content Authenticity Initiative aim to provide a digital provenance for images and videos, tracking edits and origins to help determine authenticity. AI techniques are also being developed to detect subtle signs of manipulation in videos and images that humans might miss.
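One long-standing heuristic in this area, separate from the Content Authenticity Initiative’s provenance metadata, is error-level analysis: re-compress an image and look for regions whose compression error stands out. Here is a minimal sketch using the Pillow library, assuming a local file named photo.jpg; bright patches in the output are only a prompt for closer inspection, not proof of tampering.

```python
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Highlight regions whose JPEG compression error stands out.

    Edited-then-resaved regions often compress differently from the
    rest of the image; bright areas in the output warrant scrutiny.
    This is a rough heuristic, not proof of manipulation.
    """
    original = Image.open(path).convert("RGB")
    original.save("_ela_tmp.jpg", "JPEG", quality=quality)
    resaved = Image.open("_ela_tmp.jpg")
    diff = ImageChops.difference(original, resaved)
    # Rescale so the strongest differences become clearly visible.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, int(px * 255 / max_diff)))

error_level_analysis("photo.jpg").save("photo_ela.png")
```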
It’s important to be able to distinguish between reliable and unreliable sources to ensure that you’re getting accurate information.
The first step in spotting fake news is to check the source. Is it a reputable news outlet or a blog? Is the author of the article well-known and credible? The source of the information plays a significant role in its credibility.
Fake news often recycles outdated stories, so check the date of the article. If old news is being presented as current, that’s a sign the information may not be reliable.
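If a page exposes a machine-readable publication date, that check is trivial to automate; a sketch, assuming an ISO-format timestamp:

```python
from datetime import datetime, timezone

def is_stale(published_iso: str, max_age_days: int = 365) -> bool:
    """Flag articles older than a chosen threshold.

    Recycled old stories presented as breaking news are a common
    misinformation tactic; an old date alone is not proof of that,
    only a prompt to check whether the framing matches the timeline.
    """
    published = datetime.fromisoformat(published_iso)
    if published.tzinfo is None:
        published = published.replace(tzinfo=timezone.utc)
    age = datetime.now(timezone.utc) - published
    return age.days > max_age_days

print(is_stale("2020-03-14T09:00:00+00:00"))  # True: well over a year old
```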
Headlines are often designed to grab attention and may not always accurately represent the content of the article. If a headline seems too sensational or extreme, it may be a sign that the news is fake.
Don’t just skim through the article. Read it thoroughly and critically. Look for any inconsistencies or errors and verify the facts with other reliable sources.
A common tactic used by fake news sites is to mimic the URL of a reputable news outlet. Always double-check the URL before clicking on a link.
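That comparison is easy to automate, too. The sketch below flags hostnames that closely resemble, but don’t exactly match, a trusted outlet; the trusted list and similarity threshold here are purely illustrative.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Illustrative allow-list; a real tool would use a curated registry.
TRUSTED_DOMAINS = ["rnz.co.nz", "nzherald.co.nz", "stuff.co.nz", "bbc.com"]

def lookalike_domain(url: str, threshold: float = 0.85):
    """Return the trusted domain this URL appears to mimic, if any.

    A hostname that is very similar to, but not exactly, a trusted
    domain is a classic typosquatting red flag.
    """
    host = (urlparse(url).hostname or "").removeprefix("www.")
    if host in TRUSTED_DOMAINS:
        return None  # exact match: the genuine site
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, host, trusted).ratio() >= threshold:
            return trusted
    return None

print(lookalike_domain("https://www.nzheraId.co.nz/politics"))  # flags nzherald.co.nz
```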
Fake news often uses inflammatory language and makes bold, unsubstantiated claims. If the tone of the article seems too aggressive or the claims seem too good to be true, it may be fake news.
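Those warning signs, together with the sensational headlines discussed earlier, can be roughly scored in code. A minimal sketch, with an entirely illustrative phrase list; real detectors learn these patterns from labelled data rather than a fixed list.

```python
# Illustrative clickbait phrases; purely for demonstration.
CLICKBAIT_PHRASES = [
    "you won't believe", "what happens next", "the truth about",
    "doctors hate", "destroys", "shocking",
]

def sensationalism_score(headline: str) -> int:
    """Crude count of sensational features in a headline.

    Higher scores suggest the piece is chasing clicks; they say
    nothing definitive about whether its claims are false.
    """
    lowered = headline.lower()
    score = sum(phrase in lowered for phrase in CLICKBAIT_PHRASES)
    score += headline.count("!")                 # shouting punctuation
    words = headline.split()
    shouty = sum(w.isupper() and len(w) > 2 for w in words)
    if words and shouty / len(words) > 0.3:      # mostly ALL-CAPS words
        score += 1
    return score

print(sensationalism_score("SHOCKING: You won't believe what happens next!"))
```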
Several online tools can help you verify the accuracy of the information. These include fact-checking websites, browser extensions, and mobile apps.
Remember, it’s important to stay informed, but it’s equally important to stay critical. Don’t just accept information at face value. Always question, verify, and cross-check before sharing or believing.
Successive New Zealand governments have taken several steps to combat online misinformation and fake news. In addition to regulatory measures, New Zealand leaders promote industry-led mechanisms to improve the safety of social media platforms.
For instance, on July 25, 2022, New Zealand adopted the Aotearoa New Zealand Code of Practice for Online Safety and Harms, a new industry-led mechanism designed to provide guidance for social media platforms on enhancing safety.
One of the key initiatives is the “whole-of-society” approach to build understanding and resilience against the harms of disinformation. This approach involves multiple sectors of society, including government, media, technology companies, and the public, working together to combat disinformation.
Collaborations between industry professionals, government agencies, non-profit organisations, and individuals are vital in educating the public about the dangers of misinformation, both online and offline. The power of the community cannot be overstated.
Supporting research initiatives helps get behind the scenes to where mis- and disinformation emerge. Investing in studies that examine the causes, consequences, and solutions to the proliferation of misinformation can provide invaluable insights for devising effective strategies.
Armed with this knowledge of the differences between misinformation, disinformation, and fake news, as well as their societal impact, we must collectively strive to combat these threats in New Zealand’s digital space and beyond.
By raising awareness about these distinctions and holding ourselves responsible for sharing reliable content, we can support independent journalism while fostering a more accurate understanding of today’s complex issues.