Assessing current platforms’ attempts to curb misinformation

I researched two popular social media platforms to see what they are doing to limit misinformation, evaluated whether their policies appear to be working, and considered how they could improve.

The first platform I analyzed was Instagram, owned by Meta (formerly Facebook), which has roughly 2 billion users worldwide according to Statista. Instagram maintains a page describing some of the ways it is trying to stop misinformation on its site. In 2018 the company acknowledged that misinformation is a problem for society and bad for its business. In 2019, on its About page, it announced that it would work with third-party fact-checkers reviewing content both in the U.S. and globally. If something is determined to be partially or fully inaccurate, it is labeled so that viewers know that what they are seeing may not be accurate. Instagram stated that if the same content is shared on Facebook, it is tagged there as well.

Instagram also added the ability for users to report content they think might be false. According to Engadget, these user reports help train the platform's artificial intelligence to better identify this type of content.

The fact-checkers are third parties certified through the International Fact-Checking Network, an organization that is supposed to be neutral. Under the policy, content is reviewed by technology and/or a human; false information is labeled and kept from appearing high in people's feeds; and accounts that are consistently flagged as offenders are disabled from sharing content for a certain period of time.

Based on my use of Instagram, there is still a lot of misleading information being shared. I see news headlines where someone who just scrolls past will walk away believing something inaccurate, while a person who clicks through to read the story will come away with a more accurate understanding, which may be very different from the impression the headline gives. For example, a BBC story covered a man who was denied a heart transplant because he refuses to be vaccinated against COVID-19; the Instagram headline reads "Man denied heart transplant by US hospital as he is not vaccinated against Covid-19". I am not saying this is right or wrong, but once the reader moves past the headline to the article, they have more information to weigh: there are over 100,000 people on the organ transplant list, and a decision has to be made about who receives an organ, with priority given to the person who has the best chance of survival. What is missing is a human checking the information that is posted. Perhaps AI can eventually be taught to recognize the difference between a headline and the content of the actual article; until then, I think humans need to review this information.

The other social media platform is one I don't use very often, Twitter. Twitter has about 436 million users according to a chart published by Statista. According to a 2021 shareholder report, the company's purchase of Revue was intended to make it easier for people to publish content. To help combat misinformation, Twitter launched a pilot program encouraging U.S. users to report tweets that are, or are potentially, misleading, and it began applying labels to COVID-19-related misinformation. On its rules and policies page, Twitter has a subpage on how it handles medical misinformation: users who violate the policy can be required to delete their tweets, and under a strike system an account with five or more strikes can be permanently suspended. In October 2021, Twitter also added a rule prohibiting the use of the platform to interfere with civic processes.

I think Twitter should have added the ability for users to flag content sooner than 2021. According to Bloomberg, Twitter is using the reported tweets to help it learn to identify the types of misinformation being shared, since as of August 2021 it had not implemented a robust fact-checking system.

I see comments flagged with warnings such as "may contain offensive content." I appreciate these notifications because they tell me that if I read further, the comments might be inaccurate or offensive.

[Image: example of Twitter's offensive-content warning]

Overall, I think we want to allow free speech on social media, and there are many questions to ask and consider, because I don't believe we want censorship. For example, should former president Donald Trump's account have been suspended sooner than it was? I think some companies, like Twitter, were late to the game with policies for handling certain types of information. Marking accounts and posts with flags or labels is important: it informs readers that the information might not be accurate and should be researched further. Prompting readers to do their own research matters, because without being prompted to question the information, many will not.
