Social media platforms such as Facebook, Twitter, and WhatsApp are often blamed for the seemingly unstoppable spread of factually inaccurate “fake” news among large, credulous audiences worldwide. Recent trends, however, suggest that search engines such as Google are just as culpable — however unwittingly — in this growing phenomenon.
Understandably so! After all, Google, Bing, and other major search engines often serve as the vehicle for propagandists and other bad actors to push their agendas into the browsers of unsuspecting users. In recent months, however, Google has been stepping up its game to halt the spread of fake news through its search engine. Encouraged by the success of its earlier efforts, the company is now making public the major changes it is implementing to make its search system resistant to questionable content — at least, to the extent that is practically possible.
Interestingly, to counter the epidemic of fake news and inappropriate content, the Mountain View, California-based search giant will be resorting to manual monitoring (read: human assistance). That approach is a marked departure for a search feature that relies on machine learning and algorithms. As history has shown time and again, however, bots and search engine crawlers are not always as effective as the good old human mind.
With that in mind, Google is increasingly using human evaluators to assess the accuracy and quality of its search results in general. While these evaluators are not meant to directly affect the rankings of individual web pages, they flag search results as and when they deem necessary.
Beyond that, Google is now asking its users to lend a helping hand to the cause. To make this easier, the company has introduced several ways to give feedback and flag inappropriate search results.