On December 8, Google announced that it is expanding its search engine’s efforts to ban extremist content from web pages. The problem is that there is no clear definition of what constitutes “extreme” content, which forces sites to rely on YouTube or other platforms to maintain credibility.
This issue has been around for years as websites try to navigate the ever-changing landscape of digital censorship. Sites like Facebook, Twitter, and even Google itself have tried to suppress politically biased articles and information.
And it’s not just mainstream media sources being affected – independent news writers can also be threatened by extremist groups using social media hacks.
Being able to reach an audience is one of the most important things about writing. By having an online presence, you’re making your work available to hundreds (or thousands) of people who may never have heard of you.
One way to achieve this is through blogging services such as WordPress.com; alternatively, you can sign up for a free account on YouTube.
But here’s the thing: nobody reads blogs anymore. At least, not older people’s blogs. Young people don’t read them unless they’re part of a community that shares the same interests.
You might get someone viewing your article on Huffington Post, but you need to promote it on Reddit and 4Chan too. You need to push.
"We've been thinking for a while that we need to have a way to surface extremist content," said John Eyston, senior director of editorial excellence at Bing, during his talk.
But how exactly do you go about doing that? How can you determine if something is extremist? And what makes a particular set of ideas extreme?
"There are certain standards that we use here at Microsoft with respect to determining whether a piece of content is propaganda or not," explained Mr. Eyston.
To be classified as propaganda, a story must meet three criteria. The first is originality: it has to offer a viewpoint that has never been heard before. Kalvin Arally called this the ‘originality test,’ under which anything that merely recites history amounts to some variation of ‘this has already been said.’
Another standard is consistency: to qualify as unique, your idea must be presented in an interesting and creative way, without facts that could offend or confuse anyone.
“Consistency isn’t just a matter of saying the same thing over and over, but also the tone and manner in which you say it,” stated Ms. Kalincik.
Mr. Eyston concluded by stating that he believed the team was working toward having a tool available within the next few months that would help surface extremist content. He did note, however, that there were no plans to remove older stories from the search results.
It’s not just social media sites that are having trouble managing comments. Bloggers run into this issue when they try to filter bad comments by adding words like “nonsense” to a keyword blocklist and discarding anything that matches.
Some commenters are blind or have disabilities; others may be trolls trying to sow dissent for fun and profit.
These issues can easily happen on a site with hundreds or even thousands of comments. So how do web professionals deal with these problems?
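The naive keyword approach mentioned above can be sketched in a few lines of Python (the blocklist and sample comments are invented for illustration). Its well-known weakness is that it also catches legitimate comments that merely quote a blocked word:

```python
# Hypothetical sketch of naive keyword-based comment filtering;
# the blocklist and sample comments are made up for illustration.
BLOCKLIST = {"nonsense", "spam", "scam"}

def is_flagged(comment: str) -> bool:
    """Return True if the comment contains any blocklisted word."""
    words = {word.strip(".,!?\"'").lower() for word in comment.split()}
    return not BLOCKLIST.isdisjoint(words)

comments = [
    "Great article, thanks!",
    "This is nonsense and you know it.",
]
kept = [c for c in comments if not is_flagged(c)]  # only the first survives
```

Real moderation systems layer many more signals (sender reputation, rate limits, human review) on top of simple word matching, precisely because a blocklist alone misfires so often.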
Content management systems (CMS) make it easy to manage all your content online, including comments. But what happens when the people running the CMS are not responsible enough to handle things like spam and malicious code?
They ask you to use third-party plugins or services to control comment ratings and moderation. However, if you use one of these tools, you lose some degree of control over the coding and markup processes associated with your website.
You also open yourself up to attacks from hackers who specialize in exploiting comment systems.
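One common defense against comment-based attacks, independent of any particular CMS or plugin, is to escape user input before rendering it. A minimal sketch (the helper name is our own):

```python
import html

def render_comment(raw: str) -> str:
    """Escape HTML special characters so a comment cannot inject markup."""
    return html.escape(raw)

attack = '<script>alert("pwned")</script>'
safe = render_comment(attack)  # the <script> tag is neutralized
```

Escaping on output is only one layer; production sites typically combine it with input validation and a content security policy.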
Let’s take a look at some of the major changes that Google has made to their algorithms in recent years.
Google claims they have not changed the algorithm itself, only how they describe what it is doing.
What does that mean? It means that rather than web pages with extremist content being banned outright or penalized in one way or another, Google now accepts that these websites exist online and expects people to find ways to navigate around them.
Here are some examples of what sites used to do and how they were labeled:
- “Site X is bad” because of link sharing or spammy content.
- “Site Y is terrible” because of all the negative reviews about it.
- “That site sucks” because it crashes frequently or doesn’t work at all.
Webmasters complained about these labels and decided there had to be a better way to deal with these issues.
Now, more and more web companies are coming forward with policies stating that although links may matter for ranking, they shouldn’t determine whether or not someone can access certain information on a website.
These lists only determine where you will find listings for businesses like auto repair shops, estate agents, political candidates, food trucks, etc.
It makes sense that search engines don’t want to promote places that provide medical services, especially if there are many cheaper options available.
There are few ways to promote extremist content online, especially via major social media sites like YouTube or Facebook. That is because both companies have tightened their guidelines in recent years to prevent spam and commercial material.
However, there are still some methods that can be used to reach an audience through Twitter, Instagram, and other platforms.
One such method is posting anonymous tweets from hacked or throwaway accounts. This technique lets messages be sent from many different accounts: numerous fake profiles are created with random letters and numbers in place of a real handle and avatar, and the same approach applies to hashtags on Instagram.
The messages spread this way are hard to trace because they use coined terms not found elsewhere on the internet. People who receive these tweets do not know whether they were sent from inside or outside the company, which eliminates the risk of retribution against the senders.
This way, extremist content reaches millions of users without going through traditional channels. It has become one of the most effective means of spreading extremism worldwide.
Not too long ago, I assumed that web pages with high rankings by Google and other search engines were the most accurate and trustworthy.
After all, they had to be correct to get ranked so highly.
But over time, I noticed how often non-experts would ask me about certain topics/articles (and sometimes links offered within those articles) only to become confused or concerned at what they heard.
We can learn several things from this. First, not everyone should rely on searches for important information. Second, it’s possible that our own bias makes us believe, more strongly than reality warrants, that some content is useful or good, which in turn influences how we feel about that content (its ranking and the article itself).
Third, people need to understand that trying to game the system through artificial clicks may work for a few months (or even years), but will likely hurt your website's reputation in the end.
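One reason artificial clicks stop working is that they are easy to flag statistically. A hedged sketch of the idea (the log format and threshold are invented; real systems weigh many more signals, such as timing, user agents, and session behavior):

```python
from collections import Counter

def suspicious_ips(click_log, threshold=100):
    """Flag IP addresses whose click count exceeds a daily threshold.

    click_log is a list of (ip, url) pairs. This raw count is a toy
    heuristic; production fraud detection combines many signals.
    """
    counts = Counter(ip for ip, _url in click_log)
    return {ip for ip, n in counts.items() if n > threshold}

log = [("10.0.0.1", "/page")] * 150 + [("192.0.2.7", "/page")] * 5
flagged = suspicious_ips(log)  # only the high-volume IP is flagged
```

Once a source is flagged, its clicks can be discounted retroactively, which is why click inflation tends to backfire over time.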
While there’s not much you can do about extreme content, there are some steps you can take to improve your ratings for other content.
There are two main types of people who contribute to extremist views: those who seek out violent or hateful imagery and rhetoric, and those who prefer the anonymity of social media.
While it is impossible to prevent all exposure to extremist content, working to avoid social media sites such as Facebook (and their user data) is a starting point.
Google+ and YouTube are also big contributors to extremist content, so setting up alternative accounts is another way to reduce exposure to this kind of material.
Although we all want safe, secure websites, this is not always possible when you have an online presence. There are people out there who do not mind trying to hack into your site for their own enjoyment.
While most hackers today try to break in for fun, hacking was once used mainly to collect money from victims. Nowadays, hackers can make money by selling information such as credit card numbers and by taking control of accounts.
Search engines also list sites that host extremist content. While Google has made efforts to remove violent or controversial images and videos, they still allow some pages to post offensive material without any intervention.
Google claims that their search results are unbiased and don’t promote specific political views, but studies show otherwise. A 2013 study found that searches about politics were twice as likely to lead to right-wing news sources as to left-wing ones.
Other studies have shown that social media tends to favor liberals over conservatives. Twitter has even been known to block users more often due to their opinions being too conservative.
Facebook has liberal algorithms and moderators, while YouTube favors leftist channels and removes conservative videos. All of these things put together create a large bias towards liberalism.
Now, I’m sure you take down content that could be considered extremist very quickly, but do not think that it won’t affect your ranking. When people search for terms related to extremism, your page will often have a shorter shelf life than pages filled with other topics.
People are becoming increasingly aware of the need to combat terrorism and violent extremism. Moreover, social media has enabled users to rapidly share videos and articles across platforms. Pages hosting or promoting extremist views can become poisoned assets that no longer serve their intended purpose, even over relatively short periods of time.
According to one study, nearly half of all terrorists worldwide were influenced by extremist propaganda online. The research was conducted by the University of New Haven and involved more than 1,000 participants from around the world.
That said, there is still some debate as to how influential the internet is in spreading radical ideology. However, we do know that web searches are typically the first step for people seeking information about terrorist groups like ISIS.