It’s no secret that the internet has long been a refuge for those with extremist views or ill intent toward others. But pockets of terrorist ideology aren’t relegated solely to the dark web or clandestine forums. Facebook has been front and center in the news lately as a hangout for extremists to spread their message, and many people believe the tech giant isn’t doing enough to stop it. In light of this, Facebook is stepping up to the plate to defend against extremist content in unexpected ways: by using a combination of artificial intelligence and human monitors to identify and filter out offensive content.
From now on, images, words, and videos that depict extremism (e.g., videos of ISIL members beheading people) will be detected and blocked from distribution before they are even uploaded. The technology in question has historically been used to filter out a different type of content: child pornography. Amid growing pressure from governments worldwide, Facebook is deploying advanced AI with image- and language-recognition capabilities that have proven helpful in the past for keeping such objectionable images off its platform, as well as off other sites like YouTube. However, there has been some hesitance about applying this technology to a less clear-cut purpose than in the past, as this is still uncharted territory for wide-ranging AI systems.
Turning Up the Pressure
This security initiative didn’t come out of thin air. Facebook had long been in the hot seat for failing to do much when it came to keeping terrorist propaganda and recruitment messages off its site. As recently as April 2017, the German government reportedly threatened to fine Facebook $53 million USD, including a $5.3 million penalty levied directly on its chief representative in Germany, if it did not remove extremist content, fake news, hate speech, and abusive posts from its site.
Facebook has opted not to comment publicly on the situation in Germany, but it has released statements noting that, “in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online.” While the UK government praised these new efforts, it made clear that Facebook must go even further, stating, “This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place.”
In a recent blog post, Facebook described the technologies and methods they would utilize to parse out potential terrorist activity on their site. Stating that the goal of the artificial intelligence project was to find terrorist-related content immediately, they explained that they would take the following steps:
Image Matching: Any pro-terrorist videos or photos that users try to upload will be directly detected by AI software and matched to a database of previously removed content before they’re even on the platform.
Language Understanding: Facebook’s AI system analyzes text that may be advocating terrorism. The company is currently experimenting with a program that studies text previously removed for advocating terrorism and violence. That analysis feeds an algorithm that continuously learns how to filter out the language favored by organizations like ISIS and Al Qaeda.
Removing Terrorist Clusters: Radical terrorists are not usually “lone wolves” and tend to operate in clusters. Pages, groups, posts, profiles and comment threads that show traits of pro-terrorism will be identified.
Recidivism: Facebook is working to detect new fake accounts created by known problem users. With this, they will drastically reduce the length of time such repeat offenders are on Facebook. Of course, this work is never done because it is adversarial and the terrorists will evolve their methods as time goes on. But Facebook hopes to stay one step ahead.
Cross-Platform Collaboration: Facebook will also work on systems to enable action across all of their platforms, including WhatsApp and Instagram. While these apps collect less data, they hope that they will still be able to glean valuable safety information.
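The image-matching step above can be sketched in simplified form: compute a fingerprint of each upload and check it against a database of fingerprints from previously removed content, rejecting matches before they reach the platform. This is a minimal illustration, not Facebook’s actual implementation: it uses an exact SHA-256 hash as a stand-in, whereas production systems rely on perceptual hashing (e.g., PhotoDNA-style) that also catches resized or re-encoded copies. The `KNOWN_REMOVED_HASHES` set here is a hypothetical placeholder.

```python
import hashlib

# Hypothetical database of fingerprints taken from previously removed content.
# (This entry is the SHA-256 digest of the bytes b"test".)
KNOWN_REMOVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint. Real systems use perceptual hashes so that
    visually similar images match even after cropping or compression."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches previously removed content."""
    return fingerprint(image_bytes) in KNOWN_REMOVED_HASHES

print(should_block_upload(b"test"))      # True: matches the database entry
print(should_block_upload(b"new image")) # False: no match, upload proceeds
```

The key design point is that the check happens at upload time, before distribution, which is exactly the “prevented from being uploaded in the first place” behavior the UK government called for.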
The Human Aspect
Despite these efforts, AI and machine learning aren’t a catch-all solution for stopping terrorists. For example, algorithms cannot yet determine on their own whether an image of an ISIS flag is part of a legitimate news story or a piece of recruitment propaganda. With this in mind, Facebook expects to call upon its user community and anti-terrorism specialists to help identify, analyze, and report threatening content. The company also currently employs 150 people whose sole job is to counter terrorist content on the platform.
In addition to technological advances in AI and human collaboration, Facebook says it is committed to forming partnerships with other organizations and institutions to counter terrorism online and in the real world. These partnerships will involve alliances with other corporations, government agencies, NGOs, and counterspeech programs to recognize extremism and prevent terrorism.
AI Careers Outlook at Facebook
With Facebook’s renewed focus on artificial intelligence in light of terrorism, the company will likely be expanding its arsenal of AI professionals. In a previous Paysa blog, we discussed what it takes to break into and succeed in this challenging field, and there is no doubt that Facebook will be recruiting only top candidates to fill these roles.
AI is an ever-changing specialty where agility is crucial for success. According to Paysa data, Facebook’s AI engineers are highly educated, with 19% of them holding doctorates and a further 46% holding master’s degrees.
Not only are Facebook’s AI engineers educated, but they also command a variety of skills and are continuously learning. According to data gathered by Paysa from 87 Facebook employees, AI engineers are expected to know languages and technologies such as Java, Linux, and C++, among several others.
Considering the competitiveness of the AI field and Facebook’s selectiveness when it comes to talent management, recruits looking to break into this company are wise to focus on academics and real-world experience. Anti-terrorism is just one way in which Facebook is utilizing AI. According to Facebook AI Research (FAIR), AI researchers are an integral part of the company, working to:
“[Develop] systems with human level intelligence by advancing the longer-term academic problems surrounding AI. Our research covers the full spectrum of topics related to AI, and to deriving knowledge from data: theory, algorithms, applications, software infrastructure and hardware infrastructure. Long-term objectives of understanding intelligence and building intelligent machines are bold and ambitious, and we know that making significant progress towards […] That’s why we actively engage with the research community through publications, open source software, participation in technical conferences and workshops, and collaborations with colleagues in academia.”
The Bottom Line
For the foreseeable future, Facebook will continue to seek AI-savvy talent and will be willing to pay top dollar to retain it. According to our most recent study, Paysa data shows that Facebook is prepared to pay AI engineers a market salary of $169K a year.
Large companies are always looking for AI-savvy talent and are willing to pay top dollar for professionals with these skills. If you’re considering a job in AI engineering, let Paysa be your go-to resource. Paysa offers personalized job recommendations, skill suggestions, and compensation information to help you navigate a career change or a raise negotiation.
Sign up at paysa.com today to get started.