Facebook remains the largest social networking platform in the world. At least 1.5 billion
users communicate with each other on Facebook every day, far more active users than any
other social network. Facebook's commitment to being a safe platform means it does not hesitate to
take firm action against negative content that violates its rules, for example by
removing the content or temporarily disabling the account. Guy Rosen, Facebook's
VP of Product Management, said the company often gets questions about how it
decides what is allowed to remain on its platform.
The report covers the period
between October 2017 and March 2018 and addresses six areas:
graphic violence, adult nudity and
sexual activity, terrorist propaganda, hate speech, spam, and fake
accounts. To clear the platform of this inappropriate content, Facebook relies
on artificial intelligence (AI). Using it,
Facebook has removed tens of millions of pieces of negative content and
hundreds of millions of fake accounts from its platform. Dave Geraghty,
Facebook's Director of Global Operations, said this shows how seriously
Facebook takes minimizing negative influences on its users.
Last week Facebook also
announced that it had removed 12 million pieces of terrorism-related content from its platform,
up sharply from about 3 million in the previous
six-month period. The announcement came
a day after The New York Times reported on efforts by top
Facebook officials to deflect negative news about the
company. Among other things, the report describes how Facebook hired a public
relations firm in the US to disparage competing companies such as
Apple and Google.
Between April and September this year, Facebook removed
about 1.5 billion fake accounts from its network and took down 2.1 billion
pieces of spam. These striking numbers come from
its Community Standards Report, which describes how Facebook
has handled issues such as adult nudity, hate speech, terrorist propaganda,
graphic and violent content, and fake
accounts. According to Facebook, many of the
fake accounts are a direct result of cyber attacks.
Facebook also removed 21 million pieces of adult nudity and sexual
activity content in the first quarter, 96% of which were found and flagged by AI before users reported them. Rosen
estimates that for every 10,000 pieces of content viewed on Facebook, 7 to 9 views are of
content that violates its pornography and adult nudity standards. Facebook
also removed 3.5 million pieces of violent content, 86% of which had been
identified before being reported. As for hate speech, the company led by
Mark Zuckerberg removed 2.5 million pieces of content in the first three
months of 2018, only 38% of which were flagged by its technology.
The social network claims to have become far better at recognizing and
removing infringing content before people see and
report it; according to Facebook, 95.9% of infringing content is handled before it is reported. Facebook
remains under scrutiny for false news and propaganda, and it has also been criticized for
a recent security breach that affected 50 million accounts.
Facebook has also recorded as many as 583 million fake accounts and 837
million pieces of spam. On top of that, it removed 21 million
pornographic uploads and photos showing sexual activity, 96% of which were
detected by Facebook's algorithms before other
users reported them. Hate speech and terrorist
propaganda have been a focus as well: more than 1.9 million
uploads of terrorist propaganda were removed, 99.5% of them detected by machines before
being reported by other users, while 2.5 million pieces of hate speech were also
removed, but only 38% were flagged automatically.
Facebook is also increasingly blocking negative content at the
request of national governments; so far it has
carried out the most blocking in India. In addition, a group of NGOs working in the field of
human rights has asked Facebook to release data on how often it restores
content that was deleted in error. Facebook has not yet provided
the requested data, but the company says it is disclosing more
information than it did last year.
The lower rate for hate speech reflects how difficult such expressions are for
automated scanning to detect. Besides using machines, therefore, Facebook
also works with review teams from various parts of the world to evaluate
user-written content in many languages.