THIS REPORT HAS BEEN PROVIDED TO THE FBI, FINCEN, FTC, FEC, SEC, OGE, DOJ, INTERPOL AND CONGRESSIONAL INVESTIGATORS (THE MASTER REPORT IS OVER 2000 PAGES). SEE WHY:
TOXIC COMPANIES (LINK)
It’s nice to get loads of advertising dollars, but if you are selling your advertisers pure air that no actual people live in… that might be a problem. If you buy millions of fake users from Russian and Chinese click-farms, then your big dot-com social network might just be a pure load of BULLSHIT! FACEBOOK AND GOOGLE ALLOW, SEEK AND OVERLOOK FAKE USERS BECAUSE THEY PROFIT OFF OF SELLING FAKE USERS AS “REAL PEOPLE”!
HERE IS ANOTHER PERSON WITH A THOUGHT ON THESE GUYS: 190708-Filed Complaint Loomer Corruption, Bribery, Payola, Sex Trafficking, Politicians – THE TECH SEX CULT AND ABUSES
- Facebook’s recent transparency report revealed that it took down 5.4 billion accounts in 2019 thus far, a huge jump from 2018’s 3.3 billion removals.
- Facebook claims that this jump in take-downs is due to improved methods for identifying fake accounts, but it has to be assumed that some are still slipping through the cracks.
- What are the primary activities of these fake accounts?
Recently, Facebook CEO Mark Zuckerberg revealed that the social media company’s main platform has removed 5.4 billion fake accounts this year, dwarfing the 3.3 billion it removed in all of 2018. Many of these accounts were flagged as fake simply because of user misclassification — Facebook does not permit nonhuman entities to have accounts, only pages. However, a significant number of the banned accounts were malicious, representing scammers or distributors of fake news. In total, Facebook’s recent transparency report estimated that 5 percent of its current monthly active users are fake; outsiders, however, estimate that figure to be much higher — potentially as high as 20 percent.
Facebook established its transparency report in 2013 as a response, in part, to Edward Snowden’s revelations of that year, in which he leaked information regarding widespread government surveillance programs. The transparency report offered a means of letting users know how frequently Facebook received government requests for data.
In recent years, however, more attention has been paid to the report’s focus on fake accounts and community standards as the social media company has been under fire for its role in the spread of fake news, the Cambridge Analytica scandal, and Russian interference in the 2016 election.
The huge spike in fake accounts that Facebook has taken down this year alone is partly due to improved methods for identifying them. Facebook claims that more than 99 percent of fake accounts on its platform are now automatically taken down within minutes of their creation, before any user reports them as fake. This doesn’t necessarily apply to past years’ figures, though.
Nevertheless, according to the company, the majority of malicious accounts created this year, in particular, don’t have the opportunity to take advantage of unsuspecting users. However, it’s still likely that many are slipping through the cracks.
With the 2020 elections and the new census on the horizon, a great deal of attention has been paid to how Facebook handles potentially manipulative content on its platform. Twitter CEO Jack Dorsey recently announced that Twitter would no longer accept political ads, while Zuckerberg affirmed that Facebook would continue to do so and does not fact-check the content in those ads. This decision, coupled with Facebook’s fake accounts issue, has generated significant criticism over the social media company’s impact on public discourse.
What are these fake accounts actually doing?
One high-profile use case for fake accounts is to spread propaganda in an effort to influence political outcomes. For instance, many fake social media accounts were created by Russian agents prior to the 2016 election to drive traffic towards the DCLeaks website, a front for the Russian espionage group known as Fancy Bear that hosted the stolen personal information of various prominent politicians. While this site contained real information intended for propaganda purposes, other accounts actively spread fake news stories. Leading up to the 2016 election, one fake news headline, for example, read “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide,” while another asserted that people were using food stamps to buy pot in Colorado.
These challenges aren’t limited to just the U.S., either. A great deal of the online support for the far-right German party AfD has been found to be connected to suspicious accounts, and 30,000 fake accounts were removed in France prior to the 2017 election.
However, not all fake accounts are made by state actors with the purpose of spreading disinformation. Romance scammers often impersonate attractive individuals — frequently military members, oddly enough — to gain others’ trust before announcing a fictional emergency. Then, the scammers ask their confidants for money to help out. Other scams exist, but they all follow the same basic formula: gain a target’s trust; mine them for useful information regarding, for example, their background, their hopes, their family, their problems; and then manipulate them into giving the scammer money.
It’s hard to imagine that Facebook will be able to crack down on every fake account out there. That’s why the best thing we can do in the face of these adversaries is to better educate ourselves on basic cybersecurity practices, develop our critical thinking skills to evaluate fake news, and learn to be more suspicious of others online. Unfortunately, the sheer scale of Facebook users and fake accounts means that at least some individuals will fall victim to disinformation campaigns and scams from time to time.
Facebook-Created Fake Users Proved Algorithm Radicalized People in 2019
Leaked documents shared with the NY Times, CNN, NBC and others also show Facebook was unprepared for the spread of lies and conspiracy theories about the 2020 election
In 2019, researchers at Facebook started creating fake accounts as part of an experiment to test how the social media app’s algorithm promoted disinformation and polarization. The result: Facebook ended up with incontrovertible proof that it contributed significantly to radicalization, particularly of conservatives.
Worse, almost as soon as it was happening, Facebook employees were aware that the algorithm was increasing the scope and reach of right-wing lies and conspiracy theories about the 2020 election in the weeks after Joe Biden’s victory over Donald Trump, but worried that Facebook wasn’t doing remotely enough to stop it.
The news comes from several damning articles published Friday night by a consortium of media outlets who received leaked internal Facebook documents from attorneys representing whistleblower Frances Haugen. This includes NBC, CNN and the New York Times.
One example is “Carol Smith,” a “conservative mom” from South Carolina who joined Facebook in 2019. In reality, Carol was a fake account created by one of the researchers. And according to NBC, “in just two days,” “Carol” started receiving recommendations to join QAnon-affiliated groups.
The researcher running the “Carol” account didn’t respond to those prompts, NBC said, but Facebook nevertheless continued to push extremist right-wing content to the account. The “Carol” account’s feed soon became “a barrage of extreme, conspiratorial, and graphic content,” researchers said.
The information was presented to Facebook leaders in a report called “Carol’s Journey to QAnon.”
Meanwhile, documents show that Facebook employees were attempting to bring attention to the spread of propaganda related to “Stop the Steal,” the movement based around lies and conspiracy theories that the 2020 election was stolen by Democrats.
According to the Times, just 6 days after the 2020 election one of Facebook’s data scientists had discovered “that 10 percent of all U.S. views of political material… were of posts that alleged the vote was fraudulent.”
And according to CNN, the Times and NBC, similar warnings were raised by multiple other employees.
A subsequent analysis of the spread of such propaganda found “the policies and procedures Facebook had in place were simply not up to the task of slowing, much less halting, the ‘meteoric’ growth of Stop the Steal,” according to CNN.
Part of the problem, the leaked documents explain, is that Facebook insisted on treating every instance of conspiratorial or extremist content individually, instead of as part of a larger movement. “Almost all of the fastest growing FB Groups were Stop the Steal during their peak growth. Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold,” internal documents said.
According to all three outlets, Facebook did eventually attempt to do something about these problems, but many employees worried the effort was too little, and too late. CNN expressly compared this to how Facebook failed to address the problem of “coordinated inauthentic behavior,” primarily from Russian sources, during the 2016 election.
However, the internal documents do confirm that after the Jan. 6 insurrection by a mob of Trump supporters attempting to overthrow the government, Facebook took direct action to stop the growth of affiliated groups.