
Lies are now the standard response!

August 2024

WARNING - The following article discusses pornographic and online child exploitation material. Reader discretion is advised.

Antigone Davis, Vice President and Global Head of Safety at Meta, recently gave evidence to the Joint Select Committee on Social Media and Australian Society.

When questioned by Senator Sarah Henderson about Meta's President of Global Affairs, Nick Clegg, touting a 96% efficiency rate in stopping juveniles' access to pornography, Ms Davis made the following statement:

    "I'm not sure about pornography. We don't have pornography on our site (Meta), so let me just correct you on that statement."

Given my extensive experience in dealing with online harm on Meta's platforms, I was mystified by such an openly misleading statement. I therefore set out to test it, conducting an experiment on Instagram over a 10-hour period.

In the experiment, I initially used an Instagram account which represented the user as a male aged 15 years.

PORNOGRAPHY ON INSTAGRAM

Over a period of days, I spent 10 hours scrolling the Reels feature of Instagram on the juvenile account. During that time, I identified the following:
    • 860 active user accounts posting pornographic content and sexually explicit material.
Of these 860 accounts:
    • 95 were also posting pornographic video content.
    • 34 were posting images or videos of Child Exploitation Material, including the sexual abuse of children.
    • 16 were posting images or videos of bestiality.

CONTENT FEED ALGORITHM

In the first hour of scrolling Reels, I was exposed to accounts containing graphic adult content every 3 minutes on average.

By the second hour, that interval had shortened to an account containing pornographic content every 90 seconds on average.

By the fifth hour, I was being exposed to separate accounts displaying adult content every 28 seconds on average, a rate which continued for the remaining 5 hours of the experiment.
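To put those intervals in perspective, the following minimal Python sketch (using only the average intervals reported above) converts them into approximate exposures per hour of scrolling:

    # Convert the average exposure intervals reported above into
    # approximate exposures per hour of scrolling.
    intervals_seconds = {
        "Hour 1": 3 * 60,     # one exposure every 3 minutes
        "Hour 2": 90,         # one exposure every 90 seconds
        "Hours 5-10": 28,     # one exposure every 28 seconds
    }

    for period, interval in intervals_seconds.items():
        per_hour = 3600 / interval
        print(f"{period}: roughly {per_hour:.0f} exposures per hour")

    # Output:
    # Hour 1: roughly 20 exposures per hour
    # Hour 2: roughly 40 exposures per hour
    # Hours 5-10: roughly 129 exposures per hour

On those averages, the hourly exposure rate rose more than sixfold between the first hour and the fifth.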

From the fifth hour through to the tenth hour:
    • 64% of accounts displayed via the Reels feed contained pornography or adult sexual content.
    • 21% of accounts displayed openly sexual content, not including pornographic content.
    • 10% of accounts displayed violent content, including gun violence, physical assaults and graphic vehicle crashes.
    • 5% of accounts displayed content which was general in nature, separate from content pushed by the algorithm.

ACCOUNT REPORTING

I reported all 860 accounts identified as posting pornographic content, using both the experimental account and a number of other Instagram accounts under my control. Of those 860 accounts:
    • 11 (1.28%) were removed after being reported 1 to 5 times.
    • 82 (9.5%) were removed after being reported 5 to 10 times.
    • 131 (15.2%) were removed after being reported 10 to 20 times.
    • 202 (23.4%) were removed after being reported 20+ times.
    • 243 (28.2%), when reported, returned a message advising that the account "Does not go against our Community Guidelines" and remained active on the network.
    • 191 (22%) were reported without any response from Instagram and remained active on the network.
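For transparency, the following minimal Python sketch tallies the outcomes listed above, confirming that the six categories account for all 860 reported accounts and that more than half were never removed:

    # Tally the reporting outcomes listed above and check that the
    # six categories account for all 860 reported accounts.
    outcomes = {
        "removed after 1-5 reports": 11,
        "removed after 5-10 reports": 82,
        "removed after 10-20 reports": 131,
        "removed after 20+ reports": 202,
        "'does not go against our Community Guidelines'": 243,
        "no response, still active": 191,
    }

    total = sum(outcomes.values())
    print(f"total reported accounts: {total}")  # 860

    for outcome, count in outcomes.items():
        print(f"{outcome}: {count} ({count / total:.1%})")

    removed = 11 + 82 + 131 + 202
    still_active = total - removed
    print(f"removed overall: {removed} ({removed / total:.1%})")        # 426 (49.5%)
    print(f"still active:    {still_active} ({still_active / total:.1%})")  # 434 (50.5%)

On these figures, 434 of the 860 reported accounts (just over 50%) were never removed, which is the basis for point 4 of the summary below.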

BLOCKING OF MY ORIGINAL ACCOUNT

After identifying the pornographic accounts above, I began reporting them with the same Instagram account used in the experiment. After 68 report submissions, that account was suspended for "suspicious activity". I have been unable to regain access to that account.

EXPERIMENT SUMMARY

    1. There is pornography on Instagram, in contradiction of the statement made by Antigone Davis and in direct contravention of Meta’s Community Guidelines.
    2. There is nudity on Instagram, in direct contravention of Meta’s Community Guidelines.
    3. Juvenile Instagram users are exposed to an aggressive content algorithm which offers no adequate content variation or warning regarding continued exposure to harmful content. This has clear potential to expose a user to content addiction, mental health harms and abuse.
    4. Over 50% of reported content that is in clear breach of Meta’s Community Guidelines and User Policy is not removed or goes ignored.
    5. Harmful content on Instagram is rarely removed when reported a small number of times, with the removal rate increasing the more times an account is reported.
    6. Accounts used to aggressively report inappropriate, illegal and harmful content on Instagram are suspended by the network.

Juvenile exposure to pornography on Meta's platforms remains a concern for all professionals in the realm of online safety. Throwaway statements from executives like Antigone Davis do nothing to build trust in Meta’s alleged efforts to combat online harm. Instead, they reflect a clear attitude of indifference and a lack of true understanding of exactly what is happening on their network.

Meta can no longer be trusted to run their network under current laws and regulations. Their continual failures to act on harm are well known across the globe, and the demand for immediate change is now being echoed across every nation on this planet.

It is time to listen to the truth being presented by those who are witnessing online harm every day, instead of the lies of Meta executives.