Facebook published its biannual Transparency Report on Wednesday, including data from its photo-sharing app, Instagram, for the first time.

The report quantifies the number of posts published on the platform related to terrorist propaganda, self-harm, sales of drugs and firearms, and child exploitation, as well as the measures the social network has taken to remove illegal content from its platform.

According to the report, Instagram removed 753,700 posts related to child sexual exploitation and child nudity, while 133,300 posts were taken down for containing terrorist propaganda.

Facebook also removed 1.5 million posts on Instagram featuring content related to the sale of illegal substances.

Including Instagram in the report is also a step Facebook is taking to ensure its subsidiary platform does not allow the dissemination of “misinformation” pertaining to the 2020 election.

“Last month, the left-leaning human rights group Avaaz reported that stories containing misinformation were viewed almost 158.9 million times on the social network, continuing to spread even after they were proven to be false,” The Verge reports.

Facebook is including Instagram in its report for the first time to ensure users are held to the same community standards on Instagram as they are on Facebook, Facebook’s vice president of integrity, Guy Rosen, explained.

“In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda,” Facebook stated in a blog post.

Rosen added that Facebook “will continue to invest in automated techniques to combat terrorist content” in hopes of raising its detection rates even higher.

Facebook did not disclose statistics detailing the number of posts containing “hate speech” it removed from Instagram, but said it is employing new strategies to shut down hate speech before it circulates through its newsfeeds.

“Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate,” the blog post notes.

Facebook itself was plagued by even more instances of fake accounts and child abuse content.

The social media network removed 3.2 billion fake accounts from its platform between April and September of 2019, according to the report, while it removed 11.6 million posts containing content depicting child nudity and child exploitation.

Facebook CEO Mark Zuckerberg has announced plans to give users more privacy by encrypting the company’s Messenger app. However, law enforcement officials have requested that Facebook suspend its plans for end-to-end encryption to better fight pedophilia, terrorism, and election meddling.

FBI Director Christopher Wray warned last month the encryption modification would morph the platform into a “dream come true for predators and child pornographers.”