San Jose (CNN Business) —

On Monday, Facebook’s chief technology officer, Mike Schroepfer, tested my ability to tell the difference between broccoli and marijuana.

He showed me two pictures of green blobs and asked if they depicted the cruciferous vegetable or the mind-altering plant. I guessed both were cannabis; I was wrong. One, apparently, was an image of tempura broccoli.

Unlike me, Facebook’s content-filtering artificial intelligence technology can now determine which image is of food, and which is of marijuana, according to Schroepfer.

At Facebook’s annual F8 developer conference on Wednesday, Schroepfer detailed just how much AI has improved over time, and the ways the world’s largest social network uses the technology to, uh, weed out posts and ads it doesn’t want users to see.

It remains tricky for AI to consistently identify objectionable content. Yet Facebook and other social networks increasingly rely on a combination of artificial intelligence and human moderators to police content posted by users. Facebook has been criticized for relying too much on human contractors, as well as for its AI not catching violent live-streams like the New Zealand mosque shooting in March.

“The New Zealand video was an awful example of a place where we need to do a lot better,” Schroepfer told CNN Business ahead of F8 over video chat.

Schroepfer said AI is now Facebook’s top method for finding bad content across most categories. And he explained that over the past four years, the company’s machine-learning capabilities have evolved from not being able to catch “the really obvious stuff,” to noticing content that isn’t easily recognizable as the kind of thing you wouldn’t want on Facebook.

Facebook CTO Mike Schroepfer explains at the F8 conference how AI is being used to find objectionable content.

Now, he said, Facebook’s automated systems have become so adept at taking down ads containing pictures of marijuana that people are instead posting drug ads showing only the packaging, or what appear to be Rice Krispies treats, alongside a short text description. Facebook’s AI has learned to detect both of those, too.

“Today we’re catching stuff on a regular basis that I put it in front of regular people and they have to stare at it for a while and try to figure out why this is bad,” he said.

Yet while Schroepfer touted Facebook’s AI advances, he also acknowledged the recent shooting highlighted the challenges AI still faces in policing content.

When the suspected terrorist in New Zealand streamed live video to Facebook of a mass shooting, Facebook’s AI technology was of no help: the gruesome broadcast went on for at least 17 minutes until New Zealand police reported it to the social network. Recordings of the video and related posts about it rocketed across social media while companies tried to keep up.

Since then, Schroepfer said, Facebook has been studying what happened and how to find such violent videos faster. He thinks the company can improve as AI gets better at understanding why the content in the video is bad, or at picking up signals about the behavior of the person posting the video (like if they’d been flagged for issues in the past).

“We’re not where we’d like to be, so if people are criticizing us I get the criticism,” Schroepfer said. “I’m frustrated as anyone when we make a mistake or something happens.”

Many of Facebook’s AI advancements come from what’s known as supervised learning — an AI technique in which a computer is given labeled data (for instance, pictures of people that have each been labeled) so it knows what it needs to be on the lookout for.
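To make the idea concrete, here is a toy sketch of supervised learning: a perceptron fitted to a handful of hand-labeled points. The data, labels, and scale are invented for illustration; Facebook's production systems are vastly larger image and text models, not this algorithm.

```python
# Toy supervised learning: a perceptron trained on hand-labeled 2D points.
# Every training example comes with a human-assigned label, which is the
# defining feature of supervised learning.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # misclassified: nudge the boundary toward the point
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def predict(w, b, point):
    x1, x2 = point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

# Hypothetical labeled data: +1 for "objectionable", -1 for "benign".
samples = [(2.0, 3.0), (3.0, 3.5), (-1.0, -2.0), (-2.0, -1.5)]
labels = [1, 1, -1, -1]

w, b = train_perceptron(samples, labels)
print(predict(w, b, (2.5, 2.5)))    # → 1  (near the +1 cluster)
print(predict(w, b, (-1.5, -1.0)))  # → -1 (near the -1 cluster)
```

The labor-intensive part Schroepfer alludes to is producing the `labels` list: in a real system, humans must annotate enormous volumes of examples before training can begin.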

But this can be labor-intensive for humans, making the technique ill-suited to tackling fast-moving content such as memes or election-related misinformation. To help find these quickly, Schroepfer said, Facebook has also made progress in what’s known as self-supervised learning, where a computer learns directly from data in an automated fashion.
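The contrast with supervised learning can be sketched with a toy next-word predictor: the "labels" are derived from the raw text itself (the word that actually follows), with no human annotation. The tiny corpus below is invented for illustration and stands in for the far larger language models the article describes.

```python
# Toy self-supervised learning: training pairs (word -> next word) are
# extracted automatically from unlabeled text, so no human labeling is needed.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the food".split()

# Build (current word -> next word) counts directly from the data itself.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" (seen twice, vs "mat"/"food" once each)
```

Because the supervision signal comes for free from the data, this style of training scales to fast-moving content where hand labeling can't keep up, which is the advantage Schroepfer is pointing to.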

He said Facebook recently rolled out a tool based on self-supervised learning to help with translating content related to the 2019 India election, which is currently underway. In March, the company deployed new software for spotting English-language hate speech in the US.

“Given the tenor of the conversation that’s happening and, honestly, the fact that we make mistakes on a daily basis, makes it hard for people to believe we’ve made a whole ton of progress,” he said. “I expect us to make a whole ton more progress over the coming years.”