Facebook, YouTube and Twitter are struggling to halt the spread of horrific footage that appears to show a massacre at a mosque in New Zealand as it was taking place. Dozens of people were killed Friday in shootings at two mosques in the city of Christchurch. One of the shooters appears to have livestreamed the attack on Facebook (FB).

The disturbing video, which has not been verified by CNN, ran for nearly 17 minutes and purportedly shows the gunman walking into a mosque and opening fire.

“New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter’s Facebook and Instagram accounts and the video,” Mia Garlick, Facebook’s director of policy for Australia and New Zealand, said in a statement. Facebook declined further comment on exactly when it took down the video.

Hours after the attack, however, copies of the gruesome video continued to appear on Facebook, YouTube and Twitter, raising new questions about the companies’ ability to manage harmful content on their platforms. Facebook is “removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” Garlick said.

Twitter (TWTR) said it suspended an account related to the shooting and is working to remove the video from its platform. YouTube, which is owned by Google (GOOGL), removes “shocking, violent and graphic content” as soon as it is made aware of it, according to a Google spokesperson. YouTube also declined to comment on how long it took to first remove the video.

New Zealand police asked social media users to stop sharing the purported shooting footage and said they were seeking to have it taken down. CNN is choosing not to publish additional information regarding the video until more details are available.
Tech firms ‘don’t see this as a priority’

This is the latest case of social media companies being caught off guard by killers posting videos of their crimes, and of other users then sharing the disturbing footage. It has happened in the United States, Thailand, Denmark and other countries.

Friday’s video reignites questions about how social media platforms handle offensive content: Are the companies doing enough to catch this type of material? How quickly should they be expected to remove it?

“While Google, YouTube, Facebook and Twitter all say that they’re cooperating and acting in the best interest of citizens to remove this content, they’re actually not, because they’re allowing these videos to reappear all the time,” said Lucinda Creighton, a senior adviser at the Counter Extremism Project, an international policy organization.

Facebook’s artificial intelligence tools and human moderators were apparently unable to detect the livestream of the shooting; the company says it was alerted to it by New Zealand police.

“The tech companies basically don’t see this as a priority. They wring their hands, they say this is terrible,” Creighton said. “But what they’re not doing is preventing this from reappearing.”

John Battersby, a counter-terrorism expert at Massey University of New Zealand, said the country had been spared mass terrorist attacks partly because of its isolation. Social media had changed that. “This fellow livestreamed the shooting and his supporters have cheered him on, and most of them are not in New Zealand,” he said. “Unfortunately once it’s out there and it’s downloaded, it can still be (online),” he added.

The spread of the video could inspire copycats, said CNN law enforcement analyst Steve Moore, a retired supervisory special agent for the FBI. “What I would tell the public is this: Do you want to help terrorists? Because if you do, sharing this video is exactly how you do it,” Moore said.
“Do not share the video or you are part of this,” he added.