Conversations can quickly spiral out of control online, so Facebook is hoping artificial intelligence can help keep things civil. The social network is testing the use of AI to spot fights in its many groups so group administrators can help calm things down.

The announcement came in a blog post Wednesday, in which Facebook rolled out a number of new software tools to assist the more than 70 million people who run and moderate groups on its platform. Facebook, which has 2.85 billion monthly users, said late last year that more than 1.8 billion people participate in groups each month, and that there are tens of millions of active groups on the social network.

Along with Facebook's new tools, AI will decide when to send out what the company calls "conflict alerts" to those who maintain groups. The alerts will be sent out to administrators if AI determines that a conversation in their group is "contentious" or "unhealthy," the company said.

For years, tech platforms such as Facebook (FB) and Twitter (TWTR) have increasingly relied on AI to determine much of what you see online, from the tools that spot and remove hate speech on Facebook to the tweets Twitter surfaces on your timeline. This can be helpful in thwarting content that users don't want to see, and AI can help assist human moderators in cleaning up social networks that have grown too massive for people to monitor on their own. But AI can fumble when it comes to understanding subtlety and context in online posts. The ways in which AI-based moderation systems work can also appear mysterious and hurtful to users.

A Facebook spokesman said the company's AI will use several signals from conversations to determine when to send a conflict alert, including comment reply times and the volume of comments on a post. He said some administrators already have keyword alerts set up that can spot topics that may lead to arguments, as well.
If an administrator receives a conflict alert, they might then take actions that Facebook said are aimed at slowing conversations down — presumably in hopes of calming users. These moves might include temporarily limiting how frequently some group members can post comments and determining how quickly comments can be made on individual posts.

Screenshots of a mock argument Facebook used to show how this could work feature a conversation gone off the rails in a group called "Other Peoples Puppies," where one user responds to another's post by writing, "Shut up you are soooo dumb. Stop talking about ORGANIC FOOD you idiot!!!"

"IDIOTS!" responds another user in the example. "If this nonsense keeps happening, I'm leaving the group!"

The conversation appears on a screen with the words "Moderation Alerts" at the top, beneath which several words appear in black type within gray bubbles. All the way to the right, the word "Conflict" appears in blue, in a blue bubble.

Another set of screen images illustrates how an administrator might respond to a heated conversation — not about politics, vaccinations or culture wars, but the merits of ranch dressing and mayonnaise — by limiting a member to posting, say, five comments on group posts for the next day.