A dozen countries and global tech giants including Facebook, Google and Twitter have pledged to find ways to keep internet platforms from being used to spread hate, organise extremist groups and broadcast terror attacks.
World leaders led by French President Emmanuel Macron and tech executives gathered in Paris on Wednesday to compile a set of guidelines dubbed the "Christchurch Call to Action", named after the New Zealand city where 51 people were killed in a March attack on mosques.
Part of the attack was broadcast live on Facebook, drawing public outrage and fuelling the debate on how to better regulate social media.
The agreement, which was drafted by the French and New Zealand governments, aims to prevent similar abuses of the internet while insisting that any actions must preserve "the principles of a free, open and secure internet, without compromising human rights and fundamental freedoms".
The call was adopted by US tech companies including Amazon, Facebook, Google, Microsoft, Twitter and YouTube, along with France's Qwant and DailyMotion, and the Wikimedia Foundation.
The countries backing it were France, New Zealand, Britain, Canada, Ireland, Jordan, Norway, Senegal and Indonesia, along with the European Commission, the EU's executive body. Several other countries not present at the meeting added their endorsement.
The White House also said it agreed with the overarching message of the "Christchurch Call" but stopped short of endorsing it.
In Wednesday's agreement, which is not legally binding, the tech companies committed to measures to prevent the spread of terrorist or violent extremist content.
That may include co-operating on developing technology or expanding the use of shared digital signatures.
They also promised to take measures to reduce the risk that such content is livestreamed, including flagging it up for real-time review.
And they pledged to study how their algorithms sometimes promote extremist content, so that they can intervene more quickly and redirect users.
Facebook said it is toughening its livestreaming policies with a "one strike" policy applied to a broader range of offences.
Activity on the social network that violates its policies, such as sharing an extremist group's statement without providing context, will now result in an immediate temporary block; the most serious offences will bring a permanent ban.
Previously, the company took down posts that breached its community standards but only blocked users after repeated offences.
Facebook, which also owns Instagram and WhatsApp, said it is investing $US7.5 million ($A10.8 million) to improve technology for finding videos and photos that have been manipulated to avoid detection. The company encountered that problem with the Christchurch shooting, in which the attacker streamed the killings live on Facebook.
New Zealand Prime Minister Jacinda Ardern welcomed Facebook's pledge.
She said she inadvertently saw the Christchurch attacker's video when it played automatically in her Facebook feed.
"There is a lot more work to do, but I am pleased Facebook has taken additional steps today ... and look forward to a long-term collaboration to make social media safer," she said in a statement.