I don't know much about the Christchurch accord, but to me it's one of those situations where a tech company cannot win. Do you make a conscious effort to reduce online crime, or do you allow unrestricted free speech? You kind of lose either way.
Online crime is certainly complex. Drug dealing, black market organs, weapons, and assassination have been traded online for a while; there are child slave rings and sexual blackmail porn rackets, as well as terrorist networks like ISIS operating online. Most of it is on the deep web, but some of it happens in quite mainstream places.
This accord isn't targeting them, though; it's targeting 'online extremists', particularly of the white nationalist persuasion (hence the name, Christchurch Call). You might consider it a sort of hunt.
Me? It baffles me that it isn't obvious to people how to increase the fundamental humanity of online interactions, which is, IMO, the issue that actually leads to lost empathy, tribalistic conflict, bullying, and cultures of antagonism BEHIND the rise in, say, white nationalism (or soon-to-be terrorist groups like antifa). This massive rise in political polarisation did not occur before the rise of the internet and, to a lesser degree, physical globalism. They are, in a way, problems of societies at impersonal, anonymous scale. Things we have known about the internet since the 70s from scientific studies have not once been integrated into our social media: the empathy-inducing effect of seeing a live face rather than a wall of text, or the positive impact of a smaller community with a culture of etiquette.
Or maybe they realise that the WAY we interact online is the fundamental problem and they just don't care, because a corporation given orders to censor speech and rifle through people's underwear drawers has a free pass to profit and power.
Either way, there's no good outcome to further disenfranchising the sorts of people they are targeting (and the ones they are not).
There's using solid evidence to go knock on a door, and then there's armed men knocking on doors with no real evidence and asking pointed questions. The former is fine, for online crime. The latter is not (and it's already happened here in NZ over tweets and timeline posts post-Christchurch, where actual armed police have shown up, intimidating people and asking about online posts).
It's quite clear people are already being disenfranchised by online bans, shadowbans, language policing, and so on. To take that up a notch, encode it into law, and have all of big tech cooperating and sharing data about it...
I'm sure the CEOs and representatives in boardrooms all feel like they are doing a good thing, motivated by right rather than fear. There's often vast applause for some of the most foolhardy courses of action.
The world might be better off if we simply had a psychologist in every room where major decisions about human life are made, to inform people as to how we tick when emotion blinds the obvious.
They could be like: "Have you considered that white nationalists already have a victim complex, and that's their entire motivation?" "Don't you think living up to the delusional paranoia of fringe identitarians might set them off?" "And what about ordinary people who get caught up in the filter; won't they be resentful?"
It reminds me a little of how Bush went to war in Iraq and went after Osama. There was a chorus of people saying, 'You'll kill people, make them resentful, and produce more terrorism.' Today, we have ISIS, and we'd probably gladly swap them for al-Qaeda.
The voices are saying the same thing here: "You'll make people resentful and produce more extremists," and like then, no one is listening. I guess in that way, due process and things like 'rights' that we have when it comes to govt agencies and police don't just make things fairer; they prevent antagonism between the people and the state. Which is always a delicate balance.