Specialist police units plan to use cutting-edge artificial intelligence to comb through online posts and flag up British towns and cities as "hate crime hotspots".
The Online Hate Speech Dashboard is being designed by researchers at Cardiff University.
It uses advanced algorithms to identify "hate speech" in Twitter and Facebook posts, then pinpoints the areas the posts come from on a map of the UK.
Police will use the technology to pre-empt hate crime in the streets by monitoring online speech about inflammatory topics like Brexit and terror attacks.
But free speech and digital rights campaigners warn the surveillance tech could stifle online discussion when it is launched in March 2019.
They argue ordinary people expressing legitimate political views may be caught in the AI's crosshairs.
Big Brother Watch director Silkie Carlo told The Sun Online: "It is difficult to understand how monitoring lawful speech online, even if it is angry or hateful, will help police to prevent crime.
"Interfering with free speech does pose a real risk of chilling open and democratic discussion online.
"The public should be able to freely discuss topics like Brexit, terrorism and politics without being suspected of pre-crime.
"We will fight any attempts to curb free speech online."
The dashboard is being developed in partnership with the National Online Hate Crime Hub – a specialist police unit launched by the Home Secretary in 2017.
Cardiff University researchers behind the tech believe the major terror attacks last year caused a spike in hate crime.
They also claim the UK's departure from the EU could trigger a further spike.
Project head Professor Matthew Williams blamed Brexit and the "impossible promises made by leave-backing MPs" for causing Britain's "most severe crisis in peacetime".
He added: "There is concern that events will motivate more hate crime.
"As we saw following the 2016 vote, and to a more extreme extent following the 2017 terror attacks, surges in online hate speech coincided with significant increases in hate crimes offline."
Computational lead Professor Pete Burnap said: “Access to real time public communications from social media platforms will allow Hub staff to monitor hate speech at an aggregate level using cutting-edge ethical artificial intelligence."
However, free speech campaigner and Academy of Ideas director Claire Fox said we should be wary of ordinary Brexit voters getting "caught up in the criminal justice system just for expressing their views online".
She accused the Cardiff professors of working from "the biased presumption that Brexit is associated with hate crime".
Fox said: "Perhaps well-meaning academics do not see the dangers of this draconian proposal while sitting behind their screens and algorithms in their ivory towers.
"It threatens to put a gag on the electorate, stopping them from participating in debate for fear of being labelled or criminalised for speaking frankly.
"It is a threat to free speech; a threat to democracy itself."
What are the hate speech laws in the UK?
There are a number of different UK laws that outlaw hate speech.
Among them is Section 4 of the Public Order Act 1986, which makes it an offence for a person to use “threatening, abusive or insulting words or behaviour that causes, or is likely to cause, another person harassment, alarm or distress”.
This law has been revised over the years to include language that is deemed to incite “racial and religious hatred”, as well as “hatred on the grounds of sexual orientation” and language that “encourages terrorism”.
Section 127 of the Communications Act 2003 makes it illegal to post "grossly offensive" content publicly online.
A person is also guilty of an offence under the act if they aim to cause "annoyance, inconvenience or needless anxiety to another" via electronic communication.
The penalties for hate speech include fines, imprisonment or both.
Speaking to The Sun Online, Professor Williams admitted the artificial intelligence is only 85 per cent accurate at identifying hate speech online.
He said: "The machine is trained to recognise if something is hate speech online.
"Human experts teach it phrases and it can then identify them when they're posted again in the future.
"This is a very subjective process – even humans don't know what hate speech is sometimes.
"We know that the AI is imperfect but recognise the biases and iron them out continuously by updating our algorithms and consulting hate speech experts."
It is understood that the software does not identify individuals allegedly found posting hate speech on social media.
Instead the AI automatically attributes the posts to the area they came from and builds a picture of the "hate speech hotspots" in the UK, Professor Williams said.
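The process Professor Williams describes – matching posts against expert-supplied phrases, then counting flagged posts per area rather than per person – can be sketched in outline. This is a minimal illustration only, not the HateLab system: the phrase list, function names and areas below are all invented for the example.

```python
from collections import Counter

# Hypothetical phrase list; in the real system these would be
# supplied and refined by human hate-speech experts.
FLAGGED_PHRASES = ["example hateful phrase", "another flagged phrase"]

def is_flagged(post_text):
    """Return True if the post contains any expert-supplied phrase."""
    text = post_text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

def hotspot_counts(posts):
    """Aggregate flagged posts by area, never by individual poster.

    `posts` is an iterable of (area, text) pairs; the result maps
    each area to the number of flagged posts originating there.
    """
    counts = Counter()
    for area, text in posts:
        if is_flagged(text):
            counts[area] += 1
    return counts

posts = [
    ("Cardiff", "example hateful phrase aimed at someone"),
    ("Cardiff", "a perfectly ordinary post"),
    ("London", "another flagged phrase repeated here"),
]
print(hotspot_counts(posts))  # one flagged post counted per area
```

Note that a simple phrase matcher like this has no understanding of context, which is one reason such systems misclassify posts – consistent with the 85 per cent accuracy figure Williams cites.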
Assistant Chief Constable Mark Hamilton, National Police Chiefs’ Council lead for hate crime, said: “This new tool will help us to better understand the extent and nature of online abuse and to help to define the links between online abuse and physical hate crimes.
"It will not be used to ‘trawl’ for offences but will help us to identify early rises in tensions, to inform our deployment decisions and to work with communities seeking to reduce hostility at an early stage.”
The HateLab is now assessing whether events of national interest lead to genuine rises in hate crime, in a bid to counter arguments that recorded spikes were only down to victims becoming more willing to report offences to police.
The HateLab research will be showcased to MPs, Lords, senior civil servants and policymakers at an All-Party Parliamentary Group meeting at Westminster in early 2019.