Facial recognition is under the microscope again after football fans and civil liberties groups criticised its planned deployment at this Sunday's Wales derby between Cardiff City and Swansea City.
Most football fans turn up to enjoy the match and have a good time. Some may have a bit too much to drink and shout at the referee. But this sort of behaviour, though a bit OTT, doesn't pose a danger to other fans or the players and is generally considered part of the 'atmosphere' of a big match.
However, some take it too far, using the game and stadium as a platform to incite hatred with racial slurs, to start fights and assault other fans, to run onto the pitch and disrupt the players, and to spoil a great game for everyone. These 'hooligans' are the people South Wales Police are hoping to identify with targeted facial recognition and prevent from entering the stadium.
How does targeted facial recognition work?
Targeted facial recognition scans faces in large crowds and compares them against a 'watch list' containing the faces and data of people with past criminal convictions, obtained from police records. The watch list employed at Cardiff City Stadium on Sunday will presumably include offenders who have been banned from football stadiums under court order for antisocial and violent behaviour.
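Conceptually, the matching step can be sketched in a few lines of Python: each captured face is reduced to a numeric "embedding" vector and compared against stored embeddings for everyone on the watch list. The vectors, IDs and threshold below are invented purely for illustration and are not taken from any real police system.

```python
import math

# Hypothetical watch list: banning-order IDs mapped to face embeddings
# (the numeric vectors a recognition model would produce). All values
# here are made up for illustration.
WATCH_LIST = {
    "banning-order-001": [0.12, 0.80, 0.55, 0.31],
    "banning-order-002": [0.90, 0.05, 0.20, 0.70],
}

MATCH_THRESHOLD = 0.95  # illustrative similarity cut-off


def cosine_similarity(a, b):
    """Similarity between two embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def check_against_watch_list(face_embedding):
    """Return the watch-list ID of the best match above the threshold,
    or None -- in which case, per the police assurances quoted below,
    the captured data would be deleted."""
    best_id, best_score = None, 0.0
    for person_id, stored in WATCH_LIST.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None
```

A face whose embedding is close to a stored one returns that person's ID, raising an alert; anything below the threshold returns `None`. From there, as the article notes, the decision on whether to act rests with officers, not the software.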
South Wales Police Assistant Chief Constable, Andy Valentine, said: “Football banning orders are issued by the court to those who have misbehaved at a previous football game and hence this provides us with a clear rationale in our strategy to prevent any crime and disorder.” Source: BBC
Football fans and their families who have never been issued a football banning order by the court should have nothing to fear from an additional layer of security, put in place for their protection.
However, neither this logic nor assurances from South Wales Police that any data captured of those not on the watch list will be deleted immediately appears to have mitigated public outrage.
Why do people mistrust facial recognition?
The answers to this question are key to moving forward with the use of new technologies in policing and to ensuring safety and security. Addressing them, however, may not be simple, as a number of issues are at play.
Privacy advocates fear that facial recognition will be used as a tool to produce and collect data without the permission of those it is collected from, who then have no control over how it may be used.
But those who want to feel safer in public spaces, such as stadiums, would argue that a level of privacy should be sacrificed for increased security and the reduction of risk.
Of course, many people would prefer not to be watched by a camera, but security measures are often necessary to protect large crowds of people and to deter antisocial and violent behaviour. CCTV is widely used and accepted throughout the UK and, despite still being watched by a camera, many people feel safer when this security measure is present. Targeted facial recognition offers the public an additional safeguard against the risk of violence by identifying known offenders and raising the alert to the authorities. From there, the decision on whether to act is up to police and on-site security.
It’s easy to understand why people may feel concerned about images of their faces ending up in databases. The more personal data that is collected and stored, the greater the incentive for cybercriminals to target it. The public are often warned about the dangers of cybercrime and the risks of volunteering sensitive data; it has been drilled into us over many years to be wary of any technology or organisation collecting personal data.
Facial recognition could be a major asset in improving public security and safety across the UK, but its use and development should go hand in hand with efforts to strengthen cybersecurity and transparency around data regulation guidelines.
Potential for bias
Some reports claim that facial recognition technology is less accurate at identifying BAME and female faces, and that this has the potential to produce biased results. Understandably, this may increase fear of and resistance to the technology, particularly among people in those demographics.
However, a recent survey of “a weighted sample of 1,092 Londoners” into the use of facial recognition technology in trials conducted by London Metropolitan Police revealed that “more than 57% felt its use by police was acceptable.” The acceptance rate “increased to 83% when respondents were asked whether the technology should be used to search for serious offenders.” Source: The Guardian.
While it is extremely important to be aware of any flaws in the technology that could skew its results and negatively affect members of the public, this shouldn’t necessarily mean the technology must be shelved until it achieves a 100% success rate; it can still improve the effectiveness of police operations and help reduce violent crime.
The way to combat the potential for bias is to provide adequate training for police officers in how to deal with such an issue, should it arise. Ethical guidelines should be a priority in the continued use of facial recognition technology.
Despite outcry from fans, Big Brother has won this match and will no doubt be making a return appearance in the future.
By Georgie Bull
Senior PR Account Executive