After celebrating his win on Monday at the French Open by writing a pro-Kosovo message on the lens of a TV camera, Serbian tennis star Novak Djokovic predictably drew responses, both positive and negative, on social media. He certainly got a response from French Sports Minister Amelie Oudea-Castera, who said Wednesday that Djokovic’s “militant, very political” statement about Kosovo was “not appropriate” and “must not be repeated.”
While 22-time major champion Djokovic, who won his second-round match against Marton Fucsovics on Wednesday, can’t do anything to avoid hearing Oudea-Castera’s thoughts on his behavior, he does have the option of using artificial intelligence to monitor and vet what social media users have to say about his actions.
In an effort to shield players from abuse at Roland Garros during the 15-day Grand Slam tournament, the French Tennis Federation (FFT) has enlisted the services of Bodyguard.ai to provide the tourney’s athletes with software that uses artificial intelligence to block negative comments on Twitter, Instagram and Facebook. As of the start of the week, a few dozen players had signed up for the free service, Bodyguard told the Associated Press.
In theory, using AI to help athletes block out negative messaging at the French Open and everywhere else seems like a pretty reasonable idea. After all, as we saw at the French Open two years ago when Naomi Osaka withdrew from the tournament due to struggles with her mental health, negativity can take its toll on even top pro athletes.
However, enlisting AI to vet messages on social media puts a huge amount of trust in an emerging technology that has yet to earn it and could actually lead to some clear red flags being ignored. That’s because, in addition to weeding out and deleting comments that are “hateful or undesirable” after an analysis that takes less than 100 milliseconds, Bodyguard’s AI software is tasked with screening for death threats.
“It’s a nice way to kind of help us feel a little bit less pressure with the comments and stuff. It makes us more comfortable posting or sharing and talking about matches when we know we’re not going to get like 100 death threats after. It’s crazy,” said 29-year-old American player Jessica Pegula. “I mean, I get them, like, every day.”
It’s entirely reasonable that Pegula, like every other player at the French Open, doesn’t want to read death threats or any other toxic commentary. But does that mean no one aside from an AI bot should be reading them and determining whether they are legitimate? It’s a tough question, and a true threat slipping through the cracks would provide a very tough answer.
Of course, there’s no obvious reason to think any of the threats players are receiving at the French Open are real, and hopefully none of them are. But trusting AI to make that determination is something organizers at the U.S. Open and Wimbledon, who are considering enlisting Bodyguard’s services, should be wary of at this point.