By Catarina Demony
LISBON (Reuters) – Chelsea Manning, a former U.S. army analyst and WikiLeaks source, said on Tuesday that technology tools can be more effective at protecting people’s privacy and information than legal or regulatory mechanisms, which risk being tampered with.
“I believe very strongly that there are technical means of protecting information and those are more reliable,” Manning told Reuters in an interview during Europe’s largest technology conference, the Web Summit, in Lisbon, Portugal.
Manning was convicted by court-martial in 2013 of espionage and other offences for leaking an enormous trove of military reports, videos, diplomatic cables and battlefield accounts to online media publisher WikiLeaks while she was an intelligence analyst in Iraq.
Former President Barack Obama later reduced Manning’s sentence, and she was released in May 2017.
Manning currently works as a security consultant at Nym Technologies, a company building a network that aims to prevent governments and companies from tracking people’s online activities.
The 35-year-old said that “technical means”, such as cryptography, data obfuscation and end-to-end encrypted messaging platforms like Signal, are ways to ensure privacy and a degree of anonymity online.
Legal or regulatory mechanisms “can change on a whim … legislators can be lobbied … rules can be reinterpreted by courts and burdens of proof are very hard to meet”, Manning said.
“Regulation can set the tone of what the standards should be,” she said. “(But) actual math and actual technology … are much easier to control and have much more guarantees.”
‘SIDESTEPPING ETHICS’
Artificial intelligence (AI) is the big topic at this year’s Web Summit, which draws tens of thousands of participants and high-level speakers from global tech companies, as well as politicians.
AI is transforming the world and is being applied across diverse sectors, from earlier detection of diseases to organising data and solving complex problems, but it also raises concerns.
Some tech and political leaders have warned that AI poses huge risks if left uncontrolled, ranging from the erosion of consumer privacy to danger to humans and the possibility of a global catastrophe.
Manning, who has worked with and uses the technology herself, said that “damage has already been done” to some AI training models and that it will be “very hard to undo.”
“These companies have been overlooking and sidestepping the ethics of these things knowingly in many cases for many years,” she said. “The best thing we can do about it now is to try to address it.”
Earlier this month, at the UK’s artificial intelligence summit, leading AI developers agreed to work with governments to test new frontier models before they are released, in an effort to help manage the risks of the rapidly developing technology.
(Reporting by Catarina Demony; Additional reporting by Supantha Mukherjee; Editing by Aurora Ellis)