It’s been a year since the European Union’s GDPR went into effect. The legislation brought a host of new laws and regulations governing data privacy and data handling meant to help consumers better understand and choose how organizations use their data, as well as to protect the data of consumers, customers and employees.
The regulations, though forcing many technology organizations to update their policies and platforms, granted consumers more protections and more transparency in the ways companies use their data, even as some organizations continue to misuse their users’ data.
In this Q&A, Bjørn Stormorken, CFO and co-founder of the privacy-centric social platform Idka, discusses the importance of governmental data regulations, as well as issues around AI and data privacy. Stormorken has a long history of human rights work that includes stints at Amnesty International and the Human Rights Directorate of the Council of Europe. His work in human rights and the international court system helped lead to some early privacy laws that crafters of the GDPR referred to when they were writing the GDPR. “Data privacy, for me, is a sort of personal thing, and has been for my whole life, basically,” Stormorken said.
According to Stormorken, his background has shown him how individuals’ information can be used “in a very bad manner.”
Generally, what are your views about issues like AI and data privacy? How has your background helped shape them?
Bjørn Stormorken: I think when you talk about privacy, you have to clear the woods a bit, because there is a great deal of confusion out there, and I think that is probably to the advantage of Google and Facebook. With data privacy, there are many different pieces mashed together, and that makes it very difficult for people to get a handle on what is actually going on, or on the idea that there is something that can be done about it. People feel that they are only able to make very small changes and that it’s too complex and too big a task to take on.
So, I’m very interested in trying to get privacy protection out of this big smoking cloud out there, because we can do something about it.
Of course, data privacy laws may be looked upon as draconian measures, but it is truly possible to enact them. Everyone understands that wiretapping, or someone entering your home without a search warrant, is an invasion of privacy, but today, we say it’s OK for technology companies to come into our cars and into our houses.
We need to get people to understand what’s happening today, and that it’s very, very serious. I am really afraid that if we don’t do anything drastic, that we will slide into this, in my view, hell.
We’re giving up all of this information for what? For a mantra, which is to ‘improve the service.’ And what improving the service means is that the service can extract more personal information, more sensitive information, in order to sell products maybe in a better way. But they also sell that information to people who want to influence you.
So, it’s my general view that this problem is no less than the environmental problem facing mankind today.
Do you think it’s possible for people to improve their way of living with artificial intelligence without giving up too much personal data? What’s the balance between AI and data privacy?
Stormorken: This is the main point — are we giving this information up because it’s actually worth it or not?
First, I would like to say that, I venture, less than 0.1% of the consent that is given to different service providers is informed consent. So, when people make judgments about whether or not it’s worth it, it’s based on the assumption that all these codes and procedures are available to them. I think a lot of people would not be sympathetic to that bargain if they really knew what was going on with their data. It’s very important to educate people on what data is collected, how it is collected and how they are being profiled.
I think information can be used for good if it is used only for the purpose that the user intended it to be used in an AI algorithm.
For example, one method could be saying that if I’m going to give all this information, I will have an informed view of what I’m actually doing. So, any service that I use will keep a log, in plain text that I can easily read, showing what data is collected and how it is used.
Companies like Facebook might show you as many as 20 ratings and categories about you based on the data they’ve collected, but we know that they have somewhere between 5,000 and 6,000 of these ratings, and it’s growing all the time. So, it’s all smoke and mirrors. But, I think there are ways to get around that by being transparent and open.
I think that it’s fully possible to make a law that says that you have to be transparent, not only about what information you collect, but what is the outcome on the AI algorithm that you put it in.
People can knowingly volunteer information that will make these AI systems better, but of course, that will greatly limit the commercial value of the information, so companies will fight it.
Sure, transparency in AI and data privacy is important, and it’s a hot topic right now. I’m wondering, though, how the West can keep up in technological and AI development with China, which has few data privacy laws, if the West enacts more privacy laws.
Stormorken: The reason why America’s been so successful is because it has attracted talented people from all over the world. Do you think that a well-educated person from Boston would like to leave and live in a country that punishes that person if they don’t do exactly the right thing, or … don’t have the right friends? I don’t think so.
China has built a fantastic economy in a short time span, but this has been done using means which [some view] as slave labor.
Editor’s note: This interview has been edited for clarity and conciseness.