Luke Vargas: Welcome to Wake, where we explore how events overseas affect our shores. I’m your host Luke Vargas, here for this week’s dip into the waters of foreign policy.

Earlier this month, Facebook said it was handing over information about suspicious advertising purchases made on its network by Russian companies and individuals. The disclosure raises questions about whether the world’s largest social networks were used to influence the 2016 election.

This week on “Wake” we’re going on a world tour of internet regulation, looking at how countries, including ours, are trying to balance competing interests of national security, privacy and free speech. Is such a balance even possible when internet and information companies like Facebook wield so much power?

Stay with us.

Thanks for joining us. We’re coming to you today from United Nations headquarters in New York.

And with us today by phone from Toronto is Natasha Tusikov, Assistant Professor at York University and author of the new book, “Chokepoints: Global Private Regulation on the Internet.”

Natasha, we’re so glad to have you and welcome to Wake.

Natasha Tusikov: Thank you. Glad to be here.

Luke Vargas: And also with us is Alan McQuinn, a research analyst at the Information Technology and Innovation Foundation in Washington. Alan, welcome to Wake, and thank you for joining us.

Alan McQuinn: Thank you for having me on.

Luke Vargas: Alan, we’re going to get to a lot of issues today, but I think we should start with the headline mentioned in the introduction, that Facebook is turning over to the Mueller investigation data about advertising purchases made on its network by Russian individuals and entities. Put simply, did Facebook do anything wrong in selling the ads, and are they doing enough now?

Alan McQuinn: So the short answer is no, they didn’t do anything illegal in selling these ads. Unlike other types of political media such as TV and radio, the onus for disclosing who paid for an ad is on the person who buys it, and the Federal Election Commission failed to reach a decision in 2011 about whether Facebook should require disclosures on these political ads.

However, the question is, are they doing enough? I believe, and by Facebook’s own admission, they were not. So now they’ve set forth a few reforms: they’re helping with U.S. investigations, they’re requiring the disclosures that TV and radio media also use, and they’re creating advertiser pages so that when you click on a political ad it will go straight to that advertiser’s page, where you can see the other advertisements that advertiser has put out for any audience on Facebook.

A Facebook server facility in Sweden. Courtesy: Facebook

Luke Vargas: Natasha, Facebook previously said no ad buys from Russia were made. That’s obviously false now, but taking them at their word that they only found these ad purchases after a review months later, doesn’t that still suggest these companies fundamentally don’t know what’s happening on their networks?

Natasha Tusikov: Yes, it would. And I think one of the challenges is that these social media companies are advertising platforms – Facebook, Twitter, companies like Google are very large digital advertising platforms, and that’s how they make the majority of their revenue. And so they depend on being very complex, very fast-moving, real-time ad platforms, which means they gather ads from a lot of different places, and these ads may only appear for a few minutes or hours at a time, so it’s very, very difficult to track where these ads come from.

I’ve done some interviews with advertising agencies and advertising non-governmental organizations in the United Kingdom, and they admitted that this is a very complex area: most ads end up in the right places, most ads are good ads, but a few bad ads slip through.

And this is what we’ve seen happen here: ads slipping through from actors the platforms didn’t realize they were selling advertisements to.

And this points to a bigger problem that these companies haven’t faced a lot of pressure to regulate their platforms for bad ads before. They’ve faced some, certainly in relation to illegal online pharmacies in the United States, but they haven’t faced this degree of government pressure.
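To make the tracking problem concrete, here is a minimal sketch – purely illustrative, with hypothetical field names rather than any real platform’s schema – of the kind of provenance record an ad platform could log for every impression so that buys can be audited after the fact:

    # Illustrative sketch only: a hypothetical provenance record an ad platform
    # could log for each served ad, so purchases can be audited later.
    # Field names are assumptions, not any real platform's schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AdServeRecord:
        ad_id: str
        advertiser_id: str      # verified account that paid for the ad
        payment_country: str    # country of the payment instrument
        creative_hash: str      # fingerprint of the ad content shown
        targeting: dict         # audience criteria used for this impression
        served_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def audit(records, country):
        """Return all impressions paid for from a given country."""
        return [r for r in records if r.payment_country == country]

    records = [
        AdServeRecord("ad-1", "acct-42", "RU", "sha256:ab12...",
                      {"state": "WI", "age": "18-65"}),
        AdServeRecord("ad-2", "acct-7", "US", "sha256:cd34...",
                      {"interest": "news"}),
    ]
    print([r.ad_id for r in audit(records, "RU")])   # -> ['ad-1']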

Luke Vargas: Alan, you mentioned that Facebook is making internal changes to stop its service from being weaponized for political gain. Are regulators and Congress of the mind that self-regulation is enough in such an important sector?

Alan McQuinn: Well, I can’t speculate on what a lot of lawmakers are considering, but in general we’ve seen a lot of regulations across the world for platforms on how they prohibit content, how they sell advertising, and how they prioritize search results.

Something that is important to note, and that should be considered as people look at creating policy in this area, is that these platforms use algorithms to take down prohibited content. It takes time and trial and error for these algorithms to get good at this, and change here can be gradual.

A good example of this is when Facebook came under fire a year ago after its algorithm took down a Pulitzer Prize-winning photograph of a nude Vietnamese girl, a very famous photo from the Vietnam War.

This Pulitzer Prize-winning photograph showing the aftermath of a napalm attack on a Vietnamese village in 1972 was removed by a Facebook algorithm. AP Photo/Nick Ut
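A rough illustration of the trial-and-error Alan describes – a toy filter with a made-up scoring rule, not any platform’s real model – showing how the removal threshold trades missed bad content against wrongly removed lawful speech:

    # Toy sketch of threshold tuning in automated takedowns. The "classifier"
    # is a stand-in score; real systems use learned models, but the trade-off
    # is the same: lower thresholds remove more bad content *and* more good.
    def toxicity_score(post: str) -> float:
        """Hypothetical stand-in for a learned model's score in [0, 1]."""
        banned = {"attack", "threat"}
        words = post.lower().split()
        return sum(w in banned for w in words) / max(len(words), 1)

    def moderate(posts, threshold):
        """Return the posts the filter would take down at this threshold."""
        return [p for p in posts if toxicity_score(p) >= threshold]

    posts = [
        "credible threat of attack posted here",      # should come down
        "film review: the attack scenes felt flat",   # lawful speech
    ]
    # An aggressive threshold removes both; raising it spares the review.
    print(moderate(posts, threshold=0.1))  # removes both posts
    print(moderate(posts, threshold=0.3))  # removes only the first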

Luke Vargas: Natasha, I know algorithms aren’t perfect, but there is also a human element involved in pulling down content. So, looking at all of the tools companies like Facebook and Twitter have at their disposal, what are the advantages and drawbacks of letting the companies be the ones to monitor content?

Natasha Tusikov: Well, on the side of benefits, it’s certainly faster and easier for Facebook, Twitter, Google, and these other companies to take down content that they consider inappropriate for their sites – maybe hate speech or bullying – or information that’s actually illegal: child pornography, the sale of counterfeit goods, terrorism content.

They can take things down globally through their internal terms of service. These are the agreements we often don’t read when we click “agree” on these services.

The problem, however, as Alan mentioned, is that these companies are often poorly equipped to distinguish legality from illegality online. It can be very difficult to determine whether something is actually hate speech or simply unpopular speech – whether it’s child pornography or, as Alan said, a very famous photo of a nude girl that Facebook removed as obscene when it is obviously an important political photograph.

So these companies are not really best placed to identify what’s legal and illegal. They use a mixture of algorithms and human content moderators, and those moderators often have only a few seconds to determine whether something belongs or not – an awful lot of pressure to put on humans.

And then the algorithms, as Alan said, are a work in progress. They sometimes take down lawful content, and the companies are trying to fix them, but this is very, very complex.
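One common design, sketched below with hypothetical thresholds, routes only the algorithm’s uncertain middle band to human moderators – which is where the few-seconds-per-item pressure Natasha describes comes from:

    # Illustrative human-in-the-loop routing: act automatically only when the
    # model is confident, and queue the ambiguous middle band for human review.
    # The thresholds are made up for illustration.
    def route(score: float) -> str:
        """score: a model's probability that content violates policy."""
        if score >= 0.95:
            return "auto-remove"     # high confidence: take down
        if score <= 0.05:
            return "auto-keep"       # high confidence: leave up
        return "human-review"        # uncertain: send to a moderator

    for s in (0.99, 0.50, 0.02):
        print(s, "->", route(s))
    # 0.99 -> auto-remove, 0.5 -> human-review, 0.02 -> auto-keep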

Alan McQuinn: To add to that, regulating in this space is also very difficult. Look at how Germany recently created a hate speech law that fines social media platforms for not taking down hate speech within a set amount of time. Normally, online platforms use algorithms to regulate this hate speech, like we just discussed, and they are able to take down content, but they’re not perfect and need a lot of fine-tuning.

However, if platforms face a choice between a huge 50 million euro fine and just taking content down, they’re going to take it down. So it might actually mean more removal of legitimate speech.

Courtesy: Electronic Frontier Foundation

Luke Vargas: Natasha, let me put that to you. What Alan is suggesting – that to avoid legal risk these networks are sometimes eager to take down content – seems to get into questions of free speech and intellectual property, because it could lead to content that has every right to be posted actually being taken down –

Natasha Tusikov: Absolutely, and we’ve seen many, many cases of this. The Electronic Frontier Foundation has catalogued a series of wrongful takedowns – that is, mistaken and abusive takedowns.

So we’ve seen businesses target rival businesses, for example, over negative ads or reviews they didn’t like. We’ve seen movie companies take down reviews of their latest film that they didn’t like. And we’ve seen musicians who post samples of their own music on their own blogs, with full permission from their record label, have that taken down.

So we’ve seen a variety of different cases where legitimate content has been removed. And this is a big problem in terms of, like Alan said, the removal of legitimate speech.

And if we turn to people who make their living, or supplement their living, selling things on eBay or a marketplace site like Taobao, this can be a big punishment: a small retailer has their listings taken down over allegations that they’re selling counterfeit goods, when in fact they’re selling used Gucci purses, or Nike products they bought on an overstock site. So this can be a real hardship for people trying to make a living, people who depend on their reputations.

Alan McQuinn: And in some of these cases, like the German hate speech law, the person who has their content taken down has no recourse. They can’t even sue in court. In other cases, like with copyright, at least the law allows people to have their day in court.

Luke Vargas: Alan McQuinn is a Research Analyst at the Information Technology and Innovation Foundation in Washington. We’ve got to take a quick break. We’ll be right back.

~

Luke Vargas: Welcome back to Wake, where we explore how events overseas affect our shores. I’m your host Luke Vargas at U.N. headquarters in New York City.

We’re talking this hour about internet regulation around the world today with Natasha Tusikov of Canada’s York University, author of the book “Chokepoints: Global Private Regulation on the Internet.” Also with us is Alan McQuinn, Research Analyst at the Information Technology and Innovation Foundation.

Alan, let’s pivot to international security now. Terrorism is of course a global threat, and yet it surfaces locally. Threats are whispered in many languages all around the world. Have internet firms and governments figured out a way to share information so that threats, no matter where they are, can be properly addressed?

Courtesy: Twitter

Alan McQuinn: Well, it is difficult to get a grip on from a national security perspective.

I will say that governments are generally good at sharing terrorist threat data, and threat data in general, that they have already collected, and multinational companies must comply with lawful law-enforcement requests for data from all the countries in which they operate.

Multinational companies are also starting to get together to create best practices to combat extremism. We saw several tech companies create the Global Internet Forum to Counter Terrorism, a working group where companies share best practices to help remove terrorist content from their platforms.

There are gaps here, however. We still rely on an antiquated way for law enforcement to gather digital evidence located in other countries. These mechanisms – I apologize for the jargon – are Mutual Legal Assistance Treaties, which mean that a law enforcement agency in one country must go through a very long and antiquated process to get information stored in another country.

Another gap is the fundamental barrier that encryption poses to law enforcement getting access to data. And obviously, different countries have different norms and laws about free speech, about privacy, and about what constitutes some of this content.

Luke Vargas: Natasha, when we talk about the forces at play in setting internet policy, you’ve got governments, which pretty clearly want as much data as they can get so they can sift through it all and decide what’s a threat.

But on the opposing side, I think it’s a bit harder to describe what internet companies want. On one hand they use all the data about customers to make advertising revenue, but they also profess to be these guardians of privacy. So how would you characterize the position of these internet firms?

Natasha Tusikov: Certainly. Well, since Edward Snowden leaked the classified information revealing the U.S. National Security Agency’s surveillance programs, there has been a lot more emphasis among these internet firms on protecting their customers’ data.

Customers were understandably frightened, confused, and alarmed to hear that the U.S. government was siphoning up their data – in some cases through secret court orders, and in some cases, especially for overseas clients, without any court orders, since none are required.

So customers were very concerned, and this is something that internet firms have pivoted to: emphasizing privacy, emphasizing the protection of their customers’ data.

Of course, these companies say that when they are served with a formal legal order, some kind of warrant, they will hand over the user’s data. What they’re trying to stop are these handshake agreements, or the pressure from law enforcement to simply hand over data without court orders.

They’re trying to say, come to us with a legal court order, [and] we will be pleased to do our lawful duty and hand you that information. So it is pretty complex, and these companies do face liability. They fear that if they simply hand over information outside of court orders, they could be liable to their customers. Their customers could sue, especially if they get that information wrong.

On the other hand, they are being pressured by the government to be good corporate citizens, to participate in this fight against terror. So some of these companies are caught in quite a difficult situation.

A Google graphic shows the types of content allowed and prohibited on YouTube. Courtesy: Google

Luke Vargas: Alan, you mentioned that there’s “decent cooperation” between internet firms and law enforcement on terrorism. But a word like terrorism can mean very different things depending on the country you’re in. Is there concern, from a free speech perspective, that one country’s democracy activist could be flagged by another government as a terrorist, and that these internet companies aren’t really a good referee on what’s what?

Alan McQuinn: Yes. From a free speech perspective, countries that are non-democratic will continue to create rules that impinge upon the privacy, the anonymity and the free speech of the people within those countries.

One thing that people in countries that already protect free speech and privacy can feel safe about is the fact that the laws of their own countries apply to them.

And one of the things that is often very worrisome to me, because the internet is a global entity, is when other countries, especially non-democratic ones, try to exert influence on multinational companies or others to extend the reach of their laws beyond their borders to other countries.

A concrete example of this is the right to be forgotten in France, which is the ability of a French citizen to request that a search engine de-link or de-list certain search results about them, such as inaccurate information or information that is no longer relevant.

And France, over the last couple of years, has tried to expand this right to all domains globally, in effect extending its privacy regulations to everyone, even though other countries, such as the United States, put a higher value on free speech and on the ability to look up journalists’ articles that might be de-listed as a result of this law.
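The dispute is, at bottom, about the scope of a de-listing filter. A schematic sketch – hypothetical names and logic, not how any search engine actually implements it – of geo-scoped versus global de-listing:

    # Schematic sketch of geo-scoped vs. global de-listing under the right to
    # be forgotten. Data layout and filtering logic are assumptions.
    DELISTED = {
        # url: the jurisdiction in which it must be hidden for this requester
        "https://example.com/old-story": "FR",   # France-only de-listing
    }

    def search(results, user_country, global_delisting=False):
        out = []
        for url in results:
            scope = DELISTED.get(url)
            if scope is None:
                out.append(url)                  # not de-listed at all
            elif global_delisting or scope == user_country:
                continue                         # hidden in this jurisdiction
            else:
                out.append(url)                  # still visible elsewhere
        return out

    results = ["https://example.com/old-story", "https://example.org/news"]
    print(search(results, "FR"))                        # hidden in France
    print(search(results, "US"))                        # visible in the US
    print(search(results, "US", global_delisting=True)) # hidden everywhere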

Luke Vargas: Natasha, we’ve just come off a presidential election in which candidates from both parties said it’s time for Silicon Valley and Washington to have a serious conversation about the growing risks on the internet and find some common ground. And yet I haven’t really seen this develop much beyond just rhetoric. Are there any areas of potential agreement that we could actually start to see emerge here?

Natasha Tusikov: It is a very complex issue, and there is quite a bit of difference between internet companies and the government.

So I think common standards are a very difficult question, but some common themes are emerging. And certainly one of those themes is that these internet companies are very well equipped – they’re highly specialized – and that they should take on greater enforcement roles.

The United States is certainly heading in this direction and has been for quite a few years. We see this in the United Kingdom, and we see it more broadly in the European Union: politicians delegating enforcement responsibility, kind of handing it off to these companies. And they’re saying quite bluntly: you guys have some excellent engineers, you have some very good technical expertise and algorithms, and we want you to take a bigger role in identifying and addressing bad content and bad actors on your platforms.

U.K. Prime Minister Theresa May recently said that she wants to see tech companies remove illegal content in under two hours. So there are huge expectations on these companies to do something very effectively and very quickly.

I think part of the problem, as we’ve talked about throughout the show today, is that these companies are often ill-equipped, in very complex areas like terrorism, to distinguish actual terrorist content from what might be images of a street in Syria, or protest content.

Luke Vargas: Natasha, I understand encryption and anonymity as potentially good things when it comes to individuals trying to stay away from snooping governments. But I think the Facebook/Russia ad purchases show us how powerful one purchased message can be online. And just as you’ve got to have your real name on a credit card, is there a view that anonymity for ad buyers on the internet is just not tolerable anymore?

Natasha Tusikov: Certainly, on the advertising front I would say that rights-holders of intellectual property – big companies like Gucci, Nike, Apple – have been a very, very strong force in trying to reform online advertising. And this is because they do not want their very expensive ads for their goods showing up on bad sites – in particular, sites like The Pirate Bay.

So they don’t want their well-known brand names being associated with copyright piracy. So there’s been a lot of pressure from intellectual property rights-holders to say, you’ve got to clean up the advertising ecosystem, or we just won’t use it.

So as I said before, this is a very complex area – ads can be served in a microsecond, they can be served in real time, they fly all over the place and it’s very very difficult to track them. So this will be a big challenge for companies like Facebook, Twitter, Google, these big ad companies to try and solve.

Courtesy: Google

Luke Vargas: Alan, it’s one thing to regulate the internet when everyone is who they say they are. But when we consider the power of the internet to be weaponized for geopolitical gain, I think it’s clear that we’ve got actors, real countries now, that are exploiting the anonymity of the internet. Isn’t that right?

Alan McQuinn: Maybe not in the advertising space, obviously, where disclosure helps shed light on the people behind advertisements. But certainly for citizens, anonymity is often very important, especially in democratic countries like the United States. And we see the opposite happening in China, for example, where administrators of messaging groups are required to be personally liable for the content in those groups, and everyone who interacts online must verify who they are.

As a result, we see people in China pushing toward more encrypted channels so that they have more privacy. We see similar effects here in the United States, where a lot of companies are moving toward encrypted channels to take themselves out of the equation, so that their users are able to communicate with one another without fear of either the company or a government or anyone else spying on what they’re saying.
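A minimal sketch of what “taking themselves out of the equation” means technically: public-key end-to-end encryption, shown here with the PyNaCl library for illustration (real messengers use more elaborate protocols, such as Signal’s):

    # Minimal sketch of end-to-end encryption between two users, using the
    # PyNaCl library (assumed installed via `pip install pynacl`). The platform
    # relays only ciphertext; without the private keys it cannot read anything.
    from nacl.public import PrivateKey, Box

    # Each user generates a keypair; only public keys are ever shared.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts for Bob using her private key and Bob's public key.
    alice_box = Box(alice_private, bob_private.public_key)
    ciphertext = alice_box.encrypt(b"meet at noon")  # nonce prepended for us

    # The service stores and forwards only `ciphertext` -- opaque bytes to it.

    # Bob decrypts with his private key and Alice's public key.
    bob_box = Box(bob_private, alice_private.public_key)
    assert bob_box.decrypt(ciphertext) == b"meet at noon"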

Luke Vargas: A quick question for both of you. Is this Facebook story – their unwitting role in perhaps helping tip the election – the story that finally leads to a discussion about how to regulate the internet, or is the internet just going to remain a wild west? Natasha?

Natasha Tusikov: Well, certainly I think this is the latest crisis, and I think this is part of the problem: we lurch from crisis to crisis – whether it’s terrorism, or the Apple-FBI encryption debate, or fake election ads – instead of thinking about how we should actually approach this, and what rules and standards should be in place to govern these very big companies.

And we should also be very cautious about handing over too much power to a very small number of companies and allowing them to really set the rules of what we can do and see and share online.

Luke Vargas: Alan?

Alan McQuinn: So, as we’ve discussed, countries have different levels of protection for free speech and privacy, and so reaching a global consensus is going to take a while, especially around cyber issues and free speech issues, obviously. So we need to empower multinational companies to deal with some of these threats in the meantime, while our countries work together to establish a global consensus around these issues.

So while it’s not a perfect system, our algorithms will get better and we will be able to tackle these issues a lot better over time and with trial and error.

Luke Vargas: Alan McQuinn from the Information Technology and Innovation Foundation in Washington and Natasha Tusikov of Canada’s York University, thank you both so much for being with us – 

Alan McQuinn: Thank you for having me.

Natasha Tusikov: Thank you very much.

Luke Vargas: From United Nations headquarters in New York, I’m Luke Vargas, signing off. Join us again next week on Wake.
