
As Executive Director of the Global Network Initiative, Jason leads a dynamic multi-stakeholder human rights collaboration, building consensus for the advancement of freedom of expression and privacy amongst technology companies, academics, human rights and press freedom groups, and socially responsible investors. Jason previously worked in the State Department’s Bureau of Democracy, Human Rights, and Labor, where he led the Department’s efforts to protect Internet freedom.
Links:
Transcript:
Rob: Well hello everyone, another session of TurnSignal. We have Patrick Sullivan, Rob Pirno, and today our guest is Jason Pielemeier. He’s Executive Director at the Global Network Initiative. As executive director, Jason leads a dynamic multi-stakeholder human rights collaboration, building consensus for the advancement of freedom of expression and privacy amongst technology companies, academics, human rights and press freedom groups, and socially responsible investors. Jason previously served as deputy director and policy director at GNI as well. Prior to that, he was the special advisor and section lead for the Bureau of Democracy, Human Rights, and Labor at the US Department of State, and before that his work ranged from being an associate at a Manhattan law firm to grassroots community development in the rainforests of Guatemala, which we’ll definitely cover as well. Jason has a Bachelor of Science from Northwestern University and received his law degree from Yale Law School. Jason, we’re so glad to have you here today to talk about technology and its relationship to democracy and human rights, both here in the United States and abroad. So thank you so much for joining us, Jason.
Jason: It’s a real pleasure. Thanks for having me.
Rob: Yeah, let’s jump in. So how do you see current advancements in technology helping to advance freedom of expression and privacy amongst technology companies, academics, human rights groups, and others? Because I think the word on the street, a lot of times, is that technology is reducing our personal freedom and limiting freedom of expression, through things like doom-scrolling algorithms on social media platforms that mute the rational voices and focus on clickbait material. But how do you see it actually advancing freedom of expression and privacy abroad?
Jason: Yeah, it’s a great question to start with. I guess I started thinking about these issues in many ways as a Peace Corps volunteer. I was in this, as you described it earlier, rural rainforest environment where cell phone towers were introduced during the time I was there. So when I first got there, there were two satellite phones in the community of about 10,000 people where I lived. One was owned and operated by the municipality and almost never worked because they were behind in paying their bills. And then there was one entrepreneurial store owner who had bought himself a hookup and would rent out his phone by the minute. Over the course of my three years living in that community in northern Guatemala, an internet cafe opened up, funded through a USAID project, and eventually mobile towers came in and people started to get access to cell phones. These were cheap flip phones, but the opportunities that created for people to communicate with loved ones and get access to information were very clearly changing their lives in fundamental ways. A lot of my work in the Peace Corps was actually focused on traditional property rights and questions of land tenure for indigenous populations. That’s what got me interested in the law. I went to law school thinking I’d study real property issues and ended up going in a lot of different directions, including starting to think about access to information, access to knowledge, and the legal regimes that limit and open up possibilities for information exchange. I came out and did some other work for a little while before coming back to this at the State Department. I started working on these issues in the era of the Arab Spring, and it was a real time of possibility, a sense that social media platforms in particular were creating spaces for people to express themselves, to share information, to organize around some of the traditional barriers and blocks that existed in more repressive contexts like the Middle East and North Africa. So it was a very heady time, and this was the era in which then Secretary of State Hillary Clinton made a couple of very high profile speeches defining the concept of internet freedom: how do human rights, a framework that draws its origins from the post-World War II era, apply in the context of a global, interoperable communications network like the Internet? And that changed pretty quickly. I ended up taking responsibility for the work around internet freedom in the Bureau of Democracy, Human Rights, and Labor right around when a gentleman named Edward Snowden made a bunch of disclosures about the ways in which the US government was accessing communications for national security and intelligence purposes. So what had been seen as a very positive, proactive, US-driven agenda, one that aligned the economic interests of US-based tech firms leading the development and expansion of internet-based products and services with the foreign policy interests and the traditional First Amendment values of the US, quickly became a much more complicated story after Snowden.
So to fast forward to today, I think the story has just continued to become more complicated. On the one hand, smartphones are ubiquitous, and the tearing down or undermining of previous gatekeeping functions, which was part of the source of inspiration and optimism in the early days of the expansion of the internet, has now revealed itself to have consequences: a real proliferation of all kinds of content, including harmful content, and challenges that users, companies, and governments are all facing in figuring out where the right points are to intervene and address those harms and vulnerabilities. So the arc of my experience working on these issues has made me a bit more circumspect about the ultimate role of technology, which I think was your question. At the end of the day, communications technologies are disruptive, and the pace of change and innovation in the organization and use of those technologies has quickened, while society’s ability to adapt and respond, whether through cultural norms and behavioral patterns or through legal and regulatory adaptations, has been significantly challenged by the pace of that change.
Patrick: Isn’t the story really that this is a huge net positive for democracy? I mean, if there’s a repressive regime and a resulting diaspora of dissidents, with the internet and Zoom they can now coordinate essentially a shadow government among each other in a way they probably couldn’t have done in person before. They can also take on many of the other duties of a government, like disseminating their message out to the world. So they can coordinate internally and they can also act like a government. And I’m thinking in particular of Syria, right? With Assad leaving, there’s this gap, and everybody is wondering who’s going to step in. But the fact is that there was a group of Syrian dissidents from all over the world who had been coordinating, who were very much ready to step in, and did, right? They had been using these technologies to coordinate with each other, which allowed them to move with great speed to step in and simultaneously do a lot of reassurance: yes, we’re Muslims, but we’re not Al-Qaeda, we’re not what they say we are, engaging in this public relations battle of defending themselves. I mean, it’s amazing. So, how has technology advanced democracy, and this podcast is interested in democracy, right? I think on the whole, huge net positive. Would you agree?
Jason: I do think that as a whole, technology, and I would say communications technologies specifically, because there are all kinds of different technologies, but I do think that the more people are able to communicate, the more democratic activity can be conducted, and it flattens space and time. You were pointing to the example of expatriated communities, communities that are outside of a particular jurisdiction or country, who can engage across space and time thanks to the internet in a way that would have been much more difficult pre-internet. That can be positive. It can also be negative. There are examples like Ethiopia, where diaspora communities were deeply involved in seeding hate speech and feeding the fire of internal, real-world conflict in that country, conflict that has its roots in decades if not centuries of history. So it can cut in both directions. The main challenge I see is that the norms and institutions that had developed to mitigate against excess or extremism in the pre-internet world were, in certain contexts, anti-democratic. In Egypt or in Syria, you had closed regimes that exercised effective control over mainstream media and press, television, radio, and the internet came in and disrupted that in a way that was positive in terms of opening up space and creating opportunities for people who had otherwise been suppressed or censored. But in democratic countries, we also now see that political tendencies or positions that had not necessarily been politically suppressed, but had normatively been boxed in or otherwise censured from a societal perspective, have found a resonance, right? The ability of people to connect with other people in certain chat rooms or closed spaces online is creating challenges in a similar way. What we are hoping will happen, and what I think needs to happen, is for new norms and civic responses to evolve to manage this shifting set of institutions and technologies so that we continue to move forward in a democratic direction. And that’s what, at a very high level, the Global Network Initiative is intended to do. So GNI, just to give a little bit of history, is not a new organization. It’s been around for 18 years. It came about in response to some of the earliest controversies that the large commercial internet providers faced in 2006, 2007, as Google was starting to provide Gmail and Google search, Yahoo was providing search, and Microsoft was expanding its services around the world. These companies were providing products and services on the internet in different languages, products and services that were instantly accessible to anyone with a network connection, and that population was booming around the world. Governments, not surprisingly, especially those that were more repressive, were threatened by the fact that people could communicate privately by email or could get access to information from outside of those controlled press and media environments. And so they sought to intervene. They issued orders to either censor certain content or to get access to people’s data or the content of their emails.
And the companies’ initial response was to refuse to install a government backdoor. Eventually, I mean, these earlier demands were much simpler: it was just, give us this person’s emails, or take down this person’s account. The companies were generally not inclined to comply with those orders, but the pressure quickly ratcheted up. Governments threatened to block these services from their jurisdictions, which of course had commercial implications. There were some high profile controversies in that period in the mid-2000s of companies complying with orders from governments that were seen as non-democratic, with real, significant consequences for individuals on the other end. And the companies recognized that they needed a more coherent approach, because the internet was only going to globalize more. So they came together with a group of academics, civil society organizations, and investors who were also thinking about these issues, and essentially created a framework based on international human rights law, which is, in theory, a universal set of principles agreed to through charters and treaties negotiated mostly through the UN and some other multilateral venues. They could use that as essentially a playbook to provide a uniform approach to responding to different kinds of government demands in different jurisdictions. And so the Global Network Initiative Principles on Freedom of Expression and Privacy were born in 2007. GNI, as an organization, was born a year later. The founding members were a set of mostly US companies, investors, academics, and NGOs. And the idea was not just, okay, we’ve articulated this set of principles for corporate conduct, for what responsible business conduct should look like for these tech companies; how does that work in practice? We need to not just have a set of commitments that everybody can put on their website and pledge fealty to. We actually need a mechanism for reviewing and learning from these experiences. So what makes GNI unique is that the companies that join commit to periodically undergoing what we call an assessment. It’s a little bit like a management systems audit. We accredit and train third parties that these companies then bring on. They get access to all kinds of sensitive, non-public information about how these companies are structured, how their teams work, what the escalation paths are that a particular government demand or challenge might follow to ensure it receives appropriate attention and that responses are consistent over time, what sort of user notice is afforded in those circumstances, and what kind of redress or remedy might be available to users.
Patrick: They probably have to do that for every jurisdiction they’re operating in, right?
Jason: Ideally, they will have a uniform approach to this. While, of course, the laws in different jurisdictions are different, the way the company responds to legal demands should be consistent.
Patrick: They should all come down the same pipeline with the same decision-making thresholds and that kind of thing.
Jason: Exactly. And so our framework, our assessments, the reason we don’t call them an audit, is that they do look at the paper trails and the org charts and the training manuals and all that, but we also look at case studies: specific examples of demands or sticky situations that companies have ended up in in particular jurisdictions, and how those illustrate whether these organizational policies or systems are actually working as designed. And a lot of that is very sensitive information. A lot of that stuff, if it got out, could put the company in even hotter water.
Patrick: They may not even be able to disclose some of those.
Jason: In some contexts, they are legally restricted from disclosing it. So there are limits to what we can ultimately address, but oftentimes we can find creative ways to talk around certain types of limitations. The idea is that these companies, at the end of the day, have an interest in protecting their users’ rights and in demonstrating to their users and the public at large that they are standing by a certain set of values. And they want to learn from each other, and from these other stakeholders who have critical expertise. So when we evaluate these companies, we look at two things: are they implementing this framework in good faith, and are they making improvements over time? These exercises are ongoing, and the idea is to move each company, and also the wider industry, forward.
Patrick: So what does good faith look like? Is that an adherence to the kind of universal human rights principles that you’re talking about, or is it more about compliance with jurisdictional requirements?
Jason: Yeah, all of the above and more, and obviously good faith can be a relatively subjective standard.
Patrick: Sure, yeah, I’m just curious.
Jason: But at the end of the day, there are going to be conflicts between those things, absolutely. There is a recognition; the GNI principles say very clearly that we don’t expect companies to violate the law. We recognize that there are going to be situations where the government is making a demand that a company feels it has to respond to. Not responding at all, or blatantly disregarding the demand, is not always going to be an option.
Patrick: You want to continue functioning in that jurisdiction.
Jason: Exactly. You have to balance the human rights impacts: okay, we’re going to censor a couple of accounts that the government seems really worked up about; maybe the freedom of expression impact of that is outweighed by the alternative, which is the elimination of that service’s availability in that jurisdiction for hundreds of thousands of people.
Patrick: Which just happened. I mean, we had this here in the United States. I know this is on your list to get to, Rob. But in Mississippi, they passed a law which requires age verification. That’s really the extent of what I know about this law; you probably know much more about it than I do. But in response, Bluesky, which is an alternative to Twitter but designed as a distributed system, different servers can connect to the Bluesky network, which is why you have a full domain name as your username. Most are Bluesky’s, but you can have your own. They want to have this open system. Their response to this law in Mississippi was to essentially say, we can’t comply with this, we won’t comply with this, we don’t think it’s a good idea, and I think it may not even be feasible with the way they’ve designed their network. So they shut down service in Mississippi. And now people in Mississippi are missing out on the opportunity to participate in Bluesky, which carries a lot of public political discourse; they’re no longer a part of it. So if they came to you, and obviously you can’t give them prescriptive advice, but what would be your framework for guiding them through making that decision about whether to stay or whether to go?
Jason: Right. That’s a great question. And, you know, the example in Mississippi is really illustrative of a broader trend, both with respect to child online safety, which is a real issue…
Patrick: Right. I don’t want to imply that the Mississippi law had any kind of bad faith intent, because there’s a real issue of child safety. I think the law is probably overbroad and a blunt instrument, but there’s a real issue, both for the kids who are viewing this content and for the kids who are sucked into the awful trafficking that becomes the product of this kind of thing. So anyway, I just wanted to say that I’m not saying this is a bad law; I don’t know enough about it. But I think there are legitimate arguments on both sides. For sure.
Jason: And that’s almost always the case, and I think it is in this case. But this is a good example. Just to be very clear, Bluesky is not a GNI member. We have engaged with their trust and safety team. They’re a very small organization; I think at last count it’s in the low hundreds of staff total.
Patrick: And they’ve been exploding, so they’re very busy.
Jason: Yes, their growth is significant. And that’s relevant, because for a company like that to comply with these quite onerous age verification requirements, frankly, it may not be a matter of human rights; it may just be a practical reality that they don’t have the capacity. So rather than risk being in violation of the law, the safer, and in some ways easier or more practical, thing for them to do is to say, look, we’re going to restrict the availability of our service in that jurisdiction. Whereas a larger company or a larger platform like Threads, which is owned by Meta, has a lot of infrastructure they can put in place to help navigate some of those laws. Each company is going to face these decisions with their own context and their own business model. The other thing about Bluesky is that they are not an advertising-driven platform, so they have slightly different incentives in terms of how they’re structured. As you mentioned, technically they are structured in a really different way from an X or a Threads, in that they operate off the AT Protocol, which is an open protocol. Yes, most users are using Bluesky’s centralized servers to connect, but you don’t have to; there are other distributed servers that connect to that protocol, and Bluesky doesn’t have control over those servers. So even if they were in a position to say, yes, we’ll restrict, we’ll make sure anybody connecting to the Bluesky server has checked all these boxes in terms of verifying their age, there’s no way they can necessarily control other servers that are connected to that same protocol and therefore visible. So each product or service, even within a company, is going to have its own characteristics that make compliance with any particular law, any particular demand, any specific request that comes through a legal regime, different. And our framework is intended to recognize that diversity of approaches. One thing that’s interesting about GNI: although most of the examples people are familiar with, or can imagine, where governments make a demand of a company that will impact users’ rights, arise in a user-generated content context, like a social media platform, a messaging service, or a video service like YouTube, we also have members who are equipment vendors, so they don’t actually provide any product or service to you or I. Ericsson, Nokia: these are companies that build the hardware, the infrastructure, the routers that undergird communications networks.
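To make the federation point concrete: on an open protocol, an operator can only enforce an age gate on the servers it actually runs, while content published through independently operated servers on the same protocol stays reachable. Here is a minimal, purely hypothetical Python sketch of that dynamic; it models the idea only, not the real AT Protocol or Bluesky’s actual systems.

```python
# Hypothetical toy model of a federated network (not the real AT Protocol):
# one operator can gate its own servers, but cannot gate independent servers
# that speak the same open protocol.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    operator_controlled: bool          # does the main operator run this server?
    age_gate_enabled: bool = False     # is age verification enforced here?
    posts: list = field(default_factory=list)

class FederatedNetwork:
    """All servers publish into one shared, openly specified protocol."""
    def __init__(self, servers):
        self.servers = servers

    def enforce_age_gate(self):
        # The operator can only flip the switch on servers it actually runs.
        for s in self.servers:
            if s.operator_controlled:
                s.age_gate_enabled = True

    def visible_posts(self, viewer_age_verified: bool):
        # Content from ungated servers remains reachable regardless of policy.
        visible = []
        for s in self.servers:
            if not s.age_gate_enabled or viewer_age_verified:
                visible.extend(s.posts)
        return visible

net = FederatedNetwork([
    Server("main-server", operator_controlled=True, posts=["post A"]),
    Server("community-server", operator_controlled=False, posts=["post B"]),
])
net.enforce_age_gate()
# An unverified viewer still sees "post B" from the independent server.
print(net.visible_posts(viewer_age_verified=False))  # -> ['post B']
```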
Patrick: But that may face pressure from regimes to put in backdoors and that kind of thing.
Jason: Exactly. So that’s where that kind of systemic-level pressure comes in.
Patrick: And who is the big Chinese vendor of routers?
Jason: Huawei. They’re not a member.
Patrick: They’re not a member and there’s some question about whether there is a backdoor, right?
Jason: Yeah, certainly there have been all kinds of allegations there, and efforts to sort of route out, as it were, Huawei infrastructure in the US and in other jurisdictions because of concerns about vulnerabilities. So, to come back to GNI and what we do: part of what we do is look at how companies are approaching these different scenarios and legal regimes around the world and try to learn from them, again recognizing that there’s not always a single right answer. But the key to good faith, to your question, is: have they really thought this through? Have they examined all of the available paths here? Did they put in the work to try and find the path of least impact on people’s rights? Over time, we do see patterns or trends in terms of what good can look like. Companies can learn from each other’s experiences, whether those are broad experiences about how products are designed or how their teams are structured, or about how to deal with a particular jurisdiction. Our goal is to help push forward responsible business conduct, and part of that is in the context of responding to these demands, but it’s much broader than that. It includes transparency, right? So, okay, you’ve had to comply with X number of demands in different jurisdictions; to what extent have you been able to provide notice to users about that? Transparency reporting in the tech sector didn’t exist until Google put out the very first public transparency report about the government demands they were receiving around the world. That was a result of their implementation of the GNI principles.
Patrick: And are they still doing that?
Jason: They are. In fact, those reports have evolved and expanded, and they now include very granular information, not just about content that’s been restricted pursuant to a government demand, but also content that’s restricted pursuant to application of corporate terms of service or community guidelines.
Patrick: Got it. Moderation.
Jason: So those are the kinds of bigger picture, industry-wide good practices that we have seen evolve in part because of the GNI framework and the opportunity to continue to accompany companies in these processes over time.
Patrick: So if the United States government came to Google and said, I want Patrick Sullivan’s Gmail, give me the keys, and don’t tell him. If it were a court order, and I don’t know exactly which court, a FISA request, I guess, they could do it with that, and they wouldn’t be allowed to tell me, I believe. But they could publish maybe the number of requests they got that year or something. Do they do it that way? Do they break it down like that?
Jason: Right. If you think of it as a spectrum, in the context of an individualized request for someone’s user data or traffic history, whatever the data or content at issue is, the first thing that companies may do is just wait and see if the government will ask again, right? It depends jurisdiction to jurisdiction, but, you know.
Patrick: I use that tactic at work all the time. Yeah. Wait and see.
Jason: I do that with my kids a lot too.
Patrick: Exactly. The third time’s the charm. Are they going to really remember? Do they really care?
Jason: But I think it’s a valid tactic, right? Maybe not as easy to do in certain jurisdictions that are much more formal, where it’s clear that if they have come to you with a court order, they are very serious about it. But we’re talking about 190 countries around the world, and many jurisdictions within those countries. So oftentimes you’re getting a demand from the chief of police of a particular municipality in some country, and how much authority he actually has is not very clear. So A, wait and see if this is really serious. Now, if it is serious, then you need to look at the demand and examine it from a legal perspective. Does the particular authority that gave you this request have the legal right to issue that kind of demand under the law? If not, you can push back and say, sorry, but you need to go to a court, or you need sign-off from a minister, or whatever the law might require. Even if the four corners of the demand check out and it is legitimate as far as you can tell under the country’s law, there are degrees to which you might comply. So if the demand is, I want Patrick’s emails for all of 2022 because I’m investigating a crime that happened in February of 2022, it’s not clear why they would need your emails from before February.
Patrick: Right.
Jason: So you could do a more limited compliance and, again, see if they push back and call for more. And then, all the way at the other end, there’s the point you were getting at, Patrick: okay, you’ve had to comply with a certain number of these requests, and in some cases you maybe couldn’t tell the individual user about them because there was a gag order, but at least you can provide aggregate numbers in your transparency report that other civil society groups or media can then pick up on to say, hey, there’s a trend here, right? Especially if you look across multiple platforms and say, in Turkey, we have seen an explosion in government demands under these particular laws, or seeking this kind of content, or to restrict certain things. I think that gives people the information they need to become more engaged civically, whether that’s through activism or political engagement.
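The escalation path described here, wait and see, check legal validity, narrow the scope, then report in aggregate, can be summarized as a simple decision pipeline. The Python sketch below is illustrative only; every field name and rule is a hypothetical stand-in, not any company’s actual process.

```python
# Hypothetical sketch of the escalation path described above; the names,
# thresholds, and return values are illustrative, not a real system.

from datetime import date

def handle_government_demand(demand, transparency_log):
    # 1. Wait and see: an informal, first-time request may never be repeated.
    if not demand["formal"] and demand["times_received"] < 2:
        return "no action yet; wait to see if the request is repeated"

    # 2. Legal validity: did an authority with the right power issue it?
    if not demand["issuer_has_legal_authority"]:
        return "push back; ask for a court order or ministerial sign-off"

    # 3. Scope narrowing: comply only with the portion the stated purpose
    #    supports (e.g. drop data predating the incident under investigation).
    start, end = demand["requested_range"]
    narrowed_start = max(start, demand["stated_incident_date"])

    # 4. Transparency: log the demand so it can be reported in aggregate,
    #    even when a gag order blocks individual user notice.
    transparency_log.append({
        "jurisdiction": demand["jurisdiction"],
        "user_notified": not demand["gag_order"],
        "scope": (narrowed_start, end),
    })
    return f"comply for {narrowed_start} through {end} only"

log = []
print(handle_government_demand({
    "formal": True,                    # arrived as a formal court order
    "times_received": 1,
    "issuer_has_legal_authority": True,
    "requested_range": (date(2022, 1, 1), date(2022, 12, 31)),
    "stated_incident_date": date(2022, 2, 1),
    "jurisdiction": "Exampleland",
    "gag_order": True,
}, log))
# -> comply for 2022-02-01 through 2022-12-31 only
```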
Rob: So a lot of what you talk about, Jason, the work that the Global Network Initiative does, is companies sharing best practices with each other and learning from each other and from the different stakeholders that the Global Network Initiative focuses on. What I’d like to hear about now is the EU, Great Britain, and other jurisdictions that are actually focusing on regulations for these big companies like Meta and Google, and what they’re doing right now. I feel like we’re not necessarily in the Wild West anymore; now things have advanced a little bit. Maybe we can talk about that.
Jason: Yeah, that’s a good question. These have become, unfortunately, somewhat politicized topics in the current environment. I think the reality is that when GNI was founded back in the mid-2000s, and for many years after that, there was a broad consensus across tech companies, civil society, and academics that government attempts to regulate content on the internet in particular were broadly problematic, and that even good faith attempts by democratic governments were likely to be ham-fisted and overbroad and have lots of unintended consequences. There was a real sense that while, of course, there were bad things on the internet, the impacts of those things were limited and could be mitigated through voluntary actions by companies and other kinds of community support, and that governments should broadly stay out of it. In that time period, most of the laws we saw attempting to govern content and data were coming out of less democratic countries. I think it’s safe to say that after 2016, that consensus dissolved. This was the post-election period in the US, with allegations of Russian interference and disinformation affecting democratic decision-making, concerns about hate speech, the recruitment of fighters and affiliates to the caliphate through Daesh, and everything that was happening with ISIS online. All of these challenges bubbled up in that period and created a sense that no, in fact, those company-led efforts are not sufficient; governments do need to step in, and there needs to be some accountability mechanism here. So, for us as GNI…
Patrick: What was the origin of that consensus, the new consensus? You’re saying there’s a move towards, yes, government should have a greater role in all of this. Is that just from the population at large? Is it from the regulated entities? Was it from the governments? I don’t think it was the…
Jason: The media. But I think, yeah, I mean, the media, but I think you just saw bipartisan concerns here in the US, maybe for different reasons, but concerns about how much power these companies had in terms of deciding what sort of content was available, what was amplified and what wasn’t.
Patrick: I can see there being two levels of it. I’m just trying to think of how an individual would actually react. I think most individuals are like, I’m upset about all the ultra-right content on X, or, I’m concerned about all this woke stuff. And then the next level of it is, I’m concerned about companies affecting democracy in general, right? So there’s the self-interested part, and I think there’s absolutely a consensus that everybody feels the other side is taking advantage of technology and that kind of thing. But I think it’s probably a much smaller group of people, people like you, who are thinking about how technology affects the democratic process.
Jason: Right. And that’s exactly where I wanted to go. Because once we realized that, okay, whether we like it or not, governments, including democratic governments, are going to regulate this space, then the question became: well, we could take potshots and just attack all of that, or we could try to engage a little more constructively and proactively and think through what the ways are to do this that actually maintain the rights to free expression and privacy, or protect them as much as possible. Because we’ve seen examples of very draconian censorship laws and very invasive surveillance regimes, and that’s not the right answer, right? I mean, to some extent, those can be answers to questions of social harm or online harm.
Patrick: Just like having the National Guard on every block in DC will temporarily reduce crime.
Jason: But it’s not the right answer, and it’s not the sustainable answer, right? So from our perspective, the move was, again, to go back to the core principles in international human rights law and think about how we try to influence government regulation: to recognize the diversity of products and services that are out there on the internet, not to paint all of these platforms and services with one brush, and to recognize the different vulnerabilities that will exist based on differences in business model, differences in jurisdiction, and the specific intent of the law. So concepts like proportionality and necessity, which are rooted in international human rights law, are very important to us. We worked closely with our members to actively engage in the processes that resulted in the creation of the Digital Services Act, for instance, the EU’s broad approach to addressing digital services, and then in the UK the Online Safety Act, which started off as the Online Harms Bill.
Jason: Yeah, and so we appreciated some of the things in those laws, some of the changes that were made throughout the legislative processes. We were critical, and remain critical, of some of the things that ended up in those laws. At the end of the day, though, I think they are good faith efforts to address legitimate concerns about online harms and digital safety, and I think the frameworks they impose are part of the new reality of operating on the global internet. So, to come back to the Mississippi example: both the Online Safety Act and the Digital Services Act place a lot of focus on risk assessment and risk mitigation. The idea is that there’s not necessarily a single right answer, but all platforms and services need to go through the exercise of identifying the risks users will face on their services, based on how those services are designed, the different features and affordances offered on the service, and who the user base is, who you’re targeting, who’s showing up and using your platforms.
Patrick: What are those top three risks? I’m thinking of the risk that your government will intercept your privately intended communications and use them to… That might be a risk.
Jason: In this case, these laws were designed to address not so much the risk of the government intervening, but rather what people are doing on the product and service. You mentioned, for example, child sexual exploitation or trafficking. That could be a risk. If you identify that as a risk, then you need to have mitigations. The laws don’t necessarily tell you what the mitigation should be, but you need to have thought through: here are the steps we’re taking. And one of those steps might be to restrict access to certain services completely, or to certain affordances on a service, based on the user’s age. So then the question is, if you’ve identified that as the right mitigation for a particular risk, how are you implementing it? And this is where the questions of proportionality and necessity come back into focus. One way you could do it is to say, just tell me how old you are, right? That’s not super burdensome on the user, and it minimizes the risk that you’re actually going to limit the free expression of your users by inaccurately identifying them as underage, but it’s probably not very effective, because kids are good at lying, and, well, everyone’s good at lying, right? And it’s low consequence. At the other end of the spectrum, you could require people to provide a lot of personally identifiable information, like a screenshot of their driver’s license or passport or some sort of national ID. That’s still not foolproof, kids can find ways around that, but it also creates a privacy risk, because you are now in possession of sensitive information about that individual that you didn’t already have. So how do you mitigate that? Well, maybe there’s a way to ephemerally capture that information, verify it, and then ensure it’s deleted, so you’re not creating a repository of personal information that would be susceptible to either government demands or some sort of hack-and-release cybercrime activity. These are the kinds of exercises companies are now having to work through as a result of these laws. Again, they’re not perfect. I think there are a lot of concerns about the overall direction in which these laws are pushing companies to be more and more limiting, not just in terms of who can access their services but in terms of what sort of content is permitted on them. There’s a lot of legitimate debate to be had there. Unfortunately, what we’ve seen in the last few months is that these laws have become part of the broader transatlantic negotiations revolving around national security, trade, and questions of broader geopolitical alignment. So they’re not necessarily being evaluated strictly on their own merits, but rather getting mixed up with broader geopolitical trends.
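That capture-verify-delete idea can be sketched in a few lines. The Python below is a hypothetical illustration, assuming some document-parsing step exists (extract_birth_year is a named placeholder, not a real library call); the design point is that only a boolean survives the check, never the ID document itself.

```python
# Hypothetical sketch of "ephemeral" ID-based age verification: check the
# document once, keep only a yes/no flag, and never retain the image.

from datetime import date

def extract_birth_year(id_image: bytes) -> int:
    """Placeholder for a real OCR / document-verification step (assumed)."""
    raise NotImplementedError

def verify_age_ephemerally(id_image: bytes, minimum_age: int = 18) -> bool:
    """Verify the document, then discard it; only the boolean is returned."""
    try:
        birth_year = extract_birth_year(id_image)
        # Coarse year-based check, purely for illustration.
        is_old_enough = (date.today().year - birth_year) >= minimum_age
    finally:
        # Drop the local reference immediately; a real system would also have
        # to ensure no copy is logged, cached, or written to disk anywhere,
        # so there is no ID repository to breach or demand.
        del id_image
    return is_old_enough

# The account record would keep only the flag, never the document:
# user.age_verified = verify_age_ephemerally(uploaded_image)
```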
Patrick: Well, I think from Europe’s perspective, and I think this is true of GDPR as well, some part of it is a response to the fact that these companies that are inhaling so much of their citizens’ information are all in the United States. They’re all US companies. They’re all making a shit ton of money, too, and they’re not paying taxes in the EU or those places. For me, it’s like, well, then you invent something; then you get to collect the taxes. But in any case, my point is just that I think some of the motivation for this response, which can be seen as an over-large response to the actual problem or the actual demand, is a kind of techno-nationalism or something. And I can see that if it were the other way around, we would absolutely respond the same way to so many of our citizens using these…
Rob: And we have companies in China, right? That’s exactly right. TikTok is the exact same response, and that is how we’re reacting to it.
Rob: You have to sell to a US company, right?
Patrick: Yeah, force them to do that. But part of where I was going with this, something that had occurred to me, is that when it comes to GDPR and data protection, I think people don’t really care that much, right? When you put a cookie banner in front of them, they’ll click, they’ll agree to anything without reading it. They just want to get on with their day. If you were to ask somebody what their top ten concerns are that keep them up at night, this would not make the list. I think if you were an activist and you’re concerned about your own government surveilling you, then that might be something that keeps you up. But as far as this techno-nationalism or whatever you want to call it goes, a lot of it is just driven by these control issues about the location of these companies. And you’re right, Rob, that TikTok is the exact same response; we’ve responded just like these other jurisdictions have.
Rob: Yeah. So Jason, going forward, you have a magic wand. What are you going to do to expand freedom of expression without sacrificing privacy, as it relates to international freedom, international communication? What do you think needs to be done?
Jason: Yeah, great question. I think, and this goes back to the beginning of this conversation, we need to develop, socially, the right sorts of norms and behaviors. I see this a little bit with my daughter, who is now 12, about to be 13. She has a phone; she’s not on social media yet. But I see a lot of differences in how younger generations interact with technology, and I think a lot of that is adaptive behavior to some of the risks they face, not necessarily exogenous risks like trafficking, but more the risks of how content, once it’s out there on a platform, can be manipulated, misused, spread, and shared in ways you might not feel comfortable with. So part of it is that, as a society, we need to evolve our behavior and our approaches to technologies. I think there’s also a need for civic engagement around content online. Unfortunately, for whatever reasons, social media has emphasized and accelerated a lot of confrontation around content. But I do think there are other ways that people can engage in discussion and engage with each other, and finding ways for platforms to design for that, for more constructive engagement, giving users more control if they don’t want to see certain kinds of content or don’t want to have certain kinds of interactions, is critical. And I do think there is a role for governments to play, but we need to be really careful about that. I say that as someone who generally has not been skeptical of the broad intent of at least my own government, at least until recently. The challenge is always not what the existing government or the existing authorities would do with a particular authority; it’s what could happen if that authority comes into the hands of someone who isn’t being held accountable, or who doesn’t have democratic principles as their guiding norms. I think we’re starting to feel that here in the US. And that’s why I am generally very cautious about creating a lot of government control over these technologies. But that doesn’t mean there isn’t a role for governments to play; there can be and there should be, and I think we’re working through that. I talked about some of these laws, the Digital Services Act, the Online Safety Act; these are very new, and we’re just starting to see how they play out in practice. We won’t know for some time whether they are achieving the legitimate aims that legislators and governments around the world set out when they constructed these very complicated regulatory regimes, until we have data and evidence, and that’s why transparency is so critical. The Digital Services Act actually has a provision for giving researchers access to data about how these platforms work, which would help allow independent evaluation of some of these controls and mitigations that companies are trying to put in place. In the UK, the Online Safety Act is administered by Ofcom, the independent communications regulator, and they are considering data access as a requirement under that regime. So I think we’ll need to see how this stuff plays out.
I’m hopeful that the current politicization of technology and tech regulation won’t interfere too much with the ability to actually evaluate, on an objective basis, all of the concerns, including the concern that these laws are suppressing people’s ideological viewpoints. That’s a legitimate concern, and I don’t in any way dismiss it; I think there is evidence of it. But we need to look at it from a neutral, evidence-driven perspective and figure out how to disentangle efforts to address hate speech, which is a category of content that can be restricted under international law, it’s recognized in international law, from certain political perspectives and points of view. There is certainly potential for over-broad content moderation of hate speech to spill over and restrict what should be legitimate political speech. We don’t want to end up in a world where technology companies are driving a monoculture of political belief and ideology. So I think that’s a legitimate concern, but we need an evidence-based discussion of how these regulations are working, and we need to adjust them. And again, that’s the GNI philosophy: we need to make improvements over time.
Rob: That sounds like something that’s done in any industry, right? You have regulations in place, you have laws, and then you adjust them over time to see how they work. Ideally. Yeah. So Jason, this has been a wonderful conversation. Patrick might have one final question. You said something early on, I think right before the origin story, about the 18 years the Global Network Initiative has been around. You mentioned something about a gatekeeper for the internet, something that used to be in place that no longer is. What were you referring to there?
Jason: I think what I was referring to is gatekeepers in the sense that before the internet, most people got information through what we would sometimes call mainstream media. At least in the pre-cable days, there were a few primary networks on television, and there were large publishing companies that owned many of the big newspapers, and they were gatekeepers, right? Sometimes localized gatekeepers, sometimes national gatekeepers. And governments themselves, of course, were the ultimate gatekeepers, because they would create licensing regimes for things like bandwidth or spectrum. The internet undermined all of them, governments and mainstream media. Initially that was very exciting and energizing, and it created a lot of opportunity for all kinds of people to exercise influence and exercise their voice. But the reality is that when you had a certain number of established gatekeepers, you also had nodes of control that could be used if something problematic was happening. That could be used for bad: to the extent that those gatekeepers were captured, they could stifle people’s freedom of expression and access to information. But if a regime was legitimately trying to address, say, profanity or pornography, it was easier; there was a set number of distribution mechanisms, and you could try to control those in a way that was relatively effective. With the internet, that has become much more difficult to do, and I think we have to recognize that we’re probably not putting that genie back in the bottle anytime soon. That’s why, if we want to address these concerns, whether it’s eating disorders or extremist content, part of the answer can be how we address those through these social media mediums, but that can’t be the only way. We have to get to the underlying roots of these problems: why people are vulnerable to these kinds of messages, and who’s exploiting those vulnerabilities. Just addressing the last mile, content sharing on social media, is in some ways the easiest step to take, but it is probably the least effective.
Rob: Well said. Patrick, anything else to add?
Patrick: I don’t know, this has been a fascinating conversation. I really learned a lot, and I hadn’t thought about, I mean, your constituents, they’re really stuck. They’re making a lot of money, but they also don’t have all the choices that are probably ascribed to them when it comes to dealing with governments and staying open. So, yeah, I appreciate your perspective on that.
Jason: Oh, thank you guys.
Rob: Yeah, thank you, Jason. Our guest today was Jason Pielemeier, Executive Director of the Global Network Initiative. This was another episode of TurnSignal with Patrick and Rob. Thanks so much for listening.