Legal Affairs

DEBATE CLUB

Should the U.S. Regulate Tech Sales To China?

Richard A. Epstein and Derek Bambauer debate.

This Week's Entries: Monday | Tuesday | Wednesday | Friday

Cisco Systems, the multi-billion-dollar American tech company, earns an estimated $500 million a year from sales to China. Some of the tools it sells to the country's communist government are used to clamp down on the expression of Chinese citizens.

In the new issue of Legal Affairs, Derek Bambauer, a fellow at the Berkman Center for Internet & Society at Harvard Law School, argues that the United States needs to crack down on such sales. "Market freedom," he writes, "does not necessarily lead to personal freedom. The right to sell must sometimes yield to protect the right to speak." Should the U.S. government restrict a company's right to sell products that might be put to anti-democratic use?


Richard A. Epstein is James Parker Hall Distinguished Service Professor of Law at the University of Chicago. Derek Bambauer is a fellow at the Berkman Center for Internet & Society at Harvard Law School.

Epstein: 12/13/05, 11:53 AM
Derek, I've just had the pleasure of reading your thoughtful article on the angst-provoking question of whether the United States should impose tough restrictions on the right of American firms to do business with the Chinese government in ways that aid its repressive internal police activities. Much, indeed, most of what you say I agree with, so I'll just nip at you around the edges.

The core of your claim is contained in this sentence: "By taking advantage of market freedom and selling products to repressive regimes, however, these companies undermine another fundamental freedom: the ability of individuals to speak and think without fearing government retribution."

The source of my uneasiness with this claim is that it places the freedom to trade in conflict with the freedom to speak, in ways that hint at some deep structural weakness in the pro-market position. I don't think that this is the case. Move for a moment outside the international realm, and there are all sorts of cases where the state, consistent with free-market principles, restricts certain voluntary transactions in order to protect the interests of third parties.

Here is why. The usual logic for markets has two key steps. The first is that all voluntary transactions generate mutual gains for the parties. That's true with the China trade. But the second step is every bit as crucial. The increased wealth or utility of the parties to the transaction increases the wealth or opportunities of third parties as well. Stated otherwise, ordinary market transactions have positive externalities.

So far, Derek, I don't think that you'd disagree with this characterization. But the negative implication of this position should give us both pause. What happens when a voluntary transaction generates negative externalities for third persons? That happens with the sale of guns for use in crime, for example.

Now the sensible defender of market principles finds good reasons to block this particular transaction, for, ironically, the greater the gains to the parties, the greater the peril to third persons. So the normally positive language of contracts gives way to such terms as conspiracy and trafficking. Ordinary people get the point as well.

The reason your case to control the sale of key software to China has such power is that there is less of a conflict of interest between free markets and speech than you suggest. Think, Derek, about the standard restrictions that we place on free speech, dealing with such negative third party effects as fraud, defamation, and incitement to violence. We have a unified view of economic markets and free speech. At this point the tough question concerns the design and effectiveness of any such restrictions, but not their basic legitimacy. I'll turn to those in later posts.


Bambauer: 12/14/05, 09:05 AM
Richard, thanks very much for taking up this debate, and for a thought-provoking first post. It's a privilege to debate with you.

I think we agree generally: negative externalities justify regulation. I don't think this resolves the free markets/free speech tension for technology sales, though, for three reasons.

First: In the standard scenario, the regulating government has an interest in the welfare of all affected parties, but here it does not. Normally, a state wants both to foster mutually beneficial exchange and to protect its citizens harmed by such dealings. Here, though, the U.S. government's perspective is much more limited. It need not necessarily consider negative externalities from technology sales abroad; after all, these harms happen far away to foreign citizens. The sellers, though, are American companies with lobbyists, and employees who vote. This tilts the government away from regulation unless we conceive of American interests as extending beyond financial results.

Second: The case that this technology creates negative externalities (on which we agree) is a contested one. Companies such as Microsoft and Cisco argue that half an Internet soapbox is better than none. They think that overall the externalities are positive: a filtered blog confers more benefit than no blog at all. Technology companies follow our approach in calculating the utilities involved; they just get a different answer. (Self-interest likely affects their math.) If this is, or at least appears, uncertain, should we default to promoting markets or speech?

Third: Dual-use goods complicate regulation. Fraud, defamation, and incitement to violence are unambiguously negative in their effects; surveillance and filtering technologies are not. Like guns, these products enable pernicious and desirable results with equal ease. Cisco routers use the same code to block computer viruses or Web sites about the Dalai Lama. Whether the externalities are positive or negative thus depends on the purchaser's conduct. Control over the technology itself must therefore be more nuanced.

In short, regulation must operate here under ambiguity. The direction of the externalities isn't clear. Moreover, the actors who determine that direction—foreign governments—are rarely subject to American control over their conduct. Dual-use technology forces us to choose between limiting transactions and risking enabling censorship.

Finally, we also face these questions domestically (as with handguns). Peer-to-peer software, for example, transports lawful materials as readily as those that infringe copyrights. Thus far, responses to dual-use P2P focus on the seller's knowledge and intentions, and on whether the product might be designed to favor legitimate uses. These questions of the design and effectiveness of controls are the ones I raise in the article, and that you highlight in your response. I hope we can take them up.

Epstein: 12/14/05, 04:22 PM
Given your response to my first post, it is clear that this discussion has moved away from China to the broader, if vexed, question of state regulation in an international context. In dealing with that issue, Derek, I'm going to sound like a (once) broken record. All the issues that you raise with respect to the difficulties in regulating speech can arise in the context of regulating standard forms of economic activity. So here goes.

First, I don't think that there is any "standard scenario" in which the regulating government cares about the welfare of all the parties. That is true when all effects are local, but in both the American and international contexts we have all sorts of cases where that assumption does not hold, and that makes life miserable for us all. States can, and do, impose regulations that are designed to discriminate against out-of-staters. If allowed to do so, they will discriminate explicitly. If not, the discrimination will go underground in the form of neutral rules that are intended to have disparate impacts. The whole effort to use dormant commerce clause logic to constrain this behavior is testimony to its persistence.

In the international context we have just this same tension. We will subsidize American goods even if that causes harm overseas. We will authorize domestic cartels to operate free of state interference because the price is paid elsewhere. We will impose agricultural tariffs. The upshot on this point is frightening. There is always a conflict between good policy and national interest. The cynic will say that the latter always wins. The realist will say that the national interest wins too frequently, but presses back on principle nonetheless. It is an uphill battle, always.

Second, the half-a-loaf issue is not confined to speech-only situations. The entire question of sanctions against South Africa or Cuba, or for that matter China, rests on just those second-best calculations. If we trade with a dictatorship, do we prop those folks up, or do we help citizens see what goods and services come from relatively freer societies? Again, no clear answer. But we do know that the trading companies that gain more are generally willing to have more trade than outsiders without that financial inducement. We should not want to ignore the gains to private firms; but they cannot operate as a trump.

Third, dual use problems are all pervasive and do not arise only with guns and blocking technologies. The exemption of "staples of commerce" from secondary liability is not done because they are not used in that fashion. It is done because the liability sweeps too deeply. In most cases we think that there is some party one step down the chain who is a better target for liability. What happens here also happens with the internet and illicit copying via Grokster. I think that the liability in that case was fully warranted on the intentional inducement theory. But I should hate to think that the big box companies could be held as contributory infringers.

Again I don't think that we disagree on fundamentals. My point is that every imperfection that one can find in the speech market can be found in standard economics markets for goods, which is why they are both markets, prey to ills that no system of regulation or governance can entirely eliminate.


Bambauer: 12/16/05, 08:04 AM
Richard, I'm enjoying our debate.

Your last post raises a critical question: Is speech special? Is the market for ideas different than the market for guns or butter? I think the answer to both questions is: Yes!

Guns, or butter, may help prop up a repressive regime, but technologies that limit speech can be far more pernicious: they can legitimize that regime's actions in the eyes of its people. The information environment in which we operate determines how we analyze events and how we make decisions. Technologies that filter expression alter that information system. Searches for "free Tibet" on one of China's search engines—say, Yisou—return no results. Queries on Taiwan turn up only sites favoring the PRC's position. This is the power of filtering technologies: when combined with positive propaganda efforts, they shift the information balance to favor the state employing them.

These products are powerful because they are invisible. Removing pages from search results—unlike clipping articles from a newspaper—leaves no traces. Users don't know what they're missing. Countries like Tunisia go one step further: They disguise censorship, telling users that a site (such as an opposition political party's page) is down or unavailable, rather than deliberately blocked. Tools that censor expression help countries move towards the goal of a totalitarian state: to control not only how citizens act, but how they think.

Why should Americans regulate U.S. companies to protect freedom of thought in other countries? I'd suggest two reasons. First, our national interest covers more than mere economic prosperity. America's "soft power," to borrow Joseph Nye's concept, derives from our values and our ideals. Aiding censorship for profit undercuts those beliefs and makes us appear hypocritical. It reduces our influence and appeal.

Second, more pragmatically, enabling governments to distort the information their citizens receive can have real consequences. In April, hundreds of thousands of Chinese citizens demonstrated against Japan, angry over the country's wartime atrocities. What they didn't know—because China suppresses such information—is that Japan has apologized repeatedly for those actions. The risk from nationalist anger, stoked by propaganda and the absence of alternative viewpoints, became clear even to China's government, which grew alarmed and had trouble defusing the protests. This pattern could repeat for other highly charged subjects—consider, for example, a confrontation in the straits off Taiwan, or a repeat of the Hainan Island spy plane incident. A skewed information environment can build pressure that explodes when sparked by external events. Enabling censorship makes us less secure.

The information we receive controls the decisions we make. Censorship technology is powerful, invisible, and potentially threatening to our national interest; I think this makes the market for speech different than the markets for goods.

Epstein: 12/16/05, 12:06 PM
Derek, your last post raises a set of questions that take the form: which is worse? Start with the main libertarian premise that it is wrong to engage in conduct that involves the use of force and/or fraud. Which is worse, the deceit or the old-fashioned blow to the head?

The first cut: Force is worse than fraud, if only because the means of defense are so much more limited. If one thinks that he is in a treacherous environment, the first mode of defense against fraud is just not to believe what is said. That form of self-help is not perfect. It may minimize the danger, but it does not allow a person to act just as well as he would with full information, which is of course why we think that fraud or concealment are wrongs in the first place.

But force is still worse, for even if you know the lay of the land, the options for self-defense are much more limited, and much more risky, than a healthy dose of peasant skepticism. You have to put life and limb in peril in order to resist incarceration, injury, or death. Blows are usually more effective than deceit, whether used for good purposes or bad. But not always: sometimes deceit will get you what you want without having to risk physical retaliation.

What is most potent, of course, is the combination of the two. Use the deceit to soften resistance and use force to finish the job, which is why fighters will fake with the jab and go with the hook. The synergies are really deadly, and no doubt China uses combined strategies all the time. So here is the second guns-or-butter question: Which is worse, the use of fraud to disrupt business arrangements or political ones? The former can lead to bankruptcy and depression, the latter to massive political intolerance and unrest. Both can lead to riots, a horrible form of force. My own sense, Derek, is to agree with you that the pervasive, insidious harms from political deception count for more in the long run, which is why the civil rights issues have more traction than the business issues, even if the dollars are greater in the latter case.

And so we get to stage three: Which is worse, a fraud that leads people to be quiescent on domestic affairs or one that leads them to take bellicose steps against foreigners? This one is easy. The moral argument to stop the shipment of speech-suppressive technologies to China is that it allows the rulers to exploit their own people, which is a wrong, even if it is not a wrong to us. But if what you say, Derek, has the slightest truth, that the use of this technology and others is intended to inflame passions outside the country, then the realist has to take up the cause of the moralist. Now, there's every reason to think that our nation could be endangered if these technologies are used to inflame passions against us, so self-defense creeps back into the picture. But in an odd sense this gets back to the first dichotomy. It's far worse if those deliberate falsehoods lead to military confrontations. So force in the end is worse than fraud, even if both are pretty bad, which is why we have to turn our heavy artillery against both.

Bambauer: 12/16/05, 04:00 PM
Your post raises the key question of the relationship between force and fraud. We agree that the combination is particularly potent. It offers repressive states the possibility not only of imposing control through violence (actual or threatened), but of causing their actions to appear appropriate. Thus, technologies of speech control can bridge the gap between firing on peaceful protesters and maintaining order against rioters: The question is the legitimacy of using force.

I want to raise a difficult issue. We've been debating the rules that should govern the sale of certain tools, such as Internet filters, abroad. This problem seems manageable when the purchasing country is China or Burma—regimes with questionable (or worse) records in treating their citizens. It's a harder question, though, if that state is Singapore or Bahrain. After all, these countries have elections and functioning courts and a press that can criticize the government (at least to an extent). Their censorship is much more mild—a few political sites blocked in Bahrain, the use of defamation laws against the opposition in Singapore. Would it be appropriate for, say, Secure Computing to sell its SmartFilter product to these states?

Let me suggest that we should employ four criteria in answering this question:

1. Accountability—is the government that imposes controls over speech accountable to the citizens whose expression is so limited? Are there countermajoritarian checks on this power? (It seems wrong, for example, to allow a country with a Sunni majority to block all Shia Web sites.) In short, to what degree are the governed able to participate in the decisions about what speech is, and is not, acceptable?

2. Openness—is the state overt about its actions that limit expression? Here, Saudi Arabia scores better, since its Internet filtering system notifies users when a page is blocked, and Tunisia does worse, as it misrepresents censorship as technical problems with requested Web pages.

3. Transparency—is the system clear about what material is and is not forbidden? Does the government understate the extent of its controls? China, for example, claims to focus censorship on pornographic materials, but actually targets politically sensitive material such as political criticism.

4. Narrowness—how closely do controls over expression track with a system's stated goals? If a state tries to block pornographic material, does it also prevent access to content on sex education? This concern regarding overbreadth is particularly resonant in American constitutional doctrine regarding government actions that affect speech.

Implicit in this scheme is that restrictions on expression are universal—you noted this in your first post by addressing fraud and defamation. We need to evaluate whether British Telecom's filtering of child pornography, France's filtering of hate speech, and South Korea's blocking of North Korean web sites are legitimate in the same way we judge the speech controls imposed by Iran or Vietnam. The framework above is intended as a starting point for such an effort.

Epstein: 12/16/05, 07:05 PM
The fresh issues that you raised in your last post are just the kind of problems that all conscientious would-be regulators dread. Your initial proposal had to do with the sale of filtering technology to those nations whose practices could not be defended on any view of the world. Faced with the situation in China or Burma, it is easy to see why some firms would be well within their rights to refuse to sell in these markets. Even so, it is hard to get any firms to stay out unilaterally when their competitors are willing to play under a set of rules that none of them like. Just yesterday Google decided—without reading our posts, it seems—that it too would enter the lucrative Chinese market subject to censorship restrictions that it finds repugnant.

Hence the competition between firms that does such wonders in economic areas creates such difficulties with these gray-market transactions. Ordinary business pressures lead each firm to defect from the position they all prefer, which is open access for all materials. It therefore takes a law to stop them from entering, and even that approach is ineffective if other nations are willing to swallow their scruples in order to let their firms gain economic advantage. What your last post makes clear, alas, is that the problem is even more difficult than the two previous paragraphs suggest. We know that regulation, if it is to work at all, works best where there are sharp differences in kind between the practices that should be blocked and those which should be allowed. Your deep knowledge of the state of internet censorship reveals, world-wide, lots of shades of gray. Hence the new problem of deciding which gray is dark enough to require regulation and which not.

Your answer is in effect a four-part test that looks to accountability, openness, transparency, and narrowness. It's obvious to me that you have the vectors all running the right direction. It would be absurd to think that less of any of these attributes improves the moral position of any foreign nation.

Unfortunately, four-part tests don't get us very far in the delicate task of drawing the line between those places we regulate and those which we do not. Is the nation that scores well on attributes (1) and (3) to be regulated because of its deficiencies on attributes (2) and (4)? Is there some other attribute—such as the willingness to review standards on an annual basis—that should be added into the mix? I don't know the answer to these questions, but I do know that it is just this sort of question that the opponents of any regulation will use to throw sand into the gears. How ironic it is that we have to fight an uphill battle to stop the admitted evil of censorship while it is easy to get a strong consensus behind many forms of regulation sporting much more dubious pedigrees.

This has been a pleasure, Derek. Keep up the good fight.

Bambauer: 12/16/05, 10:38 PM
Many thanks for a provocative, thoughtful exchange. You've highlighted the complexities that emerge from the mix of censorship, corporate regulation, and international law.

Your post is somewhat pessimistic regarding the possibilities of regulatory control over technologies that enable censorship (and similar objectionable practices). Unfortunately, I think this assessment is accurate. That said, let me close with three possibilities for progress.

First: We might improve the four-part test I proposed in my last post. You rightly note that this framework, like any multi-factor standard, is fuzzy, imprecise, and potentially subject to mischief from regulation's opponents. We could compare it to another four-part test—that of fair use under copyright law—which has been criticized as similarly flawed. Fair use, though, is problematic because it is a defense to infringement, rather than an affirmative right. The onus is on proving a use fair, not unfair. The key is who is responsible for meeting the test's requirements. Thus, we might remove some of the difficulty from the proposed standard by shifting the burden of proof to companies that seek to sell these products abroad.

Second: We should attempt to alter the calculus that leads to the prisoner's dilemma you identify: all technology companies (we hope) favor free expression, but the payoffs from enabling control over speech lead each to defect, and to help repressive regimes block communication. Game theory teaches us to remove the dilemma by changing the payoffs. Legal rules are one way of doing so. Pressure from parties such as shareholders is another, and one that can be more closely tailored to specific situations. It would be preferable if selling censorship technology to Burma led to a drop in share price rather than a rise. We see nascent opportunities for this possibility in efforts such as Boston Common Asset Management's shareholder resolution pushing Cisco on its human rights policy.
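The payoff logic sketched above can be made concrete. The numbers below are purely hypothetical, chosen only to illustrate the structure: two firms each choose whether to sell censorship tools or abstain, selling is each firm's best response whatever the rival does, and a reputational penalty (of the kind shareholder pressure might impose) flips the dominant strategy.

```python
# Hypothetical payoff matrices for the prisoner's dilemma described above.
# Each entry maps (row firm's action -> column firm's action -> (row payoff,
# column payoff)). All numbers are illustrative, not empirical.

def best_response(payoffs, other_action):
    """Return the row firm's payoff-maximizing action, given the rival's action."""
    return max(payoffs, key=lambda a: payoffs[a][other_action][0])

# Baseline: selling captures market share regardless of the rival's choice,
# so each firm defects even though mutual abstention (3, 3) is jointly best.
baseline = {
    "sell":    {"sell": (2, 2), "abstain": (4, 1)},
    "abstain": {"sell": (1, 4), "abstain": (3, 3)},
}
assert best_response(baseline, "sell") == "sell"
assert best_response(baseline, "abstain") == "sell"

# Add a reputational/share-price penalty of 3 to any firm that sells.
# With the payoffs changed, abstaining becomes the dominant strategy.
penalty = 3
adjusted = {
    a: {b: (p[0] - (penalty if a == "sell" else 0),
            p[1] - (penalty if b == "sell" else 0))
        for b, p in row.items()}
    for a, row in baseline.items()
}
assert best_response(adjusted, "sell") == "abstain"
assert best_response(adjusted, "abstain") == "abstain"
```

The point is exactly the one the post makes: nothing about the firms' preferences changes; only the payoffs do, and the dilemma dissolves.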

Third: We could press technology companies to re-design the products they sell abroad to minimize harms to free expression. Guns can be sold with trigger locks; cigarettes can be produced with lower tar content. Neither eliminates the product's danger, but both reduce it. Such design changes are difficult for dual-use items—it's hard to build guns that shoot soldiers but not demonstrators—but worth pursuing. For example, Internet filtering software sold abroad might enable the purchaser to block categories of content (such as pornography or extreme violence) defined by the manufacturer, but not to designate additional prohibited sites. Public pressure would mitigate the risk an American software company would create a "Chinese dissident" category of sites. While filtering states could create their own methods to block the Web pages of opposition political parties and the like, this will cost them additional time and money, and would mean that American ingenuity didn't aid their efforts.
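The design constraint proposed above can be sketched in a few lines. This is a toy illustration, not any vendor's actual product: the category names, site lists, and class are all hypothetical. The purchaser can enable or disable only manufacturer-defined categories; there is deliberately no interface for designating additional prohibited sites.

```python
# Toy sketch of filtering software whose blockable categories are fixed by
# the manufacturer at build time. All names and sites are hypothetical.

MANUFACTURER_CATEGORIES = {
    "pornography": {"adult-example.com"},
    "extreme_violence": {"violence-example.com"},
}

class CategoryFilter:
    def __init__(self, enabled_categories):
        # The purchaser may only toggle predefined categories; requesting
        # an unknown category (say, "political_opposition") is rejected.
        unknown = set(enabled_categories) - MANUFACTURER_CATEGORIES.keys()
        if unknown:
            raise ValueError(f"not a manufacturer-defined category: {unknown}")
        self.blocked = set()
        for cat in enabled_categories:
            self.blocked |= MANUFACTURER_CATEGORIES[cat]

    def is_blocked(self, host):
        return host in self.blocked
    # Note the absence of any add_site() method: designating further
    # prohibited sites is outside the purchaser's control by design.

f = CategoryFilter(["pornography"])
assert f.is_blocked("adult-example.com")
assert not f.is_blocked("opposition-party-example.org")
```

As the post notes, a determined state could still build its own blocklist downstream; the design merely ensures the American product doesn't do that work for it.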

Richard, you're right that this is an uphill battle. Yet I think we agree that it's one worth fighting. Thanks again.

