The Unauthorized Access Issue in Japan

Satoshi EGUCHI
Kyoto University
eguchi@fine.bun.kyoto-u.ac.jp

March 16, FINE99 Kyoto

Abstract

In this article, I refer chiefly to Eugene H. Spafford's famous article "Are Computer Hacker Break-ins Ethical?" and Ryoju Hamada's "The Act of Unauthorized Access to Computer Systems and the Internet". I shall examine Spafford's view that computer break-ins, even when no obvious damage results, are unethical. I will show that, contrary to his intention, his strong arguments against break-ins are in fact consequentialist, while his "deontological" arguments are relatively weak or misguided. I will suggest that we should reconsider the problem from a thoroughly consequentialist position. Then I shall examine Hamada's argument for the legal regulation of unauthorized access, and raise some necessary questions.


The Cracking and Unauthorized Access Issues in Japan

Let me start by briefly introducing the current situation regarding the cracking and unauthorized access issues in Japan. At present we have no criminal law that prohibits unauthorized access to a computer system per se, though there are several criminal laws that prohibit wrongfully manipulating others' data, or tangibly/intangibly destroying systems. But recently we have had several examples of criminal use of unauthorized access in Japan. For example:

In the face of these cases, the Ministry of Posts and Telecommunications and the National Police Agency are now planning legislation to punish unauthorized access in general, and a heated controversy has arisen.


In a previous presentation [1] I examined Eugene Spafford's famous article "Are Computer Break-ins Ethical?" [2] and concluded that, pace Spafford, we should reconsider the problem from a consequentialist point of view. I start this paper with a summary of those arguments. Then I consider arguments for punishing unauthorized access in Japan, referring to Ryoju Hamada's paper.


Spafford's Argument against unauthorized access

Now let me introduce Spafford's arguments and my criticisms.

In his paper, Spafford insists that unauthorized access is always unethical. He says, "we can judge the ethical nature of an act by applying a deontological assessment: regardless of the effect, is the act itself ethical? Would we view that act as sensible and proper if everyone were to engage in it? ... right is determined by actions, not results. Some ethical philosophers assume that the ends justify the means; our society does not operate by such a philosophy, although many individuals do. ... The process is important no matter the outcome, although the outcome may help to resolve a choice between two almost equal courses of action."[3]

Usually a "deontological" ethical theory is understood as a theory that judges the value or rightness of an act by its intrinsic features. According to such a theory, for example, the theft of an umbrella is to be (ethically) blamed not because it causes some bad result to someone (for example, the owner becomes unhappy and angry at getting wet in the rain), but rather because it has the intrinsic feature of "stealing".

Deontological theories are often contrasted with "consequentialist" theories, which judge the value or rightness of an act by its consequences. The most popular consequentialist theory is utilitarianism, which holds that an act or rule is right when it maximizes the happiness of the parties involved.

The argument I quoted seems to show that Spafford's understanding of consequentialism is problematic. That is, Spafford seems to think of a consequentialist position as one that judges the value (or disvalue) of an act by its actual consequences; but for most consequentialist theories, what should be taken into account are the expected results and their probabilities.

Arguments for unauthorized access

Next, let us look at the so-called "hacker ethic". I doubt that any such fixed set of beliefs exists or has ever existed, but the following claim is often called by that name: system-cracking for fun and exploration is ethically OK as long as the cracker commits no theft, vandalism, or breach of confidentiality.[4] Spafford enumerates four arguments for this claim.

  1. All information should be free, and thus there is no need for privacy or intellectual property rights.
  2. System break-ins have the utility of finding security holes; therefore non-malicious break-ins should be seen as beneficial to the system.
  3. Hackers do no harm and make no changes. They are only learning and studying how a system works.
  4. Hackers prevent "Big Brother" from abusing information. [5]

Now let us turn to arguments for "ethical break-ins", and Spafford's objections to them.

"Information should be free"

Some "hackers" are said to argue that all information should be free, and that there is no need for intellectual property rights or security. Spafford objects to this "hacker ethic" as follows.

(1) If all information is free, then privacy is no longer possible. (2) If information is not the property of any individual, anyone can alter it. If someone controls information and access to it, the information is not free. But without control, we would no longer be able to trust the accuracy of the information.

Here it seems to me that Spafford stretches the "hacker ethic" excessively. Those who assent to the GNU Manifesto, who are often seen as the main proponents of the "hacker ethic", would not want the program sources on their computers to be changed without their knowledge. And even the most radical "hackers" are not likely to deny the importance of privacy, or willingly accept arbitrary manipulation of socially important data. Spafford and other critics of the "hacker ethic" have set up a straw person by attributing an extreme view to their opponents.

The Security Argument

It is sometimes claimed that break-ins play a positive role in finding security holes in computer systems. This claim was put forward when the Internet Worm incident occurred. The defenders of the author of the Worm program claimed that he made the program in order to point out security holes to system administrators. Break-ins of this type are said to stem not from malicious intent but from goodwill, calling system administrators' attention to security holes; thus, it is said, they should even be recommended.

Spafford makes objections to this argument as follows.

  1. Most vendors and system administrators are now attentive to reports of security problems. We need not break into systems to call attention to security, any more than we need to set fire to a neighborhood to bring attention to a fire hazard.
  2. Not every network site has enough technical and financial resources to fix its security holes. Nor should vendors be responsible for fixing every possible flaw in every possible configuration: since many sites customize their software, it would cost too much money and time to be prepared to fix every security hole.

I think Spafford's analogy in the first argument misses the point. A fire always causes harm and damage, but break-ins may not do any harm. It is true that in the case of the Worm program, it overloaded many machines and caused much trouble for system administrators, but not every break-in has similar troublesome effects. Spafford's second point seems sound, but it clearly appeals to the results (the costs and harmful effects of being prepared to fix security holes), not to deontological considerations.

The Idle System Argument

Richard Stallman says in his letter to Newsweek,

"I do not believe in absolute property right". What this means is that the property owner has the right to use the property but does not have the right to waste it deliberately. Supposed violations of property rights are only wrong according to the damages they do, and the good of all concerned must be considered, not just the owner's. [6]

Stallman's argument is apparently utilitarian, though it may be too simplistic. Spafford rebuts the "Idle System" argument by saying that these systems are usually not in service to provide a general-purpose user environment; they are in use in commerce, medicine, public safety, research, and government functions. Unused capacity is for future needs and sudden surges of activity, not for the support of outsiders. According to him, a person who uses a system for that reason is like someone driving my car away because it is not currently in use.

But I think his analogy might be misleading. Indeed, according to our intuitions, driving an unused car away without permission should be ethically disapproved, but it is unclear whether the situations are similar in relevant respects. Using an idle car without permission will surely cause some trouble to its owner, for in that case the owner will not be able to use his car. That should be counted as "harm". However, it is not clear that using idle computer resources has a similar effect on their owner. For example, Richard Stallman compares computers to typewriters and says that we don't ethically disapprove of using idle typewriters. Some may feel uneasy about his analogy (though I myself agree with Stallman), but at least it is clear that at this point Spafford's and Stallman's intuitions are in conflict. Let's consider another analogy. If I say that using idle computer resources is like playing catch in an empty lot, what do you think? Some of you may still feel uneasy. But at least playing catch seems much less controversial than using typewriters. Where our intuitions are in conflict, an appeal to intuition is not a satisfactory justification. We need something more.

In the face of this objection, Spafford argues that if too many people use an "idle" computer, it becomes overloaded and is no longer idle, which will have seriously bad effects on the primary service of the system. But note that this argument is not "deontological", as he claims, but apparently consequentialist (utilitarian generalization). And it has its own problem: why should we worry about excessive use if such excess is unlikely to happen?

The Student Hacker Argument

The last argument I discuss is "the Student Hacker Argument". Some say that system intruders cause no harm and make no changes to the system --- they are only studying how it works. Since computers are expensive, intruders take the opportunity to study the system. This is also a point that Stallman made in the letter to Newsweek. According to Spafford, this argument has three defects. First, there is no relation between intruding into a system and computer education. Second, the students who are "studying" systems of course do not know everything about the system; therefore they are likely to break the system and cause trouble unknowingly. Third, it is very difficult and troublesome for system administrators to distinguish a "student hacker" from a malicious cracker. Here too, while I accept his second and third arguments, I want to stress that they are clearly consequentialist, and the first in fact denies the claim that break-ins have an educational effect. In any case, all three arguments rest on a consequentialist background.

Conclusion so far

So far, we have examined Spafford's arguments for the claim that, regardless of the consequences, system break-ins are always unethical and unacceptable. His argument is, from my point of view, somewhat misleading. Against his self-styled "deontological standpoint", the arguments I can accept, whether for break-ins or against them, seem to be consequentialist; otherwise, they are relatively weak. Therefore I think it is very important to tackle these ethical problems from a consequentialist position. But if we take this position, Stallman's "Idle System Argument" deserves due attention, and even a weaker version of "the hacker ethic" --- that some information should be free if necessary, even if this requires break-ins --- should also be considered carefully.

Hamada's argument for punishing unauthorized access

Next I shall examine arguments for the regulation of unauthorized access in Japan.

In his very clear and suggestive paper, "The Act of Unauthorized Access to Computer Systems and the Internet" [7], Ryoju Hamada argues for legislating a law to punish unauthorized access. He says that "not being (legally) punished for doing unauthorized access makes it psychologically easier to commit it and may also invite people to do so."[8]

Then he expresses his worries as follows.

  1. Since awareness of security problems is relatively low, there are many security holes in computer systems, which may be exploited for computer crime.
  2. Since systems are interconnected, the result of an attack on one system may affect other systems.
  3. System intruders can gain anonymity through unauthorized access, and anonymity is the best hotbed of crime.
  4. Once they intrude into a system, it is easier for them to obtain many more IDs. This in turn makes further unauthorized access easier, which makes the entire network insecure.
  5. Most other countries have laws to regulate and punish unauthorized access.

Then he considers whether we can treat differently those he calls "hackers" (in his terminology, those who intend to increase and display their ability, and have no intention of damaging the system or manipulating data) and those he calls "crackers" (those who break into systems and change or steal data, or damage the system). His point of distinction is their intentions, but can we discern the intentions of intruders? Of course not. He then considers what we would lose if we strictly regulated unauthorized access.

Here he discusses two important points (in fact he mentions three, but I will omit one).

First, he raises an objection against the Security Argument quite similar to Spafford's. Hamada argues that system administrators would prefer being supported by reliable individuals or companies rather than by unknown "hackers", and want to have at least some information about the result of the inspection.

Second, Hamada argues against the following line of argument: the Internet has been developed (partly) by way of peeping at and copying other systems' software and configurations, and (partly) by the free exchange of information; if we regulate unauthorized access strictly, the development of the Internet might be slowed down. But Hamada stresses that the Internet is no longer a net for and by scholars and researchers only, and that the principle of information sharing has broken down because of the explosion of the Internet population and the popularization of commercial use. According to him, in this situation, non-regulation of unauthorized access would only be taken advantage of by criminals.

Let me critically analyze his points. First, I still think there are not a few cases where an inexperienced system administrator would welcome an inspection of his system by other parties. For example, the most popular MTA (mail transfer agent), sendmail, has many known bugs, and they are very often exploited for sending UBE (unsolicited bulk e-mail). Some groups (I personally don't know for sure who they are, or whether they are really reliable) now run checking programs which search for sendmail hosts and test them. When security holes are found, the programs notify the system administrator. A sweeping legal regulation of the kind Hamada advocates would shut out such services. It might decrease the expected utility.[9]
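The kind of checking service mentioned above can be illustrated with a small sketch. The following Python fragment is purely hypothetical --- the version cutoff and the banner format are illustrative assumptions of mine, not details taken from any real advisory or from the groups mentioned. A real checker would connect to port 25 of each host with a socket, read the SMTP greeting banner, and, when the advertised sendmail version looks outdated, notify that host's postmaster.

```python
import re

# Hypothetical cutoff for illustration only: treat versions before
# 8.9.0 as worth reporting. A real service would consult actual advisories.
VULNERABLE_BEFORE = (8, 9, 0)

def parse_sendmail_version(banner):
    """Extract a sendmail version tuple from an SMTP greeting banner,
    e.g. '220 mail.example.org ESMTP Sendmail 8.8.5/8.8.5; ...'.
    Returns None if the banner does not advertise sendmail."""
    m = re.search(r"Sendmail (\d+)\.(\d+)\.(\d+)", banner)
    if not m:
        return None
    return tuple(int(x) for x in m.groups())

def looks_outdated(banner):
    """True when the advertised version predates the (assumed) cutoff."""
    version = parse_sendmail_version(banner)
    return version is not None and version < VULNERABLE_BEFORE
```

A service along these lines inspects only what the host voluntarily announces; whether that already counts as the kind of "unauthorized access" a sweeping regulation would forbid is exactly the question at issue.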

What about his second point? I admit that the Internet has been growing, that some aspects of it have changed widely, and that it is now changing in both appearance and substance. As Hamada says, the Internet now contains economically highly sensitive information, which its owners do not want peeped at or stolen. Nevertheless, I believe we still rely very much on the free exchange of information and on individual and collective ingenuity and work.

Moreover, does the current situation, in which we have no laws prohibiting unauthorized access, really invite people to commit it? Does a regulation really affect people's behavior, as Hamada says? (We have many minor offenses that would be punished by law; but is it because the law exists that we don't commit them?) Aren't the costs we would pay for monitoring systems and taking and keeping log files too high? Wouldn't curiosity, which is sometimes necessary for a person to increase his ability, be chilled by such a regulation? Even granting that unauthorized access should be ethically disapproved, isn't education more effective than legislation? [10] I must confess I don't know the answers for sure.

Conclusion

From my point of view, what we need on the unauthorized access issues is a careful calculation of the utilities and disutilities of regulating unauthorized access. I must confess that I don't know for sure whether a regulation will do us good in the long run. Of course I wouldn't hesitate to accept regulation if someone showed me that it is really necessary for enhancing expected good consequences. I only hope that I have suggested part of how to think about it.

Notes

[1] Satoshi Eguchi, "Cracking and Hacker Ethics", Technical Report of IEICE, FACE 97-22, pp. 7-12, 1998

[2] Reprinted in Deborah G. Johnson & Helen Nissenbaum (eds.), Computers, Ethics & Social Values, Prentice Hall, 1995

[3] ibid. p. 126.

[4] The New Hacker's Dictionary, compiled by Eric S. Raymond, 3rd ed., The MIT Press, 1998

[5] I shall omit this last one from this discussion because I don't find it very important.

[6] "Are Computer Property Rights Absolute?", in Deborah G. Johnson & Helen Nissenbaum (eds.), Computers, Ethics & Social Values, p. 115.

[7] ISPJ Technical Report, Vol.98,No.52, p. 39, 1998. I use an on-line version. http://member.nifty.ne.jp/ryoju/hacker/ih11.txt

While the English abstract of his paper says that "it is required that this act (the act that prohibits unauthorized access) should punish the harmful unauthorized access while supporting Internet's tradition", the body of the text in Japanese seems to insist that it is not required to support what he himself admits to be an important tradition of the Internet, i.e. harmless unauthorized access.

[8] All quotes from Hamada's paper are my translation.

[9] I might be unfair to Hamada, because he himself admits that we should have exceptions, such as testing of already intruded systems, although he doesn't mention why such exceptions should be admitted.

[10] On this point, I agree with Shinji Yamane & Hiromasa Ozasa's paper "The Reason Why True Hacker does not Crack", Technical Report of IEICE, FACE96-21, which I could not discuss further in this paper.