This paper was written in Spring 1995 for an independent study course called “Foundations of Liberty” that I took within the Economics department at New York University. Course materials were provided by the Institute for Humane Studies (firstname.lastname@example.org) at George Mason University. I’d like to thank Professor Peter J. Boettke at New York University’s Economics program for the guidance he’s given me over the past two years. Prof. Boettke was the advisor for this project.
The paper is divided into nine sections, including:
- Efficiency of the Network
- Pornography on the Internet
- Usenet unregulated: A microcosm of the Net
- Property rights on the Internet
- Privacy and Network security
- Public access and pricing schemes
Textual notes: The paper contains endnotes, and numbered links within the text point to the relevant endnote. I realise this may be a little cumbersome, but embedding the footnotes within the text, whether in parentheses or otherwise, would have been quite tedious. A number of relevant links are contained within the endnotes. The paper itself is relatively bare of links, since I thought it wise to cite references in the endnotes, as is common in most academic papers.
— Subir Grewal
The future of Computer Networks: Implications, Pitfalls and Conjecture.
“However if we are not careful about the decisions we are making today with respect to regulations, laws, policies, and the public interest, these [networks] may…
* Allow for comprehensive forms of censorship and regulation of information exchange, while destroying the press economically.
* Allow for the invasion of privacy in rather unique ways.
* Be regulated out of existence or emerge as a shadow of the potential inherent in the technology”
“A virtual community is a group of people who may or may not meet one another face-to-face, and who exchange words and ideas through the mediation of computer bulletin boards and networks. … We do everything people do when people get together, but we do it with words on computer screens, leaving our bodies behind.”
This paper is an attempt to show why the Internet (and Computer Mediated Communication in general) should not be subjected to the same level of pervasive regulation as the broadcast media (radio, television) and the telecommunications industry. The reason for this, I believe, is that computer networks are fundamentally different from other media of communication. I have been persuaded that the network is so malleable that there is room for all users to customize incoming communications, and that the Net community is capable of handling misuse of the Net (in most cases) by itself.
“Orderly patterns can occur without being designed. In some cases such natural order is useful to human beings. Sometimes the only way to bring about a certain state of affairs is to set up the conditions in which a spontaneous process will lead to the desired result.”
The very existence of the Internet provides ample evidence of spontaneous order. Computer Mediated Communication (CMC) is an entirely unplanned phenomenon. The original networks were created for computers to share data; human-to-human communication simply piggy-backed on existing connections and technology. The owners of the original networks (the Department of Defense's ARPANET, for instance) were not too concerned about this until what had once been incidental use of their resources began to grow exponentially. As people discovered novel ways to use this new technology to interact with each other, the benefits of such communication were quickly realized, and the means for computer-mediated human interaction were acknowledged as important tools for facilitating better research and more efficient operation in general. The very first CMC applications on these networks were the rather humble (though revolutionary) precursors of today's global e-mail and Usenet utilities. The users of CMC soon realized the necessity (and it quickly became one) of being able to communicate with other networks (and thereby with individuals at other organizations). One of the initial problems was standardizing the different transports that were being used to transfer data over unlinked networks. Translators were quickly written, however, and inter-network communication began to take place. There were limits to translators, though, and they soon became complicated as more networks with different data transfer protocols began to join the larger networks. The need for a single multi-purpose protocol was realized; the obstacle was to convince network administrators to adopt a particular protocol. A small group of programmers who had developed a flexible suite of transmission methods called TCP/IP (Transmission Control Protocol/Internet Protocol) began to frequent networking conferences to preach the gospel of TCP/IP.
With gentle persuasion and an impassioned defense of TCP/IP they succeeded in making it the standard for a large number of existing networks. It was now possible for computers to understand each other even if they were situated on separate networks. Local Area Networks (LANs) now found it easier to join Wide Area Networks (WANs) if they used TCP/IP. As organizations found CMC essential to the successful completion of tasks such as research, the growth of networks experienced a surge that has yet to lose momentum and is far from reaching its zenith.
What can be seen even from this incomplete and cursory synopsis of the history of CMC is that it appears to have a life of its own. CMC has grown and will continue to grow because individuals find it both interesting and useful as a means of communicating, gathering information, entertainment and building communities. The phenomenon that comprises the Internet and a myriad of other dispersed computer networks is fundamentally the result of individual action and initiative, coupled with the emergence of a spontaneous order. This phenomenon has come about because some of us have, while undertaking actions for our own benefit, created something that is valuable for others as well. In this we are like the merchant who is
“led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.”
There is no regulating body on the Internet to make decisions concerning its future development and present working; these decisions are made by individuals working within a mechanism that gives them opportunities for experimentation and rewards them when the experiments turn out to be useful for the Network community. To some this lack of structure may seem like anarchy, but as we shall see, it more closely resembles the structure of a truly free market in information.
TCP/IP and network protocols in general are designed to make efficient use of existing data transfer capabilities. Rudimentary networks often work within the web of wires that forms the telephone system. The difference between the telephone or fax machine and something like TCP/IP or Ethernet is that many transfers can take place simultaneously over one connection. By splitting each transmission into small IP packets, computers can carry a number of transmissions (each consisting of multiple packets) over one link in a continuous data stream. Unlike a telephone conversation, a TCP connection can therefore put to use the periods of silence when the two parties (in this case computers) at either end of the connection are not speaking.
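The principle described above can be sketched in a few lines of Python. This is a toy illustration only: the packet format, stream names and fixed chunk size are all invented for the example, and real TCP/IP is far more involved.

```python
from itertools import chain, zip_longest

def packetize(stream_id, message, size=8):
    """Split one message into small, numbered (stream, seq, chunk) packets."""
    return [(stream_id, seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def interleave(*streams):
    """Round-robin merge: packets from several streams share one link."""
    merged = chain.from_iterable(zip_longest(*streams))
    return [p for p in merged if p is not None]

def reassemble(packets, stream_id):
    """Recover one message from the shared stream, using sequence numbers."""
    own = sorted(p for p in packets if p[0] == stream_id)
    return "".join(chunk for _, _, chunk in own)
```

One "silent" stream simply contributes no packets for a while, and the link carries the other stream's packets in the meantime; that is the sense in which the idle periods of one conversation are put to use by another.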
A speedy network ensures that data and software programs do not have to be stored on each computer that requires them. This not only saves on data storage needs, but also makes it easier to maintain the most current information. Further, since information on networks is generally transmitted only on the client or user’s demand, it significantly reduces the incidence of unsolicited/worthless information (or noise).
Parallel processing and time-sharing systems
A single computer is seldom operated at maximum capacity; it is often idle for at least part of the day. One way to ensure more efficient use of a computer system is through time-sharing, where a number of users are authorized to use a single computer capable of handling multiple tasks. The assumption is that not all of them will wish to use the machine at the same time. Networks can also facilitate the use of such idle machines for computations that do not need to be performed during normal business hours. Such a scheme would make available millions of processors to users willing to compensate the owners for the use of their computers. In this manner, the cost of a network connection may be lower for individual users willing to hire out their computer's resources. This does, however, raise issues of security, and of who is to be permitted to use such computers; this is part of the reason the plan has not yet been realized. Yet we must acknowledge that, like parallel computing in general, such an integrated network would be a much more efficient use of existing resources. As such, it is likely that the benefits of such efficiency will outweigh concerns about security. This will come about, however, only if the cost of processing time is higher than the cost of creating and maintaining a distributed computing structure. It is by no means clear that this will be true in all cases.
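The scheme can be illustrated with a toy dispatcher in Python. The machine names, job descriptions and the simple idle/busy states below are all invented; a real distributed computing system would also have to address the authentication and compensation problems raised above.

```python
# Invented inventory: which machines report themselves idle right now.
machines = {"lab-1": "idle", "lab-2": "busy", "lab-3": "idle"}
jobs = ["render frame 1", "render frame 2", "render frame 3"]

def dispatch(jobs, machines):
    """Assign each job to an idle machine; queue the rest as backlog."""
    assignments, backlog = {}, []
    idle = [m for m, state in machines.items() if state == "idle"]
    for job in jobs:
        if idle:
            assignments[job] = idle.pop(0)
        else:
            backlog.append(job)  # waits until a machine frees up
    return assignments, backlog
```

With two idle machines and three jobs, one job waits in the backlog; the efficiency gain is simply that the idle machines do work they would otherwise not have done.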
“… the principle requires liberty of tastes and pursuits; of framing the plan of our life to suit our own character, of doing as we like, subject to such consequences as may follow: without impediment from our fellow creatures, so long as what we do does not harm them, even though they should think our conduct foolish, perverse or wrong.”
Starting from this position, let us try to examine the effects of pornography on computer networks, independent of the damage it may cause to the sensibilities of conservative users by its very existence. Pornographic material is certainly available on the Internet, yet pornography is not as pervasive as it has been made out to be. Fewer than 200 of the 11,000 widely distributed Usenet newsgroups are devoted to pornography. Despite this, it is this marginal material that has become the most visible aspect of Network culture in the traditional media. Keeping in mind that pornography constitutes an extremely small part of Network use, we can look at solutions to the concern that it may be accessible to young users.
The popular conception seems to be that the network is littered with pornographic material which even the most innocent user is bound to stumble onto. This is not the case. It is the user who generally requests a specific piece of information and, as such, can steer clear of material she may find offensive. In other words, no material on the network is forced upon a user; she makes the decision to view or read it. In addition, the Network and its users ensure that there are strict repercussions for those who misrepresent material and information. Because of this, and the fact that contributions which are irrelevant to a particular discussion are frowned upon, pornographic material is generally restricted to certain areas of the Network. This, coupled with the fact that most authors and providers of pornographic information warn prospective readers of the nature of the contents, ensures that one does not accidentally stumble onto such material. This clarity in describing content is only a mechanism for self-preservation: someone who misrepresents pornographic material is bound to elicit a very unfavorable response from the Net community. In essence, only a user actively seeking pornographic information would find it. This demonstrates how the network has tended to maintain a level of decorum and order in this sphere and to provide its own solutions to this concern. It is accomplished with the active aid of individual users, who punish those who transgress Network traditions and indulge in misrepresentation with social ostracism.
This order (and it is spontaneous in that it is free from the regulation of an institution) is in place so that a user unfamiliar with the local environment can easily locate and select the information she desires and reject that which she does not. In the same manner, one can restrict access to certain information with little effort. Thus the user has greater flexibility and, as far as the choice of information available to children is concerned, can easily restrict access to certain services at any time. Such a flexible system is entirely in keeping with the essentially liberal culture of the network, allowing the user a potentially infinite number of options when it comes to limiting access, exactly as it does with the availability of information. In this the Network is vastly different from media such as broadcast TV or film, where only a single option is available to the viewer. The fact that the user can set her own standards of tolerance as she deems fit can only be an improvement over a situation where one is forced to limit oneself to the values of the rest of society (or worse, those of its least tolerant member).
The argument that users may find it irksome to filter out information that they find distasteful lacks much merit. A scenario where the user would not have to make the effort to customize her access would necessitate the adoption of the values of the median or least tolerant user. This can only be a desirable objective if tolerance levels across the world are clustered close together, and this is certainly not the case. The separation is amply evident in the different attitudes towards pornography and free speech current in different societies and cultures, not to speak of individuals. A solution to this problem is the existence of competing service providers. Some service providers are geared towards children (Prodigy, for instance) and tend to shy away from “adult” content. This ensures that some standard, as regards content, is maintained on the local network. The user can then choose among a wide variety of service providers, find the one with the content she finds most desirable, and then customize her access further if necessary. In this way the majority of the filtering is done by system administrators, who would of necessity take into consideration the wishes and preferences of their users. This practice is a reality on the Internet: organizations tend to carry only such information on their systems as they deem appropriate or as their paying users request (provided others do not oppose it). The balance has to be achieved between convenience and tolerance. The possibility that a user can circumvent such constraints exists; but at that point one is weighing the effort and will of a user against the supposed benefits of restricting her access to information she desires. I am unsure that anyone except the user herself is capable of making such a decision.
Further, it may not be worthwhile to monitor the use of each and every service simply to pry into users' activities, especially those of technologically proficient users who may be able to outguess and outmaneuver the censors themselves. The decision as to whether blanket bans on pornographic material on the Internet would be worthwhile could be arrived at only after a careful analysis of the costs and benefits of such practices. On the one hand we have the effort it takes a system administrator or user to customize services for the various users she may have (along with the possibility of the errant minor who may gain access to pornographic information-entertainment), and on the other hand we have the utility a user at the higher end of the tolerance spectrum (which tends to be broad in a global network) derives from such information. I believe the latter far outweighs the former. If this is the case, better means of customized access will be developed (as they are) on the demand of users who do not relish having to wade through gigabytes of information they do not wish to see. There is no pressing need, then, for regulation that demarcates what should and should not be made available to users, if we consider them capable of deciding that for themselves and their families.
Usenet news is a collection of bulletin boards that span the globe. Since the contents of the bulletin boards are not stored at any centralized location, but are located at myriad sites across the world, Usenet, like the Network in general, appears at first to be anarchic. Yet there is an order to Usenet. It is true that this order is informal, but it exists nonetheless. When a user decides to post an article on a particular bulletin board, it is first sent to the local news server. From the local news server it is relayed (usually at times when Network capacity is underutilized, such as at night) to other sites, which then send it further, till it is finally carried by all news servers (provided, again, that they carry the newsgroup the article is posted to). Usenet is unregulated in that there is no central clearinghouse through which contributions have to pass. Each individual news server carries the articles it wishes to and rejects others.
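This relaying process resembles a "flood fill": each server passes an article on to its peers, and each peer stores and forwards it only if it carries the relevant newsgroup. The following Python sketch uses an invented topology and invented server names, and simplifies real news feeding considerably.

```python
# Invented peer links and subscriptions for illustration only.
peers = {
    "nyu": ["mit", "gmu"],
    "mit": ["nyu", "berkeley"],
    "gmu": ["nyu"],
    "berkeley": ["mit"],
}
carries = {
    "nyu": {"comp.protocols.tcp-ip", "rec.arts.books"},
    "mit": {"comp.protocols.tcp-ip"},
    "gmu": {"rec.arts.books"},
    "berkeley": {"comp.protocols.tcp-ip", "rec.arts.books"},
}

def flood(origin, newsgroup):
    """Return the set of servers that end up storing the article."""
    seen, stored, queue = set(), set(), [origin]
    while queue:
        server = queue.pop()
        if server in seen:
            continue  # already offered the article; avoid looping forever
        seen.add(server)
        if newsgroup in carries[server]:
            stored.add(server)
            queue.extend(peers[server])  # pass it on only if we carry it
    return stored
```

Notice that there is no central coordinator in this sketch: each server applies only its own subscription list, yet the article still propagates to every interested site it can reach.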
Since the Internet was originally designed to withstand physical attacks, it has formidable re-routing capabilities. A data packet can be relayed between two sites in innumerable ways. This means that if one particular site is down, there is always an alternative route that can be used. This ensures that a given packet reaches its destination even if the most direct routes are unusable. Each machine on the Internet is essentially an individual entity. It makes its own decisions (based on the configuration set by the system administrator) as to which activities it should perform and which it should not. Yet messages do cross continents and borders, and a wide variety of information is available free of monetary cost. What prompts these machines (or their programmers) to agree to provide these services for strangers?
Since the Internet does not have an accounting system that can directly charge end users for information, most services provided on the Network are not compensated monetarily. In the case of commercial organizations the information services are primarily a testing ground for future services that users will pay for. Yet there are a number of other information providers whose services are, and will be for the foreseeable future, available to the user free of charge. Some of these services are simply advertising, but in other cases they are very different. Such varied organizations as universities (research papers), libraries (catalogues), Netscape (Yahoo search engine), NASA (images and updates on space exploration) and individual Usenet contributors make information freely available to users of the Internet. In part what drives the providers of this information (and this is particularly true of individual contributors) is the desire for recognition within the network community. Yet this is not all; they also provide a service to their friends and neighbors (in the case of organizations, to their local users), and the fact that the resources are also available to users all over the globe is often largely incidental, though by no means insignificant. The same is essentially true of Usenet. A conversation or thread on Usenet is a debate engaged in by people who share similar interests. That others are listening in on the conversation is to an extent almost unintended. Yet the fact remains that these articles are stored on numerous machines on the Internet and made available to a wide audience. The fact that an individual contributor is using resources on machines all over the world is a problem only if the article is itself worthless; the decision as to the worth of a contribution is, however, to be made by the individual user, or by the system administrator on behalf of the users.
It is often the case that the news administrator will stop carrying a newsgroup if no users evince an interest in it.
The question as to who bears the cost of this distribution has a slightly complicated answer. Since most organizations pay for a connection with a fixed data carrying capacity (bandwidth), the Network resources available to them at a particular price have an absolute limit. The marginal cost of transferring data is so low as to be insignificant. The organization generally incurs the same cost independent of actual usage, provided it is below the capacity (calculated in the quantity of data transmittable per second) of the Network connection. This is true of the Network as a whole. The most significant portion of the cost of communication is the actual hardware required. The high speed fiber-optic cables connecting sites and the routers that direct traffic on the Net have absolute maximum carrying capacities. The marginal costs of transmitting data (electricity, depreciation and used CPU time/cycles) are negligible unless we are close to or over the maximum capacity, at which point the routers tend to drop packets. In such a case the cost of a successful transmission must be measured in terms of the value of the transmission it “crowds out”. In this environment certain applications tend to be more efficient than others. In particular an application that does not require an immediate response (e-mail, Usenet) is more efficient than one that is time-critical or uses synchronous connections (telnet, IRC, web browsers). By the same criteria an application that involves large data transfers tends to be costlier than one that involves small data transmissions. Since e-mail and Usenet involve text transfers only, the size of an individual transfer is relatively small. A Usenet newsgroup that thousands of people read will often have a smaller size than one binary image or a short film clip.
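A back-of-the-envelope calculation illustrates the point about text versus multimedia. All the figures below are invented for illustration, not measurements:

```python
# A busy text newsgroup: hypothetical daily volume.
articles_per_day = 300
avg_article_bytes = 2_000                              # ~2 KB of plain text
text_traffic = articles_per_day * avg_article_bytes    # a full day's articles

# A single binary image: hypothetical ~1 MB file.
image_bytes = 1_000_000

# The entire day's discussion is smaller than one image.
assert text_traffic < image_bytes
```

On these (invented) numbers, a whole day of discussion in a newsgroup consumes less capacity than a single image, which is why text-based, non-synchronous applications sit so lightly on the Network.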
Thus applications that use synchronous or near-synchronous connections and involve multimedia place a proportionally greater strain on the Network’s transmission capacity (especially during peak hours) than text based non-synchronous applications.
Since both the providers and users of information pay for their network links, the costs of transmission are borne by them equally. The intermediary networks over which the data is transferred are paid for by both parties. In such a scenario, then, a voluntary exchange of data that both parties are aware of does not affect other users directly. Organizations that provide software and information on the Net not only pay for the cost of developing and maintaining (data storage, CPU time) the service, they also incur one half of the actual cost of transmission. The transfer must then be mutually beneficial to both parties if they are to agree to it. Both provider and user have voluntarily accepted the terms of the transfer and are paying for the exchange. As is to be expected, this severely limits the availability of information resources on the network. A far wider range of uses would be facilitated by the network if a mechanism existed to charge the user for information requested. By the same measure, we would expect more effective advertising on the Net if the advertiser had the ability to pay for the user's Network connection (analogous to prepaid postage or 800 numbers). On the other hand, this may cause a proliferation of junk e-mail on the Net; this is easy to circumvent, though, as the user enjoys far greater customization capabilities on computer networks than she does with the traditional media. It might be expected that such a network would require some sort of governing institution to guard against misuse of these capabilities. This may not be necessary. As we have seen, Network users tend to take individual action when they deem it fit. In addition, the institutions of tradition and acceptable use exist to guard against misuse.
Ubiquitous advertising will be counter-productive for most organizations, as the culture of the Network is such that users do not appreciate unsolicited information, which may be a nuisance. These new uses will not, however, be possible until a safe method of transmitting financial information over computer networks is available. Such technology already exists, yet its application is severely restricted by government regulation. As a result the full capabilities of computer networks remain unexploited.
On Usenet servers (most of which serve only the local network) the system administrator decides which newsgroups to carry and for what period of time articles are to be retained. The user then decides which newsgroups she would like to read or write to. Since the local users are presumably bearing the cost of the local network, this use does not affect others. True, news may take up valuable storage space and network bandwidth, but these decisions are to be made by the owners of the network (the organization and users, through their agents, the system administrators). Usenet itself, then, is neither ubiquitous nor wasteful; on the contrary, it is more malleable and offers a wider range of choice for the user than traditional media.
The question of property rights is fast becoming a critical issue for the future of the Internet. More often than not, when we speak of rights on the Network we are dealing with intangible resources. This tends to complicate matters in a webbed and labyrinthine environment even further, especially when large institutions are the “owners” of these resources and their users are in turn part owners of the institutions. We can begin with one of the more obvious resources: access to specific information. Since information on the Network is generally provided without remuneration, the provider is playing the role of a philanthropist who underwrites a public service. Given that this information, and part of the resources used to distribute it, are the property of the provider, it seems to me that the information provider is perfectly within her rights if she decides to remove a particular resource from the public domain. This will not, however, endear the erstwhile provider to the Network community at large.
The case appears to be very clear cut when dealing with a single provider with a single policy. There are still some unresolved issues, though. If the resources of a service provider are subjected to abuse, or employed for what she considers inappropriate use, she may decide to withdraw access to the service selectively. In such a scenario a single irresponsible user may cause an entire organization to be banned from using the resource. Since the effects are tangible and affect the whole local network, the organization the user is part of will, in all probability, not be inclined to take a lenient view. When the activities of one user harm the privileges (and these are privileges, not rights) of another, there is an infringement of the user's rights (in the sense that she is now deprived of a resource that would otherwise have been available to her). A mechanism exists to guard against this, though. A service provider will generally approach the local system administrator to take corrective action against the errant individual. In certain cases, if the local system administrator does not act to the service provider's satisfaction, she may yet decide to ban the entire organization from using the resource. Though this is regrettable, the service provider is entirely justified in doing so in the interest of ensuring that her resources are not put to what she considers inappropriate use.
The issue gets murkier when considering a resource that is not owned by one organization or individual per se. The classic example would be a mailing list. The contributors to a mailing list are generally individuals from different parts of the globe. However, the list is generally administered, and mail collected and distributed, from one site. Generally the administrator of the mailing list also serves as a moderator, screening contributions for offensive or inappropriate content. This screening is reasonable, since the moderator (or moderators) are expected to help maintain a level of decorum on the mailing list. In this case the moderator does exercise a degree of control over the individual contributor. It is important, however, to ensure that this control is limited. That the moderator is severely limited in the extent of his control is easily discernible when we examine the structure of a list. The mailing list is essentially a matter of convenience. The individual user may, if she so wishes, attempt to contact the members individually. The process would be tedious, though, and it is generally considered wiser to use a mailing list to reach the audience. This convenience comes at a price, however. Since the moderators and the site acting as a hub contribute to the actual working of the list to a greater degree than any individual user or site, it is only reasonable to assume that they should exercise some degree of control over a discussion they are facilitating. The moderator is in a sense acting as a host at a party, and as the host she reserves for herself certain privileges. If the moderator decides to discipline an individual user or edit a contribution, she would be, in my opinion, entirely within her prerogative as the host. Major decisions on the mailing list's objectives and general policy would, however, have to involve the individual contributors if they are to enjoy any degree of success.
To carry the analogy of a party further, if the host appears too dictatorial the guests reserve the right to leave and join another party. The ease with which a competing mailing list can be set up ensures that moderators' policies and actions closely mirror the preferences and values of the individual contributors. This is in a sense a competitive equilibrium in a market where entry is unrestricted and the costs of setting up the original enterprise are reasonably low.
We enter a very different environment when dealing with Usenet newsgroups. Since Usenet is not centralized, its operational costs are far more widely distributed than those of a mailing list. As such, there is no single identifiable entity that “owns” Usenet. If such a concept existed, we would have to acknowledge that the owners are the members of the Net community at large. It is consequently difficult to identify those who are capable of, and justified in, setting a general policy for Usenet. Certain traditions have evolved, and continue to evolve, though, and they are for the most part respected by users. One of the most significant is the concept of restricting discussion to an appropriate forum and correctly identifying the contents of a contribution. Users violating these principles run the risk of incurring the wrath of individuals and of the entire Usenet community.
As contributions and discussions on Usenet tend to be copious, there exist mechanisms for the user to sort through the millions of words posted to Usenet bulletin boards every day. Usenet is divided into a plethora of individual newsgroups, each dealing with a particular topic. Within each newsgroup a number of discussions or threads flourish under individual subject headings. Once a user has decided which newsgroups she wishes to read, she may ignore threads that are of no interest to her. Nevertheless, this is only a method of classification and as such is of no particular relevance to this paper. What interests us is the filtering mechanism the user can build with these tools to skip over those articles or threads or newsgroups she is not interested in. One such tool is what is known as the “kill-file”. A “kill-file” generally contains headings and names whose associated posts the user does not wish to scan. This is analogous to having a VCR that will ignore all broadcasts containing content that one is not interested in, and record all others. The user does of course run the risk of ignoring the occasional article that may be of interest, but the convenience of not having to scan through the haystack tends to diminish the regret at having missed the needle. This is an entirely individual decision; the user's time is her own and she is not obliged to entertain the views of others via their writing.
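A kill-file can be approximated in a few lines of Python. The article data, author addresses and subject strings below are invented; real newsreaders match on patterns in article headers, but the principle is the same:

```python
# Invented kill-file entries: subjects and authors the reader never sees.
kill_subjects = {"MAKE MONEY FAST"}
kill_authors = {"spammer@example.com"}

# Invented sample of incoming articles (headers only).
articles = [
    {"author": "alice@nyu.edu", "subject": "Re: TCP/IP adoption"},
    {"author": "spammer@example.com", "subject": "MAKE MONEY FAST"},
    {"author": "bob@gmu.edu", "subject": "Usenet charters"},
]

def survives(article):
    """An article is shown only if neither its author nor subject is killed."""
    return (article["author"] not in kill_authors
            and article["subject"] not in kill_subjects)

visible = [a for a in articles if survives(a)]
```

The filtering happens entirely at the reader's end: nothing is removed from the Network, and each user's kill-file encodes her own standards of tolerance, not anyone else's.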
It is possible, however, for a user to act as a moderator without the approval of the other members of a newsgroup. A utility permitting a contributor to erase her own articles can be misused by proficient users to cancel contributions made by others. Despite the often laudable objectives of these “vigilantes”, and their concern for the nature of discussions on Usenet, this does not seem to me an appropriate action. In cases where a user has posted an article to multiple newsgroups –regardless of the post's relevance to the topics under discussion– it may appear reasonable to cancel a message which is utilizing resources across the globe. This is not exactly as it sounds at first. Each Usenet article is assigned a unique identification number, and an article that is cross-posted to multiple newsgroups need only be stored by the news server once. The individual user, if she so wishes, may filter out such irrelevant articles, or the system administrator may do this on behalf of the local community. A more appropriate response to perceived misuse of Usenet would be an attempt to have the errant user's posting privileges revoked. If the misbehaving user's system administrator is reluctant to take the appropriate action, it is possible to petition the organization's service provider. As the network expands this situation will become more common, and it might be necessary to provide an institutional solution. In fact a proto-solution already exists, albeit in a rudimentary form.
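The point about cross-posting can be made concrete. The sketch below models a news server that stores each article once, keyed by its unique identification number, while each newsgroup merely holds a reference to that number; the data structures and names are illustrative, not an actual news-server implementation.

```python
# Why a cross-posted article costs a news server only one copy: the body
# is stored once under its unique Message-ID, and each newsgroup keeps
# only a reference to that ID. A hypothetical, simplified model.

store = {}    # Message-ID -> article body (one copy per article)
groups = {}   # newsgroup name -> list of Message-IDs carried by the group

def post(message_id, body, newsgroups):
    """File one article, cross-posted to several newsgroups."""
    if message_id not in store:        # store the body exactly once
        store[message_id] = body
    for g in newsgroups:
        groups.setdefault(g, []).append(message_id)

# One article cross-posted to three groups:
post("<1@site.example>", "a long article...",
     ["alt.test", "misc.misc", "news.misc"])

copies_stored = len(store)                             # 1
references = sum(len(ids) for ids in groups.values())  # 3
```

Cancelling such an article thus frees a single stored copy, which is why the resource-consumption argument for vigilante cancellation is weaker than it first sounds.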
When a newsgroup is founded, or soon after its conception, the users often propose the adoption of a code of conduct for contributors. The newsgroup's charter generally deals with the objectives behind the formation of the newsgroup, which discussions are inappropriate for the forum, and so on. Since the only tangible asset of a newsgroup is the discourse its members are engaged in, the participants are in a sense the “owners” of the newsgroup. As those most directly affected by the conditions of the newsgroup, they are also those most capable of setting policy. The articulation of such a code of conduct, and users' awareness of its existence, clarifies the position of a user who violates the norms of the newsgroup. This user is now an offender in the discussion, and the other participants can move to attempt to revoke his privilege of participation. A system administrator who knowingly protects such a user is likely to have her reputation harmed within the Net community. There are in this sense real costs to harboring the Net offender.
The fact that the network community has not delegated the responsibility of disciplining and punishing errant users to a formal institution with discretionary powers ensures that the decision reached as to the appropriate action to be taken against the user is as near as possible to what is acceptable to the whole community. This also ensures that users are not punished for minor infractions or for those that do not inconvenience other users. Though users will often talk of the possibility of making a complaint against an individual, the threat is carried out only very rarely. The need for collective action ensures that the community attempts to discipline only the most serious offenders. The ease with which the individual can ignore articles she finds to be of no interest allows her to implement a more individual selection process as well. Contrary to expectations, this does not appear to lead to apathy among users. Rather, the small size of individual newsgroups tends to create closely guarded fiefdoms within the Byzantine labyrinth that is Usenet. The balance thus achieved, within the individual rooms that are the newsgroups and within the maze that contains these meeting places, appears very conducive to fostering a desirable order.
Of particular interest to those concerned with the social or sociological implications of the expanding network are Multi User Dungeons. A MUD is essentially a set of communication protocols housed on one or more servers. The user logs in to the server (often at a remote site) to converse with other users who are currently logged on. Experienced MUDers are capable of modifying the actual MUD to create textual and graphical objects that can be displayed even when their creator is absent. A MUD is perhaps the closest Net equivalent of a cocktail party, town-hall meeting, art gallery, tree-house and brain-storming session, rolled into one. Since MUDs generally have a specific home, a particular MUD is operated under the aegis (and with the financial support) of a specific individual or organization. Since MUDs are notorious for consuming large portions of the available computing resources on the machine they are housed on (not to mention taking up bandwidth), not many organizations are willing to support what is often considered an unnecessary and recreational utility, unless they are compensated for it in some way. The reason MUDs are viewed as entertainment is probably that people tend to obsess over them without the prospect or opportunity of financial gain. MUDs are, however, microcosmic communities, and like most communities the concerns of their members appear rather marginal and unusual to those uninvolved. The MUD's position can become precarious if the system administrator is brought under pressure (by those on the ‘outside’) to regulate or ban its users' activities. As the MUD is being operated on a specific computer, it does in a sense have an owner, as far as the material resources required for its operation are concerned. The remote user is obliged to adhere to the code of conduct of the host organization while using the MUD. This is after all a courtesy being provided to the user.
A remote user who has no relationship with the “owners” of a MUD and succeeds in harming the host's network, or misusing it, must and should be held responsible and punishable for her activities. Once again it helps to have clear guidelines as to what constitutes appropriate use of the host organization's resources. Some organizations and individuals tend to be more lenient than others, and this can create a problem. A user switching between MUDs may be tempted to apply the rules of one to the other. Not only may these alien rules be incompatible with the new host organization's policies, they may irk the other users of a MUD. If each MUD is a whole world in itself, complete with its own history, folklore and laws, then all the MUDs taken together are a compendium of dissimilar constructs, and the traveler/MUD-hopper must respect the traditions of each and every MUD she visits. Of particular relevance are the desires of the “owners” of such a world.
A very different perception is necessary when dealing with the contributions made by users to a MUD. Unless a specific agreement exists between the MUD host and the user, objects created by the individual user must be treated as her property: the results of a creative process, and as such ones that can be withdrawn from the public sphere at the author's will.
A MUD may thus be moved from one host to another if a problem arises between the users and the host organization. The environment the users create with their presence and creative abilities is distinctly separate from the facilitating machines. This environment is as much a product of the entire network (in that it brings individuals who are geographically separated to a common site) as it is of the host organization's generosity.
The equivalent of Usenet in real-time communication is IRC. IRC (Internet Relay Chat) is a protocol that allows users to talk to each other simultaneously without having to login to a particular remote site. Like Usenet, IRC servers are widely distributed and the typical IRC user would connect to the local server to get on IRC. The IRC network is then, by virtue of having distributed costs, not “owned” by any single individual or organization. Individual discussions or channels within the global IRC network are a different thing altogether.
The IRC network is divided into channels, and a user can generally participate in only one channel at a time. An individual user can either join an existing channel or create a new one. In the latter case, the user becomes the channel operator, with the means to set the terms of the debate. A channel operator (channel op) can ban certain users from the channel, remove them forcibly, set the subject under discussion, etc. The channel op is here operating much as the moderator of a mailing list does. The nature of IRC, however, necessitates delegating more power to the channel op than to a moderator; this ensures that a user who is abusive or onerous can be dealt with immediately. This can, and often does, lead to a situation where the channel op can misuse her powers. The ease with which a new channel can be set up, and the ability to “invite” individual users to join the new channel, ensures that a tyrannical channel op will soon be left with an empty channel. If a user consistently misuses IRC networks, a complaint can be made against the user to the IRC op (the person maintaining an IRC server). A mechanism to punish those who do not abide by the norms of IRC exists and is employed to discipline errant users. The malleability of IRC further provides a group of users with the ability to tailor a channel to their own needs and preferences.
“It is easy to run a secure computer system. You merely have to disconnect all dial-up connections and permit only direct-wired terminals, put the machine and its terminals in a shielded room and post a guard at the door.”
“For better or for worse, most computer systems are not run that way today. Security is, in general a trade-off with convenience, and most people are not willing to forego the convenience of remote access via networks to their computers. Inevitably, they suffer from some loss of security.” 
It has been made abundantly clear by this point that no computer linked to a network can be completely secure. The balance we must maintain is, as Cheswick and Bellovin point out, between convenience and security. This is as true for the individual user as it is for a system administrator.
Since IP packets pass through a number of networks before arriving at their destination, it is possible that a user's conversations (via e-mail, telnet, talk etc.) may be monitored, or tampered with, in transit. This is a serious breach of privacy, but as yet no institutional structure exists to trace or punish the eavesdropper/saboteur. Indeed in many cases a user may be unaware that her transmissions are being recorded or modified. One of the ways a user can increase the privacy of her transmissions and reduce their vulnerability is by using encryption programs. Various public key encryption programs exist that allow a user to encrypt data with a ‘public key’ and transmit it over the Network, assured that even if it is intercepted it would be close to impossible to decode the data unless one had the ‘private key’ that corresponds to the ‘public key’. The severe restrictions placed on the distribution of such powerful encryption techniques by the US government have stunted their application in popular network utilities. The result of this policy is that unless the user makes a determined effort to encrypt her data, it is by default transmitted in unencoded form. This situation is equivalent to posting a postcard as opposed to a letter enclosed in a sealed envelope, because no envelopes are available in the market and one has to construct them by hand.
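The public-key/private-key relationship described above can be demonstrated with textbook RSA. The primes below are tiny and utterly insecure, and real systems add padding and much larger keys; this sketch shows only the principle that anyone may encrypt with the public key while only the private-key holder can decrypt.

```python
# Toy RSA, for illustration only: the public key (e, n) is published,
# the private exponent d is kept secret. These small primes come from
# the standard textbook example and offer no real security.

p, q = 61, 53
n = p * q                  # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

message = 65               # a message encoded as an integer < n
ciphertext = pow(message, e, n)   # anyone can do this with (e, n)
recovered = pow(ciphertext, d, n) # only the holder of d can do this
```

An eavesdropper on the wire sees only `ciphertext`; recovering `message` without `d` requires factoring `n`, which is infeasible at realistic key sizes. (The three-argument `pow` with a negative exponent requires Python 3.8 or later.)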
Encryption techniques are not currently widely used by general users. This means that more often than not the user who fails to encrypt data must accept the initial violation of her privacy and can only take precautions against further breaches. Unless encryption programs are incorporated into the most popular network applications, it seems doubtful that users will make the effort to encrypt individual transmissions. The current situation also retards the growth of the commercial Network, as financial data (credit card numbers, bank account numbers, etc.) cannot be transmitted over the network in a secure form. In a case where a user's privacy has been violated, the disciplining of the offender cannot be accomplished by the normal methods (ostracization, loss of status). It is true that people who invade others' privacy are not respected (though they may be feared) on the Net, but such persons are generally not very concerned about their status among the wider community.
“When people seldom deal with one another, we find that they are somewhat disposed to cheat, because they can gain more by a smart trick than they can lose by the injury it does their character.” 
The Network is such that users tend to have a unique view of the personalities they converse with. The anonymity the Network provides can be counter-productive if it lets people deceive each other, or worse, makes the prospect too enticing to resist. Not only does a Net address provide extremely limited information about its owner, addresses can be easily fabricated and misused. When to this is added the screen afforded by anonymous remailers, it becomes apparent that identity as understood on the Net is not the concept we are familiar with. This is not as alarming as it sounds. We can, after all, make a few assumptions about any user: we know that the user is human, and her style of writing and the opinions she defends afford much more information about her. So one can know a user well without ever having met her In The Real World (ITRW).
Anonymity brings up other questions though. It is generally accepted that posts routed through anonymous remailers are not accorded the same degree of respect as posts from a user's own account. To an extent this may be because anonymous users are perceived to be more aggressive and confrontational. The argument that some anon users do abuse the cloak their anonymity provides them would be difficult to disprove. Yet it is also true that the Net as a whole is a community of distance, and people tend to be more visceral on networks than they would be in ‘the real world’. The Network does break down accepted codes of conduct while creating new ones. Thus, though a person can be much more effusive on the Net than she would be in a traditional social setting, it is also easier to completely ignore users whose opinion one is not interested in. Since these properties are inherent in a Network that does not support video conferencing, it would be naive to argue that anonymous remailers created such an environment; they only aggravate a pre-existing phenomenon.
Yet true anonymity is hard to come by, as was amply demonstrated by the Church of Scientology incident. In this case, the owner of the anon.penet.fi site, Julf, was forced to divulge the identity of one of his users because the user had posted material to Usenet that was the copyright of the Church of Scientology. Admittedly this was done partly out of fear that the Finnish police would confiscate the equipment. Yet it is apparent that even the administrators of anonymous remailers are willing to comply with the wishes of the Net community (of which they are an integral part), even to the extent of disallowing their users from posting to a particular newsgroup. It must be acknowledged, however, that the ability to comment without one's identity being divulged is very important to a discussion. If only for this reason, we must tread carefully when constructing regulation that may affect the degree of anonymity afforded to the general user.
Security on local computer systems
System security poses questions that are very different from those we have already discussed. A network or computer, when compromised, can directly affect the work of an organization and thousands of users. Keeping this in mind, unauthorized use of a computer system can only be regarded as theft and must be treated as such. Thus the system administrator is justified in transgressing certain boundaries to protect her system from attack.
“Within the bounds set by legal restrictions, we do not regard it as wrong to monitor our own machine. It is after all, ours; we have the right to control how it is used, and by whom. (More precisely, it is a company-owned machine, but we have been given the right and the responsibility to ensure that it is used in accordance with company guidelines.) Most other sites on the Internet feel the same way. We are not impressed by the argument that idle machine cycles are being wasted . They are our cycles: we will do with them as we wish.” 
The position Cheswick and Bellovin take must be the starting point for any discussion of Network security. Since an organization owns its computational resources, it must have the last word on the appropriate use of those resources. The ‘cracker’ who breaks into a system without the permission of its owner is thus violating the organization's property rights. In the interests of liberty without aggression, we cannot defend the actions of the cracker.
After establishing the injustice of the actions of the cracker, we must examine the means available to discipline such a malefactor. As with all criminals, it is by definition a difficult task to catch a cracker on the Net. This difficulty is further compounded by the nature of the Network and its propensity to support anonymity. If a system has been subjected to successful attacks, often the only thing a system administrator can do is attempt to patch the holes in her system's security. Apprehending the cracker is a tricky business, and one is never assured of success. Even if the cracker is identified (by the site the attacks originate from), it is difficult to verify whether or not the attacking site is itself a front that has been compromised. In the event that the cracker is tracked down to his home site, the security administrator can attempt to contact the remote site's system administrator, but there is no assurance that action will be taken against the cracker. The situation can be thought of as analogous to that between independent states (that may even have different laws) without extradition treaties. The multiplicity of hosts and the global nature of computer networks imply that users are subject to vastly different laws and regulations, and in some cases (due to the tendency of the Network to metamorphose rapidly) none at all. As such, the victim of an attack can only trust that the system administrator at the other end will take the appropriate action against the cracker. Arrangements –when they exist at all– between system administrators are generally informal and operate on assumptions of reciprocity. This should not come as a surprise. No host on the Network can exist independently. A system administrator who refuses to cooperate with those who have been affected by the actions of her users can expect to have her access to many services revoked.
Sites that are notorious for cracker activity are generally barred from services at many other sites (that are aware of the nature of the activities at cracker sites and their disrespect for the rights of others). Since this inconveniences other (innocent) users, the system administrator at a cracker site would probably be pressured by her own users as well. Thus in the absence of an institutional framework, there exists a method to punish the offender and prompt local authorities to consider the matter seriously. This mechanism does not, of course, aggress upon the offender; it works much like ostracism, and in this respect gives every individual in the (global) Net community the opportunity to take the actions she deems appropriate. There is no aggression on the part of the victims, and attempts are made to resolve disputes outside of the ‘tit for tat’ mindset. In an environment where all transactions are voluntary, outlaws are punished by being denied services available to others.
An issue related to that of privacy and security is the concern about the reliability of information on the Net. This is where the stark contrasts between the Net and broadcast media are most glaring. The network confers on each and every user the ability to disseminate information and her views to an exceptionally wide audience at a very low cost. The Net has turned all of its users into publishers capable of making their work available to the entire Network community. This then brings up the question of who is to be responsible for dangerously incorrect information, or even viruses and other malignant computer programs. Traditional laws cannot safely be applied to this environment, as on-line publications and writing are generally not subjected to the sort of scrutiny traditional broadcast media are. In a sense the ease with which information can be distributed and corrected reduces the cost of dissemination and correction, thereby lulling the provider into complacency. This may not be very significant on a Usenet newsgroup, where knowledgeable persons are eager to correct the mistakes of others, but it can pose a problem on the World Wide Web, where one is unsure of the extent to which the erroneous data has traveled and where there is no continuing discourse on the topic. A correction or revision of information or a program may reach the user too late. In cases where a publicly distributed computer program may be used as a “Trojan horse”, actual damage can be done by the provider (knowingly or innocently). As it is, the user is expected to treat every transmission with caution and a healthy skepticism, much as one would in the real world. Yet a degree of gradation has evolved, so that information from certain sites is deemed trustworthy and software from reputable institutions is understood to be safe. Since the Net unleashes the power to inform, information regarding changes in an institution's stance towards quality also travels very quickly.
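One practical form this "healthy skepticism" toward distributed software can take is checksum verification: comparing a downloaded file's cryptographic digest against a value published by a trusted site, so that a tampered copy is detected before it is run. The sketch below computes both digests locally for illustration; the file contents are invented.

```python
# A hedged sketch of checksum verification: if a trusted publisher
# announces the digest of a program, any copy whose digest differs has
# been altered in transit or at an untrusted mirror. Data is invented.

import hashlib

def digest(data: bytes) -> str:
    """Hex digest of the given bytes (SHA-256 as an example algorithm)."""
    return hashlib.sha256(data).hexdigest()

# The publisher announces this value alongside the program:
published_checksum = digest(b"pretend this is a program")

downloaded = b"pretend this is a program"
tampered = b"pretend this is a program, plus a Trojan horse"

genuine_ok = digest(downloaded) == published_checksum   # True: safe copy
tampered_ok = digest(tampered) == published_checksum    # False: reject
```

The scheme only shifts trust, of course: the user must still obtain the published checksum from a site she deems trustworthy, which is exactly the gradation of reputation the paragraph above describes.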
It can be seen then, that the network is not a place for anyone to be complacent. The responsibility to place sufficient guards on her own property rests with the individual user or organization. Even in the event of a security breach, the user or system administrator must often rely on her own good-will and capabilities in tracking down the cracker. This is the situation today. It may change in the future as computer networks are subjected to laws similar to those protecting one's privacy on telephone and postal services. We must, however, when implementing these laws, acknowledge that an internal system to deal with crackers and irresponsible users already exists, and as far as possible enact only those laws that complement this order rather than attempt to replace it.
A related, though perhaps less exciting, problem is that of plagiarism and acknowledgment on Networks. It is conceivable that ideas and information may be plagiarized without the author's consent. The global nature of the Network certainly makes such a thing easier than ever to accomplish and extremely difficult to control. We can treat such incidents as plagiarism, and the Net community, by applying the system of morals and ethics that has developed within traditional publications, does take such incidents seriously. There is now a growing awareness of the distinction between e-mail, mailing lists and Usenet newsgroups as far as the privacy they afford is concerned. An e-mail message is generally considered confidential, and though it is possible to involve users without the consent of all parties engaged in the correspondence (with blind cc for instance), this practice is considered impolite. A similar convention applies to mailing lists (especially moderated and restricted lists), which are by their nature considered closed discussions among users who trust one another. It is considered good netiquette to inform the participants of a mailing list before bringing non-subscribers into the discussion. A very different perception is held of Usenet. Usenet newsgroups are freely accessible and as such are public discussions. Newsgroup contributions are considered to lie in the public domain, as they are posted on a global bulletin board by their author. An exception may be made when a particular thread involves the administration of a particular newsgroup. In such a case, to cross-post the thread to other newsgroups may change the structure of the forum and skew the discussion. There exist, however, social conventions that deal with these issues, and it may not be necessary to regulate or adjudicate such discussions. In most cases the issue is resolved by the participants themselves, without the necessity of an outside arbiter.
To date the Internet has been subsidized in some way by the federal government. The nature of the Network has changed so much, though, that it is no longer possible for the federal government to continue this support. The increasing popularity and commercialization of the Internet will ensure that the cost of networking will continue to fall, and that its growth will not be stunted after the withdrawal of the federal subsidy. The Internet has finally come of age. The question that now remains to be addressed is how access to the new commercial Internet will be distributed and what pricing structure will be implemented.
Currently, access to the network is provided by the large service providers to an organization at a flat fee for a line with a fixed maximum carrying capacity. A university would thus pay a fixed amount for a bandwidth of 1.5 Megabits per second regardless of whether or not it used the entire bandwidth at all times. This process certainly minimizes the need for monitoring traffic and the accounting that a usage-based price structure would necessitate. On the other hand it is inefficient, since it does not allow any flexibility for the user. One possible pricing system that may emerge on the Internet in the years to come is a mechanism that allows the user to bid for a level of priority that will be assigned to the transmission. In such a model (much like the postal service's classification of classes), the users bidding the highest would have their messages sent quickest whenever the network cannot handle all incoming traffic. Due to its complexity and the heavy computations involved, this system may prove to be prohibitively expensive or unworkable. What might develop, in my opinion, is a usage-based pricing mechanism at the point where the service provider receives incoming traffic from its customers. This structure would price-discriminate between peak-time usage and data transmitted during hours of low utilization (similar to the telephone system). The organization would then decide whether or not to use a similar structure within its internal network. This seems like a reasonable pricing structure, as it would reflect the actual (opportunity) cost of communication, without requiring the additional computing resources needed to maintain and run auction-tallying algorithms.
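The two-tier, usage-based structure suggested above can be sketched in a few lines. The rates and the peak window here are invented purely for illustration; the point is only that billing per unit transferred, at a higher rate during congested hours, makes the price track the opportunity cost of transmission.

```python
# A sketch of peak/off-peak usage pricing, analogous to telephone
# tariffs. Rates (in cents per megabyte) and the 9:00-17:00 peak window
# are hypothetical values chosen for the example.

PEAK_RATE = 10      # cents per MB during peak hours
OFF_PEAK_RATE = 2   # cents per MB otherwise

def transfer_cost(megabytes, hour):
    """Cost in cents of a transfer starting at the given hour (0-23)."""
    rate = PEAK_RATE if 9 <= hour < 17 else OFF_PEAK_RATE
    return megabytes * rate

midday_cost = transfer_cost(100, 12)   # 1000 cents: $10.00 at peak
night_cost = transfer_cost(100, 23)    # 200 cents: $2.00 off-peak
```

Because the same 100 MB costs five times as much at midday, users with deferrable traffic (bulk file transfers, news feeds) have an incentive to shift it to idle hours, smoothing load without any auction machinery.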
What is certain, however, is that not all users will have similar access to the Network; in fact some of us may not have access at all. This may be regarded as a gross inequality in a heavily networked world, or it could mean something else. We must accept that different people will have different levels of access to the network, if only because some of us will desire more and be willing to spend more time on the Net. It would be counter-productive to subsidize access to the network in any way, except perhaps for very specific purposes.
We have seen Computer Networks develop and evolve for over 30 years. All these advancements have been made in a decentralized environment, without an over-arching plan to consolidate and organize these efforts. Progress has been made because people have been free to experiment and try out different solutions to the problems that confront them; from this experimentation have developed utilities and programs that respond best to the desires of their users. Often more than one solution exists alongside another, and we are free to choose among them. In this process the Net communities have developed their own norms, standards and even informal laws. They have discovered ways to enforce these laws and maintain the degree of order they desire. The network has become a vast kaleidoscope, ever changing as more is added or taken away. It is, in a word, vibrant.
Regulation, even well-meaning regulation, that is developed by those unfamiliar with this new culture may well retard or even destroy the Net. Even now, this technology is chafing against laws imposed on it from the outside; laws that do not acknowledge its environment or particular nature, that are unenforceable or, if enforced, would change it beyond recognition [54]. If even more laws –which do not evolve from within the Net community and fail to blend in with its pre-existing norms– are formalized without the consent of those they will affect, the conflict that arises may well have consequences no one desires.
Laws imposed on an unwilling community incite rebellion. And rebellion has always been the response of societies that have been colonized. This is a sentiment that is not uncommon on the Net, and even more reason for regulators to tread carefully.
2 Rheingold, Howard “A slice of life in my Virtual Community” in Global Networks: Computers and International Communication edited by Linda M. Harasim. Cambridge, Mass. : MIT Press, c1993. As quoted in EFF submits Amicus brief
3 For information on regulation that attempts to treat the Network in terms of broadcast media, see in EFF Action Alerts EFF submits Amicus brief, Robert Allan Thomas and Carleen Thomas v. United States of America
4 Steele, David Ramsay, 1992 (pg. )
5 One notable exception is the predominantly academic network BITNET. BITNET sites can communicate with sites on the Internet, but translators are still required to convert messages from one protocol to another. One factor in the resilience of BITNET is the existence of a large number of seriously academic mailing lists which only exist on BITNET (most are mirrored as newsgroups on Usenet, but since they are closed moderated mailing lists general users cannot easily participate in the discussions).
8 The popularity of client-server technology demonstrates the efficiency of sharing programs and data among computers on a local network by storing them at a central location. This technology, which has revolutionized Local Area Computing, is fundamental to WANs as well.
9 A notable example is the incident involving two Californian lawyers who advertised their services during the Green Card “lottery” in all Usenet newsgroups, even those entirely unrelated to the issue. The lawyers were reprimanded by a number of users and their accounts were taken away by their system administrator. As is often the case, this involved collective action by a large number of people concerned about the appropriate use of Network resources.
14 For more information see: EFF’s statement on S314
15 An analogy in the broadcast media would be cable television. Subscribers can decide whether or not they wish to receive pornographic channels and can easily block those channels at specific times by means of an access code.
16 Such as the New York Times which publishes an eight page synopsis of its daily paper.
17 Nando Times for instance.
20 This point was brought to my notice by Prof. Hal R. Varian at a lecture entitled “Pricing the Internet”, delivered on October 28, 1994 at New York University.
21 The popularity of Usenet (and its small size) ensures that most requests are satisfied by the local news-server, thus utilizing only the local network. Services that require large data storage capacity generally inhibit local sites from mirroring them; thus the data transmission passes through a number of networks before reaching the user. In the case of countries other than the U.S., where the major cost of networking is that of the high speed line linking the nation-wide network with American sites, most popular distant sites are mirrored locally to prevent multiple transmissions of the same data over the high-speed international link. Here the cost of storing the information has to be balanced against the actual cost of multiple transmissions.
24 Jesse Lemisch, “Point of view” in the Chronicle of Higher Education, January 20, 1995, pg. A56. Thomas J. DeLaughry, “Gatekeeping on the Internet” in the Chronicle of Higher Education, November 23, 1994, pg. A21.
26 David L. Wilson, “Vigilantes gain quiet approval on Networks” in the Chronicle of Higher Education, January 13, 1995. Also Rheingold, Howard, “The virtual community”, pg. 32-37, for the narration of the incident involving Blair Newman, who erased all of his own contributions to discussion on the WELL (Whole Earth ‘Lectronic Link) before committing suicide.
30 A famous Usenet precedent for such a migration exists. The archives of an explicitly pornographic newsgroup were moved overnight to a site outside the US when it was discovered that they were subject to local anti-pornography laws. The ease with which data can be transported over vast distances ensures that a single organization hosting a particular service cannot attempt to dictate the actions of the users. This applies as much to a particular country as it does to any university.
31 There can be more than one operator on a channel, and any channel op can give another user operator privileges, or take them away. As is to be expected, “op wars” are not uncommon, and they serve as a source of entertainment and channel folk-lore.
32 Since users on IRC are present during discussions in real time, any comment posted publicly by a participant is visible to all. IRC does not permit the degree of individual filtering that Usenet does and so the utilities have evolved a little differently here.
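The individual filtering that the note above credits to Usenet is the “kill file”: a reader's personal list of authors or subject keywords whose articles are silently dropped before display. A minimal sketch follows; the addresses, keywords, and articles are all invented for illustration.

```python
# A reader's personal kill file: authors and subject keywords to drop.
kill_authors = {"spammer@example.com"}
kill_keywords = {"green card"}

# Invented sample articles as they might arrive from a news-server.
articles = [
    {"author": "alice@example.com", "subject": "Network pricing"},
    {"author": "spammer@example.com", "subject": "AMAZING OFFER"},
    {"author": "bob@example.com", "subject": "Green Card lottery ads"},
]

def visible(article):
    """Show an article only if neither its author nor its subject
    matches the reader's kill file."""
    if article["author"] in kill_authors:
        return False
    subject = article["subject"].lower()
    return not any(word in subject for word in kill_keywords)

shown = [a["subject"] for a in articles if visible(a)]
print(shown)
```

Because Usenet articles are stored and read asynchronously, each reader can apply such a filter privately; on IRC the comment has already reached every screen on the channel by the time any filtering could occur.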
35 For a report on the history of RSA and public key encryption see Kolata, Gina: “Hitting the high spots of computer theory”, New York Times, December 13, 1994, pg. C1. For a discussion of the consequences of limiting the utilization of encryption techniques on the Net, see “EFF announces steps to defend PGP author Phil Zimmermann” in “EFF Action Alerts” and “Phil Zimmermann: Overviews of the case”.
36 Another solution would be to resort to direct connections (similar to telephone links) when transmitting data of a sensitive nature. This, however, means we would be unable to utilize the significant efficiencies that packet-switching networks have over direct links. The latter tend to be both slower and more expensive than the former.
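The public-key idea behind RSA, mentioned in the note above, can be illustrated with the textbook scheme. The primes here are tiny and chosen only for readability; real keys use primes hundreds of digits long, and unpadded textbook RSA is not secure in practice.

```python
# Toy RSA: anyone may encrypt with the public key (e, n); only the
# holder of the private exponent d can decrypt.
p, q = 61, 53                      # two secret primes
n = p * q                          # public modulus (3233)
phi = (p - 1) * (q - 1)            # Euler's totient (3120)
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent: e*d ≡ 1 (mod phi)

message = 42                       # a number < n standing in for data
ciphertext = pow(message, e, n)    # encryption: m^e mod n
recovered = pow(ciphertext, d, n)  # decryption: c^d mod n

print(recovered)                   # → 42
```

The asymmetry is the point: publishing (e, n) lets anyone send the key holder unreadable mail over an open packet-switched network, without the sensitive direct connection that note 36 considers.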
38 To get more information on a popular anonymous server (and an anonymous ID), send mail to email@example.com
42 See also in this regard: Steele, Ramsay 1992, pg. 293-310, “Unused capacity as a symptom of Restricted Production” and “Wasteful activities in the market”. In this case it is highly likely that the benefit derived from the efficiency of using a machine at full capacity will be overshadowed by concerns about security. A distributed computing system would be a possible solution for people interested in selling computing time on their own machines.
44 Most system administrators are, however, leery of harboring crackers and take malevolent users very seriously. This is only a measure of self-defense, since the system administrator is concerned about her own system’s security as well. This may, however, take time to sink in. See the case of the Dutch authorities, who ‘discovered’ laws to apprehend crackers only when Dutch systems had been broken into. Cheswick and Bellovin, pg. 178.
48 “If there be time to expose through discussion the falsehood and the fallacies, to avert the evil by the process of education, the remedy to be applied is more speech, not enforced silence.” U.S. Supreme Court Justice Louis Brandeis, 1927
50 For an interesting spin on this idea, and the consequences of posting copyrighted material on the Net, see EFF opposes Church of Scientology Usenet Censorship.
(c) Subir Grewal, 1995