LIB Basics: The Internet

Glossary of Internet Jargon

Check out UC Berkeley Library's Glossary of Internet Jargon page for definitions to common internet terms.

Timeline of Computer History

These timelines provide historical context for the development of computers and the Web.

What is the Internet?

As mentioned on the home page, the Internet is a network of computer networks. As such, computers from all over the world can share data and information with one another. There are several protocols that various software applications use to "talk to" other computers. For instance, Post Office Protocol (POP) is used by many email programs, and File Transfer Protocol (FTP) allows software to transfer files between computers.

The Internet is often (incorrectly) used interchangeably with the World Wide Web, which uses a separate protocol to share information. The Web is a collection of electronic documents that are linked together like a spider web across the globe. Web documents are written in Hypertext Markup Language (HTML) and are moved from computer to computer across the Internet using the Hypertext Transfer Protocol (HTTP); Web browsing software such as Safari, Firefox, Chrome, or Internet Explorer requests these documents and displays them. The Web allows us to communicate in a rich way, by displaying text, graphics, photos, sounds, and even video.
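To make the division of labor concrete: HTML describes what a document contains, while HTTP is the plain-text "language" a browser speaks when it asks a server for that document. The short sketch below builds the kind of request a browser might send; the host name example.org and the file path are illustrative only, not a real guide resource.

```python
# A minimal sketch of an HTTP GET request, the message a Web browser
# sends to a server to ask for a page. (example.org is a placeholder.)
host = "example.org"
path = "/index.html"

request = (
    f"GET {path} HTTP/1.1\r\n"   # which document we want, and the protocol version
    f"Host: {host}\r\n"          # which server we are talking to
    "Connection: close\r\n"      # close the connection after this response
    "\r\n"                       # a blank line ends the request
)

print(request)
```

The server answers with a response in the same plain-text style: a status line (such as "HTTP/1.1 200 OK"), some headers, and then the HTML of the page itself, which the browser renders on screen.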

These electronic documents are stored on computers called servers located around the world.

One common description of the Internet compares it to a library where all the books have been dumped on the floor and the lights turned out. 

There is certainly a lot of good information on the Internet, but it can be difficult to find, and furthermore it is interspersed with a lot of information of very low quality. 

A great strength of the Internet, and in particular the World Wide Web, is that anyone can publish a Web page for the rest of the world to see. This has led to a great explosion of information that might have been difficult or impossible to find before. However, this strength is also a weakness. Because anyone can publish, there is no guarantee of the quality of the information available. There is no editorial process.

Compare this with books: In order for a book to be published in a market economy, a publisher must first of all decide that the book will be economically viable – that enough people will want to buy it to make the publisher a profit. This obviously prevents a great deal of information from being made available, information which might be of interest to only a small number of people. However, after a publisher does decide to publish a book, an editor employed by the publisher works closely with the author. The editor does things like making sure the grammar and spelling in the book are correct, verifying questionable facts, and asking the author to explain unclear points a bit more thoroughly. For most documents on the Web, this editorial process is entirely absent.

Compare this with scholarly journals: We learned that most scholarly journals are "peer-reviewed." This means that before an article is accepted, it is sent to a number of experts in the field, who usually don't know the identity of the author. These experts comment on the article, and often point out problems with its methodology or conclusions. The author then has to correct these problems before the article is accepted for publication. This means that nearly all the information in articles published in scholarly journals has been thoroughly checked by experts – even if scholars do not always get the same experimental results or share the same opinions, the results and opinions they present are well-reasoned and methodologically sound. No such process exists for information on the Web.

This work is licensed under a Creative Commons Attribution 4.0 International License.