24. History of the Internet cont'd, course summary

MIT OpenCourseWare
18 Mar 2014 · 51:11

TL;DR: The lecture provides a comprehensive overview of the history and development of the internet, focusing on the transformative events and technical challenges of the 1980s, 1990s, and 2000s. It discusses the congestion collapse of the mid-1980s, the creation of the Internet Engineering Task Force (IETF), and the standardization of protocols like TCP/IP. The professor highlights the importance of adapting to network conditions through algorithms developed by Van Jacobson and others. The lecture also examines route hijacking on the internet, which can cause significant disruptions, using examples like the YouTube outage caused by Pakistan Telecom. Further topics include the rise of peer-to-peer networks, security threats like worms and spam, and classless addressing as a response to IP address depletion. The summary emphasizes the internet's growth, the central roles of reliability and sharing in communication systems, and the enduring challenges of routing security and scalability.

Takeaways
  • 🌐 **Internet History**: The lecture covers the evolution of the internet from the 1980s to recent decades, highlighting significant developments and challenges.
  • 📈 **Congestion Collapse**: The serious congestion collapse of the mid-1980s and how it was addressed are discussed, previewing topics for the course 6.033.
  • 🛣️ **Internet Routing**: The ease of hijacking routes on the internet is explored, with real-world examples of how this can be done and its potential impacts.
  • 🤝 **Internet Protocol Standardization**: The formation of the Internet Engineering Task Force (IETF) and the process of creating internet standards through RFCs (Requests for Comments) is explained.
  • 🏗️ **Network Growth Challenges**: The 1980s saw rapid growth in internet usage, leading to issues like congestion and address depletion, which required innovative solutions.
  • 📊 **Throughput and Utilization**: The relationship between offered load and network throughput is discussed, including the concept of the bandwidth-delay product (a worked example follows this list).
  • 🔁 **Adaptive Congestion Control**: The importance of adaptive algorithms for managing congestion and the principle of conservation of packets is covered.
  • 🌟 **World Wide Web (WWW)**: The invention of the World Wide Web by Tim Berners-Lee and its profound impact on the internet and society is mentioned.
  • 📈 **Commercialization of the Internet**: The transition of the internet from a government-funded project to a commercial enterprise with the rise of internet service providers is highlighted.
  • 🔄 **IPv4 to IPv6**: The shift from 32-bit to 128-bit addresses with the introduction of IPv6 to solve the problem of IP address depletion is discussed.
  • 🤔 **Security and Trust**: The honor-code basis of internet routing and the vulnerabilities that arise from it, such as route hijacking, are examined with real-world examples.
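As a quick worked example of the bandwidth-delay product (the link speed and round-trip time below are hypothetical values, not figures from the lecture):

$$
\text{window} = \text{bandwidth} \times \text{RTT} = 100\ \text{Mb/s} \times 0.05\ \text{s} = 5\ \text{Mb} \approx 625\ \text{kB}
$$

A sender needs roughly this much data in flight to keep the link busy; a smaller window leaves capacity idle, while persistently offering more than the network can carry only builds queues, which is the regime where congestion collapse sets in.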
Q & A
  • What is the significance of the '80s and '90s in the history of the internet?

    - The '80s and '90s were pivotal decades in the internet's history, marked by the resolution of significant issues like congestion collapse and by the internet's growth with the advent of the World Wide Web. They also saw the organization of internet protocol design and the establishment of the Internet Engineering Task Force (IETF).

  • What is congestion collapse in the context of the internet?

    - Congestion collapse is a phenomenon that occurred in the mid-1980s where the internet experienced a drastic drop in throughput as the offered load increased, sometimes to the point of near-zero performance. It was a serious problem that led to the development of new algorithms for managing data transmission and window sizing.

  • How did the TCP implementation of the 1980s resemble the students' lab exercise?

    - The implementation of TCP in the 1980s was almost exactly the same as the sliding window protocol from the second lab exercise the students had completed. This protocol did not yet incorporate many of the new ideas that are now standard in TCP.
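For intuition, here is a minimal sketch of the send side of a fixed-window sliding window protocol, in the spirit of the lab exercise (the function names and window size are illustrative, not the lab's actual code):

```python
# Sketch of a fixed-window sliding-window sender.
# The window never adapts to network conditions -- the weakness that
# contributed to congestion collapse on the 1980s internet.

WINDOW = 8  # at most 8 unacknowledged packets in flight

def run_sender(packets, send, recv_ack):
    """send(seq, data) transmits one packet; recv_ack() blocks and
    returns the highest cumulatively acknowledged sequence number."""
    base = 0      # oldest unacknowledged sequence number
    next_seq = 0  # next sequence number to transmit
    while base < len(packets):
        # Fill the window: keep transmitting while fewer than
        # WINDOW packets are outstanding.
        while next_seq < len(packets) and next_seq < base + WINDOW:
            send(next_seq, packets[next_seq])
            next_seq += 1
        # Slide the window forward on each cumulative ACK.
        base = max(base, recv_ack() + 1)

# Retransmission timers are omitted for brevity; a real implementation
# resends packets whose ACKs do not arrive in time.
```

Because WINDOW is a constant, every sender keeps pushing the same load no matter how congested the network is, which is exactly what Van Jacobson's adaptive algorithms later fixed.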

  • What is the Internet Engineering Task Force (IETF) and what role does it play?

    - The IETF is an international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is responsible for creating and maintaining standards for the internet through a process that involves writing proposals known as RFCs (Requests for Comments).

  • What is the concept of 'rough consensus and running code' in the context of IETF?

    - The phrase 'rough consensus and running code' refers to the approach used by the IETF where decisions are based on general agreement and evidence of code that implements the proposed standards. It emphasizes the importance of practical implementation and experimentation over formal voting or political processes.

  • What was the impact of the US Department of Defense's decision to standardize on TCP/IP in 1982?

    - The decision by the US Department of Defense to standardize on TCP/IP had a significant impact as it legitimized and promoted the use of these protocols across the military and, subsequently, in civilian sectors. Given the Defense Department's substantial influence and IT consumption, this decision greatly influenced the direction of networking technology.

  • What is the longest prefix match rule in internet routing?

    - The longest prefix match rule is a fundamental concept in internet routing where, when a packet's destination address matches multiple entries in a router's forwarding table, the router uses the entry with the longest matching prefix to determine where to forward the packet.
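A small illustration of the rule using Python's standard ipaddress module (the forwarding-table entries are made-up examples):

```python
import ipaddress

# Toy forwarding table: prefix -> next hop (example values, not real routes).
table = {
    ipaddress.ip_network("192.0.2.0/24"):   "next hop A",
    ipaddress.ip_network("192.0.2.128/25"): "next hop B",
    ipaddress.ip_network("0.0.0.0/0"):      "default route",
}

def lookup(addr):
    """Return the next hop of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(addr)
    matching = [net for net in table if addr in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return table[best]

print(lookup("192.0.2.200"))   # inside both /24 and /25 -> "next hop B"
print(lookup("198.51.100.7"))  # only the default /0 matches -> "default route"
```

Because any network can announce a more specific prefix, this same rule is what makes the route-hijacking incidents described elsewhere in the lecture possible.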

  • Why did the creation of the Border Gateway Protocol (BGP) become necessary?

    - The creation of BGP became necessary with the rise of multiple commercial Internet Service Providers (ISPs). As the government divested from the internet service provider business, BGP was developed to manage the routing between these autonomous networks, allowing them to cooperate and compete at the same time.

  • What led to the development of content distribution networks (CDNs)?

    - Content distribution networks were developed to serve content more efficiently and reliably by creating overlay networks on top of the internet. This was in response to the need for better load balancing and improved content delivery as the internet grew and traffic increased.

  • What are the two main themes that the professor identifies in the design of digital communication networks?

    - The two main themes identified are reliability and sharing. The professor emphasizes that designing communication systems that are reliable despite the lack of a perfectly reliable medium at any layer is crucial. Additionally, the ability to share resources efficiently is key to the success of communication systems.

  • Why did the professor mention that the internet routing system operates on an 'honor code'?

    - The professor mentioned the 'honor code' to highlight the trust-based nature of internet routing. ISPs and organizations trust each other to provide accurate routing information. This trust can be exploited, as seen in incidents like the YouTube hijacking, where the lack of authentication in route advertisements can lead to significant issues.

Outlines
00:00
📚 Introduction to Internet History and Technical Challenges

The paragraph introduces the lecture's focus on the internet's development from the 1980s to the 2010s. It highlights MIT OpenCourseWare's reliance on donations. The professor aims to discuss the internet's evolution, the congestion collapse of the mid-1980s, and the concept of route hijacking. The lecture also mentions the growth of the internet and the formation of the Internet Engineering Task Force (IETF), emphasizing the IETF's approach to creating open standards.

05:00
🚧 Internet Congestion and Addressing Challenges in the 1980s

This section delves into the internet's growth in the 1980s, leading to congestion collapse due to inadequate TCP implementation. It discusses the efforts of Dave Clark and the creation of the domain name system (DNS). The paragraph also covers the US Department of Defense's adoption of TCP/IP, the rise of campus-area networks, and the issues arising from the depletion of IP addresses and the need for a routing system overhaul.

10:01
🤔 Dealing with Congestion Through Adaptive Algorithms

The paragraph explains the problem of congestion in networks and the development of algorithms to address it. It discusses the end-to-end solution proposed by Van Jacobson and the concept of conservation of packets. The text also touches on the dynamic nature of bandwidth and the importance of adapting to network conditions, including the use of acknowledgments to manage packet transmission rates.

15:01
🌐 The Evolution of Internet Routing and the Introduction of Classless Addressing

This section talks about the shift in the 1990s from ARPANET to the modern internet, the invention of the World Wide Web by Tim Berners-Lee, and the transition of the internet service market towards commercial providers. It also discusses the Border Gateway Protocol (BGP) and classless addressing to solve the problem of IP address depletion, explaining the concept of IP address prefixes and how they are used in routing.

20:02
🔄 The Longest Prefix Match Rule and Its Impact on Internet Routing

The paragraph explains the longest prefix match rule used in routing, which dictates that the most specific route is chosen when a destination address matches multiple entries in a routing table. It discusses how this rule is crucial for the operation of networks and how it was exploited in incidents where YouTube and a significant portion of internet traffic were hijacked by ISPs.

25:04
🌟 The Growth of the Internet and Challenges in the 2000s

This paragraph summarizes key developments in the 2000s, including the rise of peer-to-peer networks, the emergence of security threats like worms and spam, and issues with route hijacking. It emphasizes the internet's reliance on trust between ISPs and the need for better solutions to prevent routing vulnerabilities. The paragraph concludes with a mention of the ongoing work on IPv6 and the creation of content distribution networks.

30:06
📈 The Autonomous System and the Issue of AS Number Exhaustion

This section discusses the concept of autonomous systems (AS) in the internet's structure, their importance, and the problem of running out of 16-bit AS numbers. It also touches on the practical implications of this limitation and the need for a larger addressing space, reflecting on past decisions and the importance of forward-thinking in network design.
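The arithmetic behind the concern is straightforward (the figures below are simply the sizes of the identifier fields, not numbers quoted from the lecture):

$$
2^{16} = 65{,}536 \qquad\text{vs.}\qquad 2^{32} = 4{,}294{,}967{,}296
$$

A 16-bit field allows only 65,536 distinct AS numbers for the entire internet, which is why a wider identifier space became necessary.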

35:07
πŸ›£οΈ The Anatomy of Route Hijacking and Immediate Solutions

The paragraph details the incident of route hijacking involving Pakistan Telecom and YouTube, explaining how the hijacking occurred and the immediate steps taken to resolve it. It highlights the lack of authentication in route advertisements and the use of more specific prefixes to regain control over the routing, emphasizing the need for better long-term solutions.
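To make the mechanism concrete, here is a sketch (with made-up prefixes, reusing the longest-prefix-match idea from the Q&A above) of why announcing more specific prefixes pulls traffic back:

```python
import ipaddress

def best_route(addr, announcements):
    """Pick the announcement whose prefix is the longest match for addr."""
    addr = ipaddress.ip_address(addr)
    matching = [(net, who) for net, who in announcements if addr in net]
    return max(matching, key=lambda m: m[0].prefixlen)[1]

victim_addr = "203.0.113.50"  # an address inside the victim's block (example)

# 1. Normal state: the owner announces its /22.
routes = [(ipaddress.ip_network("203.0.112.0/22"), "owner")]
print(best_route(victim_addr, routes))  # -> owner

# 2. Hijack: another AS announces a more specific /24 inside that block.
routes.append((ipaddress.ip_network("203.0.113.0/24"), "hijacker"))
print(best_route(victim_addr, routes))  # -> hijacker (longest prefix wins)

# 3. Countermeasure: the owner announces two still more specific /25s.
routes.append((ipaddress.ip_network("203.0.113.0/25"), "owner"))
routes.append((ipaddress.ip_network("203.0.113.128/25"), "owner"))
print(best_route(victim_addr, routes))  # -> owner again
```

The fix works only because the owner can out-specify the hijacker; nothing in the protocol verifies who is entitled to announce a prefix, which is the deeper problem the lecture points to.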

40:07
🔗 Summary of Digital Communication Networks Course

The final summary encapsulates the essence of the course, which is about designing digital communication networks across three layers of abstraction: bits, signals, and packets. It emphasizes the unique teaching approach of the course, covering topics from point-to-point links to multihop networks. The summary leaves the audience with two main themes: reliability and sharing, which are crucial for the functioning of communication systems.

Keywords
💡 Congestion Collapse
Congestion collapse refers to a situation in a network where the throughput or utilization of the network drops significantly, sometimes to zero, due to an overload of traffic. This phenomenon is characterized by a rapid decline in network performance, often resulting in a severe reduction of service quality or a complete outage. In the context of the video, the professor discusses how the internet faced congestion collapse in the mid-1980s and the solutions that were adopted to manage it, such as the implementation of new algorithms by Van Jacobson.
💡 TCP/IP
TCP/IP stands for Transmission Control Protocol/Internet Protocol, which is the fundamental communication protocol suite used for transmitting data over the internet. It is a set of rules that defines how electronic devices communicate over networks. In the video, the professor mentions TCP/IP in relation to the standardization of internet protocols and how it became the dominant protocol during the 1980s.
💡 IETF
The Internet Engineering Task Force (IETF) is an international community of network designers, operators, vendors, and researchers concerned with the evolution of the internet architecture and the smooth operation of the internet. It is the principal body engaged in the development of new standards for the internet. The professor discusses the creation of the IETF and the process of formalizing internet standards through a community-driven approach.
💡 RFCs
RFCs (Requests for Comments) are a series of memos that document the development of internet standards and protocols. They are published by the IETF and constitute the official publication channel for internet standards documents. In the video, the professor explains the role of RFCs in the process of creating and commenting on internet standards.
💡 Domain Name System (DNS)
The Domain Name System (DNS) is the hierarchical system used for naming and locating computers, services, or other resources connected to the internet or a private network. It associates various information with domain names assigned to each of the participating entities. In the video, the professor talks about the creation of the DNS in 1984, which was a critical development for organizing and accessing resources on the growing internet.
💡 NSFNET
NSFNET refers to the National Science Foundation Network, which was a program funded by the U.S. National Science Foundation through which a number of university and government sites were interconnected. It was a major part of the internet in the 1980s and played a significant role in the development of the modern internet. The professor discusses the transition of the internet backbone from the ARPANET to the NSFNET.
💡 IPv4 and IPv6
IPv4 and IPv6 are Internet Protocol versions that dictate the addressing and routing of internet traffic. IPv4 is the fourth version of the IP and is the most widely used version. It uses 32-bit addresses, leading to a limitation in the number of available addresses. IPv6, the sixth version, was developed to address this limitation by using 128-bit addresses, thereby greatly expanding the address space. The professor talks about the depletion of IPv4 addresses and the development of IPv6 to overcome this issue.
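The difference in address space is easy to quantify:

$$
2^{32} \approx 4.3 \times 10^{9} \qquad\qquad 2^{128} \approx 3.4 \times 10^{38}
$$

IPv4's 32-bit addresses allow roughly 4.3 billion addresses in total, which the growth rates described in the lecture made clearly insufficient; IPv6's 128-bit space makes exhaustion a non-issue in practice.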
💡 Classless Inter-Domain Routing (CIDR)
Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and routing that allows for a more efficient allocation of IP addresses and a reduction in the size of routing tables. It supersedes the original system of IP address classes. The professor explains the concept of classless addressing and its significance in allowing for a more flexible and scalable internet routing system.
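A brief sketch of what classless allocation buys, again using Python's ipaddress module with made-up prefixes: a block can be carved into pieces of any size, yet advertised to the rest of the internet as a single aggregate route.

```python
import ipaddress

# A provider holds one /20 block (example prefix, not a real allocation).
block = ipaddress.ip_network("198.18.0.0/20")

# Classless allocation: split it into customer blocks of any chosen size.
customers = list(block.subnets(new_prefix=24))
print(len(customers), customers[0])  # 16 /24s, starting with 198.18.0.0/24

# Externally, one aggregate route covers all of them,
# keeping the global routing table small.
print(list(ipaddress.collapse_addresses(customers)))  # [198.18.0.0/20]
```

Under the old classful scheme the only sizes were the /8, /16, and /24 of classes A, B, and C, so allocations were either wastefully large or required many separate routing entries.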
💡 Border Gateway Protocol (BGP)
The Border Gateway Protocol (BGP) is the protocol that is used to exchange routing and reachability information among autonomous systems on the internet. It is a path vector protocol and is considered the backbone of internet routing. The professor discusses BGP in the context of the challenges of routing traffic across multiple internet service providers that both compete and cooperate.
💡 Route Hijacking
Route hijacking is a type of malicious activity where an entity intentionally or accidentally takes control over a route to a certain IP address prefix, causing traffic intended for one network to be rerouted through another network. The professor provides examples of route hijacking, such as the incidents involving YouTube and China Telecom, and discusses the vulnerabilities in the current routing system that allow such occurrences.
💡 Content Distribution Networks (CDNs)
Content Distribution Networks (CDNs) are a type of network that distribute content across multiple, geographically dispersed servers. The goal is to provide high availability and performance by distributing the service spatially relative to each user. In the video, the professor mentions the creation of CDNs in the late 1990s as a way to serve content more reliably and efficiently over the internet.
Highlights

MIT OpenCourseWare provides high-quality educational resources for free, supported by donations.

The internet's development in the '80s, '90s, and the following decade is discussed in detail.

Congestion collapse in the mid-1980s was a serious problem for the internet, which was addressed by evolving TCP protocols.

The implementation of TCP in the 1980s was similar to the sliding window protocol from a lab exercise.

Route hijacking on the internet is alarmingly easy and can cause significant damage, as demonstrated by real-world examples.

In the 1980s, the internet's growth was explosive, with rates of 80% to 90% per year.

Dave Clark was designated as the internet's chief architect and played a crucial role in formalizing internet standards.

The Internet Engineering Task Force (IETF) was created to standardize protocols without relying on voting or political processes.

The US Department of Defense standardized on TCP/IP in 1982, significantly influencing networking's direction.

The creation of the domain name system (DNS) in 1984 was essential for managing the increasing complexity of the internet.

The National Science Foundation (NSF) took over the backbone of non-military networks in the United States, leading to the NSFNET.

Adaptive congestion control algorithms, initiated by Van Jacobson, are fundamental to modern TCP implementations.

Conservation of packets principle is key to managing data flow and preventing network congestion.

The window size in TCP starts at one packet and adjusts based on acknowledgments, representing exponential growth (slow start) followed by linear growth (congestion avoidance).
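A hedged sketch of that growth pattern (heavily simplified; real TCP adds timeouts, loss handling, fast retransmit, and more):

```python
# Simplified TCP window growth: exponential "slow start" up to a threshold,
# then linear "congestion avoidance". Illustrative only, not real TCP.

def window_trace(rounds, ssthresh=16):
    cwnd = 1.0   # start with a window of one packet
    trace = []
    for _ in range(rounds):
        trace.append(int(cwnd))
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: double every round trip
        else:
            cwnd += 1   # congestion avoidance: one extra packet per round trip
    return trace

print(window_trace(10))  # [1, 2, 4, 8, 16, 17, 18, 19, 20, 21]
```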

The 1990s saw the end of ARPANET and the rise of the World Wide Web, changing the landscape of internet usage.

The commercialization of the internet led to competition among internet service providers while requiring them to cooperate for end-to-end connectivity.

The Border Gateway Protocol (BGP) was developed to manage routing between competing yet cooperative networks.

Classless addressing allowed for more flexible and efficient allocation of IP addresses as the number of networks grew.

The longest prefix match rule in routing is critical for the operation of internet traffic and can be exploited in route hijacking incidents.

The rise of peer-to-peer networks, security concerns, and route hijacking highlight the internet's vulnerabilities despite its growth and importance.
