
Internet, Overview

Raymond Greenlaw, Ellen M. Hepp, in Encyclopedia of Information Systems, 2003

I. Introduction

We begin with a definition of the Internet as formulated by the Federal Networking Council.

The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term Internet. Internet refers to the global information system that:

1. Is logically linked together by a globally unique address space based on the Internet protocol (IP) or its subsequent extensions/follow-ons

2. Is able to support communications using the transmission control protocol/Internet protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols

3. Provides, uses or makes accessible, either publicly or privately, high-level services layered on the communications and related infrastructure described herein

Simplifying, we can condense this definition: the Internet is a global system of networked computers together with their users and data. The system is global in the sense that people from all over the world can connect to it. The users of the Internet have developed their own culture, and as such they are a defining factor of the Internet. Without the possibility of accessing data or interacting with other users, very few would be excited about connecting to the Internet. The prospect of quickly and easily accessing information and communicating is what led to the vision of the Internet.
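To make the "globally unique address space" criterion concrete, here is a minimal illustration using Python's standard ipaddress module; the addresses are arbitrary examples, not drawn from the original text:

```python
import ipaddress

# One address designates one host in the global address space: no two hosts
# on the public Internet may share it.
addr = ipaddress.ip_address("93.184.216.34")        # an arbitrary public address
print(addr.is_global, addr.is_private)              # True False

# An RFC 1918 private address, by contrast, is unique only within one site.
print(ipaddress.ip_address("10.0.0.1").is_global)   # False
```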

The Internet has also been referred to as the “Information Superhighway.” Thirty years ago, information exchange and communication took place via the “back roads”—regular postal mail, a telephone call, a personal meeting, and so on. Today they take place over the Internet nearly instantaneously. The history section of this article describes the evolution of the vision of the Internet into today's Information Superhighway.

I.A. Information Superhighway

We expand on the superhighway metaphor. With cars there are levels of expertise: learning to drive is easy, and knowing how to operate a vehicle is all you really need to know about cars in order to use them to get where you are going. Learning to surf the Internet is like learning to drive. In the course of driving you learn about highways and shortcuts, and so it is with the Internet: with practice, you will learn where and how to find information.

In driving you can go a step further and learn how an engine works and how to do routine maintenance and repairs such as oil changes and tune-ups. With the Internet the equivalent is to learn how Web pages are created or how search engines find information.

A deeper level of involvement with cars is to learn how to do complex repairs, design them, and build them. Not many people demonstrate this level of interest in cars. On the Internet a similar level of interest involves writing software, either building applets in a language such as Java or developing more general purpose tools for others to use in navigating the Internet. Again, only a limited number of people aspire to this level of involvement.

Today the Information Superhighway is in place but the mysteries surrounding it for many people are where to go and how to travel. Like traveling a highway in a foreign country, unable to read the road signs, navigating the Information Superhighway can be frustrating and time-consuming without the right tools.

As far as “how to travel” the Information Superhighway, consider that there are many routes and many forms of transportation that we can take to get to where we want to go. We can follow sidewalks, roads, and freeways and we can take a bicycle, a bus, a car, or a pair of in-line skates. Similarly, there are many ways to use the Internet to send and retrieve information. These include (but are not limited to) e-mail, file transfer, remote log-in, and the World Wide Web. It is also very likely that new methods of traveling the Information Superhighway will be conceived and developed in the near future, and existing methods will be improved.


URL: https://www.sciencedirect.com/science/article/pii/B0122272404000964

Global Information Systems

Magid Igbaria, ... Charlie Chien-Hung Chen, in Encyclopedia of Information Systems, 2003

IV.B. Internet

The Internet is an example of a GIS. According to the International Telecommunication Union (ITU), the Internet became a true mass communication tool, reaching over 50 million people within 4 years of its commercial launch in 1995. In contrast, it took TV, the personal computer, radio, and the telephone 13, 16, 38, and 74 years, respectively, to reach the same audience. The Internet has affected every industry, and as such it has become a pertinent issue for the GIS researcher. It is therefore essential to understand the history, participants, and social and global impacts of the Internet when studying GIS.

As specified in the resolution of the Federal Networking Council (FNC) on October 24, 1995, the “Internet refers to the global information system that:

1. Is logically linked together by a globally unique address space based on the Internet protocol (IP) or its subsequent extensions/follow-ons

2. Is able to support communications using the transmission control protocol/Internet protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols

3. Provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein."

The Internet's history can be traced back to a networking project of the United States Department of Defense in the mid-1960s, at the apex of the Cold War. The project, ARPANET, was named after the agency that ran it, the Advanced Research Projects Agency (ARPA). Its goal was to create a defense communication system that could survive a possible nuclear attack by the former Soviet Union. As the number of different networks increased and crossed American borders, their incompatibility became a bottleneck for global and open interconnection. Bob Kahn (then at DARPA) and Vint Cerf (then at Stanford University) developed the open TCP/IP protocol suite to solve this interoperability problem, and ARPANET adopted it as its standard protocol in 1983; the resulting network of networks became known as the "Internet." When the National Science Foundation (NSF) successfully deployed the packet-switching network NSFNET in 1985, the Internet gained popularity in the research community.

Corporations began using the Internet commercially after the NSF lifted its ban on commercial use in 1992. The Internet's technological barrier fell when user-friendly graphical user interface (GUI) tools and the Mosaic browser allowed nontechnical people to point and click their way through World Wide Web pages. As the Internet became more user friendly, its users reached 40 million people across more than 10,000,000 hosts in over 150 countries in 1996. According to the Internet Software Consortium (ISC), by January 2001 the world had 109,574,429 Internet hosts advertised in the Domain Name System (DNS).

The Internet became a global network of networks that is not owned by any government but is governed by multilateral institutions. These institutions include international organizations, regional and international coordinating bodies, and collaborative partnerships of governments, companies, and nonprofit organizations. For instance, the American National Standards Institute (ANSI) facilitated the development of standards for the global Internet infrastructure covering data, transmission media, protocols, and topologies. The Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit international consortium, is committed to the mission of creating "interoperable industry specifications based on public standards such as XML and SGML, as well as others that are related to structured information processing." Widely used structured-document languages such as SGML, XML, HTML, and CGM underpin the specifications OASIS produces.

The Internet Corporation for Assigned Names and Numbers (ICANN), a private-sector, nonprofit corporation, is in charge of the technical coordination of Internet activities on a worldwide basis. This includes IP address space allocation, protocol parameter assignment, domain name system management, and root server system management. In 2000, ICANN's directors decided to introduce new top-level domains (TLDs) beyond the original set (.com, .net, .org, .edu, .int, .mil, and .gov), because the existing TLDs, issued in the mid-1980s, could no longer meet the demands of the ever-increasing number of Internet users. According to ICANN, TLDs are important in promoting competition within the same category of institutions.

Deploying global Internet infrastructure was another private initiative to increase the bandwidth of data transmission across countries. UUNET, a major international provider of Internet infrastructure, had already deployed T1, T3, and OC3 links across continents; these are major backbones of today's Internet. Studies indicate that a single optical fiber can deliver 1000 billion bits per second. When the global Internet infrastructure can deliver at least the bandwidth of one optical fiber within and across all countries, the entire world will be able to enjoy multimedia over the Internet much as it enjoys TV today.

However, inequality in information and communication technology (ICT) and other complementary assets (education, literacy, training, security, adoption, and so forth) does indeed exist across countries. A report in the Economist indicated that Norway, Singapore, and the United States were the top three countries, with approximately 50% Internet penetration. In contrast, the penetration rate in countries such as India, Egypt, and China was below 5%. This "digital divide" is hindering the realization of a truly global community. As the United Nations stated, the challenge for wealthy nations is to "spread technology in a world where half the population does not have a telephone and 4 of every 10 African adults cannot read." The United Nations held the Economic and Social Council (ECOSOC) conference on IT in July 2000 to propose international aid for deploying the complementary assets needed to stop the widening digital divide. Its investigation found that about "90% of the Internet host computers are in high-income countries with only 16% of the world's population." Thus, the lack of complementary assets for IT investment in low-income countries, and the digital divide, will continue to be important GIS issues.

The Internet empowers buyers and suppliers to interact electronically and directly. A study by Chircu and Kauffman found that on-line intermediaries are displacing traditional intermediaries in helping customers complete the entire transaction process. Siebel Systems Inc., the largest provider of customer relationship management software, estimates that 80% of sales made on the Internet come from business-to-business (B2B) transactions. Dataquest estimated that on-line B2B transaction volume would grow from $12 billion in 1998 to $1.25 trillion in 2003; this estimate does not include financial goods and services that cross national borders. With the speed of global connectivity and the growth of nonproprietary standards, the Internet will be a key driver of continued growth in both B2B and business-to-consumer (B2C) transactions.


URL: https://www.sciencedirect.com/science/article/pii/B0122272404000794

Private Addressing and Subnetting Large Networks

In IP Addressing & Subnetting, Including IPv6, 2000

Considerations

Anyone can use any of the address blocks in Table 3.3 in any network at any time. The main thing to remember is that devices using these addresses will not be able to communicate with other hosts on the Internet without some kind of address translation.

Here are some things to think about when deciding to use private addressing in your network:

Number of addresses. One of the main benefits of using private addresses is that you have plenty to work with. Since you are not using globally unique addresses (a scarce resource), you don't need to be conservative. In the example network shown in Figure 3.1, you could use an entire class B equivalent address block without feeling guilty. Even though you would be using only 4 percent of the available addresses, you are not hoarding a valuable commodity.
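The arithmetic behind that 4 percent figure is easy to check. A quick sketch with Python's ipaddress module, assuming roughly 2,600 hosts for the Figure 3.1 network (the host count is our assumption, since the figure is not reproduced here):

```python
import ipaddress

block = ipaddress.ip_network("172.16.0.0/16")   # one class B equivalent from RFC 1918
hosts_needed = 2600                             # assumed host count for Figure 3.1

print(block.num_addresses)                                            # 65536
print(f"{hosts_needed / block.num_addresses:.1%} of the block used")  # 4.0% of the block used
```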

Security. Using private addresses can also enhance the security of your network. Even if part of your network is connected to the Internet, no one outside your network will be able to reach your devices. Likewise, no one from inside your network will be able to reach hosts on the Internet. RFC 1918 specifies that:

“… routing information about private networks shall not be propagated on inter-enterprise links, and packets with private source or destination addresses should not be forwarded across such links. Routers in networks not using private address space, especially those of Internet service providers, are expected to be configured to reject (filter out) routing information about private networks.”

For Managers

Security Breaches from Within

Although the preceding information about security and privacy may be comforting, don't let it lull you into complacency. Security experts estimate that anywhere from 50 to 70 percent of all attacks on computer systems come from inside the organization. Private network addressing cannot protect against insider attacks.

Limited scope. The reason you have all these addresses available is that your network will not be connected to the global Internet. If, later, you wish to communicate over the Internet, you must obtain official (globally-unique and routable) addresses and either renumber your devices or use NAT.

Renumbering. Anytime you switch to or from private addressing, you will need to renumber (change the IP address of) all your IP devices. Many organizations are setting up their user workstations to obtain IP addresses automatically when booting up rather than assigning a fixed IP address to the workstations. This facility requires that at least one Dynamic Host Configuration Protocol (DHCP) server be set up for the organization. DHCP is described in RFC 2131 and discussed in more detail in Chapter 7.

Joining Networks. If you join your network with another that has used private addressing, you may find that some devices have conflicting addresses. For example, let's say you chose to use the 24-bit block of private addresses (network 10). You assigned the address 10.0.0.1 to the first router on the first subnet. Now you merge with another organization and must join your networks. Unfortunately, the administrator of the other network chose to assign address 10.0.0.1 to one of its routers. According to IP addressing rules, both devices cannot use the same address. Further, the two routers are probably on different subnets, so not only do you have to assign a different address to the router, you must assign different subnet addresses as well. Again, the solutions include renumbering and NAT.
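Detecting such collisions before a merge is straightforward. A minimal sketch using Python's ipaddress module, with made-up subnet inventories for the two organizations:

```python
import ipaddress

# Hypothetical subnet inventories for the two merging organizations.
ours = [ipaddress.ip_network(n) for n in ("10.0.0.0/24", "10.1.0.0/24")]
theirs = [ipaddress.ip_network(n) for n in ("10.0.0.0/25", "10.2.0.0/24")]

for a in ours:
    for b in theirs:
        if a.overlaps(b):
            # Prints: conflict: 10.0.0.0/24 overlaps 10.0.0.0/25
            print(f"conflict: {a} overlaps {b}")
```

Any overlap reported here means renumbering or NAT before the networks can be joined.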


URL: https://www.sciencedirect.com/science/article/pii/B9781928994015500061

Advanced Internetworking

Larry L. Peterson, Bruce S. Davie, in Computer Networks (Fifth Edition), 2012

Autoconfiguration

While the Internet's growth has been impressive, one factor that has inhibited faster acceptance of the technology is the fact that getting connected to the Internet has typically required a fair amount of system administration expertise. In particular, every host that is connected to the Internet needs to be configured with a certain minimum amount of information, such as a valid IP address, a subnet mask for the link to which it attaches, and the address of a name server. Thus, it has not been possible to unpack a new computer and connect it to the Internet without some preconfiguration. One goal of IPv6, therefore, is to provide support for autoconfiguration, sometimes referred to as plug-and-play operation.

As we saw in Section 3.2.7, autoconfiguration is possible for IPv4, but it depends on the existence of a server that is configured to hand out addresses and other configuration information to Dynamic Host Configuration Protocol (DHCP) clients. The longer address format in IPv6 helps provide a useful, new form of autoconfiguration called stateless autoconfiguration, which does not require a server.

Recall that IPv6 unicast addresses are hierarchical, and that the least significant portion is the interface ID. Thus, we can subdivide the autoconfiguration problem into two parts:

1. Obtain an interface ID that is unique on the link to which the host is attached.

2. Obtain the correct address prefix for this subnet.

Network Address Translation

While IPv6 was motivated by a concern that increased usage of IP would lead to exhaustion of the address space, another technology has become popular as a way to conserve IP address space. That technology is network address translation (NAT), and its widespread use is one main reason why IPv6 deployment remains in its early stages. NAT is viewed by some as “architecturally impure,” but it is also a fact of networking life that cannot be ignored.

The basic idea behind NAT is that all the hosts that might communicate with each other over the Internet do not need to have globally unique addresses. Instead, a host could be assigned a “private address” that is not necessarily globally unique, but is unique within some more limited scope—for example, within the corporate network where the host resides. The class A network number 10 is often used for this purpose, since that network number was assigned to the ARPANET and is no longer in use as a globally unique address. As long as the host communicates only with other hosts in the corporate network, a locally unique address is sufficient. If it should want to communicate with a host outside the corporate network, it does so via a NAT box, a device that is able to translate from the private address used by the host to some globally unique address that is assigned to the NAT box. Since it's likely that a small subset of the hosts in the corporation requires the services of the NAT box at any one time, the NAT box might be able to get by with a small pool of globally unique addresses, much smaller than the number of addresses that would be needed if every host in the corporation had a globally unique address.

So, we can imagine a NAT box receiving IP packets from a host inside the corporation and translating the IP source address from some private address (say, 10.0.1.5) to a globally unique address (say, 171.69.210.246). When packets come back from the remote host addressed to 171.69.210.246, the NAT box translates the destination address to 10.0.1.5 and forwards the packet on toward the host.
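A NAT box's core bookkeeping is just a bidirectional mapping between private and global addresses. Here is a minimal sketch of the address rewriting described above; the private and global addresses come from the text, while the remote host's address is our own placeholder, and a real NAT also rewrites port numbers and checksums:

```python
# Static one-to-one NAT sketch; real NAT also rewrites ports and checksums
# and times out idle mappings.
nat_table = {"10.0.1.5": "171.69.210.246"}               # private -> global
reverse = {pub: priv for priv, pub in nat_table.items()}

def translate_outbound(src, dst):
    """Rewrite the private source address on a packet leaving the site."""
    return nat_table[src], dst

def translate_inbound(src, dst):
    """Rewrite the global destination address on a returning packet."""
    return src, reverse[dst]

# 198.51.100.7 stands in for some remote Internet host (an assumption).
print(translate_outbound("10.0.1.5", "198.51.100.7"))    # ('171.69.210.246', '198.51.100.7')
print(translate_inbound("198.51.100.7", "171.69.210.246"))  # ('198.51.100.7', '10.0.1.5')
```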

The chief drawback of NAT is that it breaks a key assumption of the IP service model—that all nodes have globally unique addresses. It turns out that lots of applications and protocols rely on this assumption. Some protocols that run over IP (e.g., application protocols such as FTP) carry IP addresses in their messages. These addresses also need to be translated by a NAT box if the higher-layer protocol is to work properly, and thus NAT boxes become much more complex than simple IP header translators. They potentially need to understand an ever-growing number of higher-layer protocols. This in turn presents an obstacle to deployment of new applications.

Even more serious is the fact that NATs make it difficult for an outside device to initiate a connection to a device on the private side of the NAT, since, in the absence of an established mapping in the NAT device, there is no public address to which to send the connection request. This situation has complicated the deployment of many applications such as Voice over IP.

It is probably safe to say that networks would be better off without NAT, but its disappearance seems unlikely. While widespread deployment of IPv6 would probably help, NAT is now popular for a range of other reasons beyond its original purpose. For example, it becomes easier to switch providers if your entire internal network has (private) IP addresses that bear no relation to the provider's address space. And, while NAT boxes cannot be considered a true solution to security threats, the fact that the addresses behind a NAT box are not globally meaningful provides a level of protection against simple attacks. It will be interesting to see how NAT fares in the future as IPv6 deployment gathers momentum.

The first part turns out to be rather easy, since every host on a link must have a unique link-level address. For example, all hosts on an Ethernet have a unique 48-bit Ethernet address. This can be turned into a valid link-local address by adding the appropriate prefix from Table 4.1 (1111 1110 10) followed by enough 0s to make up 128 bits. For some devices—for example, printers or hosts on a small routerless network that do not connect to any other networks—this address may be perfectly adequate. Those devices that need a globally valid address depend on a router on the same link to periodically advertise the appropriate prefix for the link. Clearly, this requires that the router be configured with the correct address prefix, and that this prefix be chosen in such a way that there is enough space at the end (e.g., 48 bits) to attach an appropriate link-level address.
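As a concrete illustration of step 1, here is a sketch that builds an IPv6 link-local address from a 48-bit Ethernet address, following the simplified scheme the text describes (prefix, zero padding, then the MAC). The standardized method, modified EUI-64, additionally inserts ff:fe in the middle of the MAC and flips a bit; the MAC used below is a made-up example:

```python
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    """Simplified construction from the text: the link-local prefix
    (1111 1110 10), zero padding, then the 48-bit Ethernet address."""
    mac_bits = int(mac.replace(":", ""), 16)   # 48-bit MAC as an integer
    prefix = 0xFE80 << 112                     # fe80::/10 prefix, then zeros
    return ipaddress.IPv6Address(prefix | mac_bits)

print(link_local_from_mac("00:1a:2b:3c:4d:5e"))   # fe80::1a:2b3c:4d5e
```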

The ability to embed link-level addresses as long as 48 bits into IPv6 addresses was one of the reasons for choosing such a large address size. Not only does 128 bits allow the embedding, but it leaves plenty of space for the multilevel hierarchy of addressing that we discussed above.


URL: https://www.sciencedirect.com/science/article/pii/B9780123850591000041

Building and maintaining a secure network

Branden R. Williams, ... Derek Milroy, in PCI Compliance (Fourth Edition), 2015

Restricting connections

Requirement 1.3 of PCI DSS gets pretty granular about restricting connections between publicly accessible servers and any system component in scope for PCI. What does this mean for you? The database containing cardholder data cannot be in a publicly accessible DMZ. Stateful inspection firewalls must be used. You should never allow spoofing to occur. If traffic is not explicitly allowed in the rule set, it should be denied. Request for Comments (RFC) 1918 addresses are not allowed from the Internet, and IP masquerading should be used where appropriate, with Network Address Translation or Port Address Translation, so that those IPs cannot pass from the Internet into the DMZ and the internal IP addressing scheme is not exposed.

Note

The PCI DSS states in Section 1.3.6 that the firewall solution must provide stateful inspection. Most commercial and open-source firewalls have expanded beyond basic port-blocking techniques and have stateful inspection capabilities. Cisco provides this capability on top of basic access control lists (ACLs) in a feature introduced in IOS 12.0 called Reflexive Access Lists (RACLs), which can be useful when extending firewall capabilities to satellite locations, such as retail stores and distribution centers, on non-firewall-specific equipment.

RFC 1918, published in February 1996, addresses two major challenges facing the Internet. One is the concern within the Internet community that the globally unique address space (routable IP addresses) will be exhausted; as of this publication, it already has been. The other is that routing overhead could grow beyond the capabilities of ISPs because of the sheer number of small blocks announced to core Internet routers. A "private network" is a network that uses the RFC 1918 IP address space. Companies can allocate addresses from this space for their internal systems. This alleviates the need to assign a globally routable IP address to every computer, printer, and other device an organization uses, and it provides an easy way for these devices to remain sheltered from the Internet.

Tools

RFC 1918 space is often quoted and misunderstood. According to the original RFC, which can be downloaded at www.faqs.org/rfcs/rfc1918.html, there are three blocks of IP addresses that are considered private and nonroutable over the Internet: 10.0.0.0–10.255.255.255 (10.0.0.0/8), 172.16.0.0–172.31.255.255 (172.16.0.0/12), and 192.168.0.0–192.168.255.255 (192.168.0.0/16). Any private networks in your corporation should be numbered within those allocations or, in rare cases, on non-RFC 1918 space that is owned by the company and not advertised to the Internet. Even that can be risky, as a fat-fingered change could cause the space to become publicly routable. Avoid using IP space that is publicly routable but does not belong to you, as doing so is very dangerous.
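A quick way to validate that an address falls inside one of those three blocks, sketched with Python's standard ipaddress module (we check the blocks explicitly rather than relying on is_private, which also matches loopback and link-local space):

```python
import ipaddress

RFC1918_BLOCKS = [ipaddress.ip_network(n)
                  for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the three RFC 1918 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

print(is_rfc1918("172.31.255.1"))   # True: inside 172.16.0.0/12
print(is_rfc1918("172.32.0.1"))     # False: just past the end of the /12
```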

Requirements 1.3.4 and 1.3.8 dictate preventing internal address space from passing through choke points into the DMZ or internal network, so that the same addresses cannot appear on both sides of an interface. Some devices call this "anti-spoofing" technology, mainly because an old trick for getting around firewalls is to spoof internal IP addresses from external hosts. Internal addresses originating from the external side and trying to come in to the DMZ or internal network should raise a red flag in the device's logs. The firewall rule set should only allow valid Internet traffic access to the DMZ, and vice versa. Requirements 1.3.2–1.3.3 add more color, restricting traffic from the Internet to only those addresses that are in the DMZ and blocking direct inbound routes from untrusted networks into the cardholder environment.
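The anti-spoofing rule reduces to a direction check: a packet arriving on the external interface must not carry an internal source address. A minimal sketch of that check follows; the interface name and the internal prefix are illustrative assumptions, not PCI DSS requirements:

```python
import ipaddress

# Assumed internal numbering for this illustration.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def allow_ingress(interface: str, src: str) -> bool:
    """Drop packets that arrive on the external interface yet claim an
    internal (spoofed) source address."""
    claims_internal = any(ipaddress.ip_address(src) in net
                          for net in INTERNAL_NETS)
    return not (interface == "external" and claims_internal)

print(allow_ingress("external", "203.0.113.9"))   # True: plausible Internet source
print(allow_ingress("external", "10.0.0.5"))      # False: spoofed internal address
```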

Why can't Internet traffic pass to the internal network? Because Requirement 1.3.7 requires the in-scope database to be on the internal network, segregated from the DMZ. The cardholder database should never be able to connect directly to the Internet. Only front-end servers or services should be accessible to the public. These servers and services access the database and return the required information on behalf of the requester, much like a proxy. This prevents direct access to the database.

In recent years, some organizations have wanted to use cloud-based (really, SaaS-based) solutions instead of traditional software. For example, Office 365 or Google Apps may replace some common office software. However, if the desktop is located inside the CDE, using such applications may require direct access to the Internet, thus violating PCI DSS requirements.

Warning

There is no reason whatsoever to allow a database or other application to be directly accessed from the Internet. Along the same lines, there is no reason to allow a database or other application to directly access the Internet, bypassing the DMZ. This could leave cardholder information vulnerable to unauthorized access. It is just as risky to allow a database server to have two network interfaces, one on the DMZ and one on the internal network, even if no actual routing takes place. Multihomed servers effectively remove the security a DMZ is designed to provide. Can it be done? Sure. But it is another compromise point that wouldn't normally exist.


URL: https://www.sciencedirect.com/science/article/pii/B9780128015797000054

A survey on methods to provide interdomain multipath transmissions

Robert Wójcik, ... Krzysztof Wajda, in Computer Networks, 2016

8 A New Internet routing architecture (NIRA)

The main goal of NIRA is to give end users the ability to choose the sequence of Internet service providers (the domain-level route) a packet traverses. The authors believe that their solution will foster competition between ISPs, and that users will gain from improved end-to-end performance and reliability as well as from new, enhanced services. However, they are also aware that it can lead to route oscillation or suboptimal route selection. To make the design more tractable, NIRA supports user choice only at the domain level rather than the router level.

The NIRA architecture was first introduced in [19]. Yang presented it as a viable technical solution covering a broad range of issues: (i) deployment feasibility, (ii) efficient route representation, (iii) route discovery, (iv) failure handling, (v) provider compensation. In [20], Yang et al. evaluated the design of NIRA using a combination of simulation, network measurements and analysis.

8.1 Design overview

In the NIRA design, AS domains are divided into two regions: Core and Access. The Core consists of Tier-1 providers, which do not purchase transit service from other ISPs. The Access region is a chain of providers between users and the Core (called the user's up-graph). The customer-provider business relationship is most common in this region, but peer-to-peer relationships can also be present. A domain-level route is constructed from the up-graphs of the sender and the receiver, and such a route is said to be valley-free. An example route from U1 to U2 is presented in Fig. 5 with bold arrows.


Fig. 5. Example of the provider-rooted hierarchical addressing in NIRA.

8.2 Route representation scheme

NIRA splits an end-to-end route into two parts: (i) a sender part, and (ii) a receiver part. Both parts are represented using addresses; this means that to send and forward packets, routers have to check not only the destination address, but the source address as well. It is worth noting that, contrary to source routing, NIRA supports user choice without expanding packet headers.

The authors decided to use a provider-rooted addressing scheme based on IPv6 to encode a route that connects the user to the Core. Fig. 5 shows an example of address assignment. AS10 in the Core has a globally unique address prefix, 1::/16, and allocates prefixes 1:1::/32 and 1:2::/32 to its customers AS200 and AS300, respectively. ASes continue to assign prefixes to their customers until end users are reached. As a result, the route AS100-AS200-AS10-AS300-AS400 between U1 and U2 is uniquely represented by two addresses: 1:1:1::1000 and 1:2:1::2000. This is a basic example, i.e., without peer-to-peer relationships in the Access region, but the NIRA architecture also covers such cases. Interested readers can find more details in [19] and [20].
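The prefix delegation in Fig. 5 can be reproduced with ordinary IPv6 prefix arithmetic. A sketch using Python's ipaddress module, following the allocations named in the text:

```python
import ipaddress
from itertools import islice

core = ipaddress.ip_network("1::/16")   # AS10's globally unique prefix

# AS10 delegates /32 prefixes to its customers; the second and third /32
# subnets of 1::/16 are 1:1::/32 (AS200) and 1:2::/32 (AS300).
as200, as300 = islice(core.subnets(new_prefix=32), 1, 3)
print(as200, as300)                     # 1:1::/32 1:2::/32

# U1's and U2's addresses encode the two halves of the route
# AS100-AS200-AS10-AS300-AS400.
u1 = ipaddress.ip_address("1:1:1::1000")
u2 = ipaddress.ip_address("1:2:1::2000")
print(u1 in as200, u2 in as300)         # True True
```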

8.3 Route discovery

In order to bootstrap communication, the sender needs to discover his and the destination’s up-graphs. Then the user can use a source and destination address to encode an end-to-end route and change routes by changing addresses.

NIRA provides two mechanisms for route discovery: (i) the Topology Information Propagation Protocol (TIPP) and (ii) the Name-to-Route Resolution Service (NRRS), used to discover the sender's and the receiver's up-graphs, respectively. With the help of TIPP, providers propagate to a user his addresses and the routes associated with those addresses; TIPP also informs users of changes in the domain-level topology. NRRS maps the name of a destination to the route segments the destination is using. When a user wants to be reachable by others, he has to register with NRRS the route segments corresponding to the addresses obtained from TIPP. The user is also responsible for updating his entries in NRRS upon receiving topology change information from TIPP.
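Functionally, NRRS behaves like a name-to-route-segment registry, analogous to DNS. A toy sketch of the lookup flow follows; the name, the segment value, and the dict-based storage are illustrative assumptions, not NIRA's wire format:

```python
# Toy NRRS: a registry mapping destination names to the route segments
# (addresses) the destination registered after learning them from TIPP.
nrrs = {}

def register(name, segments):
    """Called by a user to publish (or refresh) its route segments."""
    nrrs[name] = list(segments)

def resolve(name):
    """Called by a sender to learn the receiver's registered segments."""
    return nrrs.get(name, [])

register("u2.example", ["1:2:1::2000"])
print(resolve("u2.example"))            # ['1:2:1::2000']
```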

After solving the bootstrap problem and successful transmission of the first packet, two users can exchange all possible routes they have and agree to use one they both like.

8.4 Failure handling

To use discovered routes successfully, a user has to know whether they are failure free. The mechanism that provides failure discovery consists of a combination of proactive and reactive feedback. A user is immediately notified about any changes in his up-graph. Thus, during the communication initiation, the user knows which of his routes are available.

In order to reduce the number of TIPP messages users receive, they are not propagated globally (i.e. the user receives only messages related to his providers’ domains). As a consequence of this rule, a user does not know the availability of routes on a destination’s up-graph. Thus, to discover route failure, a sender node has to rely on reactive mechanisms such as: (i) a router feedback — a router in a network has to notify the sender when it notices that the route specified in the packet header is unavailable; (ii) a time-out — in cases when the router is overloaded or the route between router and sender is broken, a sender uses a time-out mechanism to detect route failure. The former solution provides fast route fail-over and allows the user to switch to a new route in a period on the order of a round trip time. In the latter solution, switching time depends on the time-out value.

As the reactive notifications can increase the time of connection initialization, it is advised that users should cache states of recently used routes and use only ones that are available. In addition, users or ISPs can employ any mechanism to discover route availability, such as monitoring routes by sending a probe.

8.5 Provider compensation

No technical design will be deployed without a practical payment scheme; if providers cannot benefit from giving users the ability to choose routes, they will not allow it. Aware of this fact, the authors addressed this issue as well.

It is not feasible to sign a contract with every ISP in the world, so to make the payment scheme practical, NIRA constrains users to choosing routes only through providers they have agreed to pay under bilateral contracts.

Yang proposed two compensation schemes: (i) Direct Business Relationships — directly connected ISPs sign the agreement, and monitor and charge the customer differently based on the routes he/she uses. Some mechanism of policy checking is required to prevent usage of illegitimate route fragments; (ii) Indirect Business Relationships — users are able to sign a contract with non-directly connected providers. However, in this case, packets coming from one adjacent domain may come under various transit policies, and preventing route misuse becomes more complicated.


URL: https://www.sciencedirect.com/science/article/pii/S138912861630281X

How do GCP customers and Google Cloud Platform divide responsibility for security?

Security in Google Cloud is a shared responsibility: Google takes care of the lower parts of the stack (the physical infrastructure and managed platform services), and customers are responsible for the higher parts (their data, applications, access control, and configuration). It is not the case that all aspects of security are solely the customer's responsibility, nor that all aspects are Google's.

How are billing accounts applied to projects in Google Cloud? (Pick two)

Cloud Billing accounts are linked to and pay for projects. Cloud Billing accounts are connected to a Google Payments Profile. The Billing Account Admin can enable Billing Export, view cost/spend, set budgets and alerts, and link/unlink projects.

How is the resource hierarchy organized in Google Cloud?

Google Cloud resources are organized hierarchically, where the organization node is the root node in the hierarchy, the projects are the children of the organization, and the other resources are descendants of projects. You can set allow policies at different levels of the resource hierarchy.

At what level in the Google Cloud resource hierarchy is billing set up?

According to Architecting with Google Kubernetes Engine: Foundations (Week 1, Introduction to Google Cloud), configuring the billing account is possible at the folder level. However, according to the Overview of Cloud Billing concepts, billing is set up at the project level: billing accounts are linked to, and pay for, individual projects.