What is the name of a large computer that is used to process complex calculations?

A supercomputer is a computer with a much higher level of performance than a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Supercomputers contain tens of thousands of processors and can perform billions or even trillions of calculations per second; some can reach up to a hundred quadrillion FLOPS. Because information moves quickly between processors in a supercomputer (compared with distributed computing systems), they are well suited to real-time applications.
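
To give a concrete sense of what a FLOPS figure means, here is a minimal, hypothetical sketch (not how real benchmarks such as LINPACK work) that times a fixed number of floating-point multiply-add operations on a single core and divides by the elapsed time:

```python
import time

def estimate_flops(n: int = 10_000_000) -> float:
    """Crude single-core FLOPS estimate: time n multiply-add operations."""
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += x * x  # one multiply plus one add = 2 floating-point operations
    elapsed = time.perf_counter() - start
    return (2 * n) / elapsed

if __name__ == "__main__":
    flops = estimate_flops()
    print(f"~{flops / 1e6:.1f} megaFLOPS on one core of this machine")
```

Interpreted Python is far slower than the hand-tuned numerical kernels supercomputers run, so the printed figure only illustrates the unit, not the hardware's true capability.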

Supercomputers are used for data-intensive and computation-heavy scientific and engineering purposes such as quantum mechanics, weather forecasting, oil and gas exploration, molecular modeling, physical simulations, aerodynamics, nuclear fusion research and cryptanalysis. Early operating systems were custom-made for each supercomputer to increase its speed. In recent years, supercomputer architecture has moved away from proprietary, in-house operating systems to Linux. Although most supercomputers use a Linux-based operating system, each manufacturer optimizes its own Linux derivative for peak hardware performance. In 2017, half of the world’s top 50 supercomputers used SUSE Linux Enterprise Server.

The largest, most powerful supercomputers are actually multiple computers performing parallel processing. Today, many academic and scientific research firms, engineering companies and large enterprises that require massive processing power use cloud computing instead of supercomputers. High-performance computing (HPC) via the cloud is more affordable, scalable and faster to upgrade than on-premises supercomputers. Cloud-based HPC architectures can expand, adapt and shrink as business needs demand. SUSE Linux Enterprise High Performance Computing allows organizations to leverage their existing hardware for HPC computations and data-intensive operations.

Option 2: Supercomputers

The correct answer is Supercomputers.

Key Points

  • Supercomputers
    • A supercomputer is a computer that performs at or near the currently highest operational rate for computers.
    • It is used for scientific and engineering applications that must handle very large databases or do a great amount of computation.
    • They were introduced in the 1960s.
    • The US has long been the leader in the supercomputer field.
    • Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra.

Additional Information

  • Servers
    • A server is a computer that provides data to other computers. 
    • It may serve data to systems on a local area network or a wide area network over the Internet.
    • Many types of servers exist, including web servers, mail servers, and file servers.
    • Each type runs software specific to the purpose of the server.
    • A single server can serve multiple clients, and a single client can use multiple servers.
  • Laptops
    • A laptop is a small, portable personal computer with a screen and alphanumeric keyboard.
    • Laptops combine all the input/output components and capabilities of a desktop computer, including a display screen, small speakers, a keyboard, a pointing device, data storage, sometimes an optical disc drive, a processor, memory, and an operating system, into a single unit.
  • Mainframes
    • A mainframe computer is a computer used primarily by large organizations for critical applications and bulk data processing.
    • A mainframe computer is larger and has more processing power than some other classes of computers, such as minicomputers, servers, workstations, and personal computers.
    • Modern mainframes can run multiple different instances of operating systems at the same time.

Supercomputers, the world's largest and fastest computers, are primarily used for complex scientific calculations. The parts of a supercomputer are comparable to those of a desktop computer: they both contain hard drives, memory, and processors (circuits that process instructions within a computer program).

Although both desktop computers and supercomputers are equipped with similar processors, their speed and memory sizes are significantly different. For instance, a desktop computer built in the year 2000 normally has a hard disk data capacity of between 2 and 20 gigabytes and one processor with tens of megabytes of random access memory (RAM), just enough to perform tasks such as word processing, web browsing, and video gaming. Meanwhile, a supercomputer of the same time period has thousands of processors, hundreds of gigabytes of RAM, and hard drives that allow for hundreds, and sometimes thousands, of gigabytes of storage space.

The supercomputer's large number of processors, enormous disk storage, and substantial memory greatly increase the power and speed of the machine. Although desktop computers can perform millions of floating-point operations per second (megaflops), supercomputers can perform at speeds of billions of operations per second (gigaflops) and trillions of operations per second (teraflops).
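
To show how such figures scale with hardware, theoretical peak performance is commonly estimated as processors × clock rate × floating-point operations completed per cycle. The sketch below uses made-up illustrative numbers, not the specifications of any machine mentioned in this article:

```python
def peak_flops(processors: int, clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = processors x clock rate x FLOPs completed per cycle."""
    return processors * clock_hz * flops_per_cycle

# Hypothetical machine: 1,000 processors at 1 GHz, each finishing 2 FLOPs per cycle.
print(f"{peak_flops(1_000, 1e9, 2) / 1e12:.1f} teraflops")  # prints 2.0 teraflops
```

Real machines sustain only a fraction of this peak, which is why measured benchmark results rather than peak figures are used to rank supercomputers.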

Evolution of Supercomputers

Many current desktop computers are actually faster than the first supercomputer, the Cray-1, which was developed by Cray Research in the mid-1970s. The Cray-1 was capable of computing at 167 megaflops by using a form of supercomputing called vector processing, which consists of rapid execution of instructions in a pipelined fashion. Contemporary vector processing supercomputers are much faster than the Cray-1, but an ultimately faster method of supercomputing was introduced in the mid-1980s: parallel processing. Applications that use parallel processing are able to solve computational problems by simultaneously using multiple processors.

Using the following scenario as a comparative example, it is easy to see why parallel processing is becoming the preferred supercomputing method. If you were preparing ice cream sundaes for yourself and nine friends, you would need ten bowls, ten scoops of ice cream, ten drizzles of chocolate syrup, and ten cherries. Working alone, you would take ten bowls from the cupboard and line them up on the counter. Then, you would place one scoop of ice cream in each bowl, drizzle syrup on each scoop, and place a cherry on top of each dessert. This method of preparing sundaes would be comparable to vector processing. To get the job done more quickly, you could have some friends help you in a parallel processing method. If two people prepared the sundaes, the process would be twice as fast; with five it would be five times as fast; and so on.

Conversely, if five people will not fit in your small kitchen, it would be easier to use vector processing and prepare all ten sundaes yourself. This same analogy holds true with supercomputing. Some researchers prefer vector computing because their calculations cannot be readily distributed among the many processors on parallel supercomputers. But if a researcher needs a supercomputer that calculates trillions of operations per second, parallel processors are preferred, even though programming for a parallel supercomputer is usually more complex.
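
To make the distinction concrete in code, the hypothetical sketch below prepares the ten "sundaes" one at a time and then divides the same work among worker processes with Python's multiprocessing module; it illustrates only the parallel half of the analogy, since true vector pipelining happens inside the hardware:

```python
from multiprocessing import Pool

def prepare_sundae(bowl: int) -> str:
    """One unit of work: scoop, drizzle, and cherry for a single bowl."""
    return f"sundae {bowl}: ice cream + syrup + cherry"

if __name__ == "__main__":
    bowls = range(10)

    # One cook does every bowl in turn.
    serial = [prepare_sundae(b) for b in bowls]

    # Five "friends" (worker processes) each take a share of the bowls.
    with Pool(processes=5) as pool:
        parallel = pool.map(prepare_sundae, bowls)

    assert serial == parallel  # same result, but the work was shared
```

As in the kitchen, the parallel version only pays off when each unit of work is substantial enough to outweigh the cost of coordinating the workers.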

Applications of Supercomputers

Supercomputers are so powerful that they can provide researchers with insight into phenomena that are too small, too big, too fast, or too slow to observe in laboratories. For example, astrophysicists use supercomputers as "time machines" to explore the past and the future of our universe. A supercomputer simulation was created in 2000 that depicted the collision of two galaxies: our own Milky Way and Andromeda. Although this collision is not expected to happen for another three billion years, the simulation allowed scientists to run the experiment and see the results now. This particular simulation was performed on Blue Horizon, a parallel supercomputer at the San Diego Supercomputer Center. Using 256 of Blue Horizon's 1,152 processors, the simulation demonstrated what will happen to millions of stars when these two galaxies collide. This would have been impossible to do in a laboratory.

Another example of supercomputers at work is molecular dynamics (the way molecules interact with each other). Supercomputer simulations allow scientists to dock two molecules together to study their interaction. Researchers can determine the shape of a molecule's surface and generate an atom-by-atom picture of the molecular geometry. Molecular characterization at this level is extremely difficult, if not impossible, to perform in a laboratory environment. However, supercomputers allow scientists to simulate such behavior easily.

Supercomputers of the Future

Research centers are constantly delving into new applications like data mining to explore additional uses of supercomputing. Data mining is a class of applications that look for hidden patterns in a group of data, allowing scientists to discover previously unknown relationships among the data. For instance, the Protein Data Bank at the San Diego Supercomputer Center is a collection of scientific data that provides scientists around the world with a greater understanding of biological systems. Over the years, the Protein Data Bank has developed into a web-based international repository for three-dimensional molecular structure data that contains detailed information on the atomic structure of complex molecules. The three-dimensional structures of proteins and other molecules contained in the Protein Data Bank and supercomputer analyses of the data provide researchers with new insights on the causes, effects, and treatment of many diseases.

Other modern supercomputing applications involve the advancement of brain research. Researchers are beginning to use supercomputers to provide them with a better understanding of the relationship between the structure and function of the brain, and how the brain itself works. Specifically, neuroscientists use supercomputers to look at the dynamic and physiological structures of the brain. Scientists are also working toward development of three-dimensional simulation programs that will allow them to conduct research on areas such as memory processing and cognitive recognition.

In addition to new applications, the future of supercomputing includes the assembly of the next generation of computational research infrastructure and the introduction of new supercomputing architectures. Parallel supercomputers have many processors, distributed and shared memory, and many communications parts; we have yet to explore all of the ways in which they can be assembled. Supercomputing applications and capabilities will continue to develop as institutions around the world share their discoveries and researchers become more proficient at parallel processing.

see also Animation; Parallel Processing; Simulation.

Sid Karin and Kimberly Mann Bruch

Bibliography

Jortberg, Charles A. The Supercomputers. Minneapolis, MN: Abdo and Daughters Pub., 1997.

Karin, Sid, and Norris Parker Smith. The Supercomputer Era. Orlando, FL: Harcourt Brace Jovanovich, 1987.

Internet Resources

Dongarra, Jack, Hans Meuer, and Erich Strohmaier. Top 500 Supercomputer Sites. University of Mannheim (Germany) and University of Tennessee. <http://www.top500.org/>

San Diego Supercomputer Center. SDSC Science Discovery. <http://www.sdsc.edu/discovery/>

BRIAN HOYLE

A supercomputer is a powerful computer that possesses the capacity to store and process far more information than is possible using a conventional personal computer.

An illustrative comparison can be made between the hard drive capacity of a personal computer and a supercomputer. Hard drive capacity is measured in terms of gigabytes. A gigabyte is one billion bytes. A byte is a unit of data that is eight binary digits (i.e., 0s and 1s) long; this is enough data to represent a number, letter, or a typographic symbol. Premium personal computers have a hard drive that is capable of storing on the order of 30 gigabytes of information. In contrast, a supercomputer has a capacity of 200 to 300 gigabytes or more.
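
As a quick worked check of these units (using only the figures quoted above), the storage comparison can be written out directly:

```python
BITS_PER_BYTE = 8            # a byte is eight binary digits
GIGABYTE = 1_000_000_000     # a gigabyte is one billion bytes

desktop_bytes = 30 * GIGABYTE         # premium personal computer, ~30 GB
supercomputer_bytes = 300 * GIGABYTE  # supercomputer, upper end of 200-300 GB

print(desktop_bytes)                         # 30000000000, i.e. 30 billion bytes
print(desktop_bytes * BITS_PER_BYTE)         # 240000000000 binary digits
print(supercomputer_bytes // desktop_bytes)  # 10: ten times the desktop capacity
```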

Another useful comparison between supercomputers and personal computers is in the number of processors in each machine. A processor is the circuitry responsible for handling the instructions that drive a computer. Personal computers have a single processor. The largest supercomputers have thousands of processors.

This enormous computation power makes supercomputers capable of handling large amounts of data and processing information extremely quickly. For example, in April 2002, a Japanese supercomputer containing 5,104 processors established a calculation speed record of 35,600 gigaflops (a gigaflop is one billion mathematical calculations per second). This exceeded the old record held by the ASCI White-Pacific supercomputer located at the Lawrence Livermore National Laboratory in Livermore, California. The Livermore supercomputer, which is equipped with over 7,000 processors, achieved 7,226 gigaflops.

These speeds are a far cry from the first successful supercomputer, the CDC 6600, which was designed by Seymour Cray at Control Data Corporation in 1964 (Cray later founded Cray Research). His computer had a speed of 9 megaflops, thousands of times slower than present-day versions. Still, at that time, the CDC 6600 was an impressive advance in computer technology.

Beginning around 1995, another approach to designing supercomputers appeared. In grid computing, thousands of individual computers are networked together, even via the Internet. The combined computational power can exceed that of the all-in-one supercomputer at far less cost. In the grid approach, a problem can be broken down into components, and the components can be parceled out to the various computers. As the component problems are solved, the solutions are pieced back together mathematically to generate the overall solution.
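
As a rough illustration of the split-and-recombine idea described here (worker processes stand in for the networked computers of a real grid; this is a sketch of the idea, not grid software), the example below breaks one large summation into independent chunks, solves each chunk separately, and pieces the partial results back together:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk: range) -> int:
    """Work handed to one participating computer: sum its own slice."""
    return sum(chunk)

if __name__ == "__main__":
    n, machines = 10_000_000, 4
    step = n // machines
    # Break the problem into independent components, one per computer.
    chunks = [range(i * step, (i + 1) * step) for i in range(machines)]

    with ProcessPoolExecutor(max_workers=machines) as pool:
        partials = list(pool.map(partial_sum, chunks))

    total = sum(partials)              # piece the partial solutions back together
    assert total == n * (n - 1) // 2   # matches the closed-form answer
```

The same scatter-gather pattern underlies real grid projects, although they add scheduling, fault tolerance, and network transport that this sketch omits.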

The phenomenally fast calculation speeds of present-day supercomputers essentially correspond to "real time," meaning an event can be monitored or analyzed as it occurs. For example, a detailed weather map, which would take a personal computer several days to compile, can be compiled on a supercomputer in just a few minutes.

Supercomputers like the Japanese version are built to model events such as climate change, global warming, and earthquake patterns. Increasingly, however, supercomputers are being used for security purposes such as the analysis of electronic transmissions (i.e., email, faxes, telephone calls) for codes. For example, a network of supercomputers and satellites called Echelon is used to monitor electronic communications in the United States, Canada, the United Kingdom, Australia, and New Zealand. The stated purpose of Echelon is to combat terrorism and organized crime activities.

The next generation of supercomputers is under development. Three particularly promising technologies are being explored. The first of these is optical computing, in which light is used instead of electrons to carry information. Because light moves much faster than electrons, the speed of transmission is greater.

The second technology is known as DNA computing. Here, calculations are performed by recombining DNA strands in different sequences. The sequence(s) that are favored and persist represent the optimal solution. Solutions to problems can be deduced even before the problem has actually appeared.

The third technology is called quantum computing. Properties of atoms or nuclei, designated as quantum bits, or qubits, would serve as the computer's processor and memory. A quantum computer would be capable of doing a computation by working on many aspects of the problem at the same time, on many different numbers at once, and then using these partial results to arrive at a single answer. For example, deciphering the correct code from a 400-digit number would take a supercomputer millions of years. However, a quantum computer about the size of a teacup could do the job in about a year.

FURTHER READING:

BOOKS:

Stork, David G. (ed.), and Arthur C. Clarke. HAL's Legacy: 2001's Computer Dream and Reality. Boston: MIT Press, 1998.

ELECTRONIC:

Cray Corporation. "What Is a Supercomputer?" Supercomputing. 2002. <http://www.cray.com/supercomputing> (15 December 2002).

The History of Computing Foundation. "Introduction to Supercomputers." Supercomputers. October 13, 2002. <http://www.thocp.net/hardware/supercomputers.htm> (15 December 2002).

SEE ALSO

Computer Hardware Security
Information Warfare

supercomputer A class of very powerful computers that have extremely fast processors, currently (2004) capable of performing several Tflops (1 Tflop = 10¹² floating-point operations per second; see flops); most are now multiprocessor systems (see also SMP, MPP). Large main-memory capacity and long word lengths are the other main characteristics. Supercomputers are used, for example, in meteorology, engineering, nuclear physics, and astronomy. Several hundred are in operation worldwide at present. Principal manufacturers are Cray Research and the Japanese companies NEC, Fujitsu, and Hitachi.

su·per·com·put·er / ˈsoōpərkəmˌpyoōtər/ • n. a particularly powerful mainframe computer. DERIVATIVES: su·per·com·put·ing / -ˌpyoōting/ n.
