What Is a Computer?
Top Questions
What is a computer?
A computer is a machine that can store and process information. Most computers rely on a binary system, which uses two digits, 0 and 1, to perform tasks such as storing data, executing algorithms, and displaying information. Computers come in many different shapes and sizes, from handheld smartphones to supercomputers weighing more than 300 tons.
Who invented the computer?
What is the most powerful computer in the world?
As of November 2021, the most powerful computer in the world is the Japanese supercomputer Fugaku, developed by RIKEN and Fujitsu. It has been used to run COVID-19 simulations.
How do programming languages work?
Popular modern programming languages, such as JavaScript and Python, support multiple programming paradigms. Functional programming, which builds programs from mathematical-style functions that map inputs to outputs, is one of the more common paradigms used to give a computer its instructions.
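A minimal sketch of the functional style in Python (one of the languages named above): small pure functions that map inputs to outputs are combined without modifying shared state. The function names here are illustrative, not part of any particular library.

```python
# Functional style: small pure functions that map inputs to outputs,
# combined without modifying any shared state.

def square(x):
    return x * x

def is_even(x):
    return x % 2 == 0

numbers = [1, 2, 3, 4, 5, 6]

# map/filter apply pure functions across the data instead of looping with mutation.
even_squares = list(map(square, filter(is_even, numbers)))

print(even_squares)  # [4, 16, 36]
```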
What can computers do?
The most powerful computers can perform extremely complex tasks, such as simulating nuclear weapon experiments and predicting the development of climate change. Quantum computers, machines that handle large numbers of calculations through quantum parallelism (derived from superposition), promise to perform even more complex tasks once fully developed.
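As a loose illustration of quantum parallelism, the sketch below (plain Python with NumPy, simulating rather than using a quantum machine) represents an n-qubit register as a vector of 2**n amplitudes; putting every qubit into superposition with a Hadamard gate spreads the state over all classical values at once.

```python
import numpy as np

# Classical simulation of an n-qubit register: its state is a vector
# of 2**n complex amplitudes, one per classical bit string.
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in the all-zeros state

# Hadamard gate on one qubit: maps 0 and 1 to equal superpositions.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Applying H to every qubit yields a uniform superposition over all
# 2**n values, the source of "quantum parallelism".
full_H = H
for _ in range(n - 1):
    full_H = np.kron(full_H, H)
state = full_H @ state

print(np.round(np.abs(state) ** 2, 3))  # each of the 8 outcomes has probability 1/8
```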
Are computers conscious?
computer, device for processing, storing, and displaying information.
Computer once meant a person who did computations, but now the term almost universally refers to automated electronic machinery. The first section of this article focuses on modern digital electronic computers and their design, constituent parts, and applications. The second section covers the history of computing. For details on computer architecture, software, and theory, see computer science.
Computing basics
The first computers were used primarily for numerical calculations. However, as any information can be numerically encoded, people soon realized that computers are capable of general-purpose information processing. Their capacity to handle large amounts of data has extended the range and accuracy of weather forecasting. Their speed has allowed them to make decisions about routing telephone connections through a network and to control mechanical systems such as automobiles, nuclear reactors, and robotic surgical tools. They are also cheap enough to be embedded in everyday appliances and to make clothes dryers and rice cookers “smart.” Computers have allowed us to pose and answer questions that could not be pursued before. These questions might be about DNA sequences in genes, patterns of activity in a consumer market, or all the uses of a word in texts that have been stored in a database. Increasingly, computers can also learn and adapt as they operate.
Computers also have limitations, some of which are theoretical. For example, there are undecidable propositions whose truth cannot be determined within a given set of rules, such as the logical structure of a computer. Because no universal algorithmic method can exist to identify such propositions, a computer asked to obtain the truth of such a proposition will (unless forcibly interrupted) continue indefinitely—a condition known as the “halting problem.” (See Turing machine.) Other limitations reflect current technology. Human minds are skilled at recognizing spatial patterns—easily distinguishing among human faces, for instance—but this is a difficult task for computers, which must process information sequentially, rather than grasping details overall at a glance. Another problematic area for computers involves natural language interactions. Because so much common knowledge and contextual information is assumed in ordinary human communication, researchers have yet to solve the problem of providing relevant information to general-purpose natural language programs.
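The diagonal argument behind the halting problem can be sketched in Python. The function halts below is hypothetical (no such general checker can exist); assuming one did, the program paradox would contradict it.

```python
# Hypothetical universal halting checker -- assumed for the argument,
# not something that can actually be implemented.
def halts(program, argument):
    """Return True if program(argument) would eventually stop."""
    ...  # no universal procedure for this can exist

def paradox(program):
    # Do the opposite of whatever halts() predicts about running
    # the program on its own source.
    if halts(program, program):
        while True:      # loop forever if it is predicted to halt
            pass
    else:
        return           # halt if it is predicted to loop forever

# Asking whether paradox halts when given itself yields a contradiction
# either way, so no algorithm 'halts' can decide the question for every program.
```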
Analog computers use continuous physical magnitudes to represent quantitative information. At first they represented quantities with mechanical components (see differential analyzer and integrator), but after World War II voltages were used; by the 1960s digital computers had largely replaced them. Nonetheless, analog computers, and some hybrid digital-analog systems, continued in use through the 1960s in tasks such as aircraft and spaceflight simulation.
One advantage of analog computation is that it may be relatively simple to design and build an analog computer to solve a single problem. Another advantage is that analog computers can frequently represent and solve a problem in “real time”; that is, the computation proceeds at the same rate as the system being modeled by it. Their main disadvantages are that analog representations are limited in precision—typically a few decimal places but fewer in complex mechanisms—and general-purpose devices are expensive and not easily programmed.
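A digital sketch (Python, illustrative only) of what an analog integrator does: it accumulates a continuously varying quantity over time, here solving dy/dt = -y step by step. A mechanical or electronic integrator performs the same accumulation continuously, at the same rate as the system it models, but with limited precision.

```python
# Digital approximation of an analog integrator solving dy/dt = -y.
# An analog machine would do this with a continuously varying voltage
# or a rotating mechanical wheel rather than discrete steps.

dt = 0.001          # step size; an analog device has no such step
y = 1.0             # initial condition y(0) = 1
t = 0.0

while t < 1.0:
    dydt = -y       # the "wired-up" relationship between the signal and its rate
    y += dydt * dt  # integrate: accumulate the rate of change over time
    t += dt

print(round(y, 4))  # ~0.3677, close to the exact value e**-1 = 0.3679...
```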
In contrast to analog computers, digital computers represent information in discrete form, generally as sequences of 0s and 1s (binary digits, or bits). The modern era of digital computers began in the late 1930s and early 1940s in the United States, Britain, and Germany. The first devices used switches operated by electromagnets (relays). Their programs were stored on punched paper tape or cards, and they had limited internal data storage. For historical developments, see the section Invention of the modern computer.
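A short Python sketch of this discrete representation: any number or character can be encoded as a sequence of bits, which is all a digital computer stores and manipulates.

```python
# Everything in a digital computer reduces to sequences of bits (0s and 1s).

n = 19
print(format(n, 'b'))         # '10011'  -- the integer 19 written in binary digits

text = "Hi"
bits = [format(byte, '08b') for byte in text.encode("ascii")]
print(bits)                   # ['01001000', '01101001'] -- 'H' and 'i' as 8-bit bytes
```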
During the 1950s and ’60s, Unisys (maker of the UNIVAC computer), International Business Machines Corporation (IBM), and other companies made large, expensive computers of increasing power. They were used by major corporations and government research laboratories, typically as the sole computer in the organization. In 1959 the IBM 1401 computer rented for $8,000 per month (early IBM machines were almost always leased rather than sold), and in 1964 the largest IBM S/360 computer cost several million dollars.
These computers came to be called mainframes, though the term did not become common until smaller computers were built. Mainframe computers were characterized by having (for their time) large storage capabilities, fast components, and powerful computational abilities. They were highly reliable, and, because they frequently served vital needs in an organization, they were sometimes designed with redundant components that let them survive partial failures. Because they were complex systems, they were operated by a staff of systems programmers, who alone had access to the computer. Other users submitted “batch jobs” to be run one at a time on the mainframe.
Such systems remain important today, though they are no longer the sole, or even primary, central computing resource of an organization, which will typically have hundreds or thousands of personal computers (PCs). Mainframes now provide high-capacity data storage for Internet servers, or, through time-sharing techniques, they allow hundreds or thousands of users to run programs simultaneously. Because of their current roles, these computers are now called servers rather than mainframes.