
The Essence of Quantum Computing
Part 3 of a 3-Part Series

An artificial “neuron” can be as simple as a bit or a qubit, or as complex as you wish to make it. A neural net is a network of neurons, typically structured in arrayed layers: the first layer accepts inputs (e.g., image pixels), the intermediate layers form various combinations of those inputs (representing structures such as edges, geometric shapes, etc.), and a final output layer produces, say, a high-level classification of the image (dog, human, tree, etc.). A neural net must first be trained on vast amounts of data to develop its “intuition” in a given area of knowledge before it makes its debut, after which it can continue learning and become smarter and smarter. During the training phase, the neural net learns to make connections between inputs and outputs, and it uses that learning to determine the output for an unseen input. Tracing the path that connects an input to an output through layers of neurons essentially means doing massive amounts of matrix algebra, and such manipulations can be exponentially faster with qubits than with classical bits. A typical neural net that solves real-life problems will have billions of neurons. Further, unlike a set of classical bits that store information locally, a set of qubits stores far more information in the collective properties of those qubits. While a set of n classical bits can encode 2ⁿ different pieces of information, at any given time it can represent and process only one of them; the same number of qubits can concurrently represent and process 2ⁿ pieces of information by spreading the information among themselves.
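To make the counting concrete, here is a minimal Python sketch (numpy assumed; the names are illustrative, not from any quantum library) contrasting a classical register, which holds exactly one of 2ⁿ values, with an n-qubit state vector, which carries 2ⁿ amplitudes at once:

    import numpy as np

    # n classical bits hold exactly one of 2**n values at any moment.
    n = 3
    classical_register = 0b101                 # one value out of 2**3 = 8

    # n qubits are described by 2**n complex amplitudes; here we build
    # a uniform superposition over all 8 basis states via tensor products.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)   # single-qubit |+> state
    state = plus
    for _ in range(n - 1):
        state = np.kron(state, plus)           # joint state of n qubits

    print(state.size)                                   # 8 amplitudes
    print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))  # normalized: True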

So it is not surprising that Google, IBM, Microsoft, and several others are rushing into quantum computing and pouring money into resolving the technical hurdles that stand in the way of its comprehensively surpassing humans. Those hurdles include: (1) quantum computers operate on quantum states, not on human-readable data, and translating between the two is presently a giant task; (2) machine-learning algorithms must be noise-tolerant if they are to make sense of messy or incomplete input against a backdrop of red herrings; (3) loading the input into a quantum computer, i.e., putting classical data into a quantum state, is a task horrendous enough to be shunned; (4) once the data is fed into the computer, it must be stored in a manner that lets the quantum system interact with it without collapsing the ongoing calculation; and finally (5) measuring (reading) the output is hard, because a measurement returns only a single number at a time, probabilistically, and collapses the system in the process, which means the problem must be run multiple times before all the information can be extracted.
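Hurdle (5) can be illustrated with a small simulation. The following Python sketch (numpy assumed; the state is a made-up example) shows why repeated runs are needed: each measurement of a two-qubit register yields just one of the four basis states, so the underlying distribution can only be estimated from the statistics of many “shots”:

    import numpy as np

    rng = np.random.default_rng(0)

    # A made-up 2-qubit state; measuring returns one of the basis
    # states |00>, |01>, |10>, |11> with probability |amplitude|**2.
    state = np.array([0.1, 0.7, 0.1, 0.1], dtype=complex)
    state /= np.linalg.norm(state)
    probs = np.abs(state) ** 2

    # Each run collapses the state to a single outcome, so the output
    # distribution has to be estimated by re-running many shots.
    shots = 1000
    outcomes = rng.choice(4, size=shots, p=probs)
    counts = np.bincount(outcomes, minlength=4)
    for basis in range(4):
        print(f"|{basis:02b}>: measured {counts[basis] / shots:.3f}, "
              f"true {probs[basis]:.3f}")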

Of course, as we saw in Part II, all is not lost. For certain classes of problems, quantum interference, exploited through cleverly choreographed unitary and measurement operations, makes it possible to cancel the incorrect answers so that only the correct answer remains, waiting to be measured. Another important case is where brute-force data entry is not needed because the data is the product of a physics or chemistry experiment and flows into the computer seamlessly. An even more intriguing possibility is that quantum machine-learning systems may help in designing their successors. It is intriguing because there is reason to believe that human brains may themselves be hybrid quantum-classical computers, with the classical part deftly handling input and output. It may well be that quantum machine-learning systems are a better way to study human cognition, which depends on context, beliefs (biases), and perceived available choices (wave-function collapse?) based on the questions we ask and the sequence in which they are asked.
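One well-known instance of this interference trick is Grover’s search. In the minimal Python sketch below (numpy assumed; for two qubits a single Grover iteration happens to be exact), one oracle call followed by a “reflection about the mean” drives the amplitudes of all wrong answers to zero, leaving only the correct one to be measured:

    import numpy as np

    N = 4                                  # 2 qubits, 4 candidate answers
    marked = 2                             # index of the "correct" answer

    # Start with all four candidates in uniform superposition.
    state = np.ones(N) / np.sqrt(N)

    # Oracle: flip the sign of the marked state's amplitude.
    oracle = np.eye(N)
    oracle[marked, marked] = -1.0

    # Diffusion: reflect every amplitude about the mean amplitude.
    diffusion = 2.0 / N * np.ones((N, N)) - np.eye(N)

    # One Grover iteration; for N = 4 the cancellation is exact.
    state = diffusion @ oracle @ state
    print(np.round(np.abs(state) ** 2, 6))   # -> [0. 0. 1. 0.]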

14.2 Quantum computing and social responsibility


The central issue here is the war between calculatingly rational machines and rationalizing humans.

Now that quantum computing, biotechnology, artificial intelligence (AI), and novel materials have all moved past their initial teething troubles and matured to a level where their potential in industrial applications is no longer doubted, they are ready for exponential growth in applications. The world is now seeing a unique confluence of these advanced technologies, whose potential to serve mankind beneficially vastly overshadows anything else in the history of the world.

Automation in agriculture and industry caused a mass extinction of jobs and brought about profound societal changes that led to rapid urbanization. However, those job losses, and those later brought about by the Internet and the World Wide Web (WWW), were handsomely compensated for by new kinds of better-paid and more interesting jobs in the service sector and high-tech industries during the industrial era. The fledgling post-industrial era, starting around 1990, saw the rapid evolution of the Internet and digital technologies that opened the way for upward mobility of the masses, but it also brought often indiscriminate surveillance by government agencies and easier violations of privacy and of the right to dissent, from anywhere in the world, on any issue. Future AI advances could make such activities more widespread and more powerful. That is only the beginning.

