The speed of technological advancement



Proponents do not know when the singularity will come, but come it will, they say. When this occurs, the end of the human race might very well be upon us, they warn, citing a 2014 prediction by the late Stephen Hawking. According to Kurzweil, humans may then be fully replaced by AI, or by some hybrid of humans and machines.

A speed superintelligence describes an AI that can do everything a human can do, the only difference being that the machine runs faster. For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in about 30 physical seconds. Such a difference in information processing speed could drive the singularity.
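That figure is easy to check with back-of-the-envelope arithmetic (a minimal sketch; the million-fold speedup is the hypothetical above, not a measured quantity):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 physical seconds

def physical_time(subjective_seconds: float, speedup: float) -> float:
    """Wall-clock time needed for a given span of subjective experience,
    for a mind running `speedup` times faster than a human."""
    return subjective_seconds / speedup

# A million-fold speedup compresses a subjective year into ~31.5 seconds.
print(physical_time(SECONDS_PER_YEAR, 1_000_000))  # 31.536
```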

Some intelligence technologies, like “seed AI”, may also have the potential to make themselves not just faster but more efficient, by modifying their own source code. Each improvement would enable further improvements, which would in turn enable further improvements, and so on (a numerical sketch of this loop follows below). Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.
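Here is a minimal sketch of that compounding self-improvement loop, assuming a constant 20% gain per round of self-modification (the rate and the multiplicative growth law are arbitrary, illustrative assumptions, not a prediction):

```python
def self_improvement_trajectory(capability: float, gain: float, rounds: int) -> list[float]:
    """Model each round of self-modification as multiplying capability by
    (1 + gain), so every improvement enlarges the base for the next one."""
    trajectory = [capability]
    for _ in range(rounds):
        capability *= 1 + gain  # the improved system produces the next improvement
        trajectory.append(capability)
    return trajectory

# Twenty rounds at a 20% gain per round: capability grows roughly 38-fold.
print(self_improvement_trajectory(1.0, 0.2, 20)[-1])  # ~38.34
```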

Paul Allen argued for the opposite of accelerating returns, the complexity brake: the more progress science makes toward understanding intelligence, the more difficult it becomes to make additional progress. A study of patent counts shows that human creativity does not display accelerating returns but, as Joseph Tainter suggested in his The Collapse of Complex Societies, a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since. On this view, the growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.
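The disagreement between accelerating returns and the complexity brake can be made concrete with a toy model (purely illustrative; the logistic form and its parameters are assumptions, not fitted to the patent data):

```python
import math

def accelerating_returns(t: float) -> float:
    """Kurzweil-style exponential: output keeps doubling at a fixed rate."""
    return 2 ** t

def complexity_brake(t: float, ceiling: float = 100.0) -> float:
    """Tainter/Allen-style logistic: early growth looks exponential, but
    each unit of progress gets harder and output saturates at a ceiling."""
    return ceiling / (1 + math.exp(-(t - 5)))

# The two curves track each other early on, then diverge sharply.
for t in range(0, 11, 2):
    print(t, round(accelerating_returns(t)), round(complexity_brake(t), 1))
```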

Singularity proponents occasionally appeal to developments in artificial intelligence as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM’s Watson system for Jeopardy! But when we step back, we can see that overall AI-based capabilities haven’t been increasing exponentially either, at least when measured against the creation of a fully general human intelligence. A computer program that plays excellent chess can’t leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great sense of balance. By contrast, Frank S. Robinson predicts that once humans build a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to our own.

Although Kurzweil drew on Theodore Modis’ work on accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor. Intelligence explosion is a possible outcome of humanity building artificial general intelligence (AGI): an AGI may be capable of recursive self-improvement, leading to the rapid emergence of artificial superintelligence, the limits of which are unknown, shortly after the technological singularity is achieved. Whether or not an intelligence explosion occurs depends on three factors. (In the gravitational sense of the word, meanwhile, Figueras and his colleagues have demonstrated that naked singularities can show up when black holes collide; the researchers found, however, that such events in our universe do not produce the same result, since a collision always ends with the singularity still wrapped inside a black hole.)

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. The computer scientist and futurist Hans Moravec proposed in a 1998 book that the exponential growth curve could be extended back through earlier computing technologies that predate the integrated circuit.
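To see what this kind of extrapolation looks like, here is a naive Moore’s-law calculation (a toy model: the two-year doubling period and the 1971 Intel 4004 baseline of roughly 2,300 transistors are the usual textbook figures, and nothing here validates extending the curve indefinitely):

```python
def transistor_count(year: float, base_year: float = 1971,
                     base_count: float = 2_300, doubling_period: float = 2.0) -> float:
    """Naive Moore's-law extrapolation: the count doubles every
    `doubling_period` years, starting from the Intel 4004 baseline."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# ~7.7e10: roughly the order of magnitude of the largest 2021-era chips.
print(f"{transistor_count(2021):.2e}")
```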