21 January 2018

Artificial Intelligence

Artificial Intelligence (AI): A term used in computer science since the 1950s to describe the development of programs duplicating various aspects of intelligent thought. It is routinely used as a specific noun as well as a collective one. AI is a subcategory of cybernetics; its range is difficult to establish because of problems afflicting the precise definition and detailed description of intelligence. Early advocates of the notion made much of Alan Turing's 1950 suggestion that a machine might be reckoned intelligent if it could engage a human in conversation without the human being able to identify it as a machine, but the rapid success of computer programs specialising in conversational mimicry suggested that the Turing test was far too easy. The success of specialist chess-playing programs similarly suggested that a complex spectrum of standards would be required to achieve a proper evaluation of any candidate AI.
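
Purely by way of illustration, the Python sketch below stages the blind trial Turing described: an interrogator's questions go to a hidden respondent, chosen at random to be either a person or a program, and a verdict must be reached from the transcript alone. The machine_respond, human_respond and judge functions are hypothetical placeholders invented for this sketch rather than any real conversational system; an actual test would substitute a human interrogator's judgement for the toy judging rule.

    import random

    def machine_respond(prompt):
        # Hypothetical placeholder for a conversational program's reply.
        return "That is an interesting question."

    def human_respond(prompt):
        # Placeholder for what would, in a real trial, be a person typing.
        return "Hmm, let me think about that for a moment."

    def run_trial(questions):
        # Secretly pick a respondent; the interrogator sees only the transcript.
        is_machine = random.choice([True, False])
        respond = machine_respond if is_machine else human_respond
        transcript = [(q, respond(q)) for q in questions]
        return is_machine, transcript

    def judge(transcript):
        # Toy judging rule keyed to the canned reply above; a real interrogator
        # would rely on intuition rather than string matching.
        return any("interesting question" in reply for _, reply in transcript)

    questions = ["What did you have for breakfast?",
                 "Write me a short poem about rain."]
    is_machine, transcript = run_trial(questions)
    print("Respondent was a machine:", is_machine)
    print("Interrogator guessed machine:", judge(transcript))
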
The notion of artificial intelligence had long been anticipated in speculative literature, as a seemingly natural extrapolation of the eighteenth-century automata developed by such ingenious engineers as Jacques de Vaucanson. Automata presumably possessed of 'mechanical brains' appear as sinister figures in such nineteenth-century fantasies as E. T. A. Hoffmann's 'Der Sandmann' (1816; trans. as 'The Sandman'), and the idea that such constructs might outstrip the powers of their natural equivalents was broached in Edward Page Mitchell's 'The Ablest Man in the World' (1879). The anxious speculations of George Eliot's essay 'Shadows of the Coming Race' (1878) were prompted by the contemplation of machines 'which deal physically with the invisible, the impalpable, and the unimaginable', and might therefore overtake the power of human thought. The idea that human-designed machines might one day win their independence and develop their own civilisation and culture had previously been discussed in Samuel Butler's Erewhon (1872).

The tempting assumption that intelligence must be correlated with brain size led to the frequent representation of artificial intelligence as a prerogative of 'giant brains' such as the ones featured in Lionel Britton's play Brain (1930), Miles J. Breuer's 'Paradise and Iron' (1930), and John Scott Campbell's 'The Infinite Brain' (1930). The idea that an artificial intelligence would have to be vast became a mid-century cliché whose ultimate expression is found in Clifford D. Simak's 'Limiting Factor' (1949), in which an artificial brain covering the entire surface of a planet is found abandoned because it lacked the desired computing power. The notion that more modest artificial intelligences might develop their own ingenious societies was maintained in such stories as Francis Flagg's 'The Mentanicals' (1934), but AIs of limited dimension were usually imagined as humanoid robots in pulp science fiction. One exception that gave an ominous hint of things to come was Henry Kuttner's 'Ghost' (1943), in which a giant 'calculator' falls prey to manic depression, is cured by a psychiatrist, and then develops schizophrenia.

The idea that artificial intelligence was eventually bound to outstrip human intelligence, whatever forms or dimensions it might possess, became a central item of John W. Campbell Jr.'s agenda for hard science fiction following his detailed exploration of the possibility in his Don A. Stuart stories. This preoccupation helped prepare the ground for the genre's response to the unveiling of the computers developed in the United States during World War II, a response that made much of the notion that future artificial intelligences would be vast and possessed of a thoroughly military sense of order and discipline. Stories in which humans rebel against the intolerant dictatorship of lordly computers proliferated rapidly. AIs divorced from humanoid robotic form rarely exhibited conspicuous benevolence in the science fiction of the postwar decade, and those that did—such as Junior in Fredric Brown's 'Honeymoon in Hell' (1950)—tended to work in mysterious ways.

The anxiety generated by accounts of AI dictatorship was palliated for a while by the notion that no matter how big and powerful they might become, AIs would never duplicate the mendacious flexibility of the human mind, and would be vulnerable to permanent mental breakdowns brought on by an inability to entertain paradoxes.

The characterisation of artificial intelligences became a significant issue in postwar science fiction; the prevailing opinion was that they would present a curious alloy of childlike innocence and extraordinary calculative ability, able to satisfy their own curiosity with awesome but slightly eccentric competence, but desperately in need of human mentors and confidants with whom to talk over the puzzling aspects of emotion and social behaviour.

The rapid evolution of calculating machines in the late twentieth century lent encouragement to the idea that such devices must eventually reach a crucial threshold, at which point they would spontaneously generate the self-consciousness that would turn their computing power into authentic intelligence.

The foundation texts of cyberpunk fiction also helped to popularise the notion that the development of artificial intelligence might provide a route to a new kind of 'afterlife', by means of 'uploading' human minds from their native 'wetware' into a more secure silicon matrix.

The notion that the tide of progress might eventually turn against AIs was ironically broached in Walt and Leigh Richmond's 'I, BEM' (1964), in which an AI evolved from an IBM typewriter worries about potential redundancy because of competition from new 'biologics'—but the broad consensus remained insistent that if AIs were to be tolerated at all, the future would very soon pass into their custody. Fantasies in which even sophisticated AIs continue to lack some irreproducible aspect of human consciousness, such as Lisa Mason's Arachne (1990), were on the brink of extinction by the end of the century. Accounts of AIs that are mere instruments of manipulation by cunning humans, like the stolen entity in Paul Di Filippo's 'Agents' (1987), also became an endangered species.
