How Did Artificial Intelligence Develop

==Later Innovation==
In the early 19th century, Ada Lovelace realised that machines, which she and others called Analytical Engines, could be programmed to carry out more than the simple calculations demonstrated by Leibniz in the 17th century. Lovelace is often credited with writing the first code, through an algorithm she created to help artificially create music. In the late 19th and early 20th centuries, further developments in mathematics laid more of the intellectual foundations for AI. Gottlob Frege and George Boole advanced mathematical logic, which eventually led Alfred North Whitehead and Bertrand Russell to write <i>Principia Mathematica</i>, a three-volume work completed in 1913 that formally argued for formal, logic-based solutions to mathematical problems. Kurt Gödel demonstrated that formal logic is incomplete, but also that mathematical reasoning can be mechanised. In effect, this challenged mathematicians and engineers to devise more complex machines that could use logical reasoning to derive answers to questions.

Perhaps the most critical discovery for the foundation of computing and AI was Alan Turing's development of the Turing machine in 1936. This abstract machine manipulates mathematical symbols, such as 0 and 1, so that a solution can be derived using simple rules. The device could theoretically contain an infinite amount of tape, read and written one cell at a time, to find a solution. This description became the foundation of what would become computer memory and the central processing unit (CPU). The machine had only six simple operations, yet from these very complex processes could be built. Today, all computers use much of the basic logic discussed by Turing, and the process of calculation also developed into the idea of AI, as later theorists took ideas from the Turing machine to derive solutions to problems.<ref>For key developments in the 19th and early 20th centuries, see: Pearce, Q. L. 2011. <i>Artificial Intelligence</i>. Technology 360. Detroit: Lucent Books.</ref>
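The tape-and-rules description above can be sketched as a tiny simulator. This is a minimal illustration, not Turing's original formulation: the transition-table encoding, the use of a dictionary as an unbounded tape, and the bit-flipping example program are all assumptions made for the sketch.

```python
def run_turing_machine(program, tape, state="start", max_steps=1000):
    """Run a simple Turing machine.

    program maps (state, symbol) -> (write, move, next_state);
    a cell that has never been written reads as the blank symbol None.
    """
    cells = dict(enumerate(tape))  # the tape can grow in either direction
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head)
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return [cells[i] for i in sorted(cells) if cells[i] is not None]

# Illustrative program: flip every bit of the input, halting at the first blank.
flip = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
    ("start", None): (None, "R", "halt"),
}
print(run_turing_machine(flip, [1, 0, 1, 1]))  # [0, 1, 0, 0]
```

Even this toy version shows the key idea: a handful of simple read/write/move rules, applied repeatedly to a tape, is enough to carry out an arbitrary computation.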
World War II became one of the key turning points for AI, when code-breaking machines were needed to crack German and Japanese war codes. Both Turing and John von Neumann laid the foundations for code-breaking machines that helped the war effort and later solved other logic problems (Figure 2). After World War II, many speculated that a sort of artificial brain could be created using computers through formal, mathematical logic. In 1956, AI was established as a sub-field of computing research. Developments, which have continued to today, suggested that the neural links in the human brain could be replicated. The fields of neurology and human cognition became very influential in AI research, as early AI researchers used artificial neural networks and learning methods drawn from human experience to try to replicate forms of machine learning. Marvin Lee Minsky was the first to build a neural network machine that used cognitive research to design and apply the machine's functions, replicating human learning in a machine. This work heavily utilised research by Walter Pitts and Warren McCulloch, who demonstrated how artificial neural networks could allow machines to learn from experience and repeated action. Throughout the 1950s and 1960s, early AI machines were taught tasks such as playing chess or checkers, for example on the Ferranti Mark 1 and IBM 702 machines.<ref>For more on World War II and post-war developments in AI, see: Coppin, Ben. 2004. <i>Artificial Intelligence Illuminated</i>. 1st ed. Boston: Jones and Bartlett Publishers.</ref>
[[File:02102016 ENIAC programmers LA.2e16d0ba.fill-735x490.jpg|thumb|Figure 2. The ENIAC machine, built in the United States in the 1940s, was used to conduct numeric calculations that could help crack codes.]]
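The Pitts and McCulloch work mentioned above modelled a neuron as a simple threshold unit: it fires when the weighted sum of its binary inputs reaches a threshold. The sketch below is an illustration of that idea only; the particular weights and threshold are illustrative choices, not values from their paper.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A McCulloch-Pitts-style threshold neuron: output 1 if the weighted
    sum of binary inputs meets the threshold, else 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND:
# it fires only when both inputs are 1.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts([a, b], [1, 1], 2))
```

Networks of such units, as Pitts and McCulloch showed, can compute any logical function, which is what made the model so suggestive for machine learning research.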
