How Did Artificial Intelligence Develop
Revision as of 09:45, 6 September 2019
Artificial intelligence (AI) is increasingly pervasive in our world. Whether in social media applications, facial recognition, or the use of our credit cards and online security, we increasingly live in a world where AI is critical to modern life. We have heard warnings about AI taking jobs away or even threatening humanity. Nevertheless, the history of AI is linked not only with the history of computing but also with the thinkers who laid its foundations centuries ago.
The early intellectual foundations of AI can be traced to the mid-to-late 1st millennium BCE. During this time, in Greece, India, China, Babylonia, and perhaps elsewhere, philosophers and early mathematicians began conceptualizing artificial devices that could learn and perform tasks and calculations. Aristotle and Euclid reasoned that through syllogism, that is, a deductive, logic-based argument, a mechanical device could be taught to perform given tasks: if a given statement was known or understood, then something created could learn or determine how to derive a conclusion. Effectively, logic could be taught to artificial devices. While such philosophers reasoned this was possible, they understood that building such capabilities would not be easy.

Al-Khwarizmi, working in the 8th and 9th centuries CE, and whose name became the basis for the term algorithm, developed the rules and foundations of what became algebra. While serving as the leading mathematician and astronomer in the House of Wisdom in Baghdad, he derived solutions to linear and quadratic equations and saw that many calculations could be automated by a mechanical device. The Spanish philosopher Ramon Llull also developed the idea that machines could perform simple, repeatable logical tasks, so that work could be accomplished in an automated way. Gottfried Wilhelm Leibniz built on these ideas in the 17th century while he, along with Isaac Newton, was laying the foundations of modern calculus. Taking ideas from Llull, and working with engineers, Leibniz helped develop a basic machine, a sort of early calculator, that could add, subtract, multiply, and divide. This device became known as the stepped reckoner, a mechanical device that performed basic calculations through changes in its gears.
Thomas Hobbes and René Descartes also saw that logic and mathematical reasoning could be used to determine automatically whether a given position was true. They theorized that an algorithmic, automated approach could potentially test the validity of an argument using reason or mathematical logic. A key development during this time was the physical symbol system, which also became the basis for the mathematical symbols used in algorithmic notation today. This provided the mathematical and logical foundations, along with a standardized way to express them, on which AI's formalisms were later built.
In the early 19th century, Ada Lovelace realised that machines such as Charles Babbage's proposed Analytical Engine could be programmed to do more than the simple calculations demonstrated by Leibniz in the 17th century. Lovelace is often credited with writing the first computer program, an algorithm she created for the Analytical Engine, and she speculated that such machines could even be used to compose music. In the late 19th and early 20th centuries, further developments in mathematics built more intellectual foundations for AI. Gottlob Frege and George Boole further developed mathematical logic. This eventually led Alfred North Whitehead and Bertrand Russell to write the three-volume Principia Mathematica (1910–1913), which formally argued for formal, logic-based solutions to mathematical problems. Kurt Gödel demonstrated that formal logic is incomplete, but also that much of mathematical reasoning can be mechanized. Effectively, this challenged mathematicians and engineers to devise more complex machines that could use logical reasoning to derive answers to questions.

Perhaps the most critical discovery underlying both computing and AI was Alan Turing's description of the Turing machine in 1936. This abstract machine manipulates mathematical symbols, such as 0 and 1, on a theoretically infinite tape, deriving solutions by following simple rules. The description became the foundation of what would become computer memory and the central processing unit (CPU). The machine has just six simple operations, yet with these simple operations very complex processes can be built. Today, all computers still use much of the basic logic described by Turing, and later theorists drew on the Turing machine to derive solutions to problems, developing the process of calculation into the idea of AI.
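The tape-and-rules idea behind the Turing machine can be illustrated with a short sketch. The machine below is a hypothetical example, not Turing's original formulation: a lookup table maps each (state, symbol) pair to a symbol to write, a head movement, and a next state, and this particular rule table increments a binary number on the tape.

```python
# Minimal Turing machine sketch: rules map (state, symbol) to
# (symbol to write, head move, next state). Illustrative only.

def run_turing_machine(tape, rules, state="start", halt="halt", max_steps=1000):
    """Simulate a one-tape Turing machine; tape is a dict of position -> symbol."""
    tape = dict(tape)
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")           # "_" marks a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Rules for binary increment, with the head starting at the least
# significant bit: carry a 1 leftward until a 0 or a blank is found.
rules = {
    ("start", "1"): ("0", "L", "start"),   # 1 plus carry -> 0, keep carrying
    ("start", "0"): ("1", "R", "halt"),    # 0 plus carry -> 1, done
    ("start", "_"): ("1", "R", "halt"),    # extend the number with a new 1
}

# Tape holds the bits 0,1,1 (binary 011 = 3); position 0 is the least
# significant bit, with more significant bits at negative positions.
tape = {-2: "0", -1: "1", 0: "1"}
result = run_turing_machine(tape, rules)
# Reading positions -2..0 now gives 1,0,0 (binary 100 = 4).
```

Despite the tiny rule table, the same machinery scales to arbitrarily complex computations, which is precisely why the Turing machine became the theoretical model for the modern computer.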
World War II became one of the key turning points for AI, when code-breaking machines were needed to crack German and Japanese wartime codes. Both Turing and John von Neumann laid the foundations for code-breaking machines that helped the war effort and would later solve other logic problems. After World War II, many speculated that a sort of artificial brain could be created in computers through formal, mathematical logic. In 1956, AI was established as a sub-field of computing research. Developments, which have continued to the present, suggested that the neural links of the human brain could be replicated. Neurology and human cognition became very influential in AI research as early AI researchers used artificial neural networks and learning methods modeled on human experience to try to create forms of machine learning. Marvin Lee Minsky was the first to build a neural network machine, using cognitive research to help design the machine's functions so that it replicated human learning. This work drew heavily on research by Walter Pitts and Warren McCulloch, who demonstrated how artificial neural networks could allow machines to learn from experience and repeated action.
Today, artificial neural networks remain among the most commonly used machine learning techniques. So-called deep learning has become a sub-field in which machines learn a variety of tasks from large datasets, picking up basic patterns with very little guiding information.
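The idea of a machine learning from experience and repeated action can be sketched with a single artificial neuron in the spirit of McCulloch and Pitts, trained with the classic perceptron rule. This is a minimal illustration, not any historical machine's actual design: the neuron learns the logical AND function by repeatedly correcting its weights whenever it makes a mistake.

```python
# A single artificial neuron with a threshold activation, trained by the
# perceptron rule to learn logical AND. Names and constants are illustrative.

def step(x):
    """Threshold activation: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = step(w[0] * x1 + w[1] * x2 + b)
            error = target - output            # learn from each mistake
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# The truth table for logical AND serves as the "experience" to learn from.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_samples]
# predictions now matches the AND targets: [0, 0, 0, 1]
```

Modern deep learning stacks many such neurons into layers and replaces the hard threshold with smooth activations so the whole network can be trained at once, but the underlying principle of adjusting weights in response to errors is the same.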