Difference between revisions of "How Did Artificial Intelligence Develop"
Revision as of 09:35, 6 September 2019
Artificial intelligence (AI) is increasingly pervasive in our world. Whether in social media applications, facial recognition, or the use of our credit cards and online security, we increasingly live in a world where AI is critical to modern life. We have heard warnings about AI taking jobs away or even threatening humanity. Nevertheless, the history of AI is linked not only with the history of computing but also with the thinkers who laid its foundations centuries ago.
The early history of AI can be traced to intellectual foundations laid in the mid to late 1st millennium BCE. During this time, philosophers and early mathematicians in Greece, India, China, Babylonia, and perhaps elsewhere began conceptualizing artificial devices that could learn and perform tasks and calculations. Both Aristotle and Euclid reasoned that through syllogism, that is, a deductive logic-based argument, a mechanical device could be taught to perform given tasks. If a given statement was known or understood, then something created could learn how to derive a conclusion from it; effectively, logic could be taught to artificial devices. While such philosophers reasoned this was possible, they understood that building the capabilities to allow it would not be easy.

Al-Khwarizmi, working in the 8th and 9th centuries CE, and whose name became the basis for the term algorithm, developed the rules and foundations of what became algebra. He derived methods for solving linear and quadratic equations during his time in Baghdad as the lead mathematician and astronomer in the House of Wisdom, and his work showed that many calculations could be reduced to fixed, repeatable procedures. The Majorcan philosopher Ramon Llull later developed the idea that machines could perform simple, repeatable logical tasks, so that reasoning could be accomplished in an automated way. Gottfried Wilhelm Leibniz developed these ideas further as he, along with Isaac Newton, was laying the foundations of what became modern calculus in the 17th century. Taking ideas from Llull and working with engineers, Leibniz helped develop a basic machine that could accomplish simple calculations, an early form of calculator able to add, subtract, multiply, and divide. This became known as the stepped reckoner, a mechanical device that carried out basic calculations through changes in the gears within it.
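Al-Khwarizmi's methods for quadratic equations amount to a fixed sequence of steps that always reaches the answer, which is the essence of an algorithm. A minimal modern illustration of that idea (using today's quadratic formula rather than his original rhetorical procedure) might look like this in Python:

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0 as a sorted list.

    The same fixed steps work for any coefficients: compute the
    discriminant, branch on its sign, then apply the formula.
    """
    if a == 0:
        raise ValueError("not a quadratic equation")
    disc = b * b - 4 * a * c  # the discriminant decides how many real roots exist
    if disc < 0:
        return []             # no real roots
    if disc == 0:
        return [-b / (2 * a)]  # one repeated root
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(solve_quadratic(1, -5, 6))  # x^2 - 5x + 6 = 0  ->  [2.0, 3.0]
```

The point is not the formula itself but that the procedure is mechanical: every step is determined in advance, so in principle a machine could carry it out.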
Thomas Hobbes and René Descartes also saw that logic and mathematical reasoning could be used to automatically determine whether a given position was true. They theorized that an algorithmic, automated approach could potentially evaluate arguments and determine their validity using reason or mathematical logic. A key development during this period was the physical symbol system, which also became the basis for the mathematical symbols used in algorithmic notation today. This provided the mathematical and logical foundations, along with a standardized way to express them, on which AI's formal expressions were later built.
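The idea that an argument's validity could be checked purely mechanically can be illustrated with a brute-force truth-table check. This is a modern sketch, not a historical reconstruction: an argument is valid exactly when no assignment of truth values makes all the premises true and the conclusion false.

```python
from itertools import product

def implies(p, q):
    # Material implication: "p implies q" is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """Check validity by enumerating every truth assignment.

    The argument is valid when every assignment that makes all
    premises true also makes the conclusion true.
    """
    for values in product([True, False], repeat=n_vars):
        if all(prem(*values) for prem in premises) and not conclusion(*values):
            return False  # found a counterexample
    return True

# Modus ponens: from "P implies Q" and "P", conclude "Q" -- valid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q, 2))  # True

# Affirming the consequent: from "P implies Q" and "Q", conclude "P" -- invalid.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p, 2))  # False
```

Enumerating assignments is exactly the kind of tedious, rule-following work Hobbes and Descartes imagined could be delegated to a machine.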
Ada Lovelace, in the early 19th century, realised that machines such as Charles Babbage's proposed Analytical Engine could be programmed to conduct far more than the simple calculations demonstrated by Leibniz in the 17th century. Lovelace is often credited with writing the first computer program, an algorithm she created for the Analytical Engine, and she speculated that such machines could even be used to compose music. In the late 19th and early 20th centuries, further developments in mathematics broadened the intellectual foundations of AI. Gottlob Frege and George Boole further developed mathematical logic. This eventually led Alfred North Whitehead and Bertrand Russell to write the three-volume Principia Mathematica (1910–1913), which formally argued for formal, logic-based solutions to mathematical problems. Kurt Gödel demonstrated that formal logic is in a precise sense incomplete, but his work also suggested that much mathematical reasoning could be mechanized. Effectively, this challenged mathematicians and engineers to devise more complex machines that could utilise logical reasoning to derive answers to questions.

Perhaps the most critical discovery for what became the foundation of computing and AI was Alan Turing's development of the Turing machine in 1936. This abstract machine manipulates mathematical symbols, such as 0 and 1, on a tape, deriving solutions by applying simple rules. The device could theoretically contain an infinite tape that could be read, written, and moved to find a solution. This description became the foundation of what would become computer memory and the central processing unit (CPU). The machine had just six simple operations, but with these very complex processes could be built. Today, all computers build on the basic logic Turing described, and the process of calculation also developed into the idea of AI, as later theorists took ideas from the Turing machine to derive solutions to problems.
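The tape-and-rules model Turing described can be sketched in a few lines of Python. This is a minimal illustrative simulator, not Turing's original formalism: the specific machine shown (a binary inverter) and its transition table are assumptions chosen for brevity.

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Simulate a simple one-tape Turing machine.

    `program` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), +1 (right), or 0 (stay).
    The machine stops when it reaches the state "halt".
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Illustrative machine: walk right, flipping each bit, and halt at the first blank.
inverter = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_turing_machine(inverter, "1011"))  # -> 0100
```

Even this toy machine shows the core insight: a handful of local read-write-move rules, applied repeatedly to a tape, suffices to carry out computation.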