Today, artificial neural networks remain among the most commonly used machine learning methods. So-called deep learning has become a sub-field in which machines learn a variety of tasks from large datasets that teach a computer basic patterns with very little guiding information.

In the 1960s, the related field of robotics began to develop in earnest. The merger of AI with robotics was often led by Japanese research institutions, such as Waseda University through its WABOT project. While optimism about AI built through the 1960s, by the 1970s there was a backlash and doubt that machines could ever replicate human thinking. One of the main problems was a lack of computational power: machines could not be built powerful enough to solve seemingly easy tasks, such as recognizing a face or guiding a robot across a room without hitting something. These tasks required far more computing power than was available before the 1980s. By then, industry, rather than government funding, began to have the greater influence in shaping AI.

Since the 1990s and early 2000s, AI has grown substantially. In part, this growth was driven by major successes such as IBM's Deep Blue machine, which beat reigning world chess champion Garry Kasparov in 1997. Additionally, Stanford built a car in 2005 that traveled across the desert for more than 130 miles by itself, initiating a great deal of research that has since developed into autonomous vehicles. Other military and consumer applications, along with increased needs in cybersecurity and in solving firms' tasks quickly, have helped to inspire new research and funding for AI that has not abated substantially since the late 1990s.