Artificial intelligence is being discussed more than ever. The term has been around since the 1950s, when it described the idea of building a machine that can learn. Today, many companies are advancing computing power and investing in research and development toward AI systems that can solve problems continuously. In the past few years, gains in computing power and broader advances in technology have let companies take a more focused approach to AI.
Through innovators like Elon Musk and his AI ventures, along with products from IBM, Amazon, Facebook, Microsoft, Apple, and others, each contributing its own AI developments to the marketplace, we are making rapid headway toward systems that can learn.
Early artificial intelligence is almost everywhere today, but we have not quite reached the point where an AI system could be declared as advanced as the human mind. Artificial intelligence appears in everything from the predictive keyboard on the Apple iPhone to websites like Zillow, which use a customer's stated preferences and previous browsing history to estimate home prices.
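For a sense of what "predictive" means here, a next-word suggester can be boiled down to a toy model that simply counts which words tend to follow which. The snippet below is a minimal illustration, not how Apple's keyboard actually works; the tiny corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words tend to follow it."""
    next_words = defaultdict(Counter)
    tokens = corpus.lower().split()
    for current, following in zip(tokens, tokens[1:]):
        next_words[current][following] += 1
    return next_words

def predict_next(model, word, k=3):
    """Return the k most frequent followers of `word` seen in training."""
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # ['cat', 'mat', 'door']
```

Real keyboards use far larger models trained on far more text, but the principle is the same: learn from past input, then rank the likeliest continuations.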
There are also learning programs designed to serve a single function, such as the AlphaGo project. AlphaGo, a program developed by Google's DeepMind, defeated the world's top-ranked human player, Ke Jie, at the game of Go in a match held in May 2017.
Even though humans still have more general brainpower than the software behind these early forms of AI, a program with this learning capability can often edge out people on narrowly defined tasks. Systems like the advanced Go AI, and others now in development, point toward software that can keep learning while performing specific functions far more efficiently than any human worker.
Where did AI get started?
AI research first took shape in the 1950s under John McCarthy, then a mathematics professor at Dartmouth College. The founding premise was that a problem or task could be described so precisely that a machine could be made to simulate it. Research into this kind of simulation continued at MIT, and by the 1970s and 1980s there were machines that could begin learning tasks by being fed data and making predictions.
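That premise, feed a machine labelled examples and let it adjust itself until its predictions match, can be shown with a classic toy from that same era: the perceptron, one of the earliest learning algorithms. The sketch below is illustrative only; the learning rate, epoch count, and the AND task are arbitrary choices made for the demo.

```python
# A toy perceptron: the machine is "fed data" (inputs with known labels)
# and nudges its weights whenever a prediction is wrong.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - prediction  # 0 when the guess is right
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the logical AND function from four labelled examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train_perceptron(samples, labels)
for x in samples:
    out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    print(x, "->", out)  # matches the labels after training
```

The program is never told the rule for AND; it discovers weights that reproduce it purely from the data, which is the same learning-by-example loop, vastly scaled up, behind modern AI.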
Present-day AI development has advanced so far largely because of new and powerful hardware. Products are now built to suit AI computation and to handle extremely large data sets, with training software and models included. As early as 2012, Google was using a neural network running across 16,000 CPU cores for the task of recognizing cats in YouTube videos. Just a few months later, an eight-layer neural network trained on two graphics processing units was able to outperform that 16,000-core system.
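To picture what an "eight-layer neural network" means, here is a bare-bones sketch: eight weight matrices applied in sequence, each followed by a simple nonlinearity. The layer sizes and random weights below are invented for illustration (real networks of that era were far larger and used convolutional layers); the point is that the work is dominated by matrix multiplications, which is exactly the operation GPUs accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight layers = eight weight matrices; layer_sizes has nine entries
# because it lists the input size, seven hidden sizes, and the output size.
layer_sizes = [64, 128, 128, 128, 128, 128, 128, 64, 10]
weights = [rng.normal(0, 0.1, (n_in, n_out))
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    """Run one input vector through all eight layers."""
    for w in weights:
        x = np.maximum(x @ w, 0.0)  # matrix multiply, then ReLU nonlinearity
    return x

x = rng.normal(size=64)
print(forward(x).shape)  # (10,) -- e.g. one score per output class
```

Each `x @ w` step is a dense matrix product, and a GPU can run thousands of those multiply-accumulate operations in parallel, which is why two graphics cards could keep pace with a warehouse of CPUs.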
With future developments, we can expect new applications for this kind of machine learning as it grows more advanced. With ridesharing companies like Uber and Lyft looking to use AI systems in self-driving cars, and even financial institutions investing in the technology to reduce their reliance on human tellers, the future of AI looks particularly bright!