To learn a skill, we gather knowledge, practice carefully, and monitor our performance. Eventually, we become better at that activity. Machine learning is a technique that allows computers to do just that.
Can Computers Learn?
Defining intelligence is tough. We all know what we mean by it, but pinning it down in words is hard. Leaving aside emotion and self-awareness, a working definition could be the ability to learn new skills and absorb knowledge, and to apply both to new situations to achieve a desired outcome.
Given the difficulty in defining intelligence, defining artificial intelligence isn’t going to be any easier. So, we’ll cheat a little. If a computing device is able to do something that would usually require human reasoning and intelligence, we’ll say that it’s using artificial intelligence.
For example, smart speakers like the Amazon Echo and Google Nest can hear our spoken instructions, interpret the sounds as words, extract the meaning of the words, and then try to fulfill our request. We might be asking the speaker to play music, answer a question, or dim the lights.
In all but the most trivial interactions, your spoken commands are relayed to powerful computers in the manufacturers’ clouds, where the artificial intelligence heavy-lifting takes place. The command is parsed, the meaning is extracted, and the response is prepared and sent back to the smart speaker.
Machine learning underpins the majority of the artificial intelligence systems that we interact with. Some of these are items in your home like smart devices, and others are part of the services that we use online. The video recommendations on YouTube and Netflix and the automatic playlists on Spotify use machine learning. Search engines rely on machine learning, and online shopping uses machine learning to offer you purchase suggestions based on your browsing and purchase history.
Computers can access enormous datasets. They can tirelessly repeat processes thousands of times in the time it would take a human to perform one iteration—if a human could even manage to do it once. So, if learning requires knowledge, practice, and performance feedback, the computer should be the ideal candidate.
That’s not to say that the computer will be able to really think in the human sense, or to understand and perceive as we do. But it will learn, and get better with practice. Skillfully programmed, a machine-learning system can achieve a decent impression of an aware and conscious entity.
We used to ask, “Can computers learn?” That eventually morphed into a more practical question: what are the engineering challenges that we must overcome to allow computers to learn?
Neural Networks and Deep Neural Networks
Animals’ brains contain networks of neurons. Neurons can fire signals across a synapse to other neurons. This tiny action—replicated millions of times—gives rise to our thought processes and memories. Out of many simple building blocks, nature created conscious minds and the ability to reason and remember.
Inspired by biological neural networks, artificial neural networks were created to mimic some of the characteristics of their organic counterparts. Since the 1940s, hardware and software have been developed that contain thousands or millions of nodes. The nodes, like neurons, receive signals from other nodes. They can also generate signals to feed into other nodes. Nodes can accept inputs from and send signals to many nodes at once.
If an animal concludes that flying yellow-and-black insects always give it a nasty sting, it will avoid all flying yellow-and-black insects. The hoverfly takes advantage of this. It’s yellow and black like a wasp, but it has no sting. Animals that have gotten tangled up with wasps and learned a painful lesson give the hoverfly a wide berth, too. They see a flying insect with a striking color scheme and decide that it’s time to retreat. The fact that the insect can hover—and wasps can’t—isn’t even taken into consideration.
The importance of the flying, buzzing, and yellow-and-black stripes overrides everything else. The importance of those signals is called the weighting of that information. Artificial neural networks can use weighting, too. A node need not consider all of its inputs equal. It can favor some signals over others.
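The weighting idea can be sketched as a single artificial node. This is a minimal illustration, not a real network: the signal names, weights, and threshold are all invented for the example.

```python
# A single artificial "node": a weighted sum of input signals
# compared against a threshold. Weights and threshold are invented
# for illustration.

def node_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return weighted_sum >= threshold

# Signals: [flying, buzzing, yellow-and-black stripes, can hover]
# The first three signals carry heavy weights; "can hover" barely counts.
weights = [0.4, 0.4, 0.6, 0.05]

wasp     = [1, 1, 1, 0]   # flies, buzzes, striped, cannot hover
hoverfly = [1, 1, 1, 1]   # flies, buzzes, striped, hovers

print(node_fires(wasp, weights))      # True -> retreat
print(node_fires(hoverfly, weights))  # True -> retreat anyway
```

Because the hovering signal is weighted so lightly, it can never tip the decision—which is exactly why the hoverfly's disguise works.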
Machine learning uses statistics to find patterns in the datasets that it’s trained on. A dataset might contain words, numbers, images, user interactions such as clicks on a website, or anything else that can be captured and stored digitally. The system needs to characterize the essential elements of the query and then match those to patterns that it has detected in the dataset.
If it’s trying to identify a flower, it will need to know the stem length, the size and style of the leaf, the color and number of petals, and so on. In reality, it will need many more facts than that, but those will do for our simple example. Once the system knows those details about the test specimen, it starts a decision-making process that produces a match from its dataset. Impressively, machine-learning systems create the decision tree themselves.
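The matching step can be sketched in a few lines. This toy version compares a specimen's measurements against a hand-written dataset and returns the closest entry; the flowers, measurements, and scoring rule are all invented for illustration, and a real system would learn its own comparison criteria.

```python
# A toy "match the specimen to the dataset" step. The dataset,
# measurements, and scoring rule are invented for illustration.

dataset = [
    {"name": "poppy",     "stem": 60,  "petals": 4,  "color": "red"},
    {"name": "daisy",     "stem": 20,  "petals": 30, "color": "white"},
    {"name": "sunflower", "stem": 180, "petals": 20, "color": "yellow"},
]

def match(specimen):
    """Score each known flower by similarity; return the closest match."""
    def difference(flower):
        d = abs(flower["stem"] - specimen["stem"])
        d += abs(flower["petals"] - specimen["petals"])
        d += 0 if flower["color"] == specimen["color"] else 50
        return d
    return min(dataset, key=difference)["name"]

print(match({"stem": 25, "petals": 28, "color": "white"}))  # daisy
```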
A machine-learning system learns from its mistakes by updating its algorithms to correct flaws in its reasoning. The most sophisticated neural networks are deep neural networks. Conceptually, these are made up of a great many neural networks layered one on top of another. This gives the system the ability to detect and use even tiny patterns in its decision processes.
Layers are commonly used to provide weighting. So-called hidden layers can act as “specialist” layers. They provide weighted signals about a single characteristic of the test subject. Our flower identification example might perhaps use hidden layers dedicated to the shape of leaves, the size of buds, or stamen lengths.
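A layered network can be sketched in miniature as well. Here, three hidden nodes each "specialize" in one input characteristic and feed a single output node. The weights are hand-picked purely for illustration; a real system would learn them from data.

```python
import math

# A minimal two-layer network sketch: a hidden layer of "specialist"
# nodes feeding one output node. All weights are hand-picked for
# illustration; a real network learns them during training.

def sigmoid(x):
    """Squash any value into the range 0..1."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_rows):
    """Each row of weights produces one node's activation."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weight_rows]

# Inputs: [leaf_shape_score, bud_size_score, stamen_length_score]
hidden_weights = [
    [2.0, 0.0, 0.0],   # hidden node specializing in leaf shape
    [0.0, 2.0, 0.0],   # ... in bud size
    [0.0, 0.0, 2.0],   # ... in stamen length
]
output_weights = [[1.0, 1.0, 1.0]]

hidden = layer([0.9, 0.2, 0.7], hidden_weights)
score = layer(hidden, output_weights)[0]
print(round(score, 2))  # a single confidence-style score between 0 and 1
```

Stacking many such layers—with far more nodes per layer—is what puts the "deep" in deep neural networks.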
Different Types of Learning
There are three broad techniques used to train machine-learning systems: supervised learning, unsupervised learning, and reinforcement learning.
Supervised learning is the most frequently used form of learning. That isn’t because it’s inherently superior to other techniques. It has more to do with how well this type of learning suits the datasets used in the machine-learning systems being written today.
In supervised learning, the data is labeled and structured so that the criteria used in the decision-making process are defined for the machine-learning system. This is the type of learning used in the machine-learning systems behind YouTube playlist suggestions.
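The role of the labels can be sketched with a tiny nearest-neighbor classifier. Every training example carries a label, and new inputs simply inherit the label of the most similar example. The data points and labels are invented for illustration.

```python
# Supervised learning in miniature: every training example carries a
# label, and a new input inherits the label of the closest example.
# The points and labels are invented for illustration.

labeled_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.1), "dog"),
    ((4.8, 5.3), "dog"),
]

def classify(point):
    """Return the label of the nearest labeled example."""
    def squared_distance(example):
        (x, y), _label = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(labeled_data, key=squared_distance)[1]

print(classify((1.1, 1.0)))  # cat
print(classify((5.2, 4.9)))  # dog
```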
Unsupervised learning doesn’t require data preparation. The data isn’t labeled. The system scans the data, detects its own patterns, and derives its own triggering criteria.
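Finding structure without labels can be sketched with a tiny clustering routine. This is a bare-bones k-means on one-dimensional data; the numbers are invented, and real systems cluster far richer data.

```python
# Unsupervised learning in miniature: no labels, so the system groups
# the data by similarity on its own. A bare-bones k-means sketch on
# one-dimensional data; the values are invented for illustration.

def kmeans_1d(values, k=2, iterations=10):
    centers = values[:k]  # naive initialization: first k values
    for _ in range(iterations):
        # Assign each value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
print(kmeans_1d(data))  # two centers, near 1.0 and 9.0
```

Nobody told the algorithm that there were two groups of readings; it discovered the grouping itself.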
Unsupervised learning techniques have been applied to cybersecurity with high rates of success. Intruder detection systems enhanced by machine learning can detect an intruder’s unauthorized network activity because it doesn’t match the previously observed patterns of behavior of authorized users.
Reinforcement learning is the newest of the three techniques. Put simply, a reinforcement learning algorithm uses trial and error and feedback to arrive at an optimal model of behavior to achieve a given objective.
This requires feedback that “scores” the system’s efforts according to whether its behavior has a positive or negative impact in achieving its objective. The score can come from a programmed reward signal or, in some systems, from human reviewers.
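Trial, error, and feedback can be sketched with a classic toy problem: an agent choosing between two actions with unknown payoffs. The reward probabilities and parameters below are invented; this epsilon-greedy scheme is one simple way to balance exploring new actions against exploiting known-good ones.

```python
import random

# Reinforcement learning in miniature: trial and error plus a reward
# score. An epsilon-greedy agent learns which of two actions pays off
# more often. Reward probabilities and parameters are invented.

random.seed(0)
reward_probability = [0.2, 0.8]   # action 1 is secretly better
estimates = [0.0, 0.0]            # the agent's learned value of each action
counts = [0, 0]

for step in range(1000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.randrange(2)
    else:                                      # otherwise exploit the best guess
        action = estimates.index(max(estimates))
    reward = 1 if random.random() < reward_probability[action] else 0
    counts[action] += 1
    # Update the running average reward for the chosen action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates.index(max(estimates)))  # the action the agent judges best
```

After enough trials, the agent's estimates converge toward the true payoff rates, and it settles on the better action without ever being told which one that was.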
The Practical Side of AI
Because it’s so prevalent and has demonstrable real-world successes—including commercial successes—machine learning has been called “the practical side of artificial intelligence.” It’s big business, and there are many scalable, commercial frameworks that allow you to incorporate machine learning into your own developments or products.
If you don’t have an immediate need for that type of firepower but you’re interested in poking around a machine-learning system with a friendly programming language like Python, there are excellent free resources for that, too. In fact, these will scale with you if you do develop a further interest or a business need.
Torch is an open-source machine-learning framework known for its speed; its successor, PyTorch, is now more widely used.
scikit-learn is a collection of machine-learning tools for Python.
Caffe is a deep-learning framework, especially competent at processing images.
Keras is a deep-learning framework with a Python interface.