Graphics Processing Units (GPUs) are designed to render graphics in real time. However, it turns out that what makes GPUs great at graphics also makes them great at certain non-graphics jobs. This is known as GPU computing.
How Do CPUs and GPUs Differ?
In principle, both GPUs and CPUs (Central Processing Units) are products of the same technology. Inside each device are processors made of millions to billions of microscopic electronic components, mainly transistors. These components form processing elements such as logic gates, which are in turn built into the complex structures that turn binary code into the sophisticated computing experiences we have today.
The main difference between CPUs and GPUs is parallelism. A modern CPU contains a handful of complex, high-performance cores. Four cores are still common in mainstream computers, though six- and eight-core CPUs are increasingly the norm. High-end professional machines may have dozens or even more than 100 CPU cores, especially with multi-socket motherboards that accommodate more than one CPU.
Each CPU core can do one or (with hyperthreading) two things at a time, but that job can be almost anything, and it can be extremely complex. CPUs have a wide range of processing abilities and sophisticated designs that make them efficient at crunching complicated math.
Modern GPUs, by contrast, typically contain thousands of simple processors. For example, Nvidia's RTX 3090 has a whopping 10,496 GPU cores. Each GPU core is relatively simple and is designed for the kinds of calculations typical in graphics work. Crucially, all of those thousands of cores can each work on a small piece of the rendering problem at the same time. That's what we mean by "parallelism."
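To make the idea concrete, here is a minimal sketch in Python of the "many simple workers" model. The `brighten` function is a hypothetical stand-in for the tiny per-pixel operation a GPU core performs; the point is that the same operation runs independently on every data element, so the work can be handed to many workers at once.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel):
    # The same tiny operation applied to every data element --
    # exactly the kind of work a single GPU core handles.
    return min(pixel + 40, 255)

pixels = list(range(0, 200, 10))  # stand-in for image pixel values

# Serial, CPU-style: one worker walks the whole list.
serial = [brighten(p) for p in pixels]

# Parallel, GPU-style: many workers each take a piece at once.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(brighten, pixels))

assert serial == parallel  # same answer, different execution model
```

This is only an analogy: eight Python threads are nothing like 10,000 GPU cores, but the structure of the work (one independent calculation per element) is the same.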
General-Purpose Computing on GPUs (GPGPU)
Remember that CPUs are generalists: they can perform any type of calculation, given enough time. In fact, a CPU can do anything a GPU can do; it just can't do it quickly enough to be useful for real-time graphics.
If this is the case, then the reverse is also true to an extent. GPUs can do some of the same calculations that we usually ask CPUs to do, and because of their supercomputer-like parallel design, they can do those calculations orders of magnitude faster, provided the work can be split into many independent pieces. That's GPGPU: using GPUs to do traditional CPU workloads.
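The core mental shift in GPGPU programming is rewriting a loop as a "kernel": a small function that computes one element, which the GPU runtime then runs across thousands of cores at once. Below is a hedged, pure-Python sketch of that pattern using SAXPY (`out[i] = a*x[i] + y[i]`), a classic introductory GPGPU example; the `launch` helper is a hypothetical stand-in for a real GPU launch.

```python
def saxpy_kernel(i, a, x, y, out):
    # One "thread" computes one element: out[i] = a * x[i] + y[i].
    # On a real GPU, thousands of these run simultaneously.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a GPU launch: invoke the kernel once per index.
    # A real GPGPU runtime schedules these across thousands of cores.
    for i in range(n):
        kernel(i, *args)

a = 2.0
x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * len(x)
launch(saxpy_kernel, len(x), a, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

Notice that the kernel never depends on any other element's result, which is exactly what lets the hardware run all of the iterations in parallel.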
The major GPU makers (Nvidia and AMD) provide special programming languages and architectures that give users access to GPGPU features. In Nvidia's case, that's CUDA, or Compute Unified Device Architecture. This is why you'll see Nvidia's GPU processors referred to as CUDA cores.
Since CUDA is proprietary, competing GPU makers such as AMD can't use it. Instead, AMD's GPUs make use of OpenCL (Open Computing Language). This is a GPGPU framework maintained by the Khronos Group, an industry consortium whose members include Nvidia, AMD, and Intel.
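CUDA and OpenCL both organize those thousands of threads into blocks (CUDA) or work-groups (OpenCL), and each thread computes its own global index to find the element it owns. Here is an illustrative Python sketch of CUDA's indexing scheme, executed serially; the function names are my own, not part of any real API.

```python
def kernel(block_id, thread_id, block_dim, data):
    # CUDA-style global index: which element this "thread" owns.
    i = block_id * block_dim + thread_id
    if i < len(data):  # guard: some threads have no element to process
        data[i] *= 2

def launch_grid(num_blocks, block_dim, data):
    # Mimics a CUDA <<<num_blocks, block_dim>>> launch, but serially.
    for b in range(num_blocks):
        for t in range(block_dim):
            kernel(b, t, block_dim, data)

data = list(range(10))
launch_grid(num_blocks=3, block_dim=4, data=data)  # 12 threads, 10 items
print(data)  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

The bounds check matters because the launch is sized in whole blocks (here 12 threads for 10 elements), so the extra threads must do nothing, just as in real CUDA code.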
GPUs in Scientific Research
GPU computing has revolutionized what scientists can do on much smaller budgets than before. One example is data mining, where computers search for interesting patterns in mountains of data, gaining insights that would otherwise be lost in the noise.
Projects such as Folding@home use GPU processing time donated by home users to work on serious problems such as cancer research. GPUs are also useful for scientific and engineering simulations that, in the past, would have taken years to complete and millions of dollars in rented time on large supercomputers.
GPUs in Artificial Intelligence
GPUs are also great at certain types of artificial intelligence jobs. Machine learning (ML) is much faster on GPUs than on CPUs, and the latest GPU models include even more specialized machine learning hardware built into them.
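The reason ML maps so well onto GPUs is that most of the work boils down to matrix multiplication, where every output cell can be computed independently. The toy example below shows a naive matrix multiply standing in for a single neural-network layer (inputs times weights); the numbers are invented for illustration.

```python
def matmul(A, B):
    # Naive matrix multiply. Every output cell is an independent
    # dot product, so a GPU can assign each cell to its own core.
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[r][k] * B[k][c] for k in range(inner))
             for c in range(cols)]
            for r in range(rows)]

# A tiny "neural network layer": an input vector times a weight matrix.
inputs = [[1.0, 2.0]]
weights = [[0.5, -1.0, 2.0],
           [1.5,  0.5, 0.0]]
print(matmul(inputs, weights))  # [[3.5, 0.0, 2.0]]
```

Real networks multiply matrices with thousands of rows and columns, millions of independent cells per layer, which is why thousands of simple cores beat a handful of complex ones at this job.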
One practical example of how GPUs are being used to advance AI applications in the real world is the advent of self-driving cars. According to Tesla, their Autopilot software required 70,000 GPU hours to “train” the neural net with the skills to drive a vehicle. Doing the same job on CPUs would be far too expensive and time-consuming.
GPUs in Cryptocurrency Mining
GPUs are also excellent at the repetitive hash calculations at the heart of cryptocurrency mining, which is why they've become so popular with miners. Although GPUs don't mine cryptocurrency as quickly as ASICs (application-specific integrated circuits), they have the distinct advantage of versatility: an ASIC can usually mine only one specific cryptocurrency, or a small family of them, and nothing else.
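Those "hash calculations" are a proof-of-work search: try nonce after nonce until one produces a hash below a target, and since every attempt is independent, GPU cores can test thousands of nonces at once. Here is a deliberately simplified toy version using SHA-256 (real mining protocols differ in many details):

```python
import hashlib

def mine(block_data, difficulty):
    # Toy proof-of-work: find a nonce whose SHA-256 hash starts with
    # `difficulty` zeros. Each attempt is independent, which is why
    # miners fan the search out across thousands of GPU cores.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("example block", difficulty=4)
print(f"nonce={nonce} hash={digest[:16]}...")
```

Each extra zero of difficulty multiplies the expected number of attempts by 16, which is why raw parallel hashing throughput, not clever single-thread logic, is what wins at mining.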
Cryptocurrency miners are one of the main reasons that GPUs are so expensive and hard to find, at least at the time of writing in early 2022. Experiencing the heights of GPU technology means paying dearly, with the going price of an Nvidia GeForce RTX 3090 being over $2,500. It's become such a problem that Nvidia has artificially limited the cryptocurrency mining performance of its gaming GPUs and introduced dedicated mining-specific GPU products.
You Can Use GPGPU Too!
While you may not always be aware of it, some of the software that you use every day offloads some of its processing to your GPU. If you work with video-editing software or audio processing tools, for example, there’s a good chance your GPU is carrying some of the load. If you want to tackle projects like making your own deepfakes at home, your GPU is once again the component that makes it possible.
Your smartphone's GPU is also responsible for running many of the artificial intelligence and machine vision jobs that would otherwise have to be sent off to cloud servers. So we should all be grateful that GPUs can do more than draw an attractive image on your screen.