Dissecting Mythbusters’ GPU vs CPU Video: What It Means for Real-Time Check Processing and Fraud Detection
Back in August, we explored the importance of Nvidia's data centers to real-time check processing. The post garnered a great deal of attention from our readers, along with requests for a deeper dive into the featured Mythbusters video and the subject of GPUs vs CPUs.
For reference, Mythbusters hosts -- Adam Savage and Jamie Hyneman -- provided an entertaining way to illustrate a complicated concept.
Watch below as "Leonardo" demonstrates how a simple CPU might create a piece of artwork: as a series of discrete actions done one after the other. Contrast that with the epic GPU-powered "Leonardo 2.0":
"Leonardo" vs. "Leonardo 2.0" -- simple step-by-step smiley face vs. epic all-at-once Mona Lisa -- provides an effective illustration of the different capabilities of CPU (Central Processing Units) and GPU (Graphics Processing Units) technology. But, let's take a deeper dive to translate the information presented by the video in terms of check processing and fraud detection.
Accuracy and Speed
In the video, the final portraits couldn't be any more different:
CPU
GPU
The accuracy achieved by GPU vs CPU can be applied to check processing and fraud detection, particularly when applying artificial intelligence and machine learning technologies utilizing GPUs.
In order to achieve accuracy levels of over 99.5%, AI and machine learning technologies require the high processing power of GPUs. This matters not only for training the models, but also for their ability to process images. We've previously detailed "what is a deep learning node," and for high levels of accuracy to be achieved, the input -- in this case, an image of a check -- must be transferred and computed across millions of nodes. Imagine trying to perform that large number of computations with CPUs ...
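To make that "millions of nodes" idea concrete, here's a minimal sketch -- plain NumPy with made-up layer sizes and random weights, not our actual model -- of a single check image passing through two dense layers. Each layer is just a large matrix multiplication, which is exactly the kind of arithmetic a GPU parallelizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 64x64 grayscale check snippet, flattened to 4,096 inputs.
image = rng.random(64 * 64)

# Two illustrative dense layers (sizes are arbitrary, for illustration only).
w1 = rng.standard_normal((4096, 512)) * 0.01   # inputs -> 512 hidden nodes
w2 = rng.standard_normal((512, 10)) * 0.01     # hidden nodes -> 10 output scores

hidden = np.maximum(image @ w1, 0)  # each hidden node: weighted sum + ReLU
scores = hidden @ w2                # output layer: another weighted sum

# Multiply-accumulate operations for just this tiny two-layer net:
ops = 4096 * 512 + 512 * 10
print(ops)  # 2102272 -- over two million multiplications for one small image
```

And that's a toy network: a production-scale model repeats this at far larger layer sizes, for every image, which is why the parallel arithmetic of a GPU becomes essential.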
Which leads us to the next point of emphasis: Speed. As seen in the video, it takes a rather large amount of time for the CPU-powered machine to create a rudimentary portrait. As explained in the video:
"I introduce you to Leonardo. And he is going to paint a picture for you guys in a way that a CPU might do it, as a series of discreet actions performed sequentially, one after another."
Then we come to the Leonardo 2.0 demonstration. While we will not transcribe the full depiction of the process (click the video below to watch Leonardo 2.0 again), what is important to understand is that the final product took 80 milliseconds to achieve, because the GPU-powered machine runs thousands of processes in parallel, simultaneously.
This is incredibly important for real-time check processing and fraud detection. GPUs are capable of processing thousands of images simultaneously and returning results in milliseconds, whereas CPUs simply do not have the processing power to achieve true real-time processing.
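The sequential-vs-parallel contrast can be sketched in a few lines. This is a hypothetical illustration in plain NumPy (still running on a CPU -- real GPU gains come from frameworks like CUDA-backed libraries): scoring 2,000 mock check images one at a time in a loop, versus handing the whole batch to a single vectorized operation, the style of computation GPUs are built for:

```python
import time
import numpy as np

rng = np.random.default_rng(1)

# 2,000 hypothetical check images, each flattened to 1,024 pixels.
batch = rng.random((2000, 1024))
weights = rng.standard_normal((1024, 128)) * 0.01  # illustrative scoring layer

# CPU-style: score each image one at a time, sequentially.
start = time.perf_counter()
sequential = np.stack([img @ weights for img in batch])
t_seq = time.perf_counter() - start

# GPU-style: score the entire batch in one parallel-friendly operation.
start = time.perf_counter()
batched = batch @ weights
t_batch = time.perf_counter() - start

# Same results either way; only the execution style differs.
assert np.allclose(sequential, batched)
print(f"sequential: {t_seq * 1000:.1f} ms, batched: {t_batch * 1000:.1f} ms")
```

Even on a CPU the batched version is typically noticeably faster; on a GPU, where thousands of those multiplications genuinely run at once, the gap is what makes millisecond-scale, real-time fraud scoring possible.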
And there you have it, folks! What are your impressions of GPU vs CPU? Let us know!