
Deep Learning vs. Machine Vision - 3 things you should know

Written by Massimiliano Versace | Jul 20, 2021 11:40:49 AM

When it comes to vision technology, we have the tendency to throw around buzzwords such as “machine vision” and “deep learning,” but what do they really mean? Machine vision has been around for decades and has been applied to a wide variety of industrial and non-industrial applications. More recently, we’ve seen deep learning become heralded as a natural next step for machine vision. So, how does deep learning differ from machine vision? And how can enterprises, such as manufacturers, leverage this evolution of vision technology to cope with real-world demands?

Here are 3 fundamental ways in which deep learning differs from machine vision:

1. Design: hand-crafted vs. learning

In a typical machine vision task, an engineer decides which simple features within images (edges, curves, color patches, corners and other attributes) matter for recognizing an object. Then they devise a classifier, usually hand-tuning several “thresholds”, that analyzes these features and decides which object they belong to. Take an apple as an example: the classifier and its thresholds determine how much “red” and how much “curvature” mark an object as a red apple vs. a green apple. While this approach is nowhere near a complete characterization of the power of human vision, it is simple and effective enough to have remained the standard approach for decades.
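
To make the contrast concrete, here is a minimal sketch of what such a hand-crafted classifier can look like in code. The feature (a crude “redness” score) and the 0.15 cutoff are illustrative placeholders standing in for values an engineer would tune by hand, not parameters from any real system.

```python
import numpy as np

# Hand-crafted pipeline: the engineer picks the feature AND the threshold.
# The 0.15 cutoff is a hypothetical, hand-tuned value.

def classify_apple(image: np.ndarray) -> str:
    """Classify an RGB image (H x W x 3, values in [0, 1]) as a red or green apple."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    # Feature: average "redness", i.e. how much red dominates the other channels.
    redness = float(np.mean(r - (g + b) / 2))
    # Decision rule: a single hand-tuned threshold on that feature.
    return "red apple" if redness > 0.15 else "green apple"
```

Note that nothing here is learned: recognizing a new fruit, or a new defect, means writing and tuning a new set of rules by hand.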

Deep learning, on the other hand, does not rely on these two hand-tuned steps of traditional machine vision. Instead, the burden of finding (learning) the features and thresholds is shifted from the engineer to the model itself. Scientists still have to devise the equations that enable this generalized learning directly from data, but that work only has to be done once.

This is really the key to deep learning: one does not need to handcraft a machine vision model for every case, but rather devise a learning machine that can be taught virtually anything directly from data, whether classifying fruits, airplanes or manufactured products.
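
As a sketch of this “devise once, learn anything” idea, the snippet below trains a small convolutional network whose features and decision boundaries are learned from labeled images rather than hand-tuned. PyTorch is an assumption of convenience (the article names no framework), and the architecture, shapes and class count are placeholders.

```python
import torch
import torch.nn as nn

# The same generic learner can be pointed at apples, airplanes or product
# photos simply by swapping the training data; nothing below is task-specific.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # placeholder: 2 classes, e.g. red vs. green apple
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step: features and thresholds are learned, not hand-tuned."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```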

2. Precision: with less data and less effort

Because of this key ability to learn, deep learning models have dominated many of the data science competitions to date. Armed with the ability to learn even from small amounts of data, deep learning regularly beats other machine vision methods, and very often human experts, in domains ranging from traffic signs and medical images to general-purpose object recognition. This high precision doesn’t come with a high overhead of time and effort: a deep learning model can, for instance, identify good or bad products very precisely without calibration, manual measurement, or custom programming.
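
One common way the “less data” advantage plays out in practice (a technique the article itself does not name) is transfer learning: start from a network pretrained on a large generic dataset and fine-tune only a small final layer on the task at hand. A sketch using torchvision’s pretrained ResNet-18, with an illustrative good-vs-defective setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet and freeze its learned features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer for the new task (class count is illustrative).
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. good vs. defective

# Only the small new head is trained, so a few hundred labeled images can
# often be enough in practice; no calibration or custom feature code needed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```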

3. Adaptability: one solution fits all

Another consequence of their learning capability is that deep learning models are more flexible and adaptable than machine vision methods. For instance, in a manufacturing quality assurance application where new items are constantly introduced and previously unseen defects show up on the production line, running deep learning at the edge in industrial machines enables dozens of cameras to quickly learn new item types and defects in a variable production environment. Deep learning can also handle subjective, qualitative inspection. Machine vision could not tackle this: too much customization would be needed, since each product comes with its own complicated set of requirements. Deep learning radically differs from machine vision by making this rapid configurability possible, reducing the cost and time of optimizing quality inspection to a level that is both technically and economically feasible for manufacturers of all kinds.
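
To illustrate what this rapid reconfiguration might look like at the code level, here is a hypothetical sketch that extends a deployed classifier head with one extra output when a previously unseen defect type appears, preserving everything already learned. The names and class labels are illustrative, not any specific product’s API.

```python
import torch
import torch.nn as nn

def add_class(head: nn.Linear) -> nn.Linear:
    """Return a new classifier head with one extra output class,
    keeping the weights already learned for the existing classes."""
    new_head = nn.Linear(head.in_features, head.out_features + 1)
    with torch.no_grad():
        new_head.weight[: head.out_features] = head.weight
        new_head.bias[: head.out_features] = head.bias
    return new_head

head = nn.Linear(512, 3)  # e.g. good / scratch / dent
head = add_class(head)    # now also covers a newly observed defect type
# Fine-tune on the new examples; no features or thresholds are re-engineered.
```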

While machine vision has served its purpose, deep learning-enabled cameras will open up a range of applications that were previously not possible.

The original article was published here on October 8, 2020.