Understanding artificial intelligence
The uses of artificial intelligence (AI) seem to grow day by day. According to a McKinsey report from October 2018, “AI now powers so many real-world applications, ranging from facial recognition to language translators and assistants like Siri and Alexa, that we barely notice it.”
Yet while it’s ever-present in our home lives, fewer of us have encountered it at work. AI can now be used to read back date codes, revolutionising the way in which we protect our packaging lines from the risks of incorrect labels: danger to consumer health, brand damage and the hit to the bottom line from fines and wasted product. But when it comes to AI, it can sometimes feel like you need to become an expert just to understand the terminology, let alone the applications.
Here we provide a glossary of the terms to help you understand the basics and discover the benefits of using AI in your facility.
Artificial intelligence (AI)
According to Merriam-Webster, AI is “the capability of a machine to imitate intelligent human behaviour”. At its most basic level, it’s a collection of algorithms that allows machines to establish and validate rules from large volumes of data to make decisions and analyse situations. AI can also aggregate data automatically, extracting and organising it in a way that highlights the relationships between data elements.
There are currently two levels of AI: the first is based on algorithms and allows you to forecast behaviour; the second is based on deep learning and can make recommendations. Each has its uses depending on the application.
AI can be used to assist with continuous improvement and process optimisation, increasing the speed of operations, delivering greater accuracy and boosting productivity. A 2017 study by PwC found that global GDP will be 14% higher by 2030 as a result of AI adoption, contributing an additional $15.7 trillion to the global economy, so it’s worth considering applications within your facility.
Neural networks
Neural networks loosely imitate the neurons found in the human brain. They’re usually used to deal with complex problems that have an element of variability. For example, humans can identify images they’ve never seen before based on their experience of seeing similar things and getting feedback as to whether or not they’re right.
An untrained neural network doesn’t have this ‘experience’, so it must be trained to understand a dataset of images and assign them to the correct classification. Labelled data is needed to train a neural network, for example images tagged as a ‘good date’ or a ‘bad date’, so that over time it can start to recognise where new images should be classified.
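This feedback loop can be sketched in a few lines. The example below trains a single artificial neuron (a perceptron) on labelled examples; the feature values and the ‘good date’/‘bad date’ labels are invented for illustration, standing in for the thousands of labelled images a real system would learn from.

```python
# Minimal sketch: training one artificial neuron (a perceptron) on
# labelled examples. Features and labels are invented for illustration;
# imagine each pair of numbers summarising an image of a printed date.
training_data = [
    ([0.9, 0.8], 1),   # clearly printed date -> label 1 ("good date")
    ([0.85, 0.9], 1),
    ([0.2, 0.1], 0),   # smudged, illegible date -> label 0 ("bad date")
    ([0.1, 0.3], 0),
]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

# Repeatedly show the neuron labelled examples and nudge its weights
# whenever it classifies one incorrectly -- this is the 'experience'
# an untrained network lacks.
for epoch in range(20):
    for x, label in training_data:
        error = label - predict(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error
```

After training, `predict` classifies the labelled examples correctly, and nearby unseen examples land on the side of the boundary their neighbours taught it.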
Deep learning
Deep learning uses artificial neural networks to simulate the cognitive processes of a human brain when it comes to decision-making and recognition, using ten or more layers of processing and millions of artificial neurons. In this way, deep learning is good at identifying objects or features and classifying these into specific sets. This works especially well when recognising errors or missing items that are unknown in advance, as can be common on a packaging line.
It’s an extremely effective way to solve complex classification problems where there is a high degree of variability – e.g. reading back the date code on food products where different fonts, positions and artwork are in regular use, as well as varying printing technologies (inkjet, thermal, laser, etc.). Deep learning models can identify any unacceptable reads (e.g. an illegible date code) while also tolerating natural variations in complex patterns (e.g. the date code on one package printed higher than the rest of the line).
The most common neural network in use in machine vision applications is the Convolutional Neural Network (CNN). These are extremely complex but, in essence, they work by extracting a model from large training sets of example images. Once the system has been trained, it can even recognise images it’s never seen before, as it uses features it’s already learned to analyse new image data and classify it based on the probability that it belongs to a given category. In this way, with deep learning, adding more training images improves the accuracy of classification, as the system has a greater level of ‘experience’ to draw upon when meeting certain features for the first time, which makes it more resilient to changes over time.
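To give a feel for the ‘convolutional’ part of a CNN, the sketch below slides a small filter (kernel) across a grid of pixel values and records how strongly each patch matches the feature the filter describes – here, a vertical edge. A real CNN learns many such filters from training data rather than having them hand-written, and stacks these layers many levels deep.

```python
# Minimal sketch of the convolution step a CNN is built from: slide a
# small kernel across an image and record, at each position, how
# strongly the underlying patch matches the kernel's feature.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 4x4 toy 'image' with a dark-to-light vertical edge in the middle...
image = [[0, 0, 1, 1]] * 4
# ...and a hand-written kernel that responds to exactly that transition.
edge_kernel = [[-1, 1]]
response = conv2d(image, edge_kernel)
# The response is strongest at the column where the edge sits.
```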
Machine learning
Machine learning is a broader term which incorporates a number of AI-based solutions, including neural networks and deep learning. It allows systems to automatically learn and improve from experience without the need for explicit programming, as they can access data and use it to learn for themselves.
Machine learning uses algorithms to predict, classify and segment data based on the examples it has already seen. It relies on huge datasets and vast amounts of human effort to train up the systems to make decisions, but once trained it is an extremely useful way of automating routine, unsafe or error-prone tasks, improving productivity and freeing operators from dangerous or mind-numbing work.
Supervised vs unsupervised learning methods
Neural networks work better in unsupervised learning scenarios where there are various combinations of settings required to find the best solution. In this method, an extremely large dataset is needed in order to train up the network effectively and offer as much ‘experience’ as a human would have in dealing with the data in real-life circumstances.
However, other machine learning methods are more suited to supervised learning, which is in fact the most common method in use today. Systems are programmed with labelled input images, and the algorithm chooses the best way to deliver the right outcome as it learns the relationship between inputs and their correct outputs. There is a wide range of supervised learning methods, so it’s important to select the right input for the algorithm, which is normally achieved by a staged learning approach. These supervised learning methods offer powerful pattern recognition tools and often require fewer training images than deep learning. The hardest part is ensuring that all of the possible outcomes have been accounted for, so that the complexity of the solution can be fully understood.
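A nearest-neighbour classifier is one of the simplest supervised pattern-recognition methods of this kind: it labels a new example with the label of the most similar labelled example it has already seen, which is why comparatively few training images can go a long way. The feature vectors below are invented for illustration.

```python
import math

# Minimal sketch of supervised learning by nearest neighbour: classify a
# new sample using the label of the closest labelled training example.
# Feature values are invented; think of them as summaries of images.
labelled_examples = [
    ((0.9, 0.8), "good date"),
    ((0.8, 0.95), "good date"),
    ((0.1, 0.2), "bad date"),
    ((0.25, 0.1), "bad date"),
]

def classify(sample):
    # Pick the training example at the smallest Euclidean distance
    # from the new sample and return its label.
    features, label = min(labelled_examples,
                          key=lambda ex: math.dist(ex[0], sample))
    return label
```

Because every training example carries a correct label supplied by a person, the coverage of those labels – whether every possible outcome is represented – determines how reliable the classifier is, which is exactly the ‘hardest part’ noted above.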
Industrial or machine vision
According to the Automated Imaging Association (AIA), machine vision encompasses all industrial and non-industrial applications in which a combination of hardware and software provides operational guidance to devices in the execution of their functions based on the capture and processing of images. It has traditionally needed a series of scanners and an image analysis system, with a very strict configuration set to precise rules, to be able to inspect a wide variety of features such as date codes, colours and product quality.
Traditional vision systems have relied on optical character recognition (OCR), which is designed to read specific characters, when reading date codes. Due to the prevalence of inkjet printers in the food industry, which introduce a higher degree of variability, these vision systems have not been widely implemented, as they have struggled with varying fonts and label sizes, font distortion and packaging changes. However, by incorporating AI, vision systems can deal with the variations in lighting, positioning, print quality and placement inherent in a food or beverage plant, and read anything that is legible to the naked eye.
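Once the vision system has decoded the characters, the read-back decision itself is straightforward. The sketch below shows one plausible decision rule; the function name, confidence score and threshold are assumptions for illustration, not any real product’s API.

```python
# Hypothetical sketch of a date-code read-back check. The vision system
# (AI-based or traditional OCR) is assumed to return the decoded text
# plus a confidence score between 0 and 1; the names and the threshold
# here are invented for illustration.
def check_read(decoded_text, confidence, expected, min_confidence=0.9):
    if confidence < min_confidence:
        return "NO READ"   # too uncertain -- flag for re-inspection
    if decoded_text != expected:
        return "FAIL"      # legible, but the wrong date was printed
    return "PASS"

# e.g. an inkjet zero misread (or misprinted) as the letter 'O'
# would be caught as a mismatch against the expected code:
result = check_read("2O/05/2025", 0.95, "20/05/2025")
```

Separating the ‘can’t read it’ case from the ‘read it and it’s wrong’ case matters on a line: the first may only need a re-scan, while the second should stop product leaving the facility.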