Advances in technology continue to revolutionise manufacturing, rapidly changing the industry's landscape. From the wide use of robotics to Artificial Intelligence (AI), these innovations are transforming the way products are designed, made, and delivered. In this article, we explore how AI is shaping the future of manufacturing, which applications are good candidates for AI, and how those applications can be optimised with proper illumination.
RULE-BASED VS AI MACHINE VISION
Automation involves the use of machines and technology to perform repetitive and labour-intensive tasks. These systems are programmed to follow a set of predefined instructions to carry out tasks such as quality control, inspection, code reading, assembly and packaging.
Traditional rule-based machine vision relies on specific if-then rules to make decisions about an image. These systems suit predictable environments because they cannot adapt to changing conditions or learn from new data. Vision engineers program the rules with deep knowledge of the best techniques to achieve the desired output.
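As a minimal illustration of such if-then logic, the sketch below thresholds a greyscale image, measures the largest connected blob, and passes or fails the part against a fixed area specification. The file name and area limits are hypothetical placeholders; a real deployment would calibrate them to the part and optics.

```python
# Minimal rule-based inspection sketch: threshold, measure, compare to spec.
# MIN_AREA/MAX_AREA and "part.png" are illustrative placeholders.
import cv2

MIN_AREA, MAX_AREA = 4500, 5500  # acceptable part area in pixels (example spec)

image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if not contours:
    print("FAIL: no part detected")
else:
    # Rule: the largest blob's area must fall within the configured limits
    area = max(cv2.contourArea(c) for c in contours)
    verdict = "PASS" if MIN_AREA <= area <= MAX_AREA else "FAIL"
    print(f"{verdict}: measured area = {area:.0f} px")
```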
Artificial Intelligence systems work by training the software with labelled images until it can make distinctions independently. This allows the systems to identify patterns and make decisions based on insights gained from previous experiences.
Automation has been an integral part of the manufacturing sector for decades, streamlining processes and reducing human error. AI takes this to a whole new level by enabling machines to learn, adapt, and make decisions independently. With AI-powered systems, manufacturers can now analyse huge amounts of data, optimise production lines, detect anomalies more accurately, and predict maintenance needs. Two prominent AI technologies, edge learning and deep learning, further streamline the automation of complex tasks that are challenging to program with rule-based algorithms.
The global AI market has seen significant growth and is now valued at over $180 billion. This rapid expansion is expected to continue, with projections indicating it will surpass $826 billion by 2030 [1].
HOW CAN I TELL IF I SHOULD USE AI?
The difference between rule-based and AI-based vision lies in their approach to analysing and interpreting visual data. Rule-based machine vision operates on a set of predefined rules and parameters, typically set by a vision expert, to identify and classify objects in an image. It relies on strict descriptions and is most effective under consistent conditions.
On the other hand, AI-based systems can autonomously learn and adapt to different scenarios. These systems can handle complex and dynamic environments, offering more flexibility and scalability, making them particularly well-suited for applications with evolving conditions and diverse visual characteristics.
Rule-based vision is often utilised in applications where the criteria for inspection can be precisely defined and where the environment remains relatively stable and predictable. This includes tasks such as quality control in manufacturing, where specific measurements and thresholds can be configured to meet specifications.
AI-based machine vision is particularly suited to applications with high variability and complexity, where traditional rule-based methods may start to struggle. It excels in tasks such as surface inspection, texture inspection and object classification, especially when dealing with evolving defects.
To determine if your application needs AI, consider factors such as:
- the variability of the objects to be inspected,
- the availability of high-quality image data for training,
- and the need for adaptability to changing conditions.
If your application involves dynamic environments or requires continuous improvement without manual intervention, AI-based machine vision may be the preferable choice for you.
While machine learning systems offer greater flexibility and adaptability, they require significant initial investment in data collection, training, and development. Conversely, rule-based systems are typically simpler to implement, but may become more complex and difficult to manage with additional rules and mitigating factors. Ultimately, users should always evaluate their specific needs and goals with an expert to determine the most suitable approach for their applications.
EDGE LEARNING VS DEEP LEARNING
Within AI applications, edge learning distinguishes itself from deep learning by prioritising ease of use throughout every deployment stage. It needs fewer images to train, reducing the time required for image setup and acquisition, and eliminates the need for specialised programming.
Deep learning mimics the way human brain neurons interact. It involves exposing numerous layers of neural networks to a large collection of similar images. Through iterative adjustments of these connections, deep learning can accurately recognise objects and detect defects without explicit instructions.
This approach excels at processing extensive and very detailed image sets, making it suitable for complex or highly custom applications. These applications are characterised by significant variability, requiring advanced computational resources and extensive training. To accommodate such diversity and capture all possible outcomes, training sets often consist of thousands of images. Deep learning is designed to swiftly and efficiently analyse large image sets, providing an effective solution for automating intricate tasks.
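To make the idea of stacked neural network layers concrete, here is a minimal convolutional classifier sketch in PyTorch. The three defect classes, input resolution, and layer sizes are illustrative assumptions rather than a recommended architecture.

```python
# Minimal convolutional network sketch: two conv/pool stages feeding a
# linear classifier. Class count and input size are illustrative.
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # e.g. scratched/contaminated/good
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = DefectClassifier()(torch.randn(1, 3, 224, 224))  # one dummy RGB image
```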
Although traditional deep learning frameworks are well-suited for complex applications, there are several factory automation tasks that are less intricate, making them more appropriate for edge learning.
AI’s potential can be harnessed by embedding application-specific knowledge into neural network connections from the outset. This pre-training significantly reduces computational demands, especially when paired with traditional machine vision tools. The result is edge learning: a streamlined, efficient set of vision tools.
Edge learning tools can be trained in mere minutes with as few as five to ten images, in contrast to deep learning solutions, which might require extended periods of training and hundreds to thousands of images. This simplification of deployment allows manufacturers to scale operations much quicker.
To optimise edge learning for embedded vision systems, training images are downscaled or cropped to specific regions of interest, as in the sketch below. If a line engineer can still tell the relevant features apart in these downscaled images, edge learning tools can be expected to perform well too. However, this optimisation has its limitations: it restricts the use of edge learning in advanced, high-precision defect detection applications, which are better addressed by traditional deep learning solutions.
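A sketch of that preprocessing, assuming hypothetical ROI coordinates and a hypothetical target resolution:

```python
# Crop a region of interest and downscale it before training an edge
# learning tool. ROI and TARGET_SIZE are illustrative placeholders.
import cv2

ROI = (100, 50, 320, 240)   # x, y, width, height of the inspected area
TARGET_SIZE = (128, 128)    # downscaled resolution fed to the tool

frame = cv2.imread("frame.png")
x, y, w, h = ROI
roi = frame[y:y + h, x:x + w]
small = cv2.resize(roi, TARGET_SIZE, interpolation=cv2.INTER_AREA)
cv2.imwrite("training_sample.png", small)  # visually check it is still discernible
```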
DEEP LEARNING WORKFLOW
The AI workflow for deep learning classification involves four main steps:
Preparation: Acquire, Label & Review Data
- Collect image data under conditions similar to the live application.
- Label each object in the dataset consistently and accurately.
- Review and correct any mislabelled data using software, as in the sketch below.
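As a sketch of this preparation step, the snippet below assumes a one-folder-per-class layout (e.g. data/scratched, data/contaminated, data/good) and reports class counts and unreadable files. The layout and file extension are assumptions for illustration.

```python
# Basic dataset review pass over a one-folder-per-class image layout.
from pathlib import Path
import cv2

root = Path("data")
for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    files = list(class_dir.glob("*.png"))
    unreadable = [f.name for f in files if cv2.imread(str(f)) is None]
    print(f"{class_dir.name}: {len(files)} images, {len(unreadable)} unreadable")
    # Severely unbalanced counts or unreadable files are a cue to
    # re-acquire or relabel before training.
```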
Training
- Export the prepared data.
- Use software to analyse the images and automatically learn identifying features.
- Train a classifier based on pre-trained Convolutional Neural Networks (CNNs).
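A hedged sketch of this training step, using a pre-trained ResNet-18 from torchvision as the CNN backbone; the dataset path, batch size, learning rate, and epoch count are illustrative choices, not values from the article.

```python
# Transfer learning sketch: pre-trained backbone, new classification head.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative epoch count
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```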
Evaluation: Verify the Model on Test Data
- Verify model performance using visualisation options like confusion matrices and heatmaps to assess true and false positives and understand the network’s decision-making areas.
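Continuing the training sketch above, a minimal verification pass might compute a confusion matrix on held-out test data (here `test_loader` is assumed to be a DataLoader over that data); heatmap techniques such as class activation maps need additional tooling and are omitted.

```python
# Verification sketch: confusion matrix over held-out test data.
# Assumes `model` and `test_loader` from the training sketch above.
import torch
from sklearn.metrics import confusion_matrix

model.eval()
all_preds, all_labels = [], []
with torch.no_grad():
    for images, labels in test_loader:
        all_preds.extend(model(images).argmax(dim=1).tolist())
        all_labels.extend(labels.tolist())

# Rows are true classes, columns are predictions; off-diagonal entries
# count the false positives and false negatives per class.
print(confusion_matrix(all_labels, all_preds))
```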
Inference: Deploy to New Live Images
- Apply the trained CNN classifier to new images to identify classes such as scratched, contaminated, or good samples.
- Execute inference on GPUs or CPUs for real-time application.
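A sketch of this inference step, reusing the `model` trained above; the class names and image path are illustrative, and the ordering mirrors torchvision's alphabetical class indexing.

```python
# Inference sketch: classify one new live image with the trained CNN.
import torch
from PIL import Image
from torchvision import transforms

classes = ["contaminated", "good", "scratched"]  # ImageFolder sorts alphabetically
device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU for real-time use
tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

model.eval()
model.to(device)
image = tf(Image.open("live_frame.png").convert("RGB")).unsqueeze(0).to(device)
with torch.no_grad():
    pred = model(image).argmax(dim=1).item()
print(f"Predicted class: {classes[pred]}")
```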
ILLUMINATION FOR AI APPLICATIONS
While AI continues to revolutionise automation, it is important to remember that it remains one of many tools operators can use to accomplish tasks. AI systems depend heavily on their hardware and training data, underscoring the critical nature of the training process. Even minor changes in environmental conditions, such as lighting variations, can significantly impact a system's performance and reliability.
General considerations for AI applications include:
Stable illumination
Illumination should be considered early on to avoid having to retrain the vision system due to environmental changes. Lighting sources should be stable and consistent over time to minimise variations in image quality, which can lead to false classification results.
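One simple way to monitor stability, sketched below, is to track the mean grey level of captured frames against a reference measured at commissioning; the reference value and tolerance are hypothetical.

```python
# Illumination stability sketch: flag drift in the mean grey level.
import cv2

REFERENCE_MEAN = 120.0   # mean grey level measured at commissioning (example)
TOLERANCE = 10.0         # acceptable drift before raising an alert (example)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
mean_level = frame.mean()
if abs(mean_level - REFERENCE_MEAN) > TOLERANCE:
    print(f"Lighting drift detected: mean level {mean_level:.1f}")
```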
Uniformity
Illumination should be uniform across the entire field of view. Uneven lighting creates brightness gradients and hotspots that the vision system may misinterpret as features or defects, and differences between training and production conditions can skew classification results.
Intensity
The intensity of the illumination should be sufficient to provide clear and detailed images without causing glare or unwanted reflections that can affect the accuracy of the analysis. Overexposure, whereby the intensity of light exceeds the dynamic range of the camera sensor, can lead to loss of detail in bright areas, while insufficient illumination can result in dark or noisy images.
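A simple exposure check along these lines might flag frames with too many pixels at either end of the sensor's range; the 1% limit and the near-saturation thresholds below are illustrative, not standards.

```python
# Exposure check sketch: count pixels near saturation or near black.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
total = frame.size
overexposed = np.count_nonzero(frame >= 250) / total   # near top of 8-bit range
underexposed = np.count_nonzero(frame <= 5) / total    # near bottom of range

if overexposed > 0.01:
    print(f"Warning: {overexposed:.1%} of pixels near saturation (lost highlights)")
if underexposed > 0.01:
    print(f"Warning: {underexposed:.1%} of pixels near black (noisy shadows)")
```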
CASE STUDY: SUCCESSFUL IMPLEMENTATION OF AI INSPECTION
Leveraging AI to enhance the inspection process, this deep learning inspection system inspects car parts while the vehicles are in motion, handling challenging shiny and reflective surfaces and conducting over 60 inspections per vehicle.
By adopting AI technology, the application achieves precise anomaly detection, analysis, and classification, even with variations, based on examples provided to the system. This enables consistent and reliable decision-making with maximum accuracy, akin to human learning models.
SUMMARY
IN BRIEF:
- Applications with high variability and complex inspections benefit from AI-based vision systems.
- Stable environments with well-defined inspection criteria can rely on rule-based systems.
- Proper illumination is crucial for AI applications to ensure consistent image quality. Stable, uniform, and adequately intense lighting prevents the image quality variations that can lead to false classifications.
- Choosing illumination early avoids retraining due to environmental changes and ensures consistent, accurate analysis. This reduces the need for costly adjustments and delays, ensuring more efficient operation.
The future of manufacturing is being shaped by advancements in AI, offering unprecedented possibilities for efficiency, productivity, and cost-effectiveness. From optimising production processes to developing personalised products, AI is transforming every aspect of the manufacturing industry. Depending on the industry, the task itself, and the quantity and quality of training data, AI-driven systems can improve decision-making and adapt effectively to new challenges.
By embracing deep learning and adopting a data-driven approach, manufacturers can unlock new opportunities for growth, innovation, and sustainability. This results in smoother operations within today’s intricate supply chains, leveraging AI’s capacity to integrate data-driven insights, allocate resources effectively, and adjust production schedules responsively.
[1] Statista, "Artificial intelligence (AI) market size worldwide from 2020 to 2030", https://www.statista.com/forecasts/1474143/global-ai-market-size