Illuminating the Future: How AI and Machine Vision Lighting are Transforming Manufacturing

From the wide use of robotics to Artificial Intelligence (AI), advancing technology is revolutionising the way products are designed, made, and delivered. In this article, we will explore how AI is shaping the future of manufacturing, which applications should consider using AI, and how machine vision lighting can optimise manufacturing processes.


Rule-Based vs AI-Based Machine Vision

Automation involves the use of machines and technology to perform repetitive and labour-intensive tasks. These systems are programmed to follow a set of predefined instructions for tasks like quality control, visual inspection, barcode reading, assembly and packaging.

Traditional rule-based machine vision relies on specific if-then rules to make decisions about an image. These systems suit predictable environments because they cannot adapt to changing conditions or learn from new data. Vision engineers program the rules, drawing on deep knowledge of the techniques best suited to the desired output.
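
A rule-based check can be as simple as a fixed threshold on pixel values. The sketch below is purely illustrative (the pixel grids and thresholds are invented, not taken from any vision library): it fails a part when too many dark pixels appear.

```python
# Minimal rule-based inspection sketch: an image is a grid of grey values
# (0 = black, 255 = white). The fixed if-then rule: fail the part if more
# than `max_dark` pixels fall below `dark_threshold`.
def inspect(image, dark_threshold=50, max_dark=3):
    dark_pixels = sum(1 for row in image for px in row if px < dark_threshold)
    return "pass" if dark_pixels <= max_dark else "fail"

clean_part = [[200, 210, 205], [198, 220, 215], [207, 201, 199]]
scratched  = [[200, 10, 205], [12, 8, 215], [207, 15, 199]]

print(inspect(clean_part))  # pass
print(inspect(scratched))   # fail
```

Such rules are fast and transparent, but every new defect type or lighting change means a human must revise the thresholds by hand.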

Artificial Intelligence systems are instead trained on labelled images until they can make distinctions independently. These AI-powered solutions learn, adapt, identify patterns and make decisions based on insights gained from previous experience.

Currently, the global AI market is valued at over $180 billion, with projections indicating it will surpass $826 billion by 2030 (Thormundsson, 2024). Two prominent AI technologies, edge learning and deep learning, further streamline the automation of complex tasks that are challenging to program with rule-based algorithms.

How can I tell if I should use AI?

The difference between rule-based and AI-based vision lies in their approach to analysing and interpreting visual data. Rule-based machine vision operates on a set of predefined rules and parameters, typically set by a vision expert, to identify and classify objects in an image.

Alternatively, AI-based systems can autonomously learn and adapt to different scenarios. These systems handle complex and dynamic environments, offering greater flexibility and scalability. They excel in tasks such as surface inspection, texture inspection and object classification, especially when dealing with evolving defects and diverse visual characteristics.

To determine if your application needs Artificial Intelligence learning, consider these factors:

  • Variability of the objects to be inspected
  • Availability of high-quality image data for training
  • The need for adaptability to changing conditions
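
The three factors above can be combined into a rough decision aid. The heuristic below is only a sketch (the factor names and logic are illustrative, not an established selection procedure) and is no substitute for discussing the application with a vision expert:

```python
def suggest_approach(high_variability, enough_training_images, needs_adaptability):
    """Rough heuristic mapping the three factors to an approach."""
    if high_variability or needs_adaptability:
        # AI only pays off when there is image data to train it with.
        return "AI-based" if enough_training_images else "collect more image data first"
    return "rule-based"

print(suggest_approach(True, True, False))    # AI-based
print(suggest_approach(False, False, False))  # rule-based
```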

If your application involves dynamic environments or requires continuous improvement without manual intervention, AI-based machine vision may be the preferable choice. Additionally, consider how machine vision lighting systems can support accurate and reliable image acquisition in such scenarios.

While AI-based systems offer greater flexibility and adaptability, they require significant initial investment in data collection, training, and development. Conversely, rule-based systems are simpler to implement at first, but can become complex and difficult to manage as requirements grow. Ultimately, users should always evaluate their specific needs and goals with an expert to determine the most suitable approach for their applications.


Deep Learning vs Edge Learning

Within AI applications, edge learning prioritises ease of use throughout every deployment stage. Compared to deep learning, it needs fewer images to train, reducing the time required for image setup and acquisition, and eliminates the need for specialised programming.

Deep learning mimics the way human brain neurons interact. It involves exposing numerous layers of neural networks to a large collection of similar images. Through iterative adjustments of these connections, deep learning can accurately recognise objects and detect defects without explicit instructions.

Deep learning excels at processing large, highly detailed image sets swiftly and efficiently, making it suitable for complex or highly customised applications and an effective way to automate intricate tasks.

Edge learning tools can be trained in mere minutes with just five to ten images, in contrast to deep learning solutions, which might require extended periods of training and hundreds to thousands of images. This simplification of deployment allows manufacturers to scale operations much quicker.

To optimise edge learning for embedded vision systems, training images are downscaled or focused on specific regions of interest. However, this restricts the use of edge learning in advanced, high-precision defect detection applications, which are better addressed by traditional deep learning solutions.
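
Region-of-interest cropping and downscaling can be sketched in a few lines. This is a pure-Python stand-in for what an embedded pipeline would do in optimised code, with an invented 8×8 synthetic image:

```python
def crop_roi(image, top, left, height, width):
    """Keep only the region of interest, e.g. where the inspected feature sits."""
    return [row[left:left + width] for row in image[top:top + height]]

def downscale(image, factor):
    """Naive downscale: keep every `factor`-th pixel in both directions."""
    return [row[::factor] for row in image[::factor]]

image = [[c + 10 * r for c in range(8)] for r in range(8)]  # 8x8 synthetic image
roi = crop_roi(image, top=2, left=2, height=4, width=4)     # 4x4 region
small = downscale(image, factor=2)                          # 4x4 image
print(len(roi), len(roi[0]))      # 4 4
print(len(small), len(small[0]))  # 4 4
```

Both operations shrink the data each training image carries, which is exactly why fine detail can be lost for precision defect work.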

Deep Learning Workflow


The AI workflow for deep learning classification includes four stages:

Preparation: Acquire, Label & Review Data

  • Collect image data under conditions similar to the live application.
  • Label each object in the dataset consistently and accurately.
  • Review and correct mislabelled data using software.
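
A simple automated review pass can catch obvious labelling problems before training. The class names and dataset format below are purely illustrative:

```python
from collections import Counter

VALID_LABELS = {"good", "scratched", "contaminated"}  # illustrative classes

def review_labels(dataset):
    """Count items per class and collect files whose label is missing or unknown."""
    counts = Counter()
    suspect = []
    for filename, label in dataset:
        if label in VALID_LABELS:
            counts[label] += 1
        else:
            suspect.append(filename)
    return counts, suspect

data = [("img_001.png", "good"), ("img_002.png", "scratched"),
        ("img_003.png", "goood"), ("img_004.png", "")]
counts, suspect = review_labels(data)
print(dict(counts))  # {'good': 1, 'scratched': 1}
print(suspect)       # ['img_003.png', 'img_004.png']
```

Class counts also reveal imbalance early, before it skews training.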

Training

  • Export the prepared data.
  • Use software to analyse the images and automatically learn identifying features.
  • Train a classifier based on pre-trained Convolutional Neural Networks (CNNs).
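
In the real workflow the classifier sits on top of features extracted by a pre-trained CNN. As a runnable stand-in for that idea, the sketch below trains a nearest-centroid classifier on hand-made two-dimensional "features" (the feature names and values are invented for illustration):

```python
def train_centroids(features_by_class):
    """Stand-in for training: learn one mean feature vector per class."""
    centroids = {}
    for label, vectors in features_by_class.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def classify(centroids, feature):
    """Assign the class whose centroid is nearest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], feature))
    return min(centroids, key=dist)

training = {
    "good":      [[0.9, 0.1], [0.8, 0.2]],   # e.g. (smoothness, edge density)
    "scratched": [[0.2, 0.9], [0.3, 0.8]],
}
model = train_centroids(training)
print(classify(model, [0.85, 0.15]))  # good
print(classify(model, [0.25, 0.90]))  # scratched
```

A CNN learns far richer features automatically, but the train-then-classify structure is the same.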

Evaluation: Verify the Model on Test Data

  • Verify model performance using visualisation options (confusion matrices and heat-maps) to assess true and false positives and understand the network’s decision-making areas.
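
A confusion matrix is straightforward to compute from true and predicted labels; the two-class example below is illustrative:

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows = true class, columns = predicted class."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

labels = ["good", "scratched"]
y_true = ["good", "good", "scratched", "scratched", "good"]
y_pred = ["good", "scratched", "scratched", "scratched", "good"]
print(confusion_matrix(y_true, y_pred, labels))  # [[2, 1], [0, 2]]
```

Here the off-diagonal 1 is a good part wrongly flagged as scratched (a false positive), which is exactly the kind of error the matrix makes visible.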

Inference: Deploy to New Live Images

  • Apply the trained CNN classifier to new images to identify classes such as scratched, contaminated, or good samples.
  • Execute inference on GPUs or CPUs for real-time application.

Illumination for AI Applications

While AI continues to revolutionise automation, it remains one among many tools operators can use to accomplish tasks. AI systems depend heavily on the hardware and training data, underscoring the critical nature of the training process. Even minor changes in environmental conditions, such as lighting variations, can significantly impact the system’s performance.

For AI applications in general, machine vision lighting should be treated as part of the system design: well-chosen, well-controlled illumination enhances accuracy and stability, which is critical for consistent results. Key considerations include:

  • Stable illumination

Illumination should be considered early on to avoid having to retrain the vision system due to environmental changes. This will minimise variations in image quality, which can lead to false classification results.

  • Uniformity

Uniform illumination across the field of view improves image quality and lets processing run at full speed, since algorithms do not have to compensate for brightness gradients.

  • Intensity

The intensity of the illumination should be sufficient to provide clear and detailed images without causing glare or reflections that can affect the accuracy of the analysis.
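
Before images reach the classifier, a simple acquisition-time check can flag lighting drift (mean brightness) and non-uniformity (spread across the frame). The thresholds and pixel values below are illustrative only:

```python
def lighting_ok(image, target_mean=128, mean_tol=30, max_range=60):
    """Flag frames whose brightness drifts or varies too much across the image."""
    pixels = [px for row in image for px in row]
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)  # crude uniformity measure
    return abs(mean - target_mean) <= mean_tol and spread <= max_range

uniform = [[120, 130, 125], [128, 132, 126]]
glare   = [[120, 255, 125], [128, 132, 40]]
print(lighting_ok(uniform))  # True
print(lighting_ok(glare))    # False
```

Rejecting or re-acquiring such frames is far cheaper than retraining a model that has silently learned the wrong lighting conditions.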

Examples of lighting technologies include: Bar lights for wide, uniform illumination, ring lights for shadow-free lighting around objects, and diffused backlights for enhancing contrast in silhouette imaging. These solutions are essential to ensure optimal performance in AI-driven systems, particularly in applications requiring consistent and high-quality image acquisition. 

Sources: Thormundsson, B. (2024). Artificial intelligence (AI) market size worldwide from 2020 to 2030. https://www.statista.com/forecasts/1474143/global-ai-market-size

Related products

  • MR-Dome

    • Powerful dome light
    • Overdrive, sectors, bicolour
    • 80mm and 130mm IntØ
  • Modular M-EBAR

    • Modular bar light
    • Angle Changers flexibility
    • Sizes up to 500mm