Automated Vehicle Inspection (AVI) systems are taking over from traditional assessment methods in many applications, allowing operators to assess new or used vehicles for damage or imperfections quickly and consistently, without needing specialist skills. As Inspektlabs puts it: “Our products conduct damage detection on items using photos and videos, eliminating the need for physical inspections”. The most common applications are damage assessment for insurance claims, appraisal when used vehicles are brought to a dealer for trade-in, and checks when rental vehicles are returned, but these systems are also being used at the beginning of a vehicle’s life, when it is delivered from the manufacturer, and even for preventative maintenance.
These systems use Artificial Intelligence (AI) techniques to identify issues rapidly and, it seems, reliably. US used car wholesaler Openlane has a network of inspectors who provide condition reports using a system provided by Israeli firm Click-Ins called Visual Boost AI. This provides “a virtual overlay of any exterior damage detected. The overlays feature hot pink highlights that pinpoint detected damage, including hail, paint peel, detached panels, broken lights, rust, scratches, dents and cracks”. The process of generating eight standard pictures in this fashion apparently takes less than a minute. French firm Tchek claims that its system can analyse a vehicle in a few seconds compared with over 20 minutes for an operator and “allows damage detection of over 90% on average”.
When we talk about AI in these applications, we are not talking about the text-based Large Language Models (LLMs) such as ChatGPT, or the “generative AI” which creates images and video based on existing examples. Neither are AI inspection systems the same as traditional ‘computer vision’ systems, which have long been used in assembly processes such as ‘pick and place’ systems for assembling circuit boards. Here, each electronic component is presented to a camera to ensure it will be placed in the correct orientation, and its attributes are well defined – e.g. its length and width, and whether the positive connector is a different colour from the negative.
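The distinction can be illustrated with a toy sketch of a traditional rule-based check: every attribute and tolerance is chosen by the operator in advance, and the system simply compares measurements against that fixed specification. The attribute names and thresholds below are invented for illustration, not taken from any real system.

```python
# Invented spec for a hypothetical 5mm x 2mm component: each measured
# attribute must fall inside an operator-chosen tolerance band.
SPEC = {"length_mm": (4.9, 5.1), "width_mm": (1.9, 2.1)}

def passes_inspection(measured):
    """Rule-based check: pass only if every attribute is within spec."""
    return all(lo <= measured[key] <= hi for key, (lo, hi) in SPEC.items())

print(passes_inspection({"length_mm": 5.0, "width_mm": 2.0}))  # True
print(passes_inspection({"length_mm": 5.5, "width_mm": 2.0}))  # False
```

Nothing here is learned from data – which is exactly why such systems struggle with irregular objects like dents and scratches, where no fixed tolerance band exists.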
MEDICAL SIMILARITIES
AI inspection systems have more in common with some medical and scientific applications, where the system is not dealing with regular or predictably shaped objects. In these fields, Machine Learning (ML) has been used to identify particular types of images, whether a mass on an MRI scan or a galaxy seen through a telescope. These objects are not defined by specific, operator-chosen attributes (eg shape or colour). Instead, the system is ‘trained’ by showing it images which are classified in the way the operator desires: you might feed it pictures of a thousand galaxies of known types – and give it a list of those types (eg spiral, elliptical, lenticular). This process is known as ‘supervised learning’, as the training data (or ‘dataset’) has already been classified.
The system runs through a set of examples, trying to assign a type to each. Its first attempts will be practically random but, every time it guesses correctly, that information is fed back into the system. This feedback reinforces subsequent decisions, so the correct identification rate should rise. There is also ‘unsupervised learning’, where no manually labelled data is available – and both supervised and unsupervised learning often identify recurring features or patterns which nobody has previously recognised or classified.
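The training loop described above can be sketched in miniature. This is a toy perceptron on made-up two-number “galaxy features”, not a real computer-vision model: the feature values, class names and learning rule are illustrative assumptions, but they show the key behaviour – early guesses are poor, feedback adjusts the model, and the correct-identification rate rises over repeated passes.

```python
import random

random.seed(0)

# Toy labelled dataset: each "image" is reduced to two invented features
# (say, brightness concentration and elongation); classes are illustrative.
def make_sample(label):
    if label == "spiral":
        return ([random.gauss(0.3, 0.05), random.gauss(0.7, 0.05)], label)
    return ([random.gauss(0.7, 0.05), random.gauss(0.3, 0.05)], label)

train = [make_sample(random.choice(["spiral", "elliptical"]))
         for _ in range(200)]

# A minimal linear model: weights start at zero, so first guesses are
# effectively arbitrary.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    return "spiral" if x[0] * w[0] + x[1] * w[1] + b > 0 else "elliptical"

# Supervised training: each wrong guess nudges the weights towards the
# correct answer, so accuracy improves with every pass over the dataset.
for epoch in range(5):
    correct = 0
    for x, label in train:
        if predict(x) == label:
            correct += 1
        else:
            sign = 1.0 if label == "spiral" else -1.0
            w[0] += sign * x[0]
            w[1] += sign * x[1]
            b += sign
    accuracy = correct / len(train)
    print(f"pass {epoch + 1}: correct rate {accuracy:.2f}")
```

Real inspection systems use deep neural networks trained on millions of photos rather than a two-weight model, but the feedback principle is the same.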
HEALTH UPGRADE
ML can be amazingly effective – recent results from The Institute of Cancer Research indicate that an AI algorithm applied to CT scan images can grade sarcomas twice as effectively as more invasive biopsies. As far as vehicles go, Inspektlabs says that its vehicle damage inspection system has been trained on more than 7 million damage asset photos and videos “which allows us to achieve a very high level of accuracy (96%+)”.
Traditional machine vision systems use specialist cameras and lighting systems: for instance, assembly-line systems might use line-scan cameras to capture a single line of pixels at a time, or cameras with a very high frame rate. Specialist lighting includes backlighting to emphasise the overall shape and orientation of an object, or low-angle lighting to catch surface imperfections – like the ‘raking light’ used in manual paint inspections.
However, AI and increased processing power make it possible to use conventional cameras and to compensate for lighting conditions; systems such as Click-Ins’ Visual Boost AI can use images taken by dealers on smartphones, while Ravin’s AutoScan uses CCTV-style cameras to detect body damage on vehicles driving past at up to 30km/h.
Vision systems that incorporate ML are not just getting better, they are getting cheaper and more accessible. Today, processor boards costing a few hundred pounds are optimised for vision applications and ML software platforms such as Edge Impulse can run on low-cost devices such as a Raspberry Pi.
Cameras are not the whole story. Proovstation provides a specialist AI-assisted scanning system for vehicle tyres, using magnetic sensors built into drive-over pads. The scanner – just 30mm deep – measures the tyre’s condition within two seconds and delivers a report giving tread depth across its width, wheel alignment, recommendations on tyre rotation and the gap between wear patterns on different axles. The system has been adopted by Michelin, which calls it QuickScan, adding features such as tyre residual value assessment.
AUTOMATED ASSEMBLY ADVANTAGES
Manufacturers also use AVI on the production line – for example, Toyota has a DeGould Auto-Compact system at its US plant in Mississippi. This system looks for damage and mistakes in assembly, and then checks that the correct specification of options such as wheels and bodywork has been fitted to a particular vehicle. The Auto-Compact system is a free-standing drive-through framework – including cameras, lights and processing unit – which takes up just 18m2. Vehicles can be driven through, or even run on a conveyor belt for maximum throughput. The system uses 10 high-resolution DSLR cameras and six or more machine vision cameras, with a combination of lighting fields, to take hundreds of ultra-high-resolution images within a few seconds. The firm says its algorithms are based on over 100 million image datasets and can identify defects down to 1mm in size – picking up “twice the number of defects as a human vehicle inspector”.
Amazon is using a system from UVeye to inspect its fleet of delivery vans, initially in the US. After each shift, drivers take their vans through an AVI archway and over plates equipped with sensors and cameras, at up to 5mph. The system looks for body damage and tyre wear, but also performs an underbody scan of the chassis and running gear – this combines images from more than one camera for a stereoscopic 3D scan, and was originally developed for scanning vehicles at borders and checkpoints.
This system can give valuable information regarding immediate repairs and preventative maintenance. Tom Chempananical, global fleet director at Amazon Logistics, says: “It can keep track of detected vehicle issues and see if they are repeatedly happening on particular routes”.
Automated inspection is not just confined to road vehicles, of course: train inspection is also done using cameras and other sensors, and AI-driven systems are used for track inspection. Other types of infrastructure are inspected this way too: for instance, power lines and pylons can be checked via drone images. Grid Vision’s system combines AI inspection with manual verification: the inspector can check that the system has identified and classified issues correctly, and can add issues that the system has not identified. This ‘collaborative AI’ approach should improve the performance of the system overall.