Institute of Cognitive Integrated Sensor Systems
Prof. Dr.-Ing. Andreas König


A Reconfigurable, Robust Robot Vision System for Medical Laboratory Automation

Subject:
The project contributes to the automation of medical laboratory tasks by automating the handling of blood samples with a medical robot. To discern different container or tube types, the robot was augmented with an appropriate vision system based on QuickCog under cost and real-time constraints. In particular, the expected large number of different installation sites required the consideration of reconfiguration and adaptation mechanisms for robot system initialization and recalibration.

Abstract:
A common practical task is the analysis of samples collected from patients at doctors' consulting rooms. Blood samples and other samples are sent to laboratories for analysis in large numbers. The automation of this process requires identification, handling, and tracking of the sample containers by automation equipment. Handling and opening in particular require special care due to the variety and diversity of the different tube types and caps, which is illustrated in the following figure:


Erroneous type identification and handling of sample containers can lead to fatal damage and consequently to the destruction of the sample itself. To avoid this precarious situation, the handling robot should be equipped with sufficient sensory capability. Numerous related problems can commonly be solved by vision solutions at reasonable cost. Thus, the medical robot conceived for automated handling and opening was augmented in this work with a dedicated vision system for tube type detection and assessment. The following figure exemplifies important steps and options in the design and reconfiguration of the medical robot vision system:


The design activities are related to other ISE activities on image and sensor signal system design automation. However, in this project the focus of the research was on robustness, multi-instance deployment, and drift/aging compensation by reconfiguration. In-the-Loop-Learning, e.g., based on Evolutionary Computation or related techniques, was considered in a manner similar to its application in the Evolvable Hardware activities, as illustrated below:


The abstraction of this adaptation or reconfiguration mechanism was applied in part to the dedicated vision system developed for the medical laboratory robot. The robot and its vision system take the role of the Evolvable Hardware in this approach. Adaptation or reconfiguration in a supervised scheme is greatly simplified in the given case, as tube cartridges with clear class or type affiliations are available in sufficient numbers for calibration cycles, including the crucial image acquisition step. This is sketched in the following figure:

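To make this concrete, the following Python sketch illustrates such a supervised in-the-loop calibration cycle: a simple evolution strategy tunes a set of vision-pipeline parameters until labeled calibration tubes acquired on the target installation are classified correctly. The function names, parameter encoding, and fitness definition are illustrative assumptions, not the actual QuickCog implementation.

    # Minimal sketch of machine-in-the-loop learning for recalibration:
    # a simple evolution strategy tunes vision-pipeline parameters so that
    # labeled calibration tubes acquired on the target installation are
    # classified correctly. Names and fitness definition are assumptions.
    import random

    def fitness(params, calibration_set, classify):
        """Fraction of labeled calibration images classified correctly."""
        hits = sum(1 for image, label in calibration_set
                   if classify(image, params) == label)
        return hits / len(calibration_set)

    def calibrate(initial_params, calibration_set, classify,
                  generations=50, offspring=8, sigma=0.1):
        """Return the best parameter set found and its calibration accuracy."""
        parent = dict(initial_params)
        parent_fit = fitness(parent, calibration_set, classify)
        for _ in range(generations):
            # Mutate the parent into several candidates and keep the best one.
            candidates = [
                {key: value + random.gauss(0.0, sigma) for key, value in parent.items()}
                for _ in range(offspring)
            ]
            scored = [(fitness(c, calibration_set, classify), c) for c in candidates]
            best_fit, best = max(scored, key=lambda pair: pair[0])
            if best_fit >= parent_fit:
                parent, parent_fit = best, best_fit
            if parent_fit == 1.0:   # every calibration sample handled correctly
                break
        return parent, parent_fit

In such a loop, the classify callback stands for the complete image acquisition and processing chain on the installed machine, which is what closes the loop over the real hardware.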
In the current state, only a fraction of the system is reconfigurable by automated learning mechanisms. With regard to the cost constraints and feasibility issues of a real-world industrial application, the need for reconfiguration in different phases of the product lifetime was assessed. The current emphasis of self-calibration is on supporting the calibration activities required at the customer's installation, so that the presence of service staff can be avoided. The baseline considerations are summarized in the following figure:

The elaborated concepts have been embodied in an extended QuickCog implementation and integrated first into the former DAVID robot system and currently into the OLA 2500 of Olympus. The following pictures show the OLA 2500 with its decapper unit and side and top views of several tube types:

Different and new tube types, and in particular textured caps, denoted as tiger caps, impose further challenges on system enhancement:

Image processing in the developed vision system works in several channels derived from the different views, computing different features, e.g., related to shape and/or color information. More details on these processing steps can be found in the recent references given below. The following figure shows a feature space (see the dimensionality reduction and data visualization activities of ISE) that results from the feature computation stage:


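Purely as an illustration of the multi-channel idea, the following sketch computes a few shape features from a binary side-view silhouette and a few color features from the cap region of the top view, using NumPy; the actual feature set of the deployed system differs and is documented in the references below.

    # Illustrative multi-channel feature computation for one tube observation.
    # Shape features come from a binary side-view silhouette, color features
    # from the cap region in the top view; the concrete features are
    # assumptions, not the production feature set.
    import numpy as np

    def shape_features(side_view_mask):
        """Simple geometric descriptors from a binary side-view silhouette."""
        ys, xs = np.nonzero(side_view_mask)
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        area = side_view_mask.sum()
        return np.array([height, width, height / width, area / (height * width)])

    def color_features(top_view_rgb, cap_mask):
        """Mean and spread of the RGB values inside the cap region."""
        cap_pixels = top_view_rgb[cap_mask]            # (N, 3) array of cap pixels
        return np.concatenate([cap_pixels.mean(axis=0), cap_pixels.std(axis=0)])

    def feature_vector(side_view_mask, top_view_rgb, cap_mask):
        """Concatenate per-channel features into one pattern vector."""
        return np.concatenate([shape_features(side_view_mask),
                               color_features(top_view_rgb, cap_mask)])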
Final classification is achieved in a hierarchical approach, employing PNN classifiers and a fusion stage, which also exploits rules for consistency checks and reject options in the final decision making:

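The following sketch shows the basic mechanics of a PNN classifier with a reject option and a rule-based consistency check for the fusion stage; the smoothing parameter, reject threshold, and consistency rule are illustrative assumptions rather than the settings used in the product.

    # Minimal Parzen-window PNN with a reject option and a simple consistency
    # rule for the fusion step; sigma, the reject threshold, and the rule are
    # illustrative assumptions, not the production settings.
    import numpy as np

    def pnn_posteriors(x, train_X, train_y, sigma=0.5):
        """Normalized Parzen-window class scores for pattern x (NumPy arrays)."""
        scores = {}
        for cls in np.unique(train_y):
            diffs = train_X[train_y == cls] - x
            kernels = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2))
            scores[cls] = kernels.mean()
        total = sum(scores.values()) or 1e-12          # avoid division by zero
        return {cls: s / total for cls, s in scores.items()}

    def classify_with_reject(x, train_X, train_y, sigma=0.5, threshold=0.8):
        """Return the winning class, or None (reject) if the decision is unsure."""
        post = pnn_posteriors(x, train_X, train_y, sigma)
        best = max(post, key=post.get)
        return best if post[best] >= threshold else None

    def fuse(tube_type, cap_color, allowed_pairs):
        """Consistency check: reject combinations that cannot occur in practice."""
        if tube_type is None or cap_color is None:
            return None                                # a channel already rejected
        return (tube_type, cap_color) if (tube_type, cap_color) in allowed_pairs else None

Rejecting uncertain decisions rather than forcing a class assignment trades a small rejection rate for a very low error rate, which matches the benchmark figures reported below.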
The following tables exemplify typical system performance by relevant benchmark data collected from several installation sites:

Total of analysed tubes in test sets:           19,993
Number of sample sets:                          17
Number of different machines:                   16
Size of sample sets:                            168 - 3856
Number of samples per class in training sets:   20 - 100
Number of Cap-Types per sample set:             5 - 18
Number of combinations of Cap and Colour:       7 - 37

 

 

               Correct   Wrong     Rejected   Correct   Wrong   Rejected
               Percent   Percent   Percent    Total     Total   Total
Tube types     98.97     0.03      1.00       19,993    6       202
Caps/Colors    98.78     0.06      1.16       19,955    12      234

The outlined system and its derivatives are in use on more than 250 installations worldwide. Current research focuses on potential system improvements, e.g., by employing support vector machines (SVM) in the decision making or advanced feature computation and fusion techniques. In particular, further advancing system design automation and reconfiguration capability is the focus of the ongoing work. The objective is to obtain improved robustness to deviations/drift/aging and eased adaptation to task modifications, along with improved recognition ability.

 

  Status:   running, duration 06/1999-03/2002, continued to the present as a doctoral project
  Partner:   Streamline-GmbH (Now: Olympus Diagnostica Labautomation GmbH)
  Financing:   Streamline-GmbH (Now: Olympus Diagnostica Labautomation GmbH)
  Contact:   Prof. Dr.-Ing. Andreas König
  Contributors:   Michael Eberhardt and Andreas König
  Publications:    
    M. Eberhardt, R. Hecht, and A. König. Einsatz des Konzepts Machine-in-the-Loop-Learning zum individuellen, robusten Anlernen von Laborrobotersystemen. KI Zeitschrift, issue 02/02, pp. 44-47, 2002.
       
    M. Eberhardt, S. Roth, and A. König. Industrial Application of Machine-In-the-Loop-Learning for a Medical Robot Vision System - Concept and Comprehensive Field Study. In: Special Issue on Advances on Computer-based Biological Signal Processing, Computers and Electrical Engineering (CEE), Elsevier, 2008.