Revista Facultad de Ingeniería Universidad de Antioquia

Print ISSN 0120-6230, Online ISSN 2422-2844

Rev. Fac. Ing. Univ. Antioquia, No. 58, Medellín, Apr./June 2011

 

Artificial vision and identification for intelligent orientation using a compass


Alejandro Israel Barranco Gutiérrez1*, José de Jesús Medel Juárez1,2

1Applied Science and Advanced Technologies Research Center, Unidad Legaría 694 Col. Irrigación. Del Miguel Hidalgo C. P. 11500, Mexico D. F. Mexico

2Computer Research Center, Av. Juan de Dios Bátiz. Col. Nueva Industrial Vallejo. Delegación Gustavo A. Madero C. P. 07738 México D. F. Mexico


Abstract

A method to determine the orientation of an object relative to magnetic North, using computer vision and identification techniques applied to a hand compass, is presented. This is a necessary capability for intelligent systems that move, beyond the responses of GPS, which only locates objects within a region. Intelligent systems commonly have vision tools and identification techniques, so they can read their heading from a hand compass without relying on a satellite network or on external objects that indicate their location. The intelligent guidance method is based on recognizing the red needle of a compass in an image, filtering the resulting image, and obtaining the needle's angle, which gives the orientation of the object.

Keywords: Computer vision, identification, hand compass, RGB image.




Introduction

This paper demonstrates a computer vision application with identification techniques in which the computer understands the meaning of the hand compass reading. The compass, a device used to determine geographical directions, usually consists of one or more magnetic needles, horizontally mounted or suspended and free to pivot until aligned with the planet's magnetic field [1-3]. Autonomous mobile robots need methods to locate themselves with respect to their objective, commonly linked to nearby environments [1], and in the current technological situation some of them travel around the world autonomously using GPS (Global Positioning System) tools [2]. However, what happens when GPS cannot operate suitably? One solution is a computer vision application that understands the meaning of hand compass orientation for intelligent robot systems. For example, airplane navigation viewed as an intelligent system is based on GPS, and it has limits with respect to trajectory changes. In this case, the intelligent system relies on the VOR (VHF Omnidirectional Range) network to indicate the trajectory, as illustrated in figure 1.

The intelligent system changes to manual operation when communication with the airplane breaks down. This combined toolset has many technical limitations, so a normal hand-compass orientation methodology could serve as a tool in intelligent machines, operating with onboard computer vision and identification techniques.

The system used to locate an airplane's position over land is GPS, but following its trajectory under aerodynamic conditions requires the VOR network illustrated in figure 1. The resulting combined system is very complex: if either component fails, the airplane loses its trajectory and must change to manual mode, using the compass for orientation. The airplane carries a set of intelligent systems and identification techniques that shut down when external problems occur.




Additionally, this paper focuses on an intelligent robot that needs to locate its position in an environment where neither GPS nor VOR operates. It must determine the North Pole from a traditional magnetic hand compass, using computer vision and identification techniques. The analysis consists of taking digital images of a compass and identifying the needle, which determines the hand compass angle with respect to magnetic North. An example of the images used in the experiment is shown in figure 2; the image resolution is 1152 × 864 pixels, and the black background and red needle are expressed in gray tones.

This methodology works within limits [4, 5]: the distance between the compass and the camera is variable, and the angle between the camera's normal and the compass plane does not exceed 20 degrees, as shown in figure 3.

 




Many computer vision tools are useful for extracting representative information from the compass image, such as image enhancement, image thresholding, image segmentation, and image analysis [6-8]. These tools are combined to locate magnetic North following the methodology proposed in figure 4.

Scenario for an intelligent robot approach

The scenario described above is a good option, but it is important to compare the efficiency of this method against GPS. First we describe how GPS works and the environment in which an intelligent robot can act, which is the same for GPS and for the RTDM (Red Thresholding Discrimination Method). The two methods complement each other, giving an intelligent robot more autonomy when working in different environments and circumstances.

GPS was first designed for military purposes. The US military developed and implemented a network of satellites around Earth as a military navigation system, but soon opened it to the public [9]. A GPS receiver's job is to locate four or more of these satellites, determine the distance to each, and use this information to deduce its own location. This operation is based on a simple mathematical principle called trilateration, illustrated in figure 5, and has been implemented in 2D and 3D scenarios [10]. We first explain the 2D case to give a general idea, then the 3D case in the scenario of the intelligent robot, showing its pros and cons. Finally we explain where and how the gray TDM (Thresholding Discrimination Method) complements an intelligent robot's operation.




Trilateration in 2D locates an object using the intersection of three circles; in 3D, three intersecting spheres determine the location zone. Applied to intelligent robots, trilateration permits autonomy in any field: a robot with a defined end point selects its movements, with 2D or 3D information and criteria, to accomplish its objective. However, weather conditions, noise, or vibrations in a small room can make GPS give wrong information, affecting the robot's location [11]. The robot's computer system therefore requires intelligent algorithms that bound the disturbances and allow the trajectory to meet its conditions. Figure 6 shows the situation where the robot is inside an isolated environment.
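The 2D case described above can be sketched numerically. Subtracting the first circle equation from the other two cancels the quadratic terms, so the position follows from a 2×2 linear solve. This is a generic illustration of trilateration, not the authors' implementation:

```python
import numpy as np

def trilaterate_2d(centers, distances):
    """Locate (x, y) from three circle equations by subtracting the
    first equation from the other two, which yields a linear system."""
    (x1, y1), (x2, y2), (x3, y3) = centers
    r1, r2, r3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]], dtype=float)
    b = np.array([r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2,
                  r1**2 - r3**2 - x1**2 + x3**2 - y1**2 + y3**2])
    return np.linalg.solve(A, b)

# Receiver truly at (1, 2); ranges measured to three known anchors.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
dists = [np.hypot(1.0 - ax, 2.0 - ay) for ax, ay in anchors]
pos = trilaterate_2d(anchors, dists)  # recovers [1. 2.]
```

The 3D case is analogous with a third coordinate and a 3×3 system built from four sphere equations.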

RTDM avoids these obstacles. It is based on a hand compass registered by images, which lets the robot evolve its trajectory taking into account the last coordinates and the current location with respect to the GLIM (Global Localization Image Map), whose operation is described in a block diagram; the robot thus creates an intelligent strategy in a fixed scenario (see examples: [1, 12, 13]), as shown in figure 7.

This approach combines two techniques, GPS and RTDM, to give a new location [14].

 


Experiment

Image enhancement

This experiment requires enhancing the images to eliminate high-frequency noise, because the camera is sensitive to high-frequency noise and to varying illumination. Before filtering, it was necessary to examine the RGB color image; each RGB channel can be used independently. At the first image-processing stage we apply a low-pass filter in the form of an arithmetic-average filter [15, 16]. For each RGB channel we obtain the intensity matrices fR(x, y), fG(x, y), and fB(x, y), respectively, as shown in figure 8 a). Their filtered versions are described by the equations in (1) and shown in figure 8 b).

Where faaR(x,y) is the red intensity image deviation, faaG(x,y) is the green, and faaB(x,y) is the blue and n = 3 is the filter kernel dimension applied for each RGB intensity matrix [17, 18]. For example, figure 8 a) shows the red image intensity, and figure 8 b) shows the filtered image, observing that the first image intensity has noises that affect the contents.




The same filtering process was applied to the green and blue channels, producing faaG and faaB, respectively.
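The arithmetic-average filter of equation (1) can be sketched as follows. The 3×3 kernel follows n = 3 in the text, but the border handling (edge replication here) is an assumption, since equation (1) is not reproduced in this copy:

```python
import numpy as np

def mean_filter(channel, n=3):
    """n x n arithmetic-average (low-pass) filter; borders are handled
    by edge replication, an assumption not stated in the paper."""
    pad = n // 2
    padded = np.pad(channel.astype(float), pad, mode="edge")
    h, w = channel.shape
    out = np.zeros((h, w))
    for dy in range(n):            # accumulate the n*n shifted copies
        for dx in range(n):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (n * n)

rgb = np.random.randint(0, 256, (864, 1152, 3))           # synthetic frame
faaR, faaG, faaB = (mean_filter(rgb[..., c]) for c in range(3))
```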

Color image thresholding

At this stage we separate objects in the RGB image [15] to isolate the object of interest: the hand compass needle. First we obtain the hand compass information and the color-separated images. The North pointer is red, so we segment the red color, proposing a simple segmentation process after a thresholding stage implemented with Otsu's method [6, 8]. We obtained three binary compass images (red, green, and blue), shown in figures 9 a), 9 b), and 9 c), respectively. An important observation is that the white needle contains considerable amounts of red, green, and blue, so the white pointers appear in all three threshold images.
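Otsu's method picks the threshold that maximizes the between-class variance of the gray-level histogram. A minimal sketch, applied per RGB channel:

```python
import numpy as np

def otsu_threshold(channel):
    """Return the gray level t maximizing the between-class variance
    of the histogram (Otsu, 1979)."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # undefined at empty classes
    return int(np.argmax(sigma_b))

def binarize(channel):
    return channel > otsu_threshold(channel)

# A toy bimodal channel: dark background and a bright needle.
toy = np.array([[10, 10, 200, 200]], dtype=np.uint8)
mask = binarize(toy)                         # isolates the two bright pixels
```

Applying `binarize` to each filtered channel yields the three binary images of figure 9.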

Image segmentation

At this stage we isolate the colors of the studied RGB compass image, shown in figure 10 a), finding the red needle color, expressed symbolically as redpure, according to equation (2).

The result of equation (2) corresponds to the zone shown in figure 10 b). Equation (2) isolates the red pointer in the compass image, and the computer then obtains the image shown in figure 10 c), expressed in gray.
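Equation (2) is not reproduced in this copy. A plausible reconstruction, given the observation that the white pointers pass all three thresholds, is to keep pixels that pass the red threshold but not the green or blue ones; the name `red_pure` follows the text, while the exact logical form is an assumption:

```python
import numpy as np

def red_pure(red_bin, green_bin, blue_bin):
    # Keep pixels bright only in red: white areas are suppressed because
    # they are bright in the green and blue threshold images as well.
    return red_bin & ~green_bin & ~blue_bin

red   = np.array([[True, True,  False]])   # white pixel, red pixel, dark
green = np.array([[True, False, False]])
blue  = np.array([[True, False, False]])
needle_mask = red_pure(red, green, blue)   # only the middle pixel stays
```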

Skeletonization

At this stage we obtain the mean line of the segmented image in figure 9 c), using the techniques in [14], which remove pixels from object boundaries without allowing objects to break apart. The remaining pixels make up the image skeleton, as illustrated in figure 11. This operation preserves the Euler number [6, 19].
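The cited reference does not name the thinning algorithm; the Zhang-Suen scheme sketched below is one standard choice that iteratively peels boundary pixels while preserving connectivity, leaving a one-pixel-wide skeleton:

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning on a binary image (1 = object, 0 = background)."""
    img = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                  # the two alternating sub-passes
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # Neighbors P2..P9, clockwise from the pixel above.
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1],
                         img[y+1, x+1], img[y+1, x], img[y+1, x-1],
                         img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                   # neighbor count
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1      # 0->1 transitions
                            for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img

bar = np.zeros((9, 20), dtype=np.uint8)
bar[3:6, 2:18] = 1                  # a 3-pixel-thick bar, like a wide needle
skel = zhang_suen_thin(bar)         # thins to roughly a 1-pixel line
```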

Angle estimation using Least Squares Method

To estimate the compass hand angle, we consider the skeleton points (see figure 11 c)). The skeleton point set is expressed as {si := (xi, yi)} ⊂ R², i = 1, ..., n.

Theorem 1. Consider the function f(xi) = a + bxi, f(xi) ∈ R, that describes the sequence si. The parameter estimates are optimal and have the form

â = (Σi yi Σi xi² − Σi xi Σi xi yi) / (n Σi xi² − (Σi xi)²)  and  b̂ = (n Σi xi yi − Σi xi Σi yi) / (n Σi xi² − (Σi xi)²)




Proof 1. The functional error in discrete form is described by the second probability moment:

ε(a, b) = Σi (yi − f(xi))²    (5)

Substituting the proposed function in equation (5):

ε(a, b) = Σi (yi − a − b xi)²    (6)

where a and b are unknown parameters with respect to the trajectory depicted by the skeleton point sequence. In this sense, the gradients of equation (6) with respect to both parameters are:

∂ε/∂a = −2 Σi (yi − a − b xi),  ∂ε/∂b = −2 Σi xi (yi − a − b xi)    (7)

Setting the expressions contained in (7) to zero and simplifying:

Σi yi = n a + b Σi xi,  Σi xi yi = a Σi xi + b Σi xi²    (8)

The equations contained in (8) are symbolically expressed in matrix form as:

[ n       Σi xi  ] [ a ]   [ Σi yi    ]
[ Σi xi   Σi xi² ] [ b ] = [ Σi xi yi ]    (9)

Solving the equations contained in (9), the analytical parameters have the forms:

â = (Σi yi Σi xi² − Σi xi Σi xi yi) / (n Σi xi² − (Σi xi)²),  b̂ = (n Σi xi yi − Σi xi Σi yi) / (n Σi xi² − (Σi xi)²)    (10)

Theorem 2. The recursive functional error has the form given in equation (11) and converges in AAP (Almost All Points) to the stationary value of the optimal functional.

Proof 2. Consider the basic mathematical expression of the second probability moment.

According to [20, 21], Gn is integrated by {si = μ(xi, yi) < ∞, i = 1, ..., n, n ∈ Z+} as a metric sequence set in L2, expressed as a group of radius vectors in ζG. The second probability moment with respect to the identification error has a recursive form under stationary conditions, expressed in equation (11), and the sequence converges in AAP, in agreement with the optimal parameter results in the equations contained in (10).

Theorem 3. With respect to the camera reference point, the relative axes system lies within a relative slope defined analytically in equation (13), where âk is the camera relative slope and b̂k is the hand compass slope. This means that âk is an angular displacement relative to the camera axes.

Proof 3. The relative angular position considered in [5, 13, 17] is a functional of the unknown camera-relative and hand compass slopes, described in the equations contained in (10); the slope estimates converge in AAP in agreement with Theorem 2, obtaining equation (13).

Therefore, the relative angle with respect to magnetic North according to the relative camera axes, considered in equation (13), is depicted in figure 12 c), which shows the identified line slope b̂k.

The relative slope identified from the thin black line through equation (13), based on the equations contained in (10), had a positive angle, corresponding to a deviation of 4 degrees with respect to magnetic North.
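The whole estimation step can be sketched as a direct implementation of the closed forms in (10), with the needle angle recovered from the slope; the synthetic points below stand in for the skeleton of figure 11:

```python
import numpy as np

def fit_line_angle(points):
    """Least-squares fit f(x) = a + b*x over the skeleton points, then
    the needle angle from the slope (closed forms of equations (10))."""
    x, y = points[:, 0], points[:, 1]
    n = len(points)
    denom = n * (x**2).sum() - x.sum()**2
    b_hat = (n * (x * y).sum() - x.sum() * y.sum()) / denom
    a_hat = (y.sum() * (x**2).sum() - x.sum() * (x * y).sum()) / denom
    return a_hat, b_hat, np.degrees(np.arctan(b_hat))

# Hypothetical noise-free skeleton points along y = 1 + 0.5*x.
pts = np.array([(float(i), 1.0 + 0.5 * i) for i in range(10)])
a_hat, b_hat, angle_deg = fit_line_angle(pts)   # slope 0.5, angle = atan(0.5)
```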

Conclusion

In this paper we developed an intelligent orientation methodology that could be used by intelligent machines to locate magnetic North automatically, without relying on an external information system. The combination of stochastic identification techniques and traditional computer vision shows the slope as a thin black line, illustrated in figure 12, corresponding to equation (13).

In future work we will focus on extending the method to dynamic localization across different time intervals, considering Nyquist restrictions.

References

1. A. Ollero. Robótica, manipuladores y robots móviles. Ed. Marcombo. Barcelona. 2001. pp. 8-11.

2. O. Khatib, V. Kumar, D. Rus. "Robust GPS/INS-Aided Localization and Mapping via GPS Bias Estimation". The 10th International Symposium on Experimental Robotics. Brazil, 2008. pp. 1-2.

3. W. H. Hayt. Teoría electromagnética. Ed. Mc Graw Hill. Mexico. 2003. pp. 330-340.

4. R. Hartley, A. Zisserman. Multiple View Geometry in Computer Vision. Ed. Cambridge University Press. 2003. pp. 151-178.

5. K. Voss, J. L. Marroquin, S. J. Gutiérrez, H. Suesse. Análisis de Imágenes de Objetos Tridimensionales. Ed. Polytechnic. Mexico. 2006. pp. 49-74.

6. J. H. Sossa. Rasgos Descriptores para el Reconocimiento de Objetos. Ed. Polytechnic. Mexico. 2006. pp. 10-30.

7. The MathWorks. Image Processing Toolbox for Use with Matlab, User's Guide. 2003. pp. 7/0-7/11.

8. N. Otsu. "A threshold selection method from gray level histograms". IEEE Transactions on Systems, Man, and Cybernetics. Vol. 9. 1979. pp. 62-66.

9. W. J. Lewandowski, W. Azoubib, J. Klepczynski. "GPS: primary tool for time transfer". Proceedings of the IEEE. Vol. 87. 1999. pp. 163-172.

10. S. J. Peczalski, A. Kriz, J. Carlson, S. G. Sampson. "Military/civilian mixed-mode Global Positioning System (GPS) receiver (MMGR)". Aerospace Conference IEEE. Vol. 4. 2004. pp. 2697-2703.

11. G. Bullock, J. B. King, T. M. Kennedy, H. L. Berry, E. D. Zanfino. "Test results and analysis of a low cost core GPS receiver for time transfer applications". Frequency Control Symposium, Proceedings of the 1997 IEEE International. 1997. pp. 314-322.

12. G. A. I. Barranco. Sistema Mecatrónico Controlado Telemáticamente. Ed. Polytechnic. Mexico. 2006. pp. 10-12.

13. G. A. I. Barranco, J. J. Medel. Visión estereoscópica por computadora. Ed. Polytechnic. México. 2007. pp. 1-4.

14. B. Espiau. "A New Approach to Visual Servoing in Robotics". IEEE Transactions on Robotics and Automation. Vol. 8. 1992. pp. 313-326.

15. G. Ritter, J. Wilson. Handbook of Computer Vision Algorithms in Image Algebra. 1996. pp. 125-158.

16. O. Faugeras, F. Lustman. "Motion and Structure from Motion in a Piecewise Planar Environment". Vol. 1. 1988. pp. 2-4.

17. J. Lira. Introducción al tratamiento digital de imágenes. Ed. Polytechnic. Mexico. 2002. pp. 219-336.

18. E. Trucco, A. Verri. Introductory Techniques for 3-D Computer Vision. Ed. Prentice Hall. 1998. pp. 220-292.

19. P. Gonzáles, J. M. De la Cruz. Visión por computador. Ed. AlfaOmega RaMa. Madrid. 2004. pp. 65-294.

20. D. Poole. Algebra Lineal: Una introducción moderna. Ed. Thompson. México. 2007. pp. 577-590.

21. G. A. I. Barranco, J. J. Medel. "Digital Camera Calibration Analysis using Perspective Projection Matrix". Proceedings of the 8th WSEAS International Conference on Signal Processing, Robotics and Automation. 2009. pp. 321-325.


(Received September 21, 2009. Accepted November 30, 2010)

*Corresponding author: phone: + 52 + 57 29 60 00, fax: + 52 + 53 95 41 47, e-mail: barranco_alejandro@yahoo.com.mx (A. I. Barranco)

All the contents of this journal, except where otherwise noted, is licensed under a Creative Commons License.