TecnoLógicas

Print version ISSN 0123-7799On-line version ISSN 2256-5337

TecnoL. vol.23 no.48 Medellín May/Aug. 2020

https://doi.org/10.22430/22565337.1408 

Review article

A review of algorithms, methods, and techniques for detecting UAVs and UAS using audio, radiofrequency, and video applications


1 PhD in Engineering, Centro de Desarrollo Tecnológico Aeroespacial para la Defensa, Fuerza Aérea Colombiana, Rionegro - Colombia, jimmy.florez@fac.mil.co

2 MSc. in Information and Communication Technologies, Centro de Desarrollo Tecnológico Aeroespacial para la Defensa, Fuerza Aérea Colombiana, Rionegro - Colombia, jose.ortega@fac.mil.co

3 Project Management Specialist, Centro de Desarrollo Tecnológico Aeroespacial para la Defensa, Fuerza Aérea Colombiana, Rionegro - Colombia, abetancur.cetad@epfac.co

4 MSc. in Information and Communication Technologies, Centro de Desarrollo Tecnológico Aeroespacial para la Defensa, Fuerza Aérea Colombiana, Rionegro - Colombia, andres.garciares@upb.edu.co

5 Electronic Engineer, Centro de Desarrollo Tecnológico Aeroespacial para la Defensa, Fuerza Aérea Colombiana, Rionegro - Colombia, marlon.bedoya@epfac.edu.co

6 PhD in Electronic Engineering, Grupo Automática, Electrónica y Ciencias Computacionales, Instituto Tecnológico Metropolitano, Medellín - Colombia, juanbotero@itm.edu.co


Abstract

Unmanned Aerial Vehicles (UAVs), also known as drones, have evolved exponentially in recent times, due in large part to the development of the technologies that enable them. This has resulted in increasingly affordable and better-equipped devices, which has opened their application to new fields such as agriculture, transport, monitoring, and aerial photography. However, drones have also been used in terrorist acts, privacy violations, and espionage, in addition to involuntary accidents in high-risk zones such as airports. In response to these events, multiple technologies have been introduced to control and monitor the airspace in order to ensure protection in risk areas. This paper is a review of the state of the art of the techniques, methods, and algorithms used in video, radiofrequency, and audio-based applications to detect UAVs and Unmanned Aircraft Systems (UAS). This study can serve as a starting point to develop future drone detection systems with the most convenient technologies that meet specific requirements of scalability, portability, reliability, and availability.

Keywords: Drone detection; Deep Learning detection; Machine Learning classification; sound sensors; video sensors; radiofrequency sensors


1. INTRODUCTION

Unmanned Aerial Vehicles (UAVs) and Unmanned Aircraft Systems (UAS), also known as drones, were once only thought of as military aircraft, and news and government media still show relatively large aircraft controlled by an operator hundreds of miles (or half a world) away. Unmanned aircraft, such as the General Atomics MQ-1 Predator, have become famous for providing surveillance and delivering weapons without putting the operator at risk. As the technology to control and operate unmanned aircraft has become cheaper and widely available, commercial- and consumer-grade drones have been developed by a variety of manufacturers, which has contributed to their growing popularity, increased their commercialization, and made them affordable to anyone [1], [2].

Although UASs can be used in an endless number of applications (such as merchandise transport, aerial photography, agriculture, monitoring, and search and rescue, among others), they also pose new challenges in terms of safeguarding certain areas or spaces susceptible to trespassing, electronic warfare, or terrorist acts [3], [4].

Considering the security problems generated by the wrongful or illegal use of UASs, different approaches to address them have been considered; they include the use of radars, the analysis of the electromagnetic spectrum, audio analysis in the audible spectrum and ultrasound, image analysis in different spectrum ranges, and the implementation of artificial intelligence techniques to improve the accuracy and efficiency of the detection [5].

In addition, there are many types of drones with different technical specifications, including potential payloads, operating frequency, level of autonomy, size, weight, kind of rotors, speed, and other characteristics [6]. This diversity of UAS designs, as well as the specific conditions of the area to be protected, increases the complexity of the solutions.

This article is a state-of-the-art review of methods and techniques used for drone detection through video, radiofrequency and audio-based algorithms, which may serve as a reference point for the study, development, and implementation of these engineering solutions adapted to the characteristics of specific deployment environments. Section 2 of this study focuses on these detection methods and their classification. Finally, Section 3 presents the conclusions.

2. DETECTION METHODS

2.1. Sound detection method

2.1.1. Correlation techniques

Correlation, in the broadest sense, is a measure of the association between variables. In correlated data, the change in the magnitude of one variable is associated with a change in the magnitude of another variable, either in the same (positive correlation) or the opposite direction (negative correlation) [7] (Fig. 1).

Source: [8].

Fig. 1 Relationship between two variables with different correlation coefficients 

In [9], the authors recorded a signal to establish the acoustic fingerprint of a drone and correlated it with another, noisy signal in order to identify the presence or absence of the previously saved signal.

They calculated Pearson, Kendall, and Spearman correlations, obtaining, for a four-rotor drone, maximum similarities of 49.3 %, 65.64 %, and 85.37 %, respectively.
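To make the three coefficients concrete, the following sketch (an illustration written from their textbook definitions, not the implementation used in [9]) computes them with NumPy; the `rank` helper assigns average ranks to tied values.

```python
import numpy as np

def pearson(x, y):
    # Pearson r: covariance divided by the product of the spreads
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

def rank(v):
    # 1-based ranks, averaging the ranks of tied values
    v = np.asarray(v, float)
    ranks = np.empty(len(v))
    ranks[v.argsort()] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        ranks[v == val] = ranks[v == val].mean()
    return ranks

def spearman(x, y):
    # Spearman r_s: Pearson correlation applied to the ranks
    return pearson(rank(x), rank(y))

def kendall(x, y):
    # Kendall tau: (concordant - discordant pairs) / (n(n-1)/2)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, s = len(x), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[j] - x[i]) * np.sign(y[j] - y[i])
    return s / (n * (n - 1) / 2)

# A monotonic but nonlinear relationship: Spearman and Kendall report
# perfect correlation, while Pearson stays below 1.
x = np.arange(1, 9, dtype=float)
y = x ** 3
```

On x = 1..8 and y = x³, `spearman` and `kendall` both return 1.0 while `pearson` is about 0.94, which illustrates why the three coefficients yield different similarity scores on the same pair of signals.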

A. Pearson correlation

Pearson’s correlation coefficient (r) is a measure of linear correlation between two quantitative variables regardless of the scale of the measure. The values determined by the Pearson correlation range between +1 and -1. A correlation value of 1 would indicate a strong direct relationship between the variables; -1, the existence of an inverse relationship; and 0, no linear relationship without ruling out the existence of some other type of relationship (quadratic, exponential, etc.) [10], [11].

Equation (1) represents Pearson's correlation:

r = Σ_{i=1}^{n} (X_i − x̄)(Y_i − ȳ) / ( √[Σ_{i=1}^{n} (X_i − x̄)²] · √[Σ_{i=1}^{n} (Y_i − ȳ)²] )   (1)

where n represents the sample size; X_i and Y_i, the single samples; and x̄, the sample mean of X, defined in (2):

x̄ = (1/n) Σ_{i=1}^{n} X_i   (2)

Likewise, (2) can be applied to variable Y to obtain ȳ.

B. Spearman rank correlation

The Spearman correlation coefficient (r_s) is a measure of the correlation between two variables that can be continuous or discrete. Unlike Pearson's linear correlation, Spearman's quantifies monotonic correlation, that is, between variables that grow or decrease in the same direction, but not necessarily at a constant rate, which makes it suitable for variables with non-normal distributions. The Spearman correlation assigns values between +1 and -1; in either of these two extreme cases, the variables are perfectly monotonically correlated [12].

In particular, a Spearman correlation is recommendable when the data present outliers produced by noisy environments such as airports [13].

The Spearman correlation is given by (3):

r_s = 1 − 6 Σ_{i=1}^{n} d_i² / [n(n² − 1)]   (3)

where d_i is the difference between the ranks of the ordered X and Y values, defined in (4):

d_i = rank(X_i) − rank(Y_i)   (4)

C. Kendall rank correlation

This type of correlation (τ) represents a measure of the degree of statistical dependence between two qualitative ordinal variables. It is used when the degree of linear correlation should be estimated, but the ordinal variables do not have a normal distribution. Because the variables to be analyzed are qualitative, the data should be assigned ranks, which are barely affected by a few moderate outliers. The Kendall coefficient ranges between -1 and +1. For variables X and Y, both of size n, we consider a possible total of n(n-1)/2 pairs of observations [14], [15], [16].

The Kendall correlation between variables X and Y is defined as (5):

τ = S / [n(n − 1)/2]   (5)

where S is the difference between the number of concordant pairs and the number of discordant pairs.

A pair of observations (X_i, Y_i) and (X_j, Y_j) is concordant when Y increases along with X; when Y decreases as X increases, it is a discordant pair.

2.1.2. Linear predictive coding

In [17], the authors used LPC (Linear Predictive Coding) and covered detection distances as long as 40 m using a Cyclone drone. Linear Predictive Coding is a digital method for encoding an analog signal in which a particular value is predicted by a linear function of the past values of the signal [18]. Therefore, it can be performed by minimizing the sum of squared differences between the actual data and the linearly predicted ones [19].

The LPC estimate is given by (6):

r(i) = Σ_{j=1}^{n} y(j) · x(i − j)   (6)

where r(i) is the estimate of x(i); n, a parameter called the model order that determines how many previous samples are used in the estimation; and y(j), the predictor coefficients [20].
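As a minimal sketch of this idea (not the implementation from [17]), the predictor coefficients can be obtained by least squares, minimizing the sum of squared differences between each sample and its linear prediction from the previous n samples:

```python
import numpy as np

def lpc_coefficients(signal, order):
    # Minimize sum_i (x(i) - sum_j y(j) * x(i-j))^2 over the predictor
    # coefficients y(1..order), following the LPC definition.
    x = np.asarray(signal, float)
    A = np.array([x[i - order:i][::-1] for i in range(order, len(x))])
    b = x[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # coeffs[0] multiplies x(i-1), coeffs[1] x(i-2), ...

def lpc_predict(signal, coeffs):
    # Linear prediction of each sample from its `order` predecessors
    x = np.asarray(signal, float)
    order = len(coeffs)
    return np.array([x[i - order:i][::-1] @ coeffs
                     for i in range(order, len(x))])
```

Fitting a second-order predictor to a signal generated by x(i) = 1.5 x(i-1) − 0.7 x(i-2) recovers the coefficients (1.5, −0.7), since such a signal is perfectly linearly predictable.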

2.1.3. K-nearest neighbors algorithm

Machine learning methods have also been used to address the drone detection problem. In [21], the authors introduced real-time drone detection using Plotted Image Machine Learning (PIL), which resulted in an 83 % accuracy, and K-Nearest Neighbors (KNN), which resulted in a 61 % accuracy. The KNN algorithm is one of the simplest similarity-based machine learning algorithms, and it offers an interesting performance in some situations [22]. To classify a new instance, the algorithm finds its nearest neighbors in the training set and lets them vote: the class of the new instance is determined by the most frequent class among its k nearest neighbors. The value of k must be chosen a priori; various techniques have been proposed to select it, such as cross-validation and heuristics. This value should not be a multiple of the number of classes, to avoid tie votes; thus, in the case of a binary classification, k should be assigned an odd value so that a majority is always reached. The performance of the KNN algorithm also depends largely on the measure used to calculate the distances between instances [23].

The characteristics of the observations are recorded for both the training and the test datasets. By default, the KNN function employs the Euclidean distance, which can be calculated with (7):

d(p, q) = √( Σ_{i=1}^{n} (p_i − q_i)² )   (7)

where p and q are the instances to be compared, each with n characteristics [24].
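A minimal KNN classifier following this description (an illustrative sketch, not the system evaluated in [21]) can be written as:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, instance, k=3):
    # Euclidean distance from the new instance to every training instance
    d = np.sqrt(((np.asarray(X_train, float) -
                  np.asarray(instance, float)) ** 2).sum(axis=1))
    # Indices of the k nearest neighbors
    nearest = np.argsort(d)[:k]
    # Majority vote among their classes
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

With k = 3 (odd, to avoid ties in a binary problem), a point near the hypothetical "drone" cluster below is voted into that class.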

2.1.4. Acoustic fingerprinting technique

Audio fingerprinting is a technology used for the exact identification of audio content. Its typical use is to precisely identify a piece of audio in a large collection given a short query (where a query is a potentially distorted or modified audio excerpt) [25].

In [26], a time-frequency fingerprint was extracted by a warning system to recognize drone sounds. The authors reported a drone recognition accuracy of 98.3 % using a classifier based on a Support Vector Machine (SVM), which is a classification technique described in [27].

Audio fingerprinting systems typically aim at identifying an audio recording given a sample of it by comparing the sample against a database to find a match. Such systems generally transform the audio signal, first, into a compact representation (e.g., a binary image) so that the comparison can be performed efficiently [28]. In [29], an algorithm extracted unique characteristics (such as pitch frequencies and spectral characteristics) of all kinds of audio signals within the audible range. Once the extraction was complete, the characteristics were stored as acoustic fingerprints that would later serve to characterize the object of interest. The algorithm also had a second training phase that made it more efficient. The algorithm first obtained the pitch frequency and subsequently extracted the spectral signature. The pitch frequency extraction used autocorrelation.
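The autocorrelation-based pitch-extraction step can be sketched as follows; this is a minimal illustration, not the trained system of [29], and the 50-1000 Hz search range is an assumption:

```python
import numpy as np

def pitch_autocorrelation(signal, sample_rate, fmin=50.0, fmax=1000.0):
    # The lag of the strongest autocorrelation peak (within the allowed
    # range) corresponds to one period of the fundamental frequency.
    x = np.asarray(signal, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    best_lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / best_lag
```

On a pure 220 Hz sine sampled at 8 kHz, the estimate lands within a few hertz of the true pitch; real drone audio would first be framed and windowed before this step.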

2.2. Video detection methods

2.2.1. Method based on object movement and machine learning (ML) with conventional cameras

According to [30], video-based detection methods are useful only to a certain extent. Other authors [31], [32], [33] believe that the detection of an object of interest can be successful if it is based on differences between multiple consecutive frames in a video, which allows extracting the moving object of interest and omitting background pixels.

In [34], the authors used a passive color camera in combination with an active laser range-gated viewing sensor in the Short Wave Infrared (SWIR) band in order to effectively eliminate the foreground and background around an object. In [35], the authors proposed two-frame differencing to detect motion, applying a series of erosion and dilation operations.

Afterward, they used local features, implementing Speeded Up Robust Features (SURF), to distinguish whether the object was a drone or not. To mitigate false alarms, in [35], a coherency score was computed for each blob generated by the two-frame differencing.
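The two-frame differencing step can be sketched as below; the 3×3 morphological passes are a simplified stand-in for the erosion and dilation operations used in [35] (in practice an OpenCV pipeline would typically be used):

```python
import numpy as np

def erode(m):
    # 3x3 erosion: a pixel survives only if all 9 pixels around it are set
    out = np.zeros_like(m)
    out[1:-1, 1:-1] = (m[:-2, :-2] & m[:-2, 1:-1] & m[:-2, 2:] &
                       m[1:-1, :-2] & m[1:-1, 1:-1] & m[1:-1, 2:] &
                       m[2:, :-2] & m[2:, 1:-1] & m[2:, 2:])
    return out

def dilate(m):
    # 3x3 dilation: a pixel is set if any of the 9 pixels around it is set
    out = np.zeros_like(m)
    out[1:-1, 1:-1] = (m[:-2, :-2] | m[:-2, 1:-1] | m[:-2, 2:] |
                       m[1:-1, :-2] | m[1:-1, 1:-1] | m[1:-1, 2:] |
                       m[2:, :-2] | m[2:, 1:-1] | m[2:, 2:])
    return out

def motion_mask(prev, curr, thresh=25):
    # Absolute difference between consecutive frames; pixels that changed
    # more than `thresh` are marked as motion. Erosion removes isolated
    # noise pixels, and dilation restores the size of the surviving blobs.
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = (diff > thresh).astype(np.uint8)
    return dilate(erode(mask))
```

A 5×5 moving blob survives the erode/dilate pass, while a single changed pixel (sensor noise) is suppressed.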

The objective of all these techniques is to subtract the background and identify any flying object in the scene; however, in many other studies, this is complemented with a recognition and classification of the object with more sophisticated methods. Fig. 2, taken from [36], shows the difference between the two concepts, detection and classification, and their importance in order to reduce false alarms.

Source: [36].

Fig. 2 Flying object detections can be filtered by a classifier to reduce the number of false alarms 

Several studies [37], [38], [39] have implemented methods based on machine learning. For instance, SSD (Single Shot Detector, ResNet-101), Faster R-CNN (ResNet-101), YOLOv2, and YOLOv3 have been used for the detection process, achieving drone classification accuracies of 81.92 %, 85.33 %, 70 % to 90 %, and 91 %, respectively.

Faster R-CNN is a detection system composed of two modules. The first one is a convolutional Region Proposal Network (RPN), which generates the region proposals; the second one is the Fast R-CNN detector, which uses them. The RPN takes an image as input and frames the objects in it in rectangles, each one with an objectness score, based on a sliding window [40]. In turn, a Single Shot Detector (SSD) is designed for real-time detection and only needs one shot to detect multiple objects in the image, while detectors like the one mentioned above (Faster R-CNN) need two stages: one to generate the region proposals and one to detect the objects in each of them.

Fig. 3 shows this process: the SSD applies a convolutional network over the input image only once and calculates a feature map; then, after multiple convolutional layers, it executes a simple convolutional kernel to predict the bounding boxes and the classification probabilities [41].

Source: [41].

Fig. 3 (a) The SSD only needs an input image and ground truth boxes for each object during training. In a convolutional fashion, it evaluates a small set of default boxes (e.g., 4) of different aspect ratios at each location in several feature maps with different scales, e.g., (b) 8×8 and (c) 4×4. For each default box, it predicts both the shape offsets and the confidences for all object categories (c1, c2, ..., cp). During the training stage, these default boxes are first matched to the ground truth boxes. For example, the authors matched two default boxes with the cat and one with the dog, which are treated as positives and the rest as negatives. The model loss is a weighted sum of localization loss (e.g., Smooth L1 [42]) and confidence loss (e.g., Softmax). 

YOLO (You Only Look Once), as shown in Fig. 4 and explained in [41], divides the input image into an S × S grid. If the center of an object falls inside a grid cell, that cell is responsible for detecting the object. Each cell predicts B bounding boxes with confidence scores relative to the objects they contain. The network can be trained end to end and offers real-time detection with high accuracy.

Source: [41].

Fig. 4 Image classification process in YOLO 
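The grid-cell assignment at the core of YOLO can be illustrated with a small sketch (illustrative only; S = 7 follows the original YOLO configuration):

```python
def responsible_cell(cx, cy, img_w, img_h, S=7):
    # YOLO divides the image into an S x S grid; the cell containing an
    # object's center (cx, cy) is responsible for detecting that object.
    col = min(int(cx / img_w * S), S - 1)
    row = min(int(cy / img_h * S), S - 1)
    return row, col
```

For a 640×480 frame, an object centered at (320, 240) falls in the middle cell of the 7×7 grid, and that cell alone is tasked with predicting its bounding boxes.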

2.2.2. Thermal radiation method

Thermal cameras capture images in which one color represents warmer areas and another, colder areas (e.g., white and black, respectively). Many color palettes are available to map these temperature measurements, generating different brightness and contrast; the mapping can be adjusted with a linear transfer function that acts as a sliding window whose location and width can be changed [43]. According to [44], the fusion of infrared thermal images with the visible spectrum is useful to detect objects with temperature differentials due to emission or reflection, such as drones and people. A system of this kind that uses conventional sensors captures objective information such as emitted and reflected radiation. By combining the characteristics of each visible color and the target heat signature, the tracking strategy can be more robust and complete.

Fig. 5 shows the result of a test conducted by the authors using background subtraction techniques for movement detection with a thermal camera. These devices can be used for object detection in low-light conditions, such as nighttime, with high reliability [45].

Source: Created by the authors.

Fig. 5 Thermal object detection; implementation of an algorithm for object detection using a thermal camera 

2.3. Radiofrequency detection methods

A. Radio sensors

Another technology used to face this problem is radiofrequency, which is considered an effective method for drone detection because of its long range and early-warning capabilities. It can be used to localize and track both drones and pilots; it can be implemented in small, portable equipment, which can be low cost and passive (therefore, no license is required); and it can detect multiple drones or controllers [46], [47].

The modulation implemented by different drone manufacturers for the radio control of unmanned aircraft is based on techniques such as spread spectrum and frequency hopping, which allow the coexistence of different radiant sources (emitters and receivers) in the same frequency band and prevent or reduce interference. Therefore, the algorithms developed for drone detection use radiofrequency analysis to sense alterations in the electromagnetic spectrum produced by the communication signal between the drone and the controller, which implies an analysis in the time and frequency domains in the specific bands where most drones operate (400 MHz, 2.4 GHz, and 5.8 GHz) [48], [49], [50].
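A simple way to sense such alterations is energy detection on captured I/Q samples: compare the power inside a band of interest against a threshold. The sketch below is a hypothetical illustration (frequencies are relative to the capture's center frequency, and the threshold is an assumed calibration value), not a production detector:

```python
import numpy as np

def band_energy_detect(iq, sample_rate, center, band, threshold_db):
    # Compute the spectrum of the I/Q capture and compare the mean power
    # inside [center - band/2, center + band/2] against a dB threshold.
    spec = np.fft.fftshift(np.fft.fft(iq))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), 1 / sample_rate))
    sel = (freqs >= center - band / 2) & (freqs <= center + band / 2)
    power_db = 10 * np.log10(np.mean(np.abs(spec[sel]) ** 2) + 1e-12)
    return power_db > threshold_db
```

A synthetic tone at +100 kHz inside a 1 MHz-wide capture triggers the detector in its own band but not in an empty band, which is the essence of spotting a control link in the 2.4 GHz or 5.8 GHz band.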

Software-Defined Radio (SDR) is a hardware platform whose modulation techniques, wideband or narrowband operation, and communications security functions (such as hopping) are controlled by software, satisfying the waveform requirements of current and evolving standards over a broad frequency range [51]. In [52], the authors used an SDR USRP B210 to implement the Angle of Arrival (AOA) technique, applying the Pseudo-Doppler principle to calculate the direction in which the drone was detected by adopting two proposed methods: 1) spectral correlation density and cyclic autocorrelation function and 2) analyzing the reflection from a non-cooperative transmitter.

In [53], the authors proposed two methods for identifying physical signatures of drone body movement: the first one was based on an Inertial Measurement Unit (IMU); the second one, on the reflection analysis of a Wi-Fi signal emitted by a transmitter in a cooperative way using an SDR USRP B200 mini. Afterward, the shift and the vibration in the received signal were analyzed, and the drone was identified. The authors reported a precision of 95.9 %, an accuracy of 96.5 %, and a recall of 97 % when they experimented with IMUs at a distance of 10 m. As the distance increases, detection performance falls to 89.4 % accuracy, 86.7 % precision, and 93 % recall at 100 m, and to 84.9 % accuracy, 81.5 % precision, and 90.3 % recall at 600 m. Considering external interference, the authors reported 92 % accuracy, 88.7 % precision, and 96.3 % recall in an environment with 16 active Wi-Fi channels.

Furthermore, in [54], the authors implemented a passive radar using low-cost DVB-T receivers that utilized three television towers emitting signals instead of a dedicated radar emitter. They measured, on the one hand, the signal emitted by the non-cooperative source (called the reference signal); and, on the other hand, the signal reflected by the targets. Since the reference signal is not known, a matched filter approach is required to find delayed copies of this reference signal in the measured signal: a cross-correlation technique can be applied to identify the time-delayed copies of the reference in the measurement. The system was tested with two different applications: short-range moving target detection and moving target tracking.
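The matched-filter idea can be illustrated with a short sketch (hypothetical signals, not the DVB-T processing chain of [54]): cross-correlating the measurement with the reference signal reveals the delay of each reflected copy.

```python
import numpy as np

def find_delay(reference, measurement):
    # Cross-correlate the measured signal against the reference and
    # return the lag with the strongest match, i.e., the delay of the
    # reflected copy of the reference inside the measurement.
    xc = np.correlate(measurement, reference, mode="full")
    return int(np.argmax(np.abs(xc))) - (len(reference) - 1)
```

Embedding a noise-like reference into a longer noisy capture at a known offset, the correlation peak recovers that offset; in a passive radar, the lag maps directly to the bistatic range of the target.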

In contrast, in [55], the authors implemented a Random Forest (RF) classifier to detect, in six different scenarios, the presence of wireless signals using the network traffic for the complete analysis. The authors reported a minimum true positive rate of 0.993 and a false positive rate below 3 × 10⁻³.

B. Radar-based detection

Radars are electromagnetic systems designed to detect and locate target objects (such as aircraft, ships, spacecraft, vehicles, people, and the natural environment) that reflect a signal. They use electromagnetic radio waves to determine the angle, range, or velocity of objects [56]. Radars are also implemented to monitor restricted areas in different ways [57]; however, conventional radars are not optimized to sense small UAVs because they are smaller and slower than traditional aircraft and fly at lower altitudes. Moreover, UAVs normally use rotor blades made of carbon fiber or plastic materials. And the smaller the drone, the more likely its blades are made of plastic, which matters for the visibility of the blades in radar systems [58], [59].

Other radar systems are more compact and versatile, offer high resolution, are more affordable, and adopt different methods. mmWave radars are a special class of radar technology that uses short-wavelength electromagnetic waves. mmWave systems transmit signals with wavelengths in the millimeter range and can detect movements as small as a fraction of a millimeter [60], [61].

Fig. 6 shows two mmWave systems commercialized by National Instruments (a) and Ancortek (b), respectively. The latter was used in [62] to measure the radial velocity signatures and the angular velocity signatures of drone blades at different angles and to get distinct features in the time-frequency domain for its subsequent classification.

Source: [64].

Fig. 6 (a) mmWave Software-Defined Radio (SDR) from NI [63]. (b) mmWave radar kit from Ancortek 

Researchers at the Fraunhofer Institute for High-Frequency Physics and Radar Techniques (FHR) used this technology to simultaneously detect and track three multicopters in real-time in a measurement range from 50 to 150 m [65]. In turn, other authors [66] presented a rationale for using MIMO techniques to thin a transceiver element array without sacrificing image quality and the concepts behind the MIMO overlay or virtual array. They introduced a design of practical MIMO arrays for imaging radars at millimeter-wave frequencies and an analysis of spreading sequences suitable for UAV imaging radars.

These examples show that the use of radar systems based on mmWave technology is effective in drone detection and tracking.

Another technology used for drone detection is the software-defined radar described in [67], which applies the same principles as a software-defined radio: the components that have typically been implemented in hardware (e.g., mixers, filters, modulators, demodulators, detectors, etc.) are implemented using software on a computer or another programmable device, usually a Field-Programmable Gate Array (FPGA) [54]. In [68], the authors presented the development of a multi-band, multi-mode SDR radar platform that consists of replaceable antenna and RF modules in the S-, X-, and K-bands. The transmission of a modulated radar waveform and the reception of its echo are the working principles of the system, which was successfully tested detecting a small drone.

The literature includes other kinds of technologies. For instance, a Holographic Radar (HR) is mentioned in [69] as a 3-D surveillance radar operating in the L-band with high detection capacity. Said radar system can detect miniature UAS in a complex horizon, but it may also detect other small moving objects such as birds due to its high sensitivity. That study provides Doppler characteristics of micro-drones and highlights the fact that Doppler classification is fundamental to differentiate such objects.

3. CONCLUSION

Nowadays, there are several ways to implement a drone detecting system. Nevertheless, each one of them presents advantages and disadvantages that may be considered in the design stage.

Sound-based detection is easy to install and represents a low-cost solution, but most of these systems need a database that must be constantly updated to be effective, have short-range coverage, are sensitive to environmental noise, and need a large network of interconnected microphones deployed for detection [70], [71], [72]. Video-based detection is difficult to port to low-power processors because of its processing demands; moreover, cameras can capture images as far as 350 feet (approximately 107 meters) away, but they have a very difficult time distinguishing birds from drones and require a line of sight. Besides, small drones may not produce enough heat for thermal cameras to detect them [73], [74], [75].

Alternatively, radar systems can offer good capabilities, especially at long ranges and in poor visibility conditions (thick fog or nighttime), but conventional radars are not optimized to sense objects that are smaller and slower and fly at a lower altitude than traditional aircraft. Radars can only detect drones while they are flying and present a high false-positive rate in busy urban environments [76], [77]. In turn, radiofrequency-based methods are unable to detect drones if they are not communicating with the controller and are less effective in crowded RF areas unless a passive radar is designed with this type of sensors in order to detect and track any moving target [78].

Table 1 is a summary of the technologies cited in this article with their advantages and disadvantages. We can see that none of the options is a perfect system. As a result, several companies around the world have decided to produce combined systems in order to decrease the error rate, as can be seen in Table 2, which presents a comparison of the technologies implemented by different manufacturers of drone detection systems.

Table 1 Summary of technologies for drone detection 

Source: Created by the authors.

This study examined several techniques, methods, and algorithms for drone detection. According to Table 2, the best technology in terms of cost-benefit is radiofrequency because it can detect both the drone and the controller, track multiple targets, and operate over long distances; moreover, it is relatively cheap. Its inability to detect inertial flights, as mentioned above, can be addressed by implementing passive non-cooperative pulse radars as illuminators.

Table 2 Comparison of types of technology used by manufacturers of drone detection systems 

Source: Created by the authors

Radar technology is the most expensive, but it offers the longest range, and sound is the most inefficient method in terms of cost-benefit. However, the combination of these techniques can provide a robust system that can efficiently address the drone detection problem.

This study opens a path for future developments because it can be used to understand the technologies involved in drone detection systems, which is necessary for selecting the best architecture and methodology depending on the place and the conditions of the deployment. Future studies should implement low-cost high-accuracy multimodal systems to protect specific areas of interest.


Cómo citar / How to cite J. Flórez, J. Ortega, A. Betancourt, A. García, M. Bedoya, J. S. Botero, “A review of algorithms, methods, and techniques for detecting UAVs and UAS using audio, radiofrequency, and video applications,” TecnoLógicas, vol. 23, no. 48, pp. 269-285, 2020. https://doi.org/10.22430/22565337.1408

AUTHOR CONTRIBUTIONS

1Conceived the paper idea and focused on video methods.

2Conceived the paper idea, developed the theory, performed the initial searches and focused on video methods.

3Developed the theory, performed the initial searches, and focused on audio techniques.

4Verified the analytical methods, complemented the different sections and focused on radiofrequency methods.

5Verified the analytical methods, complemented the different sections and focused on radiofrequency methods.

6Verified the analytical methods, complemented the different sections, and focused on audio techniques. All authors discussed the results and contributed to the final manuscript.

Received: August 02, 2019; Accepted: February 14, 2020

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License