Abstract

This research proposes a method to track a known runway image for landing an unmanned aerial vehicle (UAV) automatically by finding a perspective transform between the known image and an input image in real-time. In effect, it improves the efficiency of feature detectors in real-time so that they better tolerate perspective transformation and require less processing time. A UAV is an aircraft controlled without a human pilot on board. A UAV flies with various degrees of autonomy, either autonomously using computationally limited on-board computers or under remote control by a human operator. UAVs were originally applied to missions where human access was not readily available or where it was dangerous for humans to go. Nowadays, the most important problem in autopilot monitoring is that a conventional system using only GPS sensors provides inaccurate geographical positioning. Therefore, controlling the UAV to take off from or land on a runway requires professional input, which is a scarce resource. The characteristics of the newly developed method proposed in this paper are: (1) using a lightweight feature detector, such as SIFT or SURF, and (2) using the perspective transformation to reduce the effect of affine transformation, making the feature detector more tolerant to perspective change. In addition, the method is capable of roughly localizing the same template in consecutive frames, which limits the image area that feature matching needs to work on.

Introduction

It is generally accepted that landing an aerial vehicle is a delicate process to control. Whether it is a commercial flight or a fixed-wing unmanned aerial vehicle (UAV), the pilot needs to be trained. Even after training, accidents happen in certain situations. A commercial aircraft must have an instrument landing system, which is very expensive, to avoid human error. However, a small vehicle usually has only a global positioning system (GPS) to specify coordinates, which may be erroneous, especially when landing on a narrow runway, or if jammed either intentionally or unintentionally. Thus, to land a small UAV automatically, much better methods than GPS alone are needed, such as a runway detection scheme.

Generally, a UAV requires a pilot to control the landing. However, the pilot is trained for different and limited situations, and there is a 50% chance that an accident could occur, with 70% of these happening during landing (due to human factors). Consequently, the UAV landing process should be automatic to reduce the accident rate and decrease the duties of the pilot. The main problem is that the accuracy of a GPS is usually insufficient to detect the runway, especially with a moderate-quality GPS on a narrow runway (3 m in width).

A short review of various funded (military) research on “automatic landing assist systems” is provided below, before the other related works of the present paper are reviewed:

  1. There are long stripes on the sides of the runway [1], with (two) lines for landing, similar to many automatic driving systems adopted for land vehicles. A video of Ruchanurucks and his team’s system can be viewed.2

  2. There are four or more distinct markers with known positions. Ruchanurucks et al. [2] extensively studied perspective-n-point (PnP) algorithms and showed the best kinematic chain between the equipment on board and the ground. Not only was camera-to-ground kinematic information derived, but the work also showed how to properly calibrate between a camera and an inertial measurement unit (IMU), which is usually aligned with the plane.

  3. There are unknown planar markers on the ground (they are known to be planar but without any known coordinates). Sereewattana et al. [3] proposed a concept paper for landing on “any planar area”. At that time, the camera-IMU calibration was not perfect, unlike in the earlier mentioned work. However, it is also interesting for emergency cases that involve landing anywhere that is planar and has some features.

While we were working on the aforementioned projects, we were contacted again by personnel from the Defense Technology Institute. They asked us to find the positional relationship between an airplane and a runway (i.e., its planar image). The difference between the last of the earlier mentioned works and the present work is that the present work deals extensively with a known runway, under the following assumptions:

  1. Aviation organizations often have bird’s eye pictures of their runway already.

  2. If the stripes are not present on the runway for any reason, can we still land the UAV automatically?

At first glance, we thought it would be easier than our other work, as it seemed like a simpler PnP problem; however, this paper deals with tracking the input runway image in real-time with respect to a planar template, to find a homogeneous transformation for a UAV that can tilt a lot, so the task is substantially more difficult than off-line feature extraction and matching.

Generally, in the field of image feature matching, there are multiple sources of errors, such as lighting, orientation, or perspective changes. In this paper, the key common challenge was viewpoint changes. Changes in capturing angle of the same object alter many image processing descriptors. Such alteration is unfavorable as it degrades matching accuracy. We focused on improving the matching rate for features on the planar object images.

The issue of matching images of a planar template in real-time using key points under viewpoint changes is the crux of this research. In other words, we propose a method to enhance the accuracy of many existing feature-matching algorithms in the presence of a large perspective transform between scenes, based on geometrical analysis that forecasts the orientation of the runway. Either speeded-up robust features (SURF) [4] or the scale-invariant feature transform (SIFT) was chosen as a representative method, as both are generally accepted to be very good algorithms. In general, the proposed scheme applies to other feature detectors as well.

Second, time consumption was tackled using feedback loops. The proposed method uses homography, a well-known PnP-like method in machine vision, to forecast the position of the runway; this makes the process much faster, as shown in the Experimental Results section. Other PnP algorithms are applicable as well, as in Ruchanurucks et al. [2], depending on their time consumption, which could be benchmarked against our experimental conditions or results.

The rest of the article is organized as follows. Next, a literature review is provided. Then, the overall method is briefly explained. Details of the method are explained next. To avoid confusion, the details are then converted to algorithms. Experimental results are verified using real landing videos of a small UAV. Finally, we conclude that the proposed scheme is essential for landing on a known runway based on feature matching.

Literature Review

This literature review covers two aspects. First, we present vision-based aerial vehicle control using image processing. Second, we discuss PnP solutions for rotation (R) and translation (t).

First, we present vision-based aerial vehicle control using features on the ground or on another vehicle. We mainly present vision-based aerial vehicle control, instead of describing well-documented feature descriptors, as the former provides information on real-world problems and thus leads to our findings.

Following the problem outlined in the Introduction, setting multiple cameras and a high-performance estimating system on the ground is one way to help find out the coordinates of an airplane. Then, the data can be sent back to the airplane to control landing [5]. However, this system setting is cumbersome.

On the other hand, there are also schemes that use a known simple marker on the ground and a camera on the plane to obtain clear points of interest around the landing area [6]. However, their comprehensive control scheme was designed only for miniature air vehicles.

Other research, similar to that just mentioned, proposes using one large symbol (an airbag) that can be clearly seen at the landing point [7]. However, this is applicable only if the wings of the UAV are not damaged by contact with the symbol.

To avoid contact between the plane and the airbag, some works use a symbol on the ground [8,9]. For all such methods, the objects must be large enough and have a shape distinct from the nearby environment. It also helps if there is a planar area large enough to serve as a runway. Nevertheless, we cannot argue that such symbols are not useful.

For convenience, some research mentions that no equipment or symbol should be set up at the airport. Instead, they propose to use existing points of interest on the runway, which must be robust enough to accommodate changes of light and tilt angle, using methods such as that of Bay et al. [4] as a feature detector. Our method aims to be more comprehensive than these existing methods.

There are even more advanced matching methods than the local feature methods mentioned earlier. Global methods [10], which generate a matching map before searching, could be affected by changes in surrounding areas, e.g., a new unknown object in the scene. Such changes would hinder the matching process. Thus, we believe local feature methods are more suitable for matching in dynamic areas such as runways.

Examples of methods that rely on local feature matching are Goncalves et al. [11] and Ding et al. [12]. The former relies heavily on homography to assist landing. The latter finds an unknown landing point safely by analyzing the relationship between the points of interest in two pictures taken from different cameras at different times, to identify an area that is smooth and planar enough for a UAV to land without accident. To provide the right data for landing, they need many points of interest to average out the error in the system.

Regarding PnP algorithms, POSIT (POS with ITerations) by DeMenthon and Davis [13] is among the oldest; however, it is well-known among pose estimation algorithms, probably because the authors state clearly that it requires only 25 lines of code. In other words, it was fast for its time. The underlying strengths of this work include: (1) using scaled orthographic projection along with the general perspective projection, and (2) an iterative algorithm that enhances convergence to a better minimum. However, the authors admit that their algorithm does not use the fact that the rotation matrix is orthonormal. In this sense, the work produces sub-optimal outputs.

Efficient perspective-n-point (EPnP) [14] deals with pose estimation using a non-iterative solution with O(n) complexity, where n is the number of feature points. Many state-of-the-art procedures at that time were slower. Furthermore, the authors claimed that the method is more accurate than many other non-iterative solutions existing at that time. Although its accuracy was lower than that of iterative solutions, they considered the time saved to be worth it. Specifically, they stated that their approach was still less accurate than Lu’s [15]. Regardless, Lepetit et al. [14] claimed that if their method was used to initialize Gauss–Newton, it could achieve accuracy as high as that of Lu [15].

Lepetit et al. [14] compared their method (EPnP) with Ansar and Daniilidis [16], the clamped direct linear transform (DLT) [17] (another least-squares algorithm similar to homography), EPnP followed by Lu [15], and EPnP followed by Gauss–Newton. Generally, EPnP [14] on its own is inferior to Lu [15]; however, when used in combination with Lu [15] or Gauss–Newton, it is almost as accurate as Lu [15] alone, with less time consumption. Even when it is used to initialize Lu [15], they claim that the overall solution is still O(n); in other words, it is faster than Lu [15] alone. This is no surprise, as EPnP already produces a good initial starting point for Lu [15]. The key idea that makes this algorithm strong is its representation of the coordinates of the n 3D points as a weighted sum of four virtual control points, so the optimization is performed over just four coefficients.

Li et al. [18] is a state-of-the-art approach for various kinds of point clouds, both planar and non-planar. They compared their work with six well-known/state-of-the-art algorithms [14,17] followed by Gauss–Newton [15,19,20]. In most cases, their approach yielded better results in the presence of image noise. The key ideas that make their algorithm strong include: (1) never linearizing anything unnecessarily, so the perspective-3-point problem is solved without applying the conventional linearizing method, (2) using the two farthest-apart points to represent an axis, to reduce noise, and (3) calculating the remaining two axes, along with the translation vector, using least squares.

However, Li et al. [18] mention that their result in the planar case is still inferior to that of Schweighofer and Pinz [20], which targets only the planar case. Schweighofer and Pinz [20] is a special planar-case extension of Lu [15], which targets both planar and non-planar cases. The extension [20] assumes there will be at most two minima for a pose, regardless of the algorithm used (though this is not extensively proven). The iterative solution of Lu [15] is selected in their paper because the criteria in Lu [15] always lead to convergence (though perhaps not to the true pose [15,20]). Hence, we should discuss Lu [15] before Schweighofer and Pinz [20]. Its strong points include: (1) it is an iterative algorithm that is proven to always converge, (2) the optimization is tailored for pose estimation, in contrast to works that rely only on general methods such as Gauss–Newton or extensions like Levenberg–Marquardt, and (3) in each iteration, R/t is updated and finally converges to a minimum.
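None of these PnP solvers is re-implemented in this paper; for orientation only, the following minimal sketch shows how such solvers are commonly invoked through OpenCV. The point values, camera intrinsics, and two-stage EPnP-then-iterative refinement shown here are illustrative assumptions, not taken from the cited works.

```python
import numpy as np
import cv2

# Four known planar 3D points (e.g., runway corners in metres) -- illustrative values.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [30.0, 0.0, 0.0],
                       [30.0, 3.0, 0.0],
                       [0.0, 3.0, 0.0]], dtype=np.float64)

# Their observed pixel coordinates in the current frame -- illustrative values.
image_pts = np.array([[412.0, 300.0],
                      [640.0, 310.0],
                      [630.0, 352.0],
                      [405.0, 340.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])   # assumed camera intrinsics
dist = np.zeros(5)                # assume no lens distortion

# Non-iterative EPnP solution [14] ...
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)

# ... optionally refined by an iterative (Levenberg-Marquardt) solver,
# mirroring the "EPnP followed by Gauss-Newton" strategy discussed above.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              rvec=rvec, tvec=tvec, useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)        # rotation matrix R; translation t = tvec
```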

In summary, many landing-assist methods that rely on existing features on a runway, regardless of the PnP algorithm used, suffer from the fact that when the input images are tilted, feature detection efficiency is reduced. We aim to address this problem in this paper, using the widely used homography as our PnP algorithm. We compared differences among PnP methods for landing assist, without feature warping, in Ruchanurucks et al. [2].

Method

Our method is based on detecting existing points of interest on the runway. An overview of the system is shown in Fig. 1. We improve on earlier works that rely on existing features on a runway [11,12] by estimating, in real-time, the orientation of a template image that best matches the runway, through a perspective transform of the template (feature) points. A region of interest (ROI) is also generated to tell the image processing where to look for the runway in the next frame.

Fig. 1 Overview of our algorithm

Importantly, the proposed method relies heavily on feature detection to acquire points of interest on the runway. Detection and matching are built on SIFT or SURF. Using the SIFT or SURF result of the present frame, as mentioned, we can estimate the orientation of the template image to mitigate the limited robustness of SIFT or SURF toward affine transforms, as well as to scope the ROI (estimating the position of the airport in the picture and cutting out the environment).
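As a point of reference, a minimal OpenCV-style sketch of this detect-and-match step might look as follows; it uses SIFT with a Lowe ratio test (SURF is interchangeable where its non-free module is available), and the function and variable names are ours, not from the original implementation.

```python
import cv2

def detect_and_match(template_gray, frame_gray, ratio=0.75):
    """Detect SIFT keypoints in a (possibly warped) template and the current
    frame, then keep matches that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()          # cv2.xfeatures2d.SURF_create() is an alternative
    kp_t, des_t = sift.detectAndCompute(template_gray, None)
    kp_f, des_f = sift.detectAndCompute(frame_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_t, des_f, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    # Matched coordinates: template points and their counterparts in the frame.
    pts_t = [kp_t[m.queryIdx].pt for m in good]
    pts_f = [kp_f[m.trainIdx].pt for m in good]
    return pts_t, pts_f
```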

Region of Interest Generation.

The ROI suppresses the unnecessary area of the picture and is described by four values: the coordinates of its corner point (x, y), its width (w), and its height (h). These parameters are calculated from the template (Q) in each frame to forecast the next frame. The ROI for the first frame, before detection and matching, therefore has the same size as the picture frame; after that, the ROI is reduced to cover only the runway area. Figure 2 shows the ROI area for the next searching frame. Equations (1)–(3) show the method for the ROI area calculation. For any corner feature (i) detected, P is the point of interest in the template image, and P’ is the point of interest obtained by the algorithm:
(1)
(2)
(3)
(4)
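The exact expressions of these equations are not reproduced above. A minimal sketch of the idea, assuming the ROI is taken as the axis-aligned bounding box of the projected runway corners plus a safety margin (the margin value and the clipping rule are our assumptions), is:

```python
import numpy as np

def roi_from_corners(projected_corners, frame_shape, margin=0.15):
    """Bounding box (x, y, w, h) around the projected runway corners P'_i,
    enlarged by a relative margin and clipped to the frame.
    The margin value is an assumption, not taken from the paper."""
    pts = np.asarray(projected_corners, dtype=np.float32)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    pad_x = margin * (x_max - x_min)
    pad_y = margin * (y_max - y_min)
    h_img, w_img = frame_shape[:2]
    x = int(max(0, x_min - pad_x))
    y = int(max(0, y_min - pad_y))
    w = int(min(w_img, x_max + pad_x)) - x
    h = int(min(h_img, y_max + pad_y)) - y
    return x, y, w, h
```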
Fig. 2

However, from Fig. 3, where the ROI was adopted for searching, it can be observed that the area of features (dark rectangle) is not well aligned with the runway, even after performing RANSAC (RANdom SAmple Consensus), because even within the runway there are multiple features that look the same. Consequently, those outliers contribute to inferior localization of the runway. The reason behind this inferior matching and perspective transformation is the perspective difference between the input image and the template (Fig. 4).

Fig. 3 Using ROI to crop input images to reduce processing time
Fig. 4 Showing the change of matrix Ct to estimate tilt angle with Ht

Template’s Perspective Transform.

Perspective transformation of the template points warps the current template to forecast the next template (Q′) so that it resembles the input pictures as closely as possible, attenuating the affine-transformation problem that degrades SIFT or SURF feature matching. The perspective transform is calculated from the matching result (Ht). The matrix Ct is designed to avoid accumulating warping error by always warping directly from the original template runway image (Q) to Q′ using Eq. (5):

Q′ = Ct Q
(5)

where Ct is the prediction matrix for the next image frame, generated by Eq. (6):

Ct = Ct−1 At−1 Ht−1
(6)

where C0 is the identity matrix, Ht−1 is the just-calculated perspective transform matrix, and At−1 is a translation cancel matrix explained in the Importance of Translation Cancel Matrix section.
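In OpenCV terms, this step can be sketched as follows; this is a sketch under the definitions of Ct, Ht, and At given above, and the function and variable names are ours.

```python
import cv2
import numpy as np

def update_prediction_matrix(C_prev, A_prev, H_prev):
    """Eq. (6): accumulate the feedback matrices into the prediction matrix Ct,
    starting from C0 = I (the identity matrix)."""
    return C_prev @ A_prev @ H_prev

def warp_template(template_img, C_t):
    """Eq. (5): warp the original template Q directly with Ct to obtain Q',
    so that warping errors do not accumulate frame after frame."""
    h, w = template_img.shape[:2]
    return cv2.warpPerspective(template_img, C_t, (w, h))
```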

Importance of Translation Cancel Matrix.

The underlying reason for A is that, after generating each newly warped template, the computer sees not only the object area in the warped template but also black surplus pixels, such as those on the left side of Fig. 5. Over time, this excess space would grow and push the template out of the frame. Eventually, not enough features would be left in the template for accurate matching.

Fig. 5 Translational accumulated error, so runway has too much scale shift

Figure 5 shows an example of near-failed RANSAC after using Eqs. (5) and (6) multiple times (here, 25). The warped template is almost no longer suitable for matching due to the scaling/shifting of the template image. One can see the runway template moving further into the lower right corner of the upper left area (the area corresponding to the warped template image). In other words, since warping does not guarantee that the warped template stays within the image coordinates, the template could be pushed out of view, leading to lost tracking in consecutive frames.

Hence, we introduce the notion of a frame to generate the A matrix. As RANSAC does not provide a 100% accurate result, the frame is used to shift the warped template to resolve the problem mentioned above. The frame consists of the four corners of the template, which we call frame points (Fi). The frame points are warped (F′i) similarly to the other points in the template; however, their usage is different. These four frame points are generated as follows.

Figure 6 illustrates the accumulated-error solver, which converts the frame points of the present picture (Fi) into those of the consecutive frame (F′i) using Eq. (7):

F′i = Ht Ct Fi
(7)
Fig. 6 Solution for accumulated error
Then, they are used to generate a shift matrix as
(8)

After warping the picture using the flowchart in Fig. 6 (upper part), F′i will not cause any translational accumulated error, as shown in Fig. 6 (lower part), in contrast to Fig. 5.
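The exact form of Eq. (8) is not reproduced here. One plausible realization, assuming At simply translates the warped frame points back so that their minimum x and y land at the image origin, is sketched below; this shift rule is our assumption, and the function names are ours.

```python
import cv2
import numpy as np

def translation_cancel(frame_points, C_t, H_t):
    """Warp the four frame points F_i by Ct and then Ht (Eq. (7)) and build a
    3x3 translation matrix At that pulls the warped template back into view.
    The min-corner shift used here is an assumed realization of Eq. (8)."""
    F = np.asarray(frame_points, dtype=np.float32).reshape(-1, 1, 2)
    F_warped = cv2.perspectiveTransform(F, H_t @ C_t).reshape(-1, 2)   # F'_i
    tx, ty = F_warped.min(axis=0)            # how far the template has drifted
    A_t = np.array([[1.0, 0.0, -tx],
                    [0.0, 1.0, -ty],
                    [0.0, 0.0, 1.0]])
    return A_t
```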

Generating Four Points to Calculate Homography.

Finally, to reduce the computation time of the homography step (using all matching pairs for the homography calculation takes a lot of time), we create four virtual template points for the homography, so that we do not have to feed many detected points into it, resulting in a time reduction. Homography requires at least four pairs of points. (A pair here means a template point and a real-time input point. The number of required points differs from one PnP solution to another; for example, the direct linear transform requires at least six points.)

During template generation, the template (Q) and four points for a PnP solver such as homography (Pi), with Pi = {Q1, Q2, Q3, Q4}, are created, as shown in Fig. 7. Naturally, after the template is warped, we must recalculate the four points (P′i) as

P′i = Ht Ct Pi
(9)

expressed in full-frame coordinates by adding the ROI origin offset (see Algorithm 4).
Fig. 7 Method to create template point from video image

In Fig. 7, this example chooses the points (Pi) at the corners of the zebra lines. One can choose other locations, as they are just four virtual points defined relative to the other features.
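A minimal sketch of Eq. (9), with the ROI offset handled as in Algorithm 4, follows; the function and variable names are ours, not the authors'.

```python
import cv2
import numpy as np

def project_virtual_points(P_i, C_t, H_t, roi_offset):
    """Eq. (9): map the four virtual template points P_i through Ct and Ht,
    then shift them by the ROI origin to express them in full-frame pixels.
    roi_offset = (x, y) of the ROI used for this frame."""
    P = np.asarray(P_i, dtype=np.float32).reshape(-1, 1, 2)
    P_warped = cv2.perspectiveTransform(P, H_t @ C_t).reshape(-1, 2)
    return P_warped + np.asarray(roi_offset, dtype=np.float32)

# The resulting four image points, paired with the four known template points,
# are enough input for cv2.findHomography or cv2.getPerspectiveTransform downstream.
```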

Overall System.

This subsection gathers all the aforementioned sub-procedures into a block diagram, shown in Fig. 8, which includes the earlier mentioned diagrams. The diagram is explained by the algorithms given in the Algorithm section.

Fig. 8 Runway feature detector program

Algorithm.

This section interconnects with what has been portrayed so far. To explain Fig. 8, we subdivide the algorithm into the following procedures, given as pseudo code; a consolidated code sketch of the whole loop follows the list:

  1. Start by setting the matrix C0 = I, the identity matrix.

  2. Detect and match using SIFT or SURF and RANSAC. (Use the ROI to restrict the area of interest to the part within the frame.) The result of this step is the homography (Ht), as shown in the earlier block diagram, which portrays the layout of detecting the runway.

    Algorithm 1

    Require: UndisROI_Image, Warp_Template

       Keypoints_Temp ⇐ Keypoints (Warp_Template)

       Keypoints_Image ⇐ Keypoints (UndisROI_Image)

       Descriptors_Temp ⇐ Descrip (Warp_Template, Keypoints_Temp)

       Descriptors_Image ⇐ Descrip (UndisROI_Image, Keypoints_Image)

       (Descriptors_Template, Descriptors_Image) ⇐ Match (Descriptors_Temp, Descriptors_Image)

       Ht ⇐ GetHomography (Descriptors_Template, Descriptors_Image)

    Return Ht

  3. Receive the frame point values (Fi) (four points), which are acquired from the template (Q).

    Algorithm 2

    Require: Q

       Fi ⇐ GetFramePoint(Q)

    Return Fi

  4. Calculate the matrix (At) following Eq. (8), using the warped frame points (F′i) of Eq. (7), to prevent translational accumulated error via the sub-function (TranslationCancel) in Algorithm 3.

    Algorithm 3

    Require: Fi, Ht, Ct

       FiCt ⇐ PerspectiveTransform (Fi, Ct)

       HtCtFi⇐PerspectiveTransform (FiCt, Ht)

       At⇐TranslationCancel (HtCtFi)

    Return At

  5. Calculate the virtual template points (P′i) by Eq. (9) and the block diagram in Fig. 6. P′i is the point of interest in the template image (Q) after warping, as explained in template generation, and corresponds to the real-world point calculated in the video stream.

    Algorithm 4

    Require: Ht, Ct, Pi, ROI

       CtPi⇐PerspectiveTransform (Pi, Ct)

       P′m⇐PerspectiveTransform (CtPi, Ht)

       P′i⇐(P′m + ROI)

    Return P′i

  6. Calculate the prediction matrix Ct that determines the orientation for the next frame.

    Algorithm 5

    Require: Ct−1, At−1, Ht−1

       Ct ⇐ Ct−1 * At−1 * Ht−1

    Return Ct
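Read together, steps 1–6 form a single feedback loop per frame. The sketch below consolidates them in OpenCV-style Python; it is a non-authoritative reconstruction, and the helper functions (detect_and_match, roi_from_corners, translation_cancel, project_virtual_points) are the illustrative sketches from the earlier subsections, not the authors' code.

```python
import cv2
import numpy as np

def track_runway(frames, template, P_i, F_i):
    """Per-frame feedback loop following steps 1-6 above (a sketch only).
    frames: iterable of grayscale video frames; template: runway template Q;
    P_i: four virtual template points; F_i: four template frame (corner) points."""
    h_t, w_t = template.shape[:2]
    C_t = np.eye(3)                                   # step 1: C0 = I
    roi = None                                        # first frame: search the whole image
    for frame in frames:
        # Step 2 / Algorithm 1: crop to the ROI, detect, match, and estimate Ht.
        x, y, w, h = roi if roi is not None else (0, 0, frame.shape[1], frame.shape[0])
        cropped = frame[y:y + h, x:x + w]
        warped_template = cv2.warpPerspective(template, C_t, (w_t, h_t))   # Eq. (5)
        pts_t, pts_f = detect_and_match(warped_template, cropped)
        if len(pts_t) < 4:                            # not enough matches to continue
            yield None, None
            continue
        H_t, _ = cv2.findHomography(np.float32(pts_t), np.float32(pts_f), cv2.RANSAC, 5.0)

        # Steps 3-4 / Algorithms 2-3: frame points and translation cancel matrix At.
        A_t = translation_cancel(F_i, C_t, H_t)

        # Step 5 / Algorithm 4, Eq. (9): virtual points in full-frame coordinates.
        P_prime = project_virtual_points(P_i, C_t, H_t, (x, y))

        # Step 6 / Algorithm 5, Eq. (6): prediction matrix for the next frame.
        C_t = C_t @ A_t @ H_t

        # New ROI around the projected corners for the next frame (ROI generation section).
        roi = roi_from_corners(P_prime, frame.shape)
        yield P_prime, H_t
```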

Experimental Results

This experiment consists of four parts. The first part is a comparison of the estimated time; the second part tests the accuracy of detecting the airport with and without angle adjustment (warped template); the third part tests robustness to noise by adding Gaussian noise to the images; and the fourth part verifies detection of an occluded runway.

In the first experiment, comparing the estimated time, we ran the program with and without ROI cropping. Figure 9 shows the result of the experiment.

Fig. 9 Result of first experiment. On the left is the adjusted (warped) template. The result of matching is in the middle and the result of detecting the whole runway is on the right

From the experimental result, the algorithm can match the points of interest on the runway correctly. One can see that at the beginning, with or without using ROI, it takes equal time. After the runway is located within consecutive frames, using ROI reduces the computation time greatly; we discussed this with military researchers who have dealt with many types of hardware and they believed the reduced time is applicable in real applications (Fig. 10).

Fig. 10 Time consumption with (fast) and without (slow) runway coordination approximation

The second experiment compared the accuracy of runway detection between the original and warped templates, by warping the template (Q) to a more suitable template (Q’). After matching the images with SIFT or SURF and RANSAC, the result of matching is shown in Fig. 11, which compares the results of using ROI only and using ROI + warping.

Fig. 11 Comparison of runway detection accuracy: human expertise (small circles), ROI (light crosses), and warped image (dark crosses)

In detail, Fig. 11 compares the accuracy of runway detection between human expertise, ROI only, and the warped image. P′i is the result of matching multiple images in a video, plotted for 15 frames of the video. Comparing human expertise (red circles), ROI + non-warping (green crosses), and ROI + warping (blue crosses), the results show more error when using ROI + non-warping. In contrast, ROI + warping is almost as accurate as human expertise, with a deviation from the correct position of approximately ±6 pixels (Fig. 12).

Fig. 12 (a) Using only ROI and (b) using ROI together with tilt-angle estimation

The third experiment examined robustness to image noise by adding Gaussian noise to the input images, starting from σ = 0 and increasing until the system could not detect the airport.
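As a rough illustration of how such a test can be reproduced, the following sketch adds zero-mean additive Gaussian noise of a given σ to an 8-bit image; the exact noise model used in the experiment is assumed to be of this form.

```python
import numpy as np

def add_gaussian_noise(image, sigma):
    """Add zero-mean Gaussian noise of standard deviation sigma (in grey levels)
    to an 8-bit image, clipping back to the valid range."""
    noise = np.random.normal(0.0, sigma, image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)
```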

The results indicated that the maximum noise level that could be handled was σ = 80, as shown in Fig. 13 for detecting the runway with Gaussian noise (σ) values of 0, 30, 60, and 80. The results show that the algorithm is robust to this interference, but only up to σ = 80; above that, the runway was not detected.

Fig. 13 Robustness with image noise, adjusted by mixing different levels of Gaussian noise (σ)

In the fourth experiment, we ran the algorithm on a runway with some parts occluded; in essence, we tested the method when the airplane was moving past the end of the runway. As shown in Fig. 14, when the points of interest P3 and P4 are not in the image, the system can still detect the runway because enough features remain for SIFT or SURF and RANSAC to calculate the matrix H, so the system can still calculate P3 and P4 even though they are not in the image.

Fig. 14 Example of result when points of interest (P3 and P4) are not in the image

Even though we did not directly test against earlier works that also rely on multiple planar markers, the effectiveness of our ROI and (template) occlusion avoidance scheme is clearly suitable in terms of time efficiency and robustness, as shown in Figs. 10 and 14, respectively.

Conclusion

This paper presents a method to track a runway with four or more planar features to automatically land a UAV on a runway whose bird’s-eye image is known, without having to measure the distances between all the markers beforehand. The system is superior to existing methods in the sense that we rely comprehensively on multiple parameter feedback loops to enhance the similarity between the prepared features (in the template image) and the real-time features. These comprehensive feedback loops are the reason the system is robust.

Furthermore, utilizing the ROI also decreases the feature detection area and time; the ROI localization is also part of the feedback scheme. Finally, four virtual template points are selected and warped along with the template to reduce the PnP computation time; this is better than selecting any four arbitrary points, as the four virtual points represent all the features detected. The overall paradigm is comprehensive and fast enough to tackle real-world problems, especially, for example, when the GPS is jammed.

The method itself is comparable in complexity to many SLAM (Simultaneous Localization And Mapping) methods. However, in contrast to ground-vehicle SLAM, many block diagrams are interconnected to tackle the 3D localization and matching.

Footnote

Conflict of Interest

There are no conflicts of interest. This article does not include research in which human participants were involved. Informed consent is not applicable. This article does not include any research in which animal participants were involved.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

References

1. Wang, X., Li, B., and Geng, Q., 2012, “Runway Detection and Tracking for Unmanned Aerial Vehicle Based on an Improved Canny Edge Detection Algorithm,” International Conference on Intelligent Human–Machine Systems and Cybernetics, Nanchang, China, Aug. 26–27, pp. 149–152.
2. Ruchanurucks, M., Rakprayoon, P., and Kongkaew, S., 2018, “Automatic Landing Assist System Using IMU+PnP for Robust Positioning of Fixed-Wing UAVs,” J. Intell. Rob. Syst., 90(1–2), pp. 189–199.
3. Sereewattana, M., Ruchanurucks, M., Rakprayoon, P., Siddhichai, S., and Hasegawa, S., 2015, “Automatic Landing for Fixed-Wing UAV Using Stereo Vision With a Single Camera and an Orientation Sensor: A Concept,” IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Busan, South Korea, July 7–11, pp. 29–34.
4. Bay, H., Tuytelaars, T., and Gool, L. V., 2006, “SURF: Speeded-Up Robust Features,” European Conference on Computer Vision, Graz, Austria, May 7–13, pp. 404–417.
5. Kong, W., Hu, T., Zhang, D., Shen, L., and Zhang, J., 2017, “Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach,” Sensors, 17(6), p. 1437.
6. Barber, B., McLain, T., and Edwards, B., 2009, “Vision-Based Landing of Fixed-Wing Miniature Air Vehicle,” J. Aerosp. Comput. Inf. Commun., 6(3), pp. 207–226.
7. Huh, S., and Shim, D. H., 2010, “A Vision-Based Automatic Landing Method for Fixed-Wing UAVs,” J. Intell. Rob. Syst., 57(1–4), pp. 217–231.
8. Chen, M., and Guo, J., 2015, “Automatic Landing on a Moving Target,” Mobile Robotics Lab Spring.
9. Sharp, C. S., Shakernia, O., and Sastry, S. S., 2001, “A Vision System for Landing an Unmanned Aerial Vehicle,” International Conference on Robotics and Automation, Seoul, South Korea, May 21–26, pp. 1720–1727.
10. Sun, J., Shen, Z., Wang, Y., Bao, H., and Zhou, X., 2021, “LoFTR: Detector-Free Local Feature Matching With Transformers,” Conference on Computer Vision and Pattern Recognition, Virtual, June 19–25, pp. 8922–8931.
11. Goncalves, T., Azinheira, J., and Rives, P., 2010, “Homography-Based Visual Servoing of an Aircraft for Automatic Approach and Landing,” IEEE International Conference on Robotics and Automation, Anchorage, AK, May 3–7, pp. 9–14.
12. Ding, M., Cao, Y. F., Wu, Q. X., and Zhang, Z., 2007, “Image Processing in Optical Guidance for Autonomous Landing of Lunar Probe,” ICIUS, Bali, Indonesia, Oct. 24–25, pp. 1–5.
13. DeMenthon, D. F., and Davis, L. S., 1995, “Model-Based Object Pose in 25 Lines of Code,” Int. J. Comput. Vision, 15(1–2), pp. 123–141.
14. Lepetit, V., Moreno-Noguer, F., and Fua, P., 2008, “EPnP: An Accurate O(n) Solution to the PnP Problem,” Int. J. Comput. Vision, 81(2), pp. 155–166.
15. Lu, C. P., Hager, G. D., and Mjolsness, E., 2000, “Fast, Globally Convergent Pose Estimation From Video Images,” IEEE Trans. Pattern Anal. Mach. Intell., 22(6), pp. 610–622.
16. Ansar, A., and Daniilidis, K., 2003, “Linear Pose Estimation From Points or Lines,” IEEE Trans. Pattern Anal. Mach. Intell., 25(5), pp. 578–589.
17. Abdel-Aziz, Y., and Karara, H., 1971, “Direct Linear Transformation From Comparator Coordinates Into Object Space Coordinates in Close-Range Photogrammetry,” ASP/UI Symposium on Close-Range Photogrammetry, Urbana, IL, January, pp. 1–18.
18. Li, S., Xu, C., and Xie, M., 2012, “A Robust O(n) Solution to the Perspective-n-Point Problem,” IEEE Trans. Pattern Anal. Mach. Intell., 34(7), pp. 1444–1450.
19. Malik, S., Roth, G., and McDonald, C., 2002, “Robust 2D Tracking for Real-Time Augmented Reality,” Proceedings of Vision Interface, Calgary, AB, Canada, May 27–29, pp. 399–406.
20. Schweighofer, G., and Pinz, A., 2006, “Robust Pose Estimation From a Planar Target,” IEEE Trans. Pattern Anal. Mach. Intell., 28(12), pp. 2024–2030.