Research Article | BIOMIMETICS

AntBot: A six-legged walking robot able to home like desert ants in outdoor environments


Science Robotics  13 Feb 2019:
Vol. 4, Issue 27, eaau0307
DOI: 10.1126/scirobotics.aau0307


Autonomous outdoor navigation requires reliable multisensory fusion strategies. Desert ants travel widely every day, showing unrivaled navigation performance using only a few thousand neurons. In the desert, pheromones are instantly destroyed by the extreme heat. To navigate safely in this hostile environment, desert ants assess their heading from the polarized pattern of skylight and judge the distance traveled based on both a stride-counting method and the optic flow, i.e., the rate at which the ground moves across the eye. This process is called path integration (PI). Although many methods of endowing mobile robots with outdoor localization have been developed recently, most of them are still prone to considerable drift and uncertainty. We tested several ant-inspired solutions to outdoor homing navigation problems on a legged robot using two optical sensors equipped with just 14 pixels, two of which were dedicated to an insect-inspired compass sensitive to ultraviolet light. When combined with two rotating polarized filters, this compass was equivalent to two costly arrays composed of 374 photosensors, each of which was tuned to a specific polarization angle. The other 12 pixels were dedicated to optic flow measurements. Results show that our ant-inspired methods of navigation give precise performances. The mean homing error recorded during the overall trajectory was as small as 0.67% under lighting conditions similar to those encountered by ants. These findings show that ant-inspired PI strategies can be used to complement classical techniques with a high level of robustness and efficiency.


To keep up with the fast development of fully autonomous mobile robots, there is an urgent need to design navigation systems with a high level of reliability, repeatability, and robustness. The potential applications of these robots are many and various. They can be used for exploring unknown places, for instance, after natural disasters or in extraterrestrial environments, where both wireless communications and Global Positioning System (GPS) transmission are prone to signal failures; for inspecting urban infrastructures; for the long-range transportation of people and goods; for automatic crop inspection and harvest; for autonomous marine navigation (container ships); and for reconnaissance missions.

Nowadays, the civilian GPS is the main method used for obtaining the information required to determine a position on Earth. GPS is now integrated into many connected devices such as smartphones, watches, and cars, but its accuracy on smartphones is only about 4.9 m, and it is even lower near urban infrastructures, canyons, and trees (1). Vision-based systems are the second most frequently used means of specifying the position of mobile systems. Simultaneous localization and mapping (SLAM) methods are widely used in autonomous cars, planetary rovers, and unmanned aerial vehicles (UAVs), for instance, working both indoors and outdoors (2–4). Depending on the environment for which they are designed, SLAM algorithms can reach high levels of precision, although the computing costs are extremely high. In addition, the performances of vision-based methods depend strongly on the presence of consistent brightness, which seldom occurs in outdoor situations (5, 6). Event-based cameras are another possible way of solving vision-based localization problems, again using SLAM algorithms, as recently done with a dynamic vision sensor (DAVIS camera) performing visual odometry (7). Laser detection and ranging methods [i.e., LIDAR (light detection and ranging)–based methods] also provide high-resolution maps that can be used in autonomous cars and robots (8). Unfortunately, the excellent performances obtained with all of these methods can be achieved only at a high price in terms of physical payload and computational cost. A last class of techniques makes use of proprioceptive approaches, mostly involving inertial measurement units (IMUs), in which accelerometers are combined with rate gyros and magnetometers. These devices can be combined with GPS to improve the overall accuracy (9).
Although a wide range of commercial IMUs are available, these sensors are still subject to long-term drift in both the static and dynamic modes and are highly sensitive to the electromagnetic interferences generated by ferrous building components.

To summarize, it would be of great interest to take advantage of all advanced navigation techniques to set up a new powerful, reliable, and robust navigation system. The aim of this paper is therefore to present a navigation system inspired by desert ants’ navigation behavior, which requires precise and robust sensory modalities.

Homing without GPS: An ant-based approach

Desert ants are known to be highly efficient navigators: They can find their way back home after foraging for hundreds of meters in hostile environments (Fig. 1A) (10). We proposed to take our inspiration from desert ants by developing a hexapod walking robot based on insect-inspired navigation models developed from the mid-1970s to the late 1990s. This robot, called AntBot, is equipped with minimalistic bioinspired sensors and was tested under outdoor conditions in a series of homing tasks. In line with previous bioinspired studies (11, 12), we adopted a dual objective. First, we sought to provide biologists who have developed ant-based behavioral models with a robotic platform that can be used to obtain field results in the same environments as insects. Second, we aimed to provide roboticists with robust autonomous navigation strategies that can be implemented outdoors. This strategy could be combined with other traditional techniques to improve the localization performances and to decrease the risks involved in navigating in GPS-denied environments.

Fig. 1 Homing trajectories of the desert ant Cataglyphis and the AntBot hexapod robot.

(A) Path integration in the desert ant C. fortis. After a random-like outbound trajectory (thin line, 592.1 m long), the forager went straight back to its nest (thick line, 140.5 m long). The open circle marks the nest entrance, and the large filled one shows the feeding location. Small filled dots represent time marks (every 60 s). Adapted from (10) by permission of Taylor & Francis. (B) AntBot’s homing performances inspired by experiments on Cataglyphis desert ants in (A). After a 10-checkpoint outbound trajectory (gray line, 10.6 m long), AntBot went back to its starting point (gray cross) just like desert ants (black line, 3.2 m long). Solid points denote the checkpoints where AntBot stopped to determine its heading.

Desert ants’ navigation systems differ greatly from those of other ant species. Classically, ants locate their nest by retracing the pheromone trails they have created during their foraging trip. In the desert, the heat of the ground destroys these pheromone trails. Desert ants therefore use proprioceptive and visual cues to find their way (10, 13). To navigate between their nest and food places, desert ants such as Cataglyphis fortis and Melophorus bagoti use a combination of several modes: the path integration (PI) mode, which involves only a few cues including the number of strides, the ventral optic flow (OF) cues based on the slip speed of the ground over the retina, and the celestial-based orientation cues; and visual guidance (VG) processes in cluttered environments containing prominent landmarks, which consist of memorizing and comparing snapshots [for a review of behavioral studies and models and studies on VG, see (14–17)]. For several decades, experiments have been conducted on desert ants with a view to understanding how PI works (18–21) and how ants choose between PI and VG. It has been observed that foraging ants always update their PI processes (14) even when applying VG. Under several circumstances, ants have been found to use a weighted combination of PI and VG routines (22). Ants were also found to mostly rely on PI when the visual scenery was less familiar (23). PI in desert ants can be said to resemble an Ariadne’s thread or a proxy strategy that is never switched off and the precision of which is not necessarily improved when the trajectory is repeated (10, 24).

Celestial cues, including both the pattern of polarization of the skylight and the Sun’s elevation, are widely used among desert ants (25–28). Previous studies have shown that the insects’ compound eye contains a small region called the dorsal rim area (DRA), where the photoreceptors are sensitive to polarized light (29). Because the pattern of polarized skylight is largely symmetric about the solar meridian, it provides a reliable orientation cue for long-distance navigation, as it can be used to estimate the heading angle (30). It has been reported that most species show maximum sensitivity in the ultraviolet (UV) range (31). Although many suggestions have been put forward to explain the predominance of UV cues, the main reason may lie in the persistence of the UV light through clouds and canopies (32). The insects’ celestial compass therefore remains stable even when clouds are crossing the sky, which enables them to update their PI seamlessly with orientation information. However, the polarized pattern of skylight is determined by the position of the Sun in the sky, the movement of which results in the gradual translation of the angle of polarization (AoP). If this shift were not compensated for, it would cause detrimental drift in the insect’s PI. Ants can integrate the solar ephemeris templates in line with the circadian clock and thus correct the estimated heading angle (33–36).

A model for the polarization neurons (POL-neurons) based on studies on the DRA of crickets was presented by Labhart (37). In the optic lobe, the POL-neurons’ activity is a sinusoidal function of the polarization orientation, i.e., an e-vector. The latter author also established that POL-neurons’ responses include three preferential angles of polarization, 10°, 60°, and 130°, with respect to the body’s longitudinal axis. The DRA ommatidia are known to be simultaneously sensitive to one e-vector and its corresponding orthogonal. In the model proposed by Labhart, an e-vector can be calculated from the log-ratio between the two photoreceptors’ responses: that based on one e-vector and that based on the orthogonal e-vector. This is the model that we have implemented onboard our hexapod robot.
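Labhart's log-ratio scheme can be sketched in a few lines. The sketch below is an idealized illustration of the principle, not the robot's actual code: each POL-unit is modeled as a pair of photoreceptors with orthogonal preferred e-vectors, and the assumed degree of linear polarization (`dolp`) and unit gain are illustrative values.

```python
import numpy as np

def photoreceptor_response(aop_deg, pref_deg, dolp=0.6, gain=1.0):
    # Idealized DRA photoreceptor: response is modulated by
    # cos(2 * (AoP - preferred e-vector orientation)).
    return gain * (1.0 + dolp * np.cos(2.0 * np.radians(aop_deg - pref_deg)))

def pol_neuron(aop_deg, pref_deg, dolp=0.6):
    # Labhart-style POL-neuron: log-ratio between the photoreceptor tuned
    # to one e-vector and the one tuned to the orthogonal e-vector.
    r_par = photoreceptor_response(aop_deg, pref_deg, dolp)
    r_perp = photoreceptor_response(aop_deg, pref_deg + 90.0, dolp)
    return np.log(r_par / r_perp)
```

With three such neurons at the preferential angles 10°, 60°, and 130°, the sign and magnitude of each log-ratio jointly constrain the skylight e-vector: the output is positive when the AoP is near the preferred angle, negative near its orthogonal, and zero 45° away.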

Background and related studies in the field of robotics

The first robotic implementation of the desert ants’ DRA was presented in (38). In this robot, the sensor was composed of three pairs of polarization-sensitive units (POL-units) with a spectral sensitivity in the visible range. Each POL-unit was composed of two photodiodes topped with linear polarizing filters set orthogonally to each other to mimic the photoreceptors present in the DRA. The POL-units were aligned according to three different orientations (0°, 60°, and 120°). The polarization model described by Labhart was implemented, and the recorded mean error was found to be equal to 0.66° (37). Chu et al. developed a smaller compass consisting of six photodiodes topped with linear polarizers 60° apart from each other (39). The spectral sensitivity of the compass was in the blue range (400 to 520 nm), and the data processing method was the same as that described in (40). Experiments in a controlled environment yielded a mean error of ±0.2°. A polarization-dependent photodetector based on a bilayer nanowire was recently developed and tested under artificial blue lighting, giving a mean error of only ±0.1° once a polynomial fitting process had been applied (41). Other implementations of wire grid micropolarizers have been developed, some of which involved different AoPs (involving regular shifts of 45°) (42). Another biomimetic celestial compass was presented by Chahl and Mizutani in (43). The polarization sensor was composed of three pairs of photodiodes topped with linear polarizers as in (40) and used concomitantly with a bioinspired version of the insects’ ocelli (44). Bioinspired approaches have also been adopted with cameras in the visible range (45) along with the polarization model developed in (40). 
Concurrently with the biomimetic approach, other authors have developed polarization sensors in line with the physical theory of skylight polarization by computing the Stokes parameters to determine the degree of linear polarization (DoLP), the degree of circular polarization, and the AoP (46–48).

Although various bioinspired celestial compasses have been implemented in the field of robotics, few robots have been developed so far that are able to perform ant-inspired autonomous robotic navigation tasks. In the late 1990s, the Sahabot 1 and 2 wheeled robots were pioneers in this field because, for the very first time, they were able to perform outdoor homing navigation tasks based on ant-inspired sensors and navigation models (PI and VG), with outstanding results. Relying on PI processes alone, Sahabot 2 gave a mean error of 13.5 cm (in the case of a 70-m-long trajectory) with an SD of 6.3 cm (40) under optimal lighting conditions. To achieve this high level of performance, the authors used an improved version of the celestial compass embedded onboard Sahabot 1 (38) in the visible range to compute the robot’s heading. The problem of solar ambiguity was solved by using additional photodiodes to detect the location of the Sun in the sky. The robot computed the distance traveled by integrating information provided by wheel encoders. Chu et al. also recently embedded their first celestial compass onboard a two-wheeled robot and obtained a mean homing error of 42 cm in the case of a 32-m-long trajectory (49).

Bioinspired odometry: Counting or not counting?

In all the robotic tests described above, odometry was performed using wheel encoders. Many hypotheses have been put forward to explain how insects measure the distance traveled (odometry) when foraging in unknown environments. Flying insects are thought to mostly use OF cues (50) and snapshot images (51). Cataglyphis and Melophorus desert ants can collect distance information from multiple sources: the panoramic OF or snapshot images (20, 52, 53), the panoramic skyline (54), ventral OF (55), and stride integration methods, which have been called the insects’ pedometer (56–58). Panoramic vision–based methods seem to be mostly applied when navigating in cluttered environments, where detected obstacles can be used by insects as landmarks to find the location of the goal. Unfortunately, this method does not explain the straight-ahead homing paths adopted by walking insects at the end of their very first exploration of a new food source (Fig. 1A). Foragers’ PI odometer is now known not only to rely mainly on proprioceptive cues (stride integration methods) but also, in some specific cases, to combine these methods with visual information (ventral OF) to constantly track the distance to both the nest and food areas (59). Other studies have suggested that both odometric cues (OF or stride counting methods) are acquired during the foraging trip and can then be used, independently or not, to regain the nest entrance (60). These PI cues can also be combined with VG to increase the accuracy and the robustness of the homing procedure when foragers are crossing regions of interest they have previously explored (24).


Autonomous navigation was successfully performed with a legged robot endowed with two insect-inspired optical sensors comprising only 14 pixels overall, two of which were dedicated to a compass sensitive to UV light. When combined with two rotating polarized filters, this compass was equivalent to two arrays composed of 374 pixels, each of which was tuned to a specific polarization angle. The other 12 pixels were dedicated to OF measurements and were able to adapt autonomously to large changes in light intensity. In addition, the data fusion method presented here mimics a highly precise, robust dead reckoning strategy. These advances, in addition to recent findings on walking recovery in legged robots that have been damaged during the foraging trip (61), open up many possibilities for legged robots in the field of real-world navigation.

Here, the robot was assigned several homing tasks using various sensory modes: internal knowledge from its stride integration processes, ventral OF measurements, and celestial-based orientation (Fig. 1B). Five PI modes were therefore designed and implemented to determine the homing trajectory in the same way as desert ants. The performances and the reliability of these modes were assessed on the basis of 130 trajectories. Successful homing trajectories were taken to be those in which the robot ended up in a homing area defined as a circular area around the departure point, the radius of which was equal to half of the robot’s diameter (22.5 cm).

We show that our autonomous robot’s performances are robust to weather conditions and UV index changes. In addition, our ant-inspired navigation strategy can be applied to any mobile system evolving in open-air environments (cars, ships, drones, and wheeled robots). In the long run, we believe that autonomous walking robots will be reliable enough to explore rough terrains in GPS- and communication-denied environments where our ant-inspired navigation strategy could be used as a genuine navigation system.


We therefore designed and tested five different ant-inspired homing methods, gradually adding sensory information about the heading and the odometry to our hexapod robot AntBot until the desert ants’ PI was completely mimicked. We then selected the best model for use in low-computational-cost, high-efficiency, robust navigation.

Designing a robotic ant

AntBot is a hexapod robot equipped with insect-inspired optical sensors and driven by ant-inspired navigational models to be tested under real-life conditions (Fig. 2, A, C, and D, and fig. S1). It has three degrees of freedom per leg and is fully three-dimensional (3D)–printed. The walking gait of the AntBot robot was directly inspired by ants’ highly regular tripod gait (62, 63). The walking gait obeys a fixed preprogrammed pattern. However, real insects’ walking behavior is slightly different from that implemented in AntBot. Chronographs of the walking patterns of Cataglyphis bicolor desert ants (fig. S1D) have shown that ants’ legs show different transfer and pause times: The midlegs’ transfer times are shorter than those of the other legs. In addition, the transfer phase starts at different times in the front and hind legs. AntBot’s gait is far more regular; it shows steady synchronous transfer, pause, and departure times (fig. S1E). The robot’s walking pattern is determined entirely by just a few parameters, which are directly adjustable via the terminal command. These parameters are described in table S1, which also gives the values used in this study. AntBot’s walking performances were tested in the Flight Arena of the Mediterranean (a 6-m-wide, 8-m-long, and 6-m-high arena) equipped with 17 motion capture cameras. For the parameter values used in this study, its walking pattern was characterized in terms of the mean walking stride length (8.2 cm) and the mean turning angle per turning stride (10.9°). AntBot is equipped with multiple minimalistic sensors mimicking the desert ants’ navigational toolkit: an insect-inspired celestial compass (64, 65) and a bioinspired OF sensor called M2APix (Michaelis-Menten auto-adaptive pixels) (66, 67). Details of the robot’s electronic architecture and its structure are presented in Fig. 2B.
All the sensors and electronic parts are controlled by an embedded microcomputer (Raspberry Pi 2B board) dedicated to sensor data acquisition, data processing, and navigation. The robot weighs 2.3 kg, including its batteries. It can reach a maximum speed of about 90 cm/s and has a maximum autonomy of 30 min.
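The two calibrated gait constants above (8.2 cm per walking stride, 10.9° per turning stride) suffice for a blind, stride-counting dead-reckoning sketch of the kind used in the PI-ST mode described later. This is a minimal planar illustration under assumed conventions (signed turning strides, segments encoded as turn-then-walk pairs), not the robot's controller code.

```python
import math

STRIDE_LENGTH_CM = 8.2      # mean walking stride length (motion-capture calibration)
TURN_PER_STRIDE_DEG = 10.9  # mean yaw change per turning stride

def dead_reckon(segments, x=0.0, y=0.0, heading_deg=0.0):
    """Blind stride-based odometry. Each segment is (turn_strides, walk_strides);
    turn_strides is signed (+ counterclockwise), an assumed convention."""
    for turn_strides, walk_strides in segments:
        heading_deg += turn_strides * TURN_PER_STRIDE_DEG
        d = walk_strides * STRIDE_LENGTH_CM
        x += d * math.cos(math.radians(heading_deg))
        y += d * math.sin(math.radians(heading_deg))
    return x, y, heading_deg
```

For example, ten straight walking strides advance the estimate by 82 cm along the current heading; any slip or stride-length variation accumulates as unbounded drift, which is precisely why the later modes add OF and celestial cues.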

Fig. 2 Hardware used in the hexapod robot AntBot.

(A) Structure of the robot with its sensors and electronic parts. (B) Hardware architecture of the AntBot robotic platform. To deal with the communications between the Raspberry Pi 2B board and the other electronic devices [the celestial compass, IMU (MinIMU-9 v3), M2APix OF sensor, and stepper motor], we developed a custom-made shield. (C) Side view and (D) top view of AntBot.

The estimated heading angle is determined onboard on the basis of the celestial pattern of polarization described in Fig. 3A. Because of the Rayleigh scattering of sunlight, interactions between the photons and the constituents of Earth’s atmosphere produce a regular pattern across the sky (68, 69). Along the solar and anti-solar meridians, the AoP is known to be perpendicular to the meridians. We made use of this property in our ant-inspired celestial compass (Fig. 3, B and C, and fig. S2) (64). The compass is composed of two POL-units consisting of two UV light sensors (SG01D-18, SgLux) topped with actuated rotating linear sheet polarizers. The angular aperture of each POL-unit is about 120° (fig. S3), and the angular resolution is arbitrarily set at 0.96°, with an acquisition period of about 20 s because of the full rotation of the polarizers. This time multiplexing solution, which has no counterpart in the DRA of insects, is an engineering solution that reproduces the full DRA with minimal payload for the robot by generating 2 × 374 measurements per acquisition period. According to the model for POL-neurons found to exist in crickets by Labhart (37), this celestial compass determines the AoP by analyzing the log-ratio between the two POL-units’ signals (Fig. 3, D and E; for further details, see text S1). Preliminary tests on the previous version of AntBot called Hexabot showed that this sensor is highly reliable and suitable for performing navigation tasks in various meteorological contexts (high/low UV indexes, clear/overcast sky) (64). The statistical performances of the celestial compass were tested under various meteorological conditions (figs. S4 to S6).
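The scanning principle can be simulated in a few lines: as the two orthogonally polarized UV units rotate through a full turn in 0.96° steps, their signals are modulated in antiphase, and the AoP is read off at the minimum of the log-ratio, as in Fig. 3E. The Malus's-law signal model, DoLP value, and noise level below are illustrative assumptions, not measured sensor characteristics.

```python
import numpy as np

def scan_aop(aop_deg=118.0, dolp=0.5, step_deg=0.96, noise=0.0, seed=0):
    """Simulated full-rotation scan of the two UV POL-units topped with
    orthogonal polarizers. Returns the estimated AoP (mod 180 deg), taken
    at the minimum of the log-ratio between the two signals."""
    rng = np.random.default_rng(seed)
    alpha = np.arange(0.0, 360.0, step_deg)  # polarizer orientation samples
    mod = dolp * np.cos(2.0 * np.radians(alpha - aop_deg))
    uv0 = 0.5 * (1.0 + mod) + noise * rng.standard_normal(alpha.size)
    uv1 = 0.5 * (1.0 - mod) + noise * rng.standard_normal(alpha.size)
    log_ratio = np.log(np.clip(uv0, 1e-6, None) / np.clip(uv1, 1e-6, None))
    # The log-ratio is minimal when the polarizer is 90 deg from the AoP.
    return (alpha[np.argmin(log_ratio)] - 90.0) % 180.0
```

Because the modulation has a 180° period, the estimate is inherently ambiguous between the solar and anti-solar directions; the roll-left/roll-right disambiguation described below resolves this.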

Fig. 3 The celestial compass.

(A) 3D diagram of the pattern of polarization in the sky relative to the AntBot robot observer (O), at a given elevation of the Sun. The gray curves give the AoP all around the dome of the sky. The minimum DoLP occurs in the region of the Sun, and the maximum DoLP occurs 90° from the Sun (red curve). (B) Computer-aided design view of the celestial compass. (C) Photograph of the celestial compass. On the left, the top gear has been removed to show the UV light sensor and the Hall-effect sensor used to stop the sky scanning process after one full gear rotation. (D) An example of normalized raw (thin lines) and filtered (thick lines) signals UV0 (in blue) and UV1 (in red) during a sunny day in April 2017 in Marseille, France. (E) Raw (thin line) and filtered (thick line) log-ratio signals between UV0 and UV1. The AoP is located at the minimum values of the log-ratio output (here, the AoP is 118°, modulo 180°).

Our celestial compass is sensitive to changes in the sky and the changing effects of clouds on the AoP. Previous results showed that, even with a signal acquisition time as long as 20 s, the robot was still able to determine its heading angle with great precision: The median error was 0.02° when the sky was slightly cloudy and 0.59° in the case of an overcast sky (65). To solve the ambiguity between the solar and anti-solar heading angles, we made AntBot roll its celestial compass to the left and the right to detect the part of the sky containing the Sun (fig. S7). This approach is broadly in line with recent findings on the locust’s brain: POL-neurons are activated in response to direct sunlight, which may explain how the solar ambiguity is solved in the insect’s brain (36, 70).

As observed in desert ants, distance information can be obtained by using either ventral OF measurements or stride integration methods. AntBot is equipped with a 12-pixel OF sensor called M2APix (Fig. 4, A and B) mounted facing the ground. The M2APix measures the OF at a high rate and shows auto-adaptability to light changes within a seven-decade range (66). This sensor was characterized, and its interpixel angle ∆ϕ was found to be equal to 3.57° with an SD of 0.027° (fig. S8). Its signal-to-noise ratio was also measured, giving a mean value of 32 dB. The sensor is composed of two rows of six hexagonal pixels (Fig. 4C). Two adjacent pixels, which are referred to as a local motion sensor (LMS), are used to determine the time lag between two adjacent pixels’ detection of the same moving edge (Fig. 4D). The OF is equal to the ratio between the inter-receptor angle and this time lag. To compute a precise and reliable OF value, we compared the time lags of all the LMSs with a threshold defined by the robot’s average speed (Fig. 4E). The final OF value was calculated using only the time lags that were not rejected by the threshold process. The height variation of the robot’s center of mass while walking amounts to less than 1 cm in comparison with the height of the sensor (about 22 cm) and can therefore be taken to have no effect on the average OF value. The distance traveled can therefore be computed on the basis of OF measurements while the robot is walking straight ahead (text S2).
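The per-segment OF estimate described above (OF equals the interpixel angle divided by the time lag, with outlier lags rejected against a speed-derived threshold) can be sketched as follows. The acceptance band, the nominal walking speed, and the flat-ground speed relation are illustrative assumptions; only the interpixel angle (3.57°) and sensor height (about 22 cm) come from the text.

```python
import numpy as np

INTERPIXEL_ANGLE_DEG = 3.57  # measured M2APix interpixel angle
SENSOR_HEIGHT_M = 0.22       # approximate height of the sensor above the ground

def optic_flow(time_lags_s, expected_speed_mps=0.10, tol=0.5):
    """Median OF (deg/s) over the LMS time lags, rejecting lags that deviate
    from the expectation derived from the robot's average speed.
    The +/-50% acceptance band (tol) is an assumed value."""
    lags = np.asarray(time_lags_s, dtype=float)
    expected_of = np.degrees(expected_speed_mps / SENSOR_HEIGHT_M)  # deg/s
    expected_lag = INTERPIXEL_ANGLE_DEG / expected_of
    keep = np.abs(lags - expected_lag) <= tol * expected_lag
    if not np.any(keep):
        return None
    return INTERPIXEL_ANGLE_DEG / np.median(lags[keep])

def ground_speed(of_deg_per_s):
    # Downward-facing sensor over flat ground: v = h * omega.
    return SENSOR_HEIGHT_M * np.radians(of_deg_per_s)
```

Integrating `ground_speed` over each straight-ahead segment then yields the OF-based distance estimate used by the OF-dependent PI modes.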

Fig. 4 AntBot’s ventral OF sensor.

(A) The M2APix silicon retina. Adapted from (66). The OF sensor is composed of 12 Michaelis-Menten pixels. (B) Photograph of the M2APix OF sensor embedded onboard AntBot. The sensor (a) is connected to a Teensy 3.2 microcontroller (b) and topped with a Raspberry Pi NoIR defocused lens (c). (C) Schematic view of the 12 hexagonal Michaelis-Menten pixels, divided into two rows of 6 pixels. (D) Optic geometry of a local motion sensor: two adjacent pixels in a row, showing visual signal acquisition of a moving contrast, depending on the inter-pixel angle ∆ϕ between two adjacent pixels and the acceptance angle ∆ρ corresponding to the width of the Gaussian angular sensitivity of each pixel at half-height. (E) Example of raw signals generated by a black/white moving edge. The colors correspond to those used in (C). The time lag between two adjacent pixels (∆t) is computed using cross-correlation methods.

Experimental context

These experiments were performed in Marseille, in the South of France, in front of our laboratory (43°14′02.1″ N, 5°26′37.4″ E) from 5 January to 16 February 2018. The site is surrounded by mountains, but the Sun remained visible all day long. The meteorological conditions were stable, with little wind and a clear sky (French meteorological services) (fig. S9). According to the European Space Agency, the UV index remained around 1.0, showing little variability (<20%), in early January and reached 1.6 in mid-February, when the last experiments were performed. All experiments not requiring the celestial compass were performed indoors in the Flight Arena of the Mediterranean. To ensure that neither the walking performances of AntBot nor the OF-based odometric cues would bias the homing results, we used flat textured panels in all the experimental setups (fig. S10).

The dynamic behavior of AntBot’s servos is highly dependent on the ambient temperature, because the servos tend to heat up after long periods of use, resulting in locomotor failure. Because of this limitation, the servos’ temperature had to be strictly monitored when the experiments were conducted during the afternoon (when the temperatures ranged from 10° to 18°C). It was observed during previous tests that the average distance walked tends to differ between morning and afternoon conditions. This can be explained by the great difference in temperature, which ranged from −2° to 8°C in the morning versus more than 10°C in the afternoon. An ad hoc correction was therefore applied to the robot’s navigation modes (table S2). Last, the robot was powered by an external power supply during all experiments to make them easier to perform and to ensure that the robot’s dynamics would remain steady throughout the navigation tasks.

Ant-inspired PI models

Because desert ants can retrieve navigation cues using stride integration, ventral OF, and skylight polarization processes, the following five PI modes were implemented:

(1) A PI-ST mode (a blind mode), where both the distance and the computed orientation were based solely on the stride integration process. The stride integration was performed using a motor control signal sent to the robot’s controller.

(2) A PI-OF-ST mode based on the ventral OF odometry and the stride-based estimation of the heading angle.

(3) A PI-ST-Fuse mode based on a stride-based estimate of the robot’s heading angle and a combination of the stride integration and ventral OF cues to determine the distance traveled.

(4) A PI-POL-ST mode based on the polarization of the skylight to determine the heading angle and a stride count to estimate the distance.

(5) The fully ant-inspired PI-Full mode based on the celestial compass to estimate the heading angle and a custom-made combination of the stride integration method and the ventral OF to determine the distance traveled (text S2 and table S2).

The path integrator is outlined in text S3 regardless of the sensory modes involved.
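Whichever mode supplies the heading and distance estimates, the integrator itself reduces to accumulating a planar home vector. The sketch below is a minimal illustration of that shared principle, not the implementation detailed in text S3:

```python
import math

def integrate_path(segments):
    """Generic path integrator. Each segment is a (heading_deg, distance)
    pair, with heading and distance supplied by any of the five sensory
    modes. Returns the current position and the homing command (heading
    and distance back to the starting point)."""
    x = y = 0.0
    for heading_deg, dist in segments:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    home_dist = math.hypot(x, y)
    home_heading = math.degrees(math.atan2(-y, -x)) % 360.0
    return (x, y), home_heading, home_dist
```

For instance, after walking 1 m east and then 1 m north, the integrator commands a sqrt(2)-m-long return leg heading southwest; the same arithmetic runs at every checkpoint, so the homing vector is always available, like the ants' "Ariadne's thread".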

Navigation experiments

The outdoor experiments were conducted as follows: the foraging trajectory included N checkpoints (fig. S11); the homing phase was divided into NH checkpoints, thus enabling the robot to exactly measure and correct its own drift when using its celestial and OF sensors. The robot stopped at each checkpoint to determine the AoP before performing the next navigation segment. To test the five PI modes of interest here, we performed three series of experiments. In the first series, a five-checkpoint trajectory was tested and repeated 20 times in each PI mode. The overall foraging trip was about 5 m long and involved a 2-m-long homing trajectory. In the second series of experiments, five random five-checkpoint trajectories were tested with greatly varying distances. Last, a 10-checkpoint trajectory corresponding to an overall distance of 14 m was tested.

The first random 7-m-long trajectory was tested and repeated 20 times to check the repeatability of the robot’s homing performances (Fig. 5). The greatest median error was reached with the PI-ST mode (error, 7.29%), showing a high level of variability (Fig. 5A), whereas the lowest median error was recorded with the PI-Full mode (Fig. 5E); in this case, the error was 0.97%, and the minimum error recorded was equal to 0.17%. Some of the experiments were conducted under changeable sky conditions, but the results were combined with those obtained under a clear sky because there were no significant differences in the corresponding homing performances (all experiments conducted under a changeable sky led to homing success). It can be observed that the greatest variability among the homing locations corresponded to the PI-OF-ST mode (Fig. 5B), because the confidence ellipse had the largest area (2638 cm2) in this case, but the homing trajectories were more home-directed with this mode than in the PI-ST tests. Noncelestial methods (Fig. 5, B and C) gave poor homing performances; less than 25% of the 20 experiments resulted in homing success, but the use of the ventral OF improved the results (Fig. 5, B and C) in terms of the size of the ellipse. With the PI-POL-ST mode, the orientation estimate obtained was correct because AntBot’s homing path was oriented toward the goal, and the homing location was nearer to the goal and showed less variability, because the confidence ellipse measured only 1409 cm2 in this case (Fig. 5D). The orientation of the confidence ellipse shows that the homing distance was variable: The major axis was oriented in the homing direction, and the center of the ellipse was located near the goal, which means that some of the distances in these experiments were either underestimated or overestimated. The confidence ellipse of the homing locations in the PI-Full mode (Fig. 5E) had the smallest area (236 cm2; i.e., 10% of the largest area) and tended to have a circular shape. Last, the violin plots of the homing error show that the density probability had a normal distribution with the PI-ST, PI-OF-ST, and PI-Full modes (Lilliefors normality tests, P > 0.05) (Fig. 5, A, B, and E).

Fig. 5 Homing performances on a five-checkpoint trajectory.

(A to E) Homing results on the five-checkpoint trajectory based on the PI mode. The trajectory was repeated 20 times. Blue lines give the outbound trajectory, and red lines give the homeward trajectory. The black cross symbolizes the home location, and the green cross is the average position of AntBot after homing. (F) Box and violin plots of the homing error as a percentage of the entire journey in each PI mode. Violin plots show the probability density corresponding to each error value.

We then investigated the influence of the length and shape of the trajectory on the homing performances. Five random five-checkpoint trajectories with various shapes and lengths (from 4.7 to 10.2 m) were generated and tested (Fig. 6). The PI-ST mode gave a mean error equal to 9.37% of the entire journey, but the largest error was obtained with the PI-OF-ST mode, which gave a mean error of 11.58%. The PI-POL-ST mode resulted in a mean error of 2.99% and a low level of variability (SD, 1.3%). Once again, the ant-like PI-Full mode, which combines the full set of sensors, resulted in the lowest mean error (0.65%) and a low SD (0.28%). Comparisons between these results and those obtained in the first series showed that the shape of the trajectory did not affect the robot's homing performances: the violin plots obtained were similar to those presented in Fig. 5, and the homing distance error was normally distributed in all the PI modes tested (Lilliefors normality tests, P > 0.05).

Fig. 6 AntBot’s homing performances in five different five-checkpoint trajectories.

(A to E) The five trajectories tested. Outbound trajectories are presented in thin lines, and homeward trajectories are presented in thick lines. (F) Box and violin plots of the five trajectories in all the PI modes tested. Violin plots show the probability density corresponding to each error value with each of the PI methods tested.

However, the lengths of the trajectories tested here (Fig. 6) differed considerably (by up to 1.5 m), which may explain the variations in the homing performances relative to those presented in Fig. 5. We therefore conducted a third series of experiments in which AntBot walked for more than 10 m along a random trajectory and then returned to its starting point (giving a homing distance of about 4 m). This long-distance trajectory was tested only once in each PI mode and recorded (movies S1 to S5). The resulting trajectories are shown in Fig. 7. The greatest homing error was 13.44% (171.1 cm), obtained in the PI-ST mode. The lowest homing error was recorded in the PI-Full mode: 6.47 cm, corresponding to 0.47% of the entire trajectory (Figs. 1B and 7). The homing errors recorded in all the PI modes were similar to those obtained in the two previous series of experiments (two-tailed t test; P > αC, where αC = 0.05/5 using a Bonferroni correction procedure).

Fig. 7 AntBot’s homing performances involving a long trajectory.

The black cross gives the starting point; the red crosses give the homing result in each of the PI modes tested. Outbound trajectories are presented in thin lines, and homeward trajectories are presented in thick lines.

The overall results of the 26 experiments performed in all the PI modes are summarized in tables S3 to S7, where the robot's PI estimates are compared with the ground truth data. The robot's PI was closest to the ground truth in the PI-Full mode, with a root mean square error (RMSE) of 4.6 cm between the ground-truth homing location and the robot's endpoint (this value jumped to 65.0 cm in the PI-ST mode). The homing errors, expressed as a percentage of the entire trajectory, are presented for each PI mode and each experiment in Fig. 8B. As expected in view of Figs. 5 to 7, AntBot gave the lowest homing error with the PI-Full mode (one-tailed t test; P > αC, where αC = 0.05/5 using a Bonferroni correction procedure), corresponding to a mean error of 0.67 ± 0.27% (i.e., a mean error of 6.67 ± 2.7 cm). These results show that the distance did not significantly affect the homing performance in the PI-Full mode: the homing error in the case of the 14-m-long trajectory was 6.47 cm, which differed by only 3% from the mean value obtained in all the experiments. The criterion used to determine whether an experiment was a homing success was whether the robot's center of mass was located within a disk with a radius of 22.5 cm (i.e., half of the robot's diameter) centered on the robot's starting point. Figure 8A gives the homing success rates achieved in the experiments as a function of the homing criterion, ranging here between L and L/16, where L is the robot's diameter (45 cm). Except for the PI-POL-ST and PI-Full modes, no PI mode gave a homing success rate of more than 90% with any homing criterion. The PI-POL-ST mode gave a homing success rate of 97% when the homing criterion was set at L, but this value dropped sharply to less than 60% at L/2. The PI-Full mode gave a 100% homing success rate with both the L and L/2 criteria. At L/4 (i.e., 11.3 cm), all PI modes except the PI-Full mode yielded homing success rates below 10%, whereas the PI-Full mode still exceeded 90%. This trend shows how accurate and repeatable the fully ant-inspired PI-Full mode is, although it requires only 2 pixels for celestial vision and 12 pixels for ventral OF acquisition.
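The criterion sweep in Fig. 8A reduces to counting, for each radius L/f, the runs whose final distance from home falls within it. A minimal sketch of this analysis (the function name and the sample error values are hypothetical):

```python
def success_rates(errors_cm, diameter_cm=45.0, fractions=(1, 2, 4, 8, 16)):
    """Homing success rate for each criterion radius L/f, where L is the
    robot's diameter and an error is the final distance from home (cm)."""
    rates = {}
    for f in fractions:
        label = "L" if f == 1 else f"L/{f}"
        radius = diameter_cm / f
        rates[label] = sum(e <= radius for e in errors_cm) / len(errors_cm)
    return rates

# Hypothetical final errors (cm) from four runs:
rates = success_rates([5.0, 10.0, 30.0, 50.0])
# rates["L"] = 0.75, rates["L/2"] = 0.5
```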

Fig. 8 Homing performances of AntBot.

(A) Homing success rate based on the homing success criterion defined as a fraction of the robot’s diameter (L). The criterion used in this study was L/2. (B) Homing errors as a percentage of the entire trajectory, depending on the PI modes described in Figs. 5 and 6. The mean errors and SDs were based on all the results obtained in the 26 experiments.


Five ant-inspired PI modes were tested onboard our hexapod robot AntBot. Whereas most previous ant-based visual compasses were sensitive in the visible range, AntBot was equipped with a DRA-inspired celestial compass that acquires the AoP in the UV range and with a ventral OF sensor composed of 12 auto-adaptive pixels. The PI-Full mode, in which celestial cues (skylight polarization and Sun position) are used to estimate the heading angle and the ventral OF and stride integration are used to estimate the distance traveled, resulted in highly accurate homing, with a very low homing error of 0.67 ± 0.27% (Fig. 8) and few differences between the ground truth and the robot's distance estimates (RMSE, 4.6 cm) (table S7). These performances depended on neither the shape nor the length of the trajectory, which makes the PI-Full mode a highly suitable basis for autonomous navigation in open-air environments. Combined with other reliable dead reckoning techniques (70), our results could considerably broaden the possibilities for legged robots to navigate in real environments where wheeled robots are hampered by their own structure. The present results also show that heading information can be reliably determined on the basis of purely odometric cues [stride-counting (ST) methods and OF measurements] using the PI-OF-ST and PI-ST-FUSE modes. The robustness of the celestial compass under changeable weather conditions was shown in (65), where the proposed AntBot method outperformed the Sahabot method.

Some of the experiments were conducted under changeable sky conditions. Across all the outdoor experiments combined, the homing success rate recorded under a cloudy sky was equal to 100% with the L/2 homing distance criterion (Fig. 8A and tables S6 and S7). Future studies will focus on the large-scale repeatability of these results under changeable sky conditions.

Ant-inspired autonomous navigation methods have yielded good performances on wheeled robots (40, 49). The lowest mean homing error was obtained with the Sahabot 2 robot, but the SDs were rather high in comparison with AntBot's present performances: Sahabot 2, 13.5 ± 6.3 cm over a 70-m-long trajectory (0.19%); Chu et al. (49), 42 cm over a 32-m-long trajectory (1.31%); and AntBot, 6.47 cm over a 14-m-long trajectory (0.46%). It is worth discussing the differences between these wheeled robots and desert ants. (i) There are obviously no morphological or locomotor similarities between rovers and ants or legged robots, and rovers have a significant advantage here, because wheeled machines are less prone to drift than their legged counterparts. Rovers can easily acquire celestial information, whereas desert ants collect their celestial cues with great precision despite heavy yaw, pitch, and roll disturbances while traveling at high speed (62). On the basis of extensive studies carried out at our laboratory on attitude disturbances in our previous Hexabot robot (71), we can assume that AntBot, which is equipped with the same controller as Hexabot, faces the same attitude disturbances as desert ants when walking with a tripod gait. Besides, AntBot can navigate in any kind of terrestrial environment, flat or rough, whereas wheeled robots are inherently limited to specific applications. (ii) The celestial compasses implemented in these studies were all designed to measure polarized light in the visible range, whereas desert ants perceive their celestial cues in the UV range (31, 32), the principle used in the celestial compass embedded onboard AntBot.
These previous experiments were performed under clear sky conditions in the early morning or late afternoon to prevent the sensors from being saturated by the Sun, whereas desert ants are known to forage during the hottest time of day, i.e., when the Sun is near the zenith and the DoLP is at its lowest level (10, 16). The experiments conducted with AntBot were performed at any time of day (tables S6 and S7). (iii) The methods used by desert ants to estimate the distance traveled differ considerably from those classically adopted in rovers (wheel encoders). The legged AntBot robot mimics desert ants in many respects: the morphological and locomotor aspects, the visual heading measurements, the odometric process based on the OF and stride integration, and the PI processes on which their homing navigation is based. AntBot can reach very low homing errors, such as 0.47% (i.e., 6.47 cm) after a 14-m-long trajectory (Fig. 1B). The stride information can also be used to reliably determine the robot's heading (PI-OF-ST and PI-ST-FUSE modes). AntBot's navigation performances involved a very small number of pixels in comparison with traditional engineering solutions, which typically use up to several megapixels for both OF sensors and celestial compasses.

The homing criterion adopted in this study was the ability to reach an area with a radius corresponding to half of AntBot's diameter. Because of the robot's walking and turning limitations (maximum walking stride, 8.2 cm; maximum turning stride, 10.9°), this criterion was a fair compromise between AntBot's capabilities and the requirements of the homing task. The probability density of the homing error was studied for all the PI modes tested, and all were found to have normal distributions (Figs. 5 and 6). The angular drift and the stride length were also found to be normally distributed (fig. S12). The use of several Kalman filters may therefore improve the robot's homing performances in the PI-Full mode: the first filter could be applied to the stride-based estimates of both the heading angle and the travel distance, and the second could be applied directly to the PI procedure.
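As a hedged illustration of the first filter suggested above, a single scalar Kalman update fusing a stride-based distance prediction with an optic-flow measurement could look as follows; the function name, variances, and numerical values are assumptions made for the sketch, not quantities measured on AntBot.

```python
def kalman_fuse(x_pred, p_pred, z, r):
    """One scalar Kalman update: fuse a prediction x_pred (variance p_pred)
    with a measurement z (variance r)."""
    k = p_pred / (p_pred + r)      # Kalman gain
    x = x_pred + k * (z - x_pred)  # corrected estimate
    p = (1.0 - k) * p_pred         # reduced uncertainty
    return x, p

# Stride-based distance prediction (100 cm, variance 4 cm^2) fused with an
# optic-flow measurement (104 cm, variance 1 cm^2):
x, p = kalman_fuse(100.0, 4.0, 104.0, 1.0)
# x is about 103.2 cm, p about 0.8 cm^2
```

The update pulls the estimate toward whichever cue has the lower variance, which is exactly why normally distributed stride and drift errors (fig. S12) make this filter a natural fit.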

In its present form, AntBot has a diameter of 45 cm and walked at a speed of about 10 cm/s during the experiments, whereas C. fortis desert ants are only 1 cm long. As shown in Fig. 1A, the ant's trajectory measured 732.6 m; at the same scale, AntBot would therefore have to cover more than 32 km to be properly compared with the ants' navigation performances. Although AntBot can walk at speeds of up to 90 cm/s, very large scale navigation will require improving the hexapod robot's actuators and power supply. These improvements will make it possible to test the PI-Full mode in more natural contexts, such as rugged terrain in cluttered environments (e.g., forests), where the view of the sky is often obstructed by branches and leaves in the visual field of the celestial compass. Using the celestial compass under a forest canopy may require further processing of the POL-units' signals: it is planned, for example, to investigate the effects of the phase of the signals on the AoP calculations.

The PI process adopted by Cataglyphis desert ants was embedded onboard a fully ant-inspired hexapod walking robot that mimics both the insect's shape and the visual sensory modes involved. This approach yielded outstanding performances in several homing tasks. Yet, as shown in Fig. 1, desert ants make their way back to their nest perfectly, whereas AntBot is still subject to a residual homing error of 0.47%. This difference can be explained by the VG used by desert ants during the final approach to their nest, as well as by its combination with PI during their journey (22). Low-resolution panoramic vision is needed not only to perform VG navigation but also to add the collision-avoidance skills required to adapt the robot to unknown, uneven environments where obstacles are liable to occur. The robot's static friction coefficient was found to be equal to 0.35, corresponding to a ground slope of about 20° beyond which the robot starts to slip. Further experiments should consequently focus on the robot's homing performances when walking on more steeply sloping ground.
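The ~20° figure follows from the slip condition on an incline, tan(α) = μ at the onset of sliding; with μ = 0.35, the limit slope is arctan(0.35) ≈ 19.3°. A quick numerical check (the helper name is ours):

```python
import math

def max_slope_deg(mu_static):
    """Steepest slope (degrees) a body can rest on without slipping,
    from the static friction condition tan(alpha) = mu."""
    return math.degrees(math.atan(mu_static))

slope = max_slope_deg(0.35)
# slope is about 19.3 degrees, consistent with the ~20 degrees reported above
```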


The navigation task addressed in this paper was a homing task performed after a random trajectory in a flat outdoor environment. The low-computational-cost, ant-inspired approach presented here, which requires just a few sensors, considerably improved the robot's homing performances. To achieve this, we equipped our hexapod robot AntBot with a celestial compass mimicking the ants' DRA, which determines the robot's heading angle, and with the M2APix OF sensor, which collects ventral OF cues to provide precise odometric measurements. The robot also integrates its strides while walking straight ahead (obtaining additional odometric cues) and while turning (obtaining additional orientation cues). Five methods combining these sensory modes were therefore compared in this study.

The robot’s construction

AntBot is a fully 3D-printed structure [made of polylactic acid (PLA) filament]; the 3D parts are available online. The robot is equipped with an OpenCM9.04-C board, an Arduino-like microcontroller adapted for driving Dynamixel servos via transistor-transistor logic serial communication (Fig. 2B). This controller includes an ARM Cortex-M3 processor. Each of AntBot's legs has three degrees of freedom, actuated by Dynamixel AX-18 servos. The embedded microcomputer unit is a Raspberry Pi 2 model B, including a 900-MHz quad-core ARM Cortex-A7 CPU with 1 GB of memory. A shield board was designed to plug all the sensors and electronic devices onto the microcomputer unit. Communications between AntBot and the host computer took place via a local WiFi network. The robot is powered by a three-cell 11.4-V 5300-mAh lithium polymer battery (Gens Ace) mounted below the robot.

The optical compass

The celestial compass consists of two UV-polarized light sensors called POL-units, whose spectral sensitivity ranges from 270 to 400 nm, with peak transmission at 330 nm. Each POL-unit is composed of a UV-sensitive SG01D-18 photodiode (SgLux) topped with a UV linear sheet polarizer (HNP'B replacement) held by a gear driven by an AM0820-A-0.225-7 stepper motor (Faulhaber) (Fig. 3, B and C, and fig. S2). This motor is controlled by the Raspberry Pi 2 board via the I2C communication protocol. Two custom-made Hall-effect sensors signal each full rotation completed by the gears (fig. S3C). The sensor housing is 3D-printed using PLA filament, and the parts are available online.

The optical compass was fully characterized. The noise level was quantified, and the results show that the POL-units feature Gaussian noise (fig. S4). The angular aperture of each POL-unit is equal to ±60°, centered at 90° (fig. S3). The effects of clouds were also investigated (figs. S5 and S6). The celestial compass RMSE under clear sky conditions (all UV indexes combined) was equal to 1.3%; this error rose to 10.3% under an overcast sky, but field results showed that the short-range navigation performances were still satisfactory (65). The pattern of polarization must remain fairly constant during the insect's or robot's journey; in practice, the present experiments did not take longer than 20 min, which is consistent with the duration of the insects' journeys.

Given the regular pattern of skylight polarization (Fig. 3A), the output of each POL-unit is a 180°-periodic sine wave, with a 180° shift between the two POL-units because their linear sheet polarizers are set orthogonally to each other. The raw signals are low-pass-filtered and normalized (Fig. 3D). The log ratio of the two normalized and corrected signals is then calculated (Fig. 3E), from which the robot's heading angle is determined. Because of the symmetry of the polarization pattern, this angle of orientation is only known within [0°, 180°]. To solve the solar/antisolar ambiguity, AntBot rolls its celestial compass to detect which half (right or left) of the sky contains the maximum UV level (fig. S7). Last, the effect of the Sun's course on the polarization pattern is corrected using solar ephemerides collected online for the exact location and dates of the experiments.
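As an illustration of the principle behind the rotating-polarizer measurement, the 180°-periodic Malus-law signal can be demodulated at twice the polarizer angle to recover the AoP. The sketch below is a simplified single-unit version, not AntBot's actual two-unit log-ratio pipeline; the function name and the synthetic scan are assumptions.

```python
import math

def aop_from_scan(angles_deg, intensities):
    """Estimate the angle of polarization (AoP) from one full polarizer
    rotation, using the Malus-law model I(psi) = I0*(1 + d*cos(2*(psi - phi))).
    Demodulating at twice the polarizer angle recovers phi in [0, 180)."""
    c = sum(i * math.cos(2.0 * math.radians(a))
            for a, i in zip(angles_deg, intensities))
    s = sum(i * math.sin(2.0 * math.radians(a))
            for a, i in zip(angles_deg, intensities))
    return (0.5 * math.degrees(math.atan2(s, c))) % 180.0

# Synthetic scan: AoP = 30 deg, degree of linear polarization 0.6
psis = list(range(0, 180, 5))
meas = [1.0 + 0.6 * math.cos(2.0 * math.radians(p - 30.0)) for p in psis]
# aop_from_scan(psis, meas) returns 30.0
```

The 180° ambiguity of the result is exactly the solar/antisolar ambiguity the robot resolves by checking which half of the sky is brighter in the UV.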

The OF sensor

The ventral OF sensor embedded onboard AntBot is an M2APix (66), comprising 12 hexagonal Michaelis-Menten pixels (Fig. 4). The inter-receptor angle was found to be equal to 3.57°, with an SD of 0.027° (fig. S8). One OF value is computed from two adjacent pixels in a row by measuring, with a cross-correlation method, the time lag between the two pixels' detections of a single moving contrast (text S2). The 10 local OF values are then sorted, and those that do not fit a plausible range (defined on the basis of AntBot's typical speed) are removed. The final OF is the mean of the remaining values, and the travel distance is computed on the basis of this mean OF value.
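The lag-to-OF conversion and range-based rejection described above can be sketched as follows. This is a simplified illustration: the function names, rejection bounds, and sample lags are assumptions, and the cross-correlation step that produces the lags is omitted.

```python
import math

INTER_RECEPTOR_DEG = 3.57  # measured angle between adjacent M2APix pixels

def local_of_deg_per_s(lag_s):
    """One local OF value (deg/s) from the time lag between two adjacent
    pixels detecting the same moving contrast."""
    return INTER_RECEPTOR_DEG / lag_s

def fused_of(lags_s, of_min=10.0, of_max=300.0):
    """Drop local OF values outside a plausible range (bounds chosen here
    arbitrarily for a typical walking speed), then average the remainder."""
    values = [local_of_deg_per_s(t) for t in lags_s if t > 0.0]
    kept = [v for v in values if of_min <= v <= of_max]
    return sum(kept) / len(kept) if kept else math.nan

# Ten hypothetical lags (s); the 8th and 9th are spurious detections:
lags = [0.050, 0.051, 0.049, 0.050, 0.052, 0.048, 0.050, 0.005, 0.900, 0.050]
# fused_of(lags) is about 71.4 deg/s; the two outlying values are rejected
```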

Collecting the ground truth data

The indoor experiments were recorded using our motion capture system, consisting of 17 infrared VICON cameras placed in a 6 m by 6 m by 8 m flight arena to determine the robot's 3D location with millimetric precision. At each checkpoint, the robot's location was tracked and stored in a Matlab file. The outdoor experiments were recorded by photographing the scene at each checkpoint. The perspective was then corrected in each photograph, and a point-and-click application written in Matlab was used to determine the robot's location in its environment. The robot's distance from the goal was also measured directly, before and after the homing trip, in all the outdoor experiments. The overall results were processed with Matlab software.
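The perspective correction amounts to estimating a homography between the image plane and the ground plane from a few reference points, then mapping each clicked pixel to ground coordinates. Below is a minimal direct-linear-transform sketch in Python rather than Matlab; the point coordinates are hypothetical.

```python
import numpy as np

def fit_homography(px, gnd):
    """Direct linear transform: 3x3 homography mapping image pixels to
    ground-plane coordinates from >= 4 point correspondences."""
    rows = []
    for (u, v), (x, y) in zip(px, gnd):
        rows.append([u, v, 1.0, 0.0, 0.0, 0.0, -x * u, -x * v, -x])
        rows.append([0.0, 0.0, 0.0, u, v, 1.0, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # null-space vector = homography entries

def to_ground(h, u, v):
    """Map one clicked pixel (u, v) to ground coordinates."""
    x, y, w = h @ np.array([u, v, 1.0])
    return x / w, y / w

# Corners of a 1 m ground square as they appear in a hypothetical photo:
px = [(100.0, 400.0), (500.0, 420.0), (480.0, 120.0), (120.0, 100.0)]
gnd = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = fit_homography(px, gnd)
# to_ground(H, 100.0, 400.0) is approximately (0.0, 0.0)
```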


Text S1. The celestial compass

Text S2. The robot’s odometer

Text S3. The PI process

Fig. S1. AntBot, an ant-inspired hexapod robot.

Fig. S2. Exploded computer-aided design view of the UV-polarized light compass.

Fig. S3. Characterization of the angular aperture of the celestial compass.

Fig. S4. Noise measured in each POL-unit in the absence of UV-polarized light.

Fig. S5. Effects of variable sky on the output of the celestial compass.

Fig. S6. Characterization of the celestial compass.

Fig. S7. Solar-based solution to the heading angle ambiguity.

Fig. S8. Characterization of the M2APix OF sensor.

Fig. S9. Photograph of the experimental setup.

Fig. S10. Photographs of the textured panels.

Fig. S11. Graph of the homing path.

Fig. S12. AntBot’s walking drift analysis.

Table S1. Walking parameters of the hexapod robot AntBot.

Table S2. Empiric gain β used in the outdoor experiments.

Table S3. Results obtained in the PI-ST mode.

Table S4. Results obtained in the PI-OF-ST mode.

Table S5. Results obtained in the PI-ST-FUSE mode.

Table S6. Results obtained in the PI-POL-ST mode.

Table S7. Results obtained in the PI-Full mode.

Movie S1 (.mp4 format). AntBot’s homing performances in the PI-ST mode.

Movie S2 (.mp4 format). AntBot’s homing performances in the PI-OF-ST mode.

Movie S3 (.mp4 format). AntBot’s homing performances in the PI-ST-FUSE mode.

Movie S4 (.mp4 format). AntBot’s homing performances in the PI-POL-ST mode.

Movie S5 (.mp4 format). AntBot’s homing performances in the PI-Full mode.


Acknowledgments: We thank M. Boyron and J. Diperi for technical support with designing the celestial compass, S. Lapalus for help with the outdoor experimental setup, and J. Blanc for revising the English manuscript. Funding: This research was supported by the French Direction Générale de l’Armement (DGA), CNRS, Aix-Marseille Université, the Provence-Alpes-Côte d’Azur region, and the French National Research Agency (ANR) in the framework of the Equipex/Robotex project. Author contributions: J.D. and J.R.S. designed the robot. J.D. built the robot. J.D., J.R.S., and S.V. designed the celestial compass. J.D. designed the software and conducted the experiments. J.D., J.R.S., and S.V. interpreted the data. J.D. wrote the original draft. J.D., J.R.S., and S.V. revised the manuscript. J.R.S. and S.V. supervised the project. Competing interests: The authors declare that they have no competing interests. Data and materials availability: The 2D trajectory datasets (Matlab files) are available online.
