Research Article | Collective Behavior

Implicit coordination for 3D underwater collective behaviors in a fish-inspired robot swarm


Science Robotics  13 Jan 2021:
Vol. 6, Issue 50, eabd8668
DOI: 10.1126/scirobotics.abd8668

Abstract

Many fish species gather by the thousands and swim in harmony with seemingly no effort. Large schools display a range of impressive collective behaviors, from simple shoaling to collective migration and from basic predator evasion to dynamic maneuvers such as bait balls and flash expansion. A wealth of experimental and theoretical work has shown that these complex three-dimensional (3D) behaviors can arise from visual observations of nearby neighbors, without explicit communication. By contrast, most underwater robot collectives rely on centralized, above-water, explicit communication and, as a result, exhibit limited coordination complexity. Here, we demonstrate 3D collective behaviors with a swarm of fish-inspired miniature underwater robots that use only implicit communication mediated through the production and sensing of blue light. We show that complex and dynamic 3D collective behaviors—synchrony, dispersion/aggregation, dynamic circle formation, and search-capture—can be achieved by sensing minimal, noisy impressions of neighbors, without any centralized intervention. Our results provide insights into the power of implicit coordination and are of interest for future underwater robots that display collective capabilities on par with fish schools for applications such as environmental monitoring and search in coral reefs and coastal environments.

INTRODUCTION

The natural world abounds with self-organizing collectives, where large numbers of relatively simple agents use local interactions to produce impressive global behaviors, such that the system as a whole is greater than the sum of its parts (1). Well-known examples include social insect colonies, bird flocks, and fish schools. Fish schools are particularly impressive—collectives of thousands migrate long distances, shoal together in coral reefs, efficiently search for resources, and even form dynamic shapes such as flash expansions or bait balls to evade predators and capture prey (2–5). A quarter of fish species school for their entire life, and about half the species school as juveniles (2). Fish achieve many benefits from this cooperation, including higher success in foraging, migration, and predator evasion (5–8). These collective behaviors emerge mostly from implicit coordination—many fish species base schooling decisions on visual observations of nearby neighbors, and several species use their lateral lines to perceive neighbors in low-visibility conditions (9–13). By making decisions based on local perception of neighbors, these fish schools elegantly bypass the inherent challenges of underwater communication, achieving enormous scalability and robustness through decentralization (3, 4, 14–16).

Mathematicians and engineers have strived to understand the mapping from local interactions onto global behaviors and vice versa in a quest to understand natural collective intelligence and engineer artificial robot collectives (17–20). Recent advances have demonstrated successful implementations of self-organized homogeneous robot swarms as large as 1000 units inspired by cells and social insects, albeit limited to two-dimensional (2D) local interactions (21–26). For example, the SWARM-BOTS project demonstrated ant-inspired collective transport and chain formation (26), the Kilobot project demonstrated large-scale shape self-assembly (21, 24), and the particle robotics project demonstrated emergent complex motion (23). In the 3D aerial domain, large drone swarms have displayed complex maneuvers, although mainly relying on centralized base stations or external global position information rather than local and self-organized interactions (27–33). For instance, Intel’s Shooting Stars (used at the 2018 Winter Olympics) and the Crazyswarm are centrally controlled by a single computer and depend heavily on the Global Positioning System (GPS) and motion capture, respectively (27, 28), whereas the VIO-Swarm uses local visual inertial odometry (VIO) to determine position (29). Other aerial swarms have demonstrated decentralized self-organization but rely on the exchange of GPS locations among robots or on a signaling home beacon to infer relative positions (30–33). Fully decentralized coordination in 3D (search and retrieval) was achieved by the Swarmanoid, a heterogeneous swarm composed of wheeled, climbing, and flying robots that cooperate (34). The flying robots attach to the ceiling and use infrared wireless communication to self-organize local positional coordinates, providing navigation aid to ground and climbing robots.

Compared with above-ground collectives, 3D underwater robotic systems have not yet been able to achieve similar levels of self-organization. Several previous projects have envisioned robot collectives for applications from environmental monitoring in sites of high ecological sensitivity (e.g., coral reefs) to inspections of underwater infrastructure and search-and-rescue operations (35–42). In addition, such collectives can provide a synthetic means to understand how fish school complexity arises from the decisions of individual fish. However, aquatic environments impose substantial challenges on perception and locomotion and especially limit communication and sensing; traditional above-ground communication methods such as wireless radio perform poorly underwater, and position localization methods such as GPS are unavailable. As a result, most underwater swarms coordinate only at the surface or have no coordination whatsoever (35, 42). For example, the commercially developed Data Divers by Apium Swarm Robotics (43, 44) communicate and spread at the surface before diving for samples, and the M-AUE robots (35) drift uncoordinatedly with the ambient flow to sample the ocean for offline data reconstruction. Such systems bypass underwater 3D interactions to focus on specific environmental tasks; as a consequence, they are unable to achieve the complex collective behaviors that fish schools display.

Some research groups have attempted more complex underwater coordination by designing new explicit communication and localization methods [e.g., optical/acoustic modems (36), centralized or networked underwater base stations (38, 39), and bio-inspired electroception (45)], demonstrating limited coordination usually with two robots. A recent project, CoCoRo, built a heterogeneous swarm combining multiple modes of mobility and communication: surface robots, underwater robots, and floating base stations, using radio frequency communication above water and modulated blue-light and acoustic communication underwater (38). Similar to the Swarmanoid project (34), CoCoRo provided a compelling vision for heterogeneous collaboration, as well as engineering design insights; however, limited experimental studies of 3D submerged collective behavior were published (37). Although heterogeneity potentially enables more sophisticated behaviors through multimodal communication and task specialization, it comes at the cost of increased engineering and control complexity.

Overall, the focus of underwater multirobot systems has been on coordination through explicit and semicentralized communication, in contrast with coordination based on implicit and local perception used by fish. This approach has had limited success; compared with the incredible 3D maneuverability of schooling fish or even 3D aerial robot swarms, current 3D underwater artificial systems demonstrate a large gap in achievable collective complexity. Inspired by fish schools that coordinate using vision, we aim to achieve underwater robot collectives with similarly seamless and coherent coordination, high degrees of maneuverability, and independence from assistive technologies. Here, we demonstrate multiple complex 3D underwater collective behaviors, with a fish-inspired miniature underwater robot swarm, Blueswarm, that uses only local implicit vision-based coordination to self-organize (Fig. 1). We show that multiple types of 3D collective behaviors—coordinating time, space, dynamics, and task sequencing—can all be achieved using this very simple mode of communication and without any externalized assistance in position sensing or control. Our work experimentally validates the concept of a collective of autonomous underwater robots with implicit, self-organized, and decentralized coordination in 3D space. The Blueswarm platform enables the systematic laboratory investigation of realistic and broadly applicable swarming algorithms that can pave the way for more reliable real-world ventures with robot swarms. Capitalizing on the power of decentralized autonomy, we provide experimental evidence and new results for underwater 3D collectives (see Fig. 2).

Fig. 1 Blueswarm platform.

Bluebot combines autonomous 3D multifin locomotion with 3D visual perception. (A) Two cameras cover a near-omnidirectional field of view (FOV). One caudal and two pectoral fins enable nearly independent forward and turning motions; a dorsal fin effects vertical diving for depth control. (B to D) Seven Bluebots with streamlined, fish-inspired bodies are used in Blueswarm experiments. (E and F) The fins are powered by a custom electromagnetic actuator (see Materials and Methods). (G) Information on neighboring robots extracted from images enables local decision-making. (H) Fast onboard image processing is achieved by setting the cameras such that only the two posterior LEDs (and potential surface reflections) of neighboring robots appear in images. For illustration purposes, pairs of LEDs belonging to the same robot are color coded and have a white outline; a pair of the same color without the outline marks the respective surface reflection. (I) Bluebots’ relative positions and distances are derived from pairs of LEDs assigned to individual robots (color-coded vectors), facilitating self-organized behaviors such as visual synchronization, potential-based dispersion/aggregation, dynamic circle formation, and collective search.

Photo credit (C): iStock
Fig. 2 Blueswarm’s distinctive features in comparison with other robot collectives.

Blueswarm is a 3D underwater collective that uses only local implicit vision-based coordination to self-organize. Because it does not depend on any external assistance, Blueswarm is more autonomous than most aerial swarms. It advances fundamental research on decentralized and self-organized robot collectives from 2D to 3D space.

RESULTS

Fish-inspired robot design with 3D perception and locomotion

We designed miniature (235 cm³), autonomous, fish-inspired, underwater robots called Bluebots to systematically study self-organized 3D coordination in the underwater domain. Two fundamental individual-level capabilities for self-organization are 3D awareness of neighbors’ distance and bearing and swift 3D motion response to neighbors. We realize these capabilities with Bluebots using a suite of sensors and actuators that enables perception and locomotion along all three dimensions in space (Fig. 1, movie S1, and section S1). In several species of schooling fish, vision is the dominant sensory modality (4); these schooling fish have spherical 3D vision with a small blind spot (~40°) in the rear (13). To rapidly detect members of their school, many such species have evolved specialized visual patterns (e.g., “schooling marks” and prominent stripes), and nighttime schooling fish, such as the flashlight fish Anomalops katoptron, exploit individual bioluminescence (14, 46). Inspired by these natural systems, Bluebot achieves 3D vision and neighborhood sensing using a combination of cameras and blue-light light-emitting diodes (LEDs). Two cameras with 195° wide-angle lenses offer a quasi-omnidirectional field of view with an emphasis on the critical anterior direction (35° overlap) and limited only by a narrow 5° blind spot at the posterior of the robot (Fig. 1A). The Bluebots incorporate a pair of vertically stacked blue-light LEDs at the posterior as a simple visual feature that allows neighbors to quickly identify distance and angular position of each other via projective geometry (Fig. 1, G to J). Vision-based algorithms for neighbor detection that deal with the spherical distortion of the camera lens, LED reflections at the water surface, and the assignment of LED pairs to individual robots are presented in Materials and Methods. Image acquisition and processing are computationally expensive, limiting the sensing iteration frequency to 2 Hz. However, because the Bluebots move at speeds close to one body length per second (BL/s), we are able to achieve sensing-to-motion response times that are similar to schooling fish such as jack mackerels (47). Using this vision system, we are able to approximate aspects of fish vision such as constant awareness and swift response to surroundings, and a Bluebot can detect a single neighbor up to 5 m away (measured in air under ideal conditions; section S1.3). However, as with real fish (15), the Bluebot’s vision system has natural limitations, such as noisy observations when many neighbors are present and occlusion from nearby neighbors, yielding incomplete and imperfect representations during dynamic swarming activities.

A high degree of maneuverability allows Bluebots to capitalize on their 3D visual information by exerting 3D locomotive responses. Schooling fish, such as surgeonfish and damselfish, exhibit high degrees of maneuverability, forming agile coordinated schools in complex environments such as coral reefs (48, 49). The streamlined body of Bluebot, measuring 130 mm (equal to 1 BL) in the longest dimension, was modeled after typical surgeonfish (family Acanthuridae; Fig. 1, B and C) (50). To achieve high maneuverability, four independently controlled fins provide precise locomotion in 3D space; this actuation scheme is a streamlined version of a prototype demonstrated in our previous work (51). In the horizontal xy plane, turning in place and stopping are achieved with the two pectoral fins (Fig. 1D, left) and forward motion with the caudal fin (Fig. 1D, right); diving along the vertical z axis is controlled with a single dorsal fin and slight positive buoyancy. Bluebot is passively stable in roll and pitch. Operating at fixed amplitudes, the actuation frequencies of the caudal and dorsal fin can be modulated to reach cruise speeds of up to 150 mm/s (equal to 1.15 BL/s) and dive speeds of up to 75 mm/s. The pectoral fins allow for near on-the-spot turning at radii as small as 65 mm (equal to 0.5 BL), and 180° changes of direction can be achieved in less than 5 s.

The Bluebot design aims to achieve high capability (3D local perception and response) from simplicity, so that it can be easily mass-produced for swarm research (Materials and Methods). Here, our final Blueswarm has seven robots (Fig. 1B) that visually interact with each other in a confined freshwater tank of size 1.78 m by 1.78 m by 1.17 m (or 13.7 BL by 13.7 BL by 9.0 BL; fig. S18). All interactions are through onboard perception; no external global position information or centralized control is used, and all 3D trajectories are tracked for postexperiment analysis (section S5).

Self-organization across time through visual phase matching of LED flashings

Using only vision-based local interactions, we report several examples of self-organized underwater collective behaviors that coordinate groups in time (synchronization), space (controlled dispersion), and dynamic motion (milling), ending with a composition of multiple behaviors to achieve a search operation (Figs. 3 to 6). The first behavior that we investigate is spontaneous synchrony, a classic example of self-organized coordination in time. Millions of fireflies (Photuris lucicrescens) synchronize and flash in unison to attract mates (Fig. 3A); studies have shown that this global behavior emerges from individual fireflies visually detecting the flashes of neighbors and adjusting to match their phase (52). For Bluebots, the ability to synchronize can enable time-coordinated actions such as sampling of an environment. Our approach exploits flashing as a tacit mechanism to achieve synchrony and is based on the well-known Mirollo-Strogatz model (Fig. 3B) (53). Bluebots are initialized with different start times and programmed to periodically flash with a nominal time interval of tf = 15 s. The program running on each Bluebot proceeds in discrete time steps toward the next flash at tf, a 2-s-long light-up of its LEDs, by updating a counter variable n. Whenever a Bluebot i flashes, all observing neighbors j jump ahead by m = f(n) steps (Eq. 1). Mirollo and Strogatz proved that synchrony is guaranteed under any monotonically increasing and concave-down function f(n), for instance f(n) = √n:

$$n_i = t_f \;\Rightarrow\; n_j = \min\!\left(t_f,\; n_j + \sqrt{n_j}\right) \quad \forall\, j \neq i \tag{1}$$
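The update rule in Eq. 1 can be illustrated with a short, idealized simulation. The sketch below assumes discrete 1-s steps, perfect flash detection, and that every robot observes every flash; the variable and function names are ours and are not taken from the Bluebot firmware.

```python
import math
import random

T_FLASH = 15  # nominal flash period t_f in steps (15 s at 1-s steps)


def synchronize(n_robots=7, rounds=12, seed=1):
    """Idealized Mirollo-Strogatz counter update (Eq. 1) with all-to-all observation.

    Each robot advances a counter n by one step per second; a robot reaching t_f
    flashes, and every observer jumps ahead by f(n) = sqrt(n). A jump that reaches
    t_f makes that robot flash in the same step (absorption into the flash group).
    """
    random.seed(seed)
    counters = [float(random.randint(0, T_FLASH - 1)) for _ in range(n_robots)]
    flash_log = []
    for t in range(rounds * T_FLASH):
        counters = [n + 1 for n in counters]  # one 1-s time step
        flashed = set()
        while True:
            new = [i for i, n in enumerate(counters) if n >= T_FLASH and i not in flashed]
            if not new:
                break
            flashed.update(new)
            for j in range(n_robots):
                if j not in flashed:  # observers jump ahead (Eq. 1)
                    counters[j] = min(T_FLASH, counters[j] + math.sqrt(counters[j]))
        for j in flashed:
            counters[j] = 0.0  # reset after flashing
        if flashed:
            flash_log.append((t, sorted(flashed)))
    return flash_log


if __name__ == "__main__":
    for t, who in synchronize():
        print(f"t = {t:3d} s, flashing robots: {who}")
```

In this idealized all-to-all setting, the printed flash groups progressively merge until all robots flash in the same second; on the real robots, convergence additionally has to cope with the 2-Hz sensing rate and limited visibility.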

Fig. 3 Self-organization across time.

(A) Fireflies flashing in unison. (B) Mirollo-Strogatz synchronization model: Firing agents pull up observers closer to their firing times, and the pull-up magnitude increases monotonically with time an observer spent already on a given firing cycle. Left: y fires and x is pulled up; right: x fires and y is pulled up; result: their phase difference (red) was reduced. After multiple such rounds, x and y will fire in unison. (C) Seven Bluebots observed LED flashes of neighboring robots and adjusted their flash cycles to achieve synchrony (solid) after three initial rounds of desynchronized flashing (dashed). The SD σ in flash times among robots disappeared after four and seven rounds of synchronization for uniformly (blue) and randomly (red) distributed initializations, respectively. (D) Robots with randomly initialized flash times synchronized slower because they partitioned into two competing subgroups (rounds 5 to 7). (E) Stills from the randomly initialized experiment show uncoordinated (top, round 1) and synchronized (bottom, round 10) flashing.

Photo credit (A): iStock
Fig. 4 Self-organization across space.

(A) A shoal of surgeonfish foraging in a reef. (B) During dispersion/aggregation, a Bluebot (black) calculates its next move (black vector) as the weighted average of all attractive (blue) and repulsive (red) forces from neighboring robots. (C) Interrobot forces (red) are calculated as the first derivative of the corresponding Lennard-Jones potential (blue) with standard parameters a = 12 and b = 6 and a tunable target distance dt (equal to 2 BL). The forces f are dependent on the distance d between robots: f = 0 for d = dt, f << 0 for d < dt (repulsive), and f > 0 for d > dt (attractive). The target distance dt defines the robot density of the collective. (D to F) 3D dispersion (blue markers, dt = 2 BL) and aggregation (red markers, dt = 0.75 BL) with seven Bluebots. Robot density ρ changed 80-fold; ρ = 455 m⁻³ equates to one individual per cube of BL, a density commonly observed in fish schools (64). (G) Dynamically repeated aggregation and dispersion by change of dt between 0.75 and 2 BL. Black arrows indicate increases (up) and decreases (down) in dt.

Photo credit (A): Uxbona/wiki/File:Maldives_Surgeonfish,_Acanthurus_leucosternon.jpg; used under CC BY 3.0
Fig. 5 Self-organized dynamic circle formation.

(A) A school of barracudas milling. (B) Dynamic circle formation with binary sensors: A robot turns clockwise if no other robots are present within a predefined segment view (orange) and counterclockwise if at least one robot is present (blue). The emergent circle radius R is determined by the angle of view α and the number of robots N with approximately circular bodies of radius ρ (Eq. 3). (C) Dynamic circle formation on Blueswarm (arrows indicate robot headings). (D to F) Experimental data from 3D dynamic circle formation with seven Bluebots, where each Bluebot maintains a preferred depth, resulting in a cylindrical shape: (D) trajectories of all robots, (E) distances to centroid of all robots (colors) and their mean (black, dashed), and (F) depths of all robots. (G) Addition and removal of robots during a continuous 2D experiment, demonstrating robustness of the formation process and emergent adjustment of circle radius R to the number of robots N.

Photo credit (A): iStock
Fig. 6 Search operation composed from multiple behaviors.

(A to D) Experimental validation: (A) Seven Bluebots were deployed centrally and searched for a red-light source at the left bottom corner of the tank. Robots switch between three behaviors: search, gather, and alert, indicated by blue, green, and yellow, respectively, in figure diagrams. (B) Initially, the robots dispersed to cooperatively locate the source. The first robot detecting the source switched to alert behavior, maintaining position and flashing its LEDs (15 Hz). (C) Other robots close by the source also detected it and switched to alert. Further away, robots that had detected the flashing LEDs switched to gather, turning off their LEDs and moving toward the flashing robots. (D) The experiment concluded when all robots had found the red-light source. (E) The timing of events shows the cascade of information spreading. The first robot detected the source after 20 s of controlled dispersion. Within 10 s, all other robots noticed its alert and started migrating toward the flashing LEDs. Incoming robots catching the light source started flashing as well to reinforce the alert signal. The source was surrounded by all robots after 90 s. (F) During search, all robots acted according to the same finite-state machine (nbr, neighbor).

In experiments with seven Bluebots moving randomly underwater, the discrepancy between flashing times quickly decayed as Bluebots achieved synchrony within 105 s (Fig. 3, C to E, and movie S2). One of the key features of this algorithm is the simplicity of interactions: An individual Bluebot does not need to distinguish between neighbors. Because of the importance of time synchronization to many applications, several implementations of firefly-inspired synchrony exist in sensor networks and robots (54–57). Our results show that this same approach also works well underwater, where access to global clocks is much more challenging than above ground.

Self-organization across space through attractive and repulsive virtual forces

Biological collectives also self-organize spatially; for example, fish shoals disperse over a region to feed or defend but stay connected as a group (Fig. 4A) (5). Control over the spread of a robotic collective is important, for example, to disperse robots for better coverage during environmental sampling or search or to aggregate robots for recovery (26). Fish shoaling and dispersion have been extensively modeled (5, 17, 18, 58). Most models assume that an individual fish experiences virtual forces from nearby neighbors based on distance, with neighbors that are too close repelling and those further away attracting, although the exact form of the virtual forces is unknown (5). Controlled dispersion has also been extensively implemented in 2D ground robots, ocean surface robots, and some 3D aerial robots (30, 32, 34, 59–61); typically, robots detect relative positions of neighbors by using an infrared communication ring or by explicitly exchanging GPS positions wirelessly. In contrast, fish use vision to determine relative positions of neighbors and implicitly react without any direct communication. Regardless of implementation, the emergent result of the virtual force model is the same: The fish school or robot swarm tends to disperse over an area, and the balance of repulsive versus attractive forces determines the density and spread of the group (5).

To demonstrate coordination in space, we implemented fish-inspired dispersion using implicit interactions: Each Bluebot attempts to visually determine the relative distance and bearing of all visible neighbors in real time, compute their forces, and then move in the direction of the resulting 3D motion vector (Fig. 4B and movie S3). We picked a commonly used artificial potential, the Lennard-Jones potential (26, 58, 62, 63), to model the nonlinear interaction between robots based on relative positions extracted in real time from onboard vision (Materials and Methods). Variables a and b define the magnitudes of the repulsive and attractive forces, respectively, and were set to 12 and 6 (standard). A single adjustable parameter, namely, a target neighbor distance dt, controls the spacing of the collective. Neighbors j closer than dt exert a repulsive force on a robot i that approaches infinity as robots collide; neighbors farther away exert an attractive force that decreases to zero for far away neighbors (Fig. 4C). The force contributions Fij of all N visible neighbors are obtained by taking the first derivative of the Lennard-Jones potential Vij with respect to their distances ∣rij∣. The average of all individual forces multiplied with the respective relative positions rij determines the next move vector pi of a robot i:

$$\mathbf{p}_i = \frac{1}{N}\sum_{j=1}^{N} F_{ij}\,\mathbf{r}_{ij} = \frac{1}{N}\sum_{j=1}^{N} \frac{\partial V_{ij}}{\partial \lvert\mathbf{r}_{ij}\rvert}\,\mathbf{r}_{ij} = -\frac{1}{N}\sum_{j=1}^{N} \frac{1}{\lvert\mathbf{r}_{ij}\rvert}\left[a\left(\frac{d_t}{\lvert\mathbf{r}_{ij}\rvert}\right)^{a} - 2b\left(\frac{d_t}{\lvert\mathbf{r}_{ij}\rvert}\right)^{b}\right]\mathbf{r}_{ij}, \quad j \neq i \tag{2}$$
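To make Eq. 2 concrete, the sketch below computes the move vector p_i from estimated neighbor positions. It assumes relative positions are already available as 3D vectors (in body lengths) pointing from the robot toward each neighbor; the names and the final scaling to fin commands are illustrative and not taken from the Bluebot code.

```python
import numpy as np


def lj_force(dist, d_t, a=12, b=6):
    """First derivative of the Lennard-Jones potential used in Eq. 2:
    negative (repulsive) for dist < d_t, zero at d_t, positive (attractive) beyond."""
    return -(1.0 / dist) * (a * (d_t / dist) ** a - 2 * b * (d_t / dist) ** b)


def move_vector(rel_positions, d_t):
    """Next 3D move vector p_i: the average of force-weighted relative positions
    of all currently visible neighbors (Eq. 2)."""
    p = np.zeros(3)
    for r in rel_positions:
        r = np.asarray(r, dtype=float)
        p += lj_force(np.linalg.norm(r), d_t) * r
    return p / len(rel_positions)


if __name__ == "__main__":
    # One neighbor too close (1 BL away) and one far away (4 BL), target d_t = 2 BL:
    neighbors = [[1.0, 0.0, 0.0], [0.0, 4.0, 0.0]]
    print(move_vector(neighbors, d_t=2.0))
    # Large negative x component (move away), small positive y component (move closer).
```

In practice, the resulting vector would be normalized and mapped to fin oscillation frequencies; as described in Materials and Methods, stronger resultant forces translate to faster swimming toward the corresponding direction.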

All robots move continuously to minimize forces without being constrained to achieve a particular final formation. We tracked 3D trajectories for all experiments (Materials and Methods) and report on several commonly used metrics (5), such as density, volume, and average nearest-neighbor distance (NND).

In the first experiment, seven Bluebots were centrally deployed, and we set dt = 2 BL during the first 120 s (“dispersed state”) and dt = 0.75 BL during the second 120 s (“aggregated state”) (Fig. 4D). We measured robot density ρ as the number of robots N divided by the volume V of their convex hull. Results show that the robot density quickly plateaus within 30 s after dt is set (Fig. 4F) and that a large density and volume change can be achieved (ρ = 12 m⁻³ and V = 0.568 m³ in dispersed and ρ = 990 m⁻³ and V = 0.007 m³ in aggregated state). In the dispersed state, the convex hull of the Bluebots is able to cover a large fraction of the tank (Fig. 4E), amenable for coverage or search. In the aggregated state, the robots group tightly together, although this creates collisions that temporarily break the group (movie S3). As an additional metric, we measured average NND = 0.8 m (~6 BL) in dispersed and NND = 0.2 m (~1.5 BL) in aggregated state (section S2.1). The parameter dt acts as a conservative lower bound for NND because a single too-near neighbor can trigger additional dispersion due to the heavily nonlinear Lennard-Jones potential (Materials and Methods). Experiments with dt > 2 BL did not increase dispersion because Bluebots started to collide with the tank boundary frequently. When fish congregate in schools, typical densities are on the order of one fish per cubic BL (64), with distances between nearest neighbors ranging from 0.5 to 4 BL (65), which is similar to distances achieved by Bluebots during density control experiments.

To demonstrate dynamic and repeatable control over robot density, we conducted a second experiment (Fig. 4G and movie S3), during which dt was varied four times in the following sequence: (dt = 2 BL, t = 0 s), (dt = 0.75 BL, t = 30 s), (dt = 2 BL, t = 60 s), and (dt = 0.75 BL, t = 90 s). Our density results mirror the results from the first experiment, showing that it is possible to quickly and repeatedly switch between dispersed and aggregated states. The trajectories in this condition resemble trajectories seen during “flash expansion” (2, 5, 66), where an aggregated group of fish or insects are startled by an overhead predator and seem to “explode” away from the center of the group but then later reaggregate. In our experiment, when the target distance shifts from aggregation to dispersion, the artificial potential directs Bluebots away from the center of the swarm, seemingly aligning their heading radially away from the center of the aggregation without any explicit alignment sensing. Overall, our results show that at the system level, potential-based dispersion/aggregation in 3D can be achieved underwater, using purely local visual interactions without external assistance. However, further analysis showed that at the local level, Bluebots see fewer neighbors than theoretically possible: 4.9 on average during the first experiment with expected loss due to occlusion and occasional misidentified robots due to reflections (section S2.2). Despite this, the behavior is robust and repeatable and allows for effective changes between low and high densities. In addition, idealized point-mass simulations indicate that the resultant average distance between robots grows linearly with the prescribed target distance dt and sublinearly with the number of robots, which shows that fine control over the spread of a robot collective via dt is theoretically possible (section S2.3).

Dynamic circle formation and milling based on binary sensing of neighbor presence

Milling is an impressive dynamic formation commonly observed in fish schools when evading predators (Fig. 5A) (3, 5), where the whole school coherently swims in a clockwise or counterclockwise circular formation, often forming large 3D funnel or ball-like shapes. Theoretical models of fish schools suggest that milling may be achieved as a special case of flocking (19, 20, 67–69), through a delicate balance between parameters for alignment and attraction-repulsion or through radially asymmetric attraction-repulsion. Currently, however, such self-organized dynamic formations have not been implemented in physical robots. Experimental studies with flocking in ground and aerial robots (30, 59) suggest that detecting the alignment of neighbors is more challenging and noisy than determining position and bearing, and in both cases, alignment matching is achieved by robots explicitly exchanging messages with global heading information rather than local perception. Even in the alignment-free form (20), the system parameters need to be carefully tuned, although there is the potential for achieving many more dynamic formations. Currently, it is not fully understood how fish schools actually achieve milling (70), and several biological studies suggest that fish and birds may react to a limited number of neighbors rather than the whole neighborhood (71, 72).

Recently, a new trend in swarm robotics has been the study of minimalist self-organization, using models inspired by physics and derived by evolutionary algorithms; this work has shown that unexpectedly complex behaviors such as aggregation, clustering, and collective transport can be achieved by agents with extremely simple neighborhood sensing (e.g., binary sensing of presence or absence of neighbors or analog sensing of the number of neighbors) (73–75). In one such study, a behavioral rule for milling-like formations was found through evolutionary means by one of our authors (25). Instead of reacting separately to each visible neighbor, this rule relies only on a single binary source of information that indicates whether at least one other robot is within the line of sight. The rule takes the form of a memoryless mapping from each of the two possible cases onto a predefined locomotion pattern: in this case, turning slightly right if no one is visible and turning slightly left if any robot is visible. For many values of turning radii, robots spontaneously aggregate (25). However, for some parameters, emergent circle formation was observed, where robots spread equidistantly in a circle and rotated indefinitely. This emergent circle formation behavior was demonstrated in a simulation for 2D ground-based robots under the assumption of zero inertia but so far remains unvalidated on physical robots.

Here, we demonstrate self-organized milling, or dynamic circle formation behavior, for 3D underwater robots based on this minimalist formulation of milling. Instead of a line-of-sight sensor, our behavior rule uses a 3D triangular prism with nonzero opening angle 2α (Fig. 5B). We prove that for idealized robots, the radius R of the emergent dynamic circle is (Materials and Methods)

$$R = \frac{\rho}{\cos\alpha - \cos\!\left(\frac{2\pi}{N} - \alpha\right)} \tag{3}$$

where N is the number of participating robots with approximately circular bodies of radius ρ. This equation suggests that the more robots present, the larger the circle radius R. Note that all robots have the same fixed turning radius r0 ≤ R, but the actual radius R of the circle emerges as a function of the number of interacting robots and does not depend directly on turning radii; in section S3, we discuss parameter limits. Larger circles can also result from larger viewing angles α and body sizes ρ, which cause robots to see each other more easily (Materials and Methods). However, this theoretical result (Eq. 3) assumes idealized robot dynamics and does not include movement constraints or inertia, making experimental validation an important step. To test dynamic circle formation on the Bluebots, we chose α = π/12 so that a circle with all seven robots would fit within our tank. Each Bluebot used a precomputed mask on both cameras that returned a binary value based on whether at least one robot was within the specified field of view. The Bluebot then moved clockwise or counterclockwise depending on its sensor reading, which was achieved by actuating the caudal fin in conjunction with a pectoral fin, such that the ratio of frequencies determines the turning radius.
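As a numerical check of Eq. 3 with the parameters used here (α = π/12), the sketch below evaluates the predicted circle radius for different group sizes. The body radius ρ is not quoted numerically in the text; a value of about 80 mm reproduces the predicted radii of 190, 309, and 496 mm for five, six, and seven robots (see the next paragraph), so it is used here purely as an illustrative assumption.

```python
import math


def circle_radius(n_robots, rho=0.080, alpha=math.pi / 12):
    """Predicted radius R of the emergent dynamic circle (Eq. 3).

    n_robots : number of participating robots N
    rho      : approximate body radius in meters (assumed ~80 mm, see lead-in)
    alpha    : half-angle of the binary field-of-view sensor
    """
    return rho / (math.cos(alpha) - math.cos(2 * math.pi / n_robots - alpha))


if __name__ == "__main__":
    for n in (5, 6, 7):
        print(f"N = {n}: R = {1000 * circle_radius(n):.0f} mm")
    # N = 5: R = 190 mm, N = 6: R = 309 mm, N = 7: R = 496 mm
```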

We tested dynamic circle formation and maintenance with seven Bluebots; all of them spread out on the water surface randomly at initialization (Fig. 5, C to F, and movie S4). In our first experiment, we also preprogrammed each robot to dive to a different preferred depth such that the dynamic formation is 3D, forming a rotating cylinder similar to some natural observations of milling. Tracked 3D trajectories are shown in Fig. 5D. The Bluebots were able to form and maintain the dynamic circle formation for several minutes at a time, limited, in part, by collisions with the tank boundary. While rotating, the Bluebots could maintain a radius accuracy of under 20% and a depth accuracy of within 5% (Fig. 5, E and F). Unlike simulated or 2D ground robots, the Bluebots are subject to inertia and imperfect motion, and our results suggest that this minimalist rule is robust to the real-world dynamics. We also observed that the collective was often able to recover from collisions with the tank, forming a new circle after a short time period. We investigated this robustness further with a second experiment, where we manually removed and added robots at different times to form circles with five to seven robots at the same depth (Fig. 5G and movie S4). The robots were able to reform the dynamic circle after each perturbation in less than 30 s, even in the face of interrobot collisions. The experiment also confirmed that the radius R of the emergent circle varies with the number of robots N, yielding radii R of 234, 357, and 489 mm for N of 5, 6, and 7, respectively, that are close to the predictions from Eq. 3, i.e., 190, 309, and 496 mm.

Overall, our experiments show that we can achieve milling-like dynamic formations using this simple emergent behavioral rule. The presence of substantial inertia in an underwater setting does not prevent circle formation; however, the instantaneous formation at any given time is qualitatively less regular than previous simulations of inertia-free ground robots (25). Our success with emergent milling formations on physical robots illustrates the opportunity for new forms of implicit coordination algorithms, more similar to synchronization than dispersion, in that an individual agent does not need to explicitly detect and react to all neighbors but rather reacts anonymously or to some simple summary statistic about the neighborhood (e.g., presence/absence, amount, optic flow, etc.). This may enable more complex self-organization in real robots than previously possible.

Multibehavior collective search with transitions between search, gather, and alert

In our final demonstration of decentralized complexity, we combine multiple behaviors to achieve a collective search operation. In fish, robot, or even human collectives, the work of scanning surroundings can be shared among the constituent individuals, potentially reducing the burden on each individual while achieving a higher level of collective vigilance. Schooling fish, for instance, find food faster as group size increases (6), and each fish can devote more time to feeding because all others are also watching for predators (7, 8). Swarms of underwater robots may also exploit this “many eyes” effect for collective sampling of oceanic data, mapping of plumes in coastal waters (35, 36), or faster search and rescue missions in collaboration with ocean surface and aerial robots (38, 76). Because a search operation may involve several subtasks, it is also a motivational example for swarm programmability, where multiple collective behaviors must be sequenced together (77, 78).

In our search experiment, seven Bluebots were placed at the center of the water surface and tasked to search for a red-light source in a bottom corner of the tank (Fig. 6, A to E, and movie S5). To achieve this task, we combine three behaviors—search, alert, and gather—using flashing LEDs as a visual signal to initiate behavioral transitions (Fig. 6F). As a first step, the robots used the dispersion behavior described earlier to collectively search by spreading out in the tank (with dt = ∞ for unbounded dispersion). The first Bluebot to detect the red-light source (i.e., within a range of about 3 BL) switched to alert behavior, where it holds its position and flashes its LEDs at 15 Hz as a signal to recruit others. As other robots observed the flashing signal, they switched off their LEDs and moved toward the signal (gather behavior). Once they were close to the red-light source, they also started to flash, thereby reinforcing the alert signal (Fig. 6). Implementation details on detection of the red-light source and LED flashings are in Materials and Methods.
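The sequencing of the three behaviors can be summarized as a small finite-state machine. The sketch below is a schematic reconstruction of the transitions described above (disperse while searching, alert on source detection, gather on flash detection); the perception inputs are passed in as booleans, and all names are placeholders rather than the actual Bluebot API.

```python
from enum import Enum, auto


class Behavior(Enum):
    SEARCH = auto()  # disperse (unbounded d_t) while scanning for the source
    GATHER = auto()  # LEDs off, move toward the observed flashing
    ALERT = auto()   # hold position, flash LEDs at 15 Hz


def step(state, sees_source, sees_flashing):
    """One control-cycle update of the search state machine (cf. Fig. 6F).

    sees_source   : red-light source detected within roughly 3 BL
    sees_flashing : at least one flashing neighbor detected
    """
    if sees_source:
        return Behavior.ALERT      # found the target: hold and recruit others
    if state is Behavior.SEARCH and sees_flashing:
        return Behavior.GATHER     # a neighbor found it: move toward the flashing
    return state                   # otherwise keep the current behavior


if __name__ == "__main__":
    state = Behavior.SEARCH
    for obs in [(False, False), (False, True), (False, True), (True, False)]:
        state = step(state, *obs)
        print(state.name)  # SEARCH, GATHER, GATHER, ALERT
```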

This search experiment demonstrates the ability to design composite behaviors using signaling as a simple visual communication method, combining both implicit collective behavior and simple explicit state signaling, which allows a leaderless group to work together efficiently on a complex task. The gather behavior mimics the recruitment seen in natural collectives, for example, ants recruiting to collectively transport large bait or bees recruiting to high-value food sources (1). A recent study suggests that flashlight fish may use bioluminescent flashing to signal during nighttime schooling (46). For a single robot searching alone, the expected red-light source detection time theoretically increases to 1024 s (Materials and Methods). Blueswarm completed the search operation efficiently, with all robots able to detect the source within ~90 s, a substantial gain from their collaboration (Fig. 6E and movie S5).

DISCUSSION

Our results with Blueswarm represent an important advance in the experimental investigation of underwater 3D self-organized collective behaviors. In all the demonstrated behaviors, Bluebots relied solely on local visual information, acquired and processed onboard in real time, and on 3D but imperfect motion produced by low-cost fin actuators. However, Blueswarm is able to achieve multiple 3D collective behaviors by exploiting biologically inspired coordination techniques that are inherently robust to imperfect knowledge and that enable the emergence of complex and dynamic global behaviors from seemingly simple interactions. We demonstrated well-studied classic collective behaviors such as synchronization and dispersion, introduced dynamic behaviors such as milling, and programmed complex tasks by connecting multiple behaviors (movie S6). By focusing on a minimalist form of visual coordination, we were able to achieve versatility and demonstrate programmability for an underwater robot swarm.

This work on the design and validation of these autonomous robots with 3D perception and 3D locomotion represents an example of fully decentralized 3D underwater coordination using implicit local interactions and no external centralized sensing or control assistance. The Bluebot cameras paired with the LEDs result in a minimalist but versatile visual system that can be used to quickly infer the positions of neighboring robots but also allow synchronization and signaling. Future designs may use deep learning–generated visual patterns (i.e., “artificial” schooling marks) instead of LEDs to recognize neighbor pose (79). The fish-inspired body design with multiple fins offers a high degree of maneuverability in a small footprint to enable fast response to neighbor actions or precise maneuvers in more complex environments. Although the Bluebot platform is currently limited to a laboratory setting, the introduction of progressively more powerful and smaller microcomputers, underwater cameras, and new actuators will enable such robots to operate in more complex natural environments (36, 40). Blueswarm was particularly inspired by schooling fish, such as surgeonfish and damselfish, that form highly agile coordinated schools in complex and visually rich environments such as coral reefs (48, 49). With the rapid improvement in camera technology, we envision underwater visual coordination to be effective in environments where fish use vision as their dominant sensing modality. Future camera-equipped miniature underwater robots may additionally record videos and take images, for instance, to inspect coral reefs or man-made underwater structures.

Environments less amenable to vision, such as turbid waters, may require a recomposed suite of sensors, e.g., inspired by fish lateral lines; harsher environments like the open ocean may be explored with a larger and more powerful robot and other options like acoustic sensing. Nevertheless, these systems can still take advantage of implicit coordination algorithms and will face similar robustness challenges due to physical movement and local perception constraints. Blueswarm enables physically validated studies of algorithmic robustness and scalability that will uncover important gaps in our theoretical understanding. Bluebots are also well suited as an experimental test bed for investigating natural collective behaviors and biomimicry, for example, studying dynamic evasive maneuvers or energy savings for different formations in schooling fish or collective predator behaviors exhibited by wolf packs or dolphins (1, 5, 20). Implicit coordination is a compelling approach to scalable and robust swarming because it not only is naturally decentralized and robust to individual failures but also reduces communication complexity in environments where direct explicit message passing is not possible or not desired. Insights from real-robot underwater experiments will contribute toward future unsupervised versions of coordinated maneuvers of unmanned vehicles, making it possible to combine multiple robot modalities (aerial, water surface, and underwater) to achieve scalable and robust realizations of ventures such as collective search for missing aircraft, vessels, and persons in water (76).

MATERIALS AND METHODS

Experimental setup and testing routines

All experiments were conducted in a freshwater tank with dimensions of 1.78 m by 1.78 m by 1.17 m (51), which had a water depth of 0.91 m (effectively, 13.7 BL by 13.7 BL by 7.0 BL). Most experiments are under 10 min in length; the robots have a top speed of 1.15 BL/s and are able to cross the surface of the tank in about 11.9 s. A digital single-lens reflex camera was mounted above the tank to film experiments and allow for planar tracking of individual robots (fig. S18). Experimental data including diving depth values along the vertical dimension were acquired onboard the robots. The reconstruction of 3D robot trajectories was possible from video materials and depth values and validated in previous research (51). Custom-built software automates large parts of this reconstruction, only asking for user intervention when there is potential for ambiguity. The software works by first tracking the video data for Bluebot positions using image processing techniques, then isolating the trajectories of individual robots, and lastly matching these trajectories spatially and temporally with depth values from the Bluebots’ pressure sensors and fusing all the data to recreate individual 3D trajectories.
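A minimal sketch of the final fusion step, assuming the overhead-camera tracks and the onboard depth logs are already expressed in a common clock; the data layout and function name are illustrative only and do not reflect the actual tracking software.

```python
import numpy as np


def fuse_trajectory(t_cam, xy, t_depth, depth):
    """Fuse a planar overhead-camera track with onboard depth readings into 3D.

    t_cam   : (N,) timestamps of the overhead-camera track [s]
    xy      : (N, 2) planar positions from the overhead camera [m]
    t_depth : (M,) timestamps of the onboard pressure-sensor log [s]
    depth   : (M,) depth readings [m]
    Returns an (N, 3) array of x, y, z samples at the camera timestamps.
    """
    z = np.interp(t_cam, t_depth, depth)  # resample depth onto camera timestamps
    return np.column_stack([xy, z])


if __name__ == "__main__":
    t_cam = np.arange(0.0, 5.0, 1.0)
    xy = np.column_stack([np.linspace(0.0, 1.0, 5), np.linspace(0.0, 0.5, 5)])
    t_depth = np.arange(0.0, 5.0, 0.5)
    depth = 0.1 * t_depth  # robot diving slowly
    print(fuse_trajectory(t_cam, xy, t_depth, depth))
```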

Bluebot design

Bluebot’s functional design consists of three major modules: (i) Two cameras allow for 3D perception of surroundings, (ii) two LEDs serve as active beacons for neighbor recognition, and (iii) four independently controllable fins provide a high degree of maneuverability in 3D space.

1) The 195° wide-angle lenses (Arducam) of the cameras (Raspberry Pi Camera Module v2) penetrate the body on either lateral side and are angled 10° forward against the y axis. Thermoformed hemispheres made of clear plastic (Curbell Plastics PETG, 0.5 mm in thickness) cover the lenses for waterproofing (Fig. 1A). On the inside, the camera cables are routed to a duplexer board (Arducam), which is then connected to the onboard computer (Raspberry Pi Zero W). One camera can be used at a time, and the duplexer board allows for superfast switching (~20 μs).

2) The blue-light LEDs are mounted in prominent locations along the xz plane of Bluebot such that they are visible from almost any direction. The two posterior LEDs are always stacked vertically at a distance of 86 mm because Bluebot does not roll nor pitch (Fig. 1A). When detecting neighboring robots in camera images, custom-designed algorithms identify and assign LEDs to individual robots. An LED pair allows for the inference of direction and distance of a neighboring robot (Fig. 1, G to J).

3) In the horizontal xy plane, the caudal fin provides thrust in the forward direction along the x axis, and two pectoral fins produce near–on-the-spot turning. The pectoral fins are angled 30° forward against the y axis such that when run together, they provide thrust in the negative x direction and allow for stopping and backing up. Decoupled from planar motions are vertical ascent and descent. The robot itself is slightly positively buoyant such that it floats toward the surface unless the dorsal fin is actuated. The dorsal fin provides thrust along the z axis and allows for controlled diving (Fig. 1A).

Our current Bluebot design achieves 3D motion with a forward speed of 150 mm/s (equal to 1.15 BL/s), a diving speed of 75 mm/s, and turning at radii as small as 65 mm. Bluebot is able to process neighborhood images at the rate of about 2 Hz, which means that it can travel ~0.6 BL between observations.

All fins are powered by our custom electromagnetic actuators consisting of a coil inside which a permanent magnet is hinged (51). Oscillating the direction of an electric current flowing through the coil induces an oscillating magnetic field, with which the magnet tries to stay aligned. As a result, the fins oscillate around a single axis in a sinusoidal pitching motion (Fig. 1, E and F). The power of the fins can be controlled by changing the voltage across the coil with pulse width modulation. The actuators are submersible, and only two wires from each coil penetrate the Bluebot’s body, avoiding any need to seal off moving parts. The housings are 3D printed in assembled state, i.e., including the pivoted hinge to which fins, laser-cut from flexible plastic shims (ARTUS), are attached. The caudal and the dorsal fins are equipped with two actuators each for enhanced thrust and connect magnetically to the robot body. The magnetic connection allows for fast switching between different fins, which can be used in future studies (e.g., on the propulsive efficiency of fin designs).

Bluebot is designed to be easy to manufacture, recharge, and program to facilitate multirobot operations. Up to 10 Bluebots can be put on a custom charging rig simultaneously. The robots have onboard charging circuitry (LTC2954), and the rig is powered from a power supply at 10 V. Similarly, programming multiple robots with a single command is possible using the Wi-Fi module of the onboard computer (Raspberry Pi Zero W). To start programs (implemented in Python3) on multiple robots simultaneously, we use a light pulse, which is perceived by the robots’ forward-facing photodiodes (VTP1112H) and causes switching from an idle loop to the main program. Experimental data from the perspective of Bluebots can be logged onboard on a microSD card. We used, for instance, data from a pressure sensor (TE connectivity MS5803-02BA) to reconstruct diving depths. All electronics are connected to a custom printed circuit board (PCB; OSH Park). A 7.4-V 950-mA·hour battery (Turnigy) provides power for run times of up to 2 hours, whereby the onboard voltage is reduced to 5 V by a step-down voltage regulator (Pololu D24V90F5). Bluebot is switched on and off with a custom ignition key that applies 3.7 V to two external pins, which are connected to an on/off controller (LTC2954). The controller also automatically shuts down the robot if the battery is low.

Assembling a Bluebot takes roughly 6 hours, which starts with the installation of all actuators, cameras, and electronics inside the two 3D printed plastic halves (Stratasys PolyJet Objet500), continues with soldering all electronic components to the PCB and sealing those components that penetrate the body from the inside, and concludes with fusing the two halves into a single robot using plastic bonding epoxy (Loctite). Passive stability in roll and pitch and near-neutral buoyancy are achieved by careful placing of components such that the center of mass is directly below the center of buoyancy. A small compartment on the ventral side of Bluebot, which is sealed from the rest of the body and opened with a single bolt, allows for fine tuning of buoyancy with additional mass blocks. A Bluebot figure with all components labeled is shown in section S1.1.

Visual underwater navigation

The Bluebot’s vision system is composed of two cameras, which are capable of capturing still images at a resolution of up to 2592 pixels by 1944 pixels. However, in most cases, the images are downscaled to a resolution of 256 pixels by 192 pixels to allow for faster image processing and shorter control cycles. Captured images are in red-green-blue color, but because the points of interest are Bluebot LEDs, which are blue, only the blue channel gets used in image processing (except in red-light detection in the search experiments). The camera settings (brightness, contrast, and white balance gains) are tuned such that under experimental lighting conditions, only the bright Bluebot LEDs register substantially in the images; everything else appears mostly black (Fig. 1J).

LEDs appear as quasi-circular blobs in the images. These blobs are identified using a custom-designed algorithm, which is described here briefly and in more detail in section S1.2.2. First, each image undergoes thresholding to convert it to a binary image. Then, blob detection proceeds by searching for continuity in white pixels in the horizontal and vertical directions. Groups of white pixels that are continuous in both directions become designated as blobs. This algorithm, on average, requires 2.9 times fewer computational steps than more conventional algorithms (e.g., depth- or breadth-first search) and is faster by roughly one order of magnitude on the Bluebot’s Raspberry Pi Zero W processor (see section S1.2.2 for a time complexity analysis). The notable speedup comes at the cost of susceptibility to some pathological cases where discontinuous groups of white pixels get lumped together as one blob; however, these pathological cases appear very rarely in practice.
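The custom detector itself is specified in section S1.2.2; for reference, the sketch below implements the conventional baseline mentioned above, a breadth-first search over thresholded pixels, rather than the faster onboard variant.

```python
from collections import deque


def find_blobs(image, threshold=128):
    """Conventional blob detection by breadth-first search over thresholded pixels.

    image : 2D list/array of grayscale values (e.g., the blue channel)
    Returns a list of blobs, each a list of (row, col) pixel coordinates.
    """
    rows, cols = len(image), len(image[0])
    visited = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not visited[r][c]:
                blob, queue = [], deque([(r, c)])
                visited[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold and not visited[ny][nx]):
                            visited[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs


if __name__ == "__main__":
    img = [[0, 0, 0, 0],
           [0, 255, 255, 0],
           [0, 0, 0, 0],
           [0, 0, 200, 0]]
    print([len(b) for b in find_blobs(img)])  # two blobs: sizes [2, 1]
```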

The wide field of view provided by the lenses results in substantial spherical distortion in the images. Thus, an undistortion function is essential to convert blob locations in images to directions in the real world. We obtained this undistortion function using open-source software for omnidirectional camera calibration (OCamCalib Toolbox for MATLAB).

A typical image from a Bluebot’s camera contains several blobs because multiple other Bluebots are within the field of view. Moreover, reflections may appear if these Bluebots are close to the water surface. For obtaining information about the number of Bluebots or their distances, it is necessary to eliminate reflections and to identify pairs of blobs originating from the same Bluebot. The process of pairing blobs relies on the fact that Bluebots are passively stable in roll and pitch, and therefore, their two LEDs are always vertically aligned. When a pair of blobs originating from the same Bluebot is available, it is possible to estimate the distance to that Bluebot via projective geometry (see section S1.2.5 for details). This calculation makes use of the known directions to both LEDs in the world (obtained from applying the undistortion function to the blob locations) and the vertical distance between the LEDs (fixed at 86 mm).
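The distance estimate can be phrased as a small least-squares problem: given the undistorted unit direction vectors toward a neighbor's two LEDs and the fixed 86-mm vertical separation, solve for the two ranges. This is a sketch of the underlying geometry, not the onboard implementation detailed in section S1.2.5.

```python
import numpy as np

LED_SEPARATION = 0.086  # vertical distance between the two posterior LEDs [m]


def estimate_distance(u_lower, u_upper):
    """Estimate range to a neighbor from unit direction vectors to its two LEDs.

    Both robots are passively stable in roll and pitch, so the neighbor's LEDs
    are separated purely along the vertical (z) axis. Solving
        d_upper * u_upper - d_lower * u_lower = [0, 0, LED_SEPARATION]
    in the least-squares sense gives both ranges; their mean is returned.
    """
    A = np.column_stack([-np.asarray(u_lower, float), np.asarray(u_upper, float)])
    b = np.array([0.0, 0.0, LED_SEPARATION])
    (d_lower, d_upper), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 0.5 * (d_lower + d_upper)


if __name__ == "__main__":
    # Neighbor 1 m ahead: lower LED level with the camera, upper LED 86 mm higher.
    p_lower, p_upper = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.086])
    u1, u2 = p_lower / np.linalg.norm(p_lower), p_upper / np.linalg.norm(p_upper)
    print(f"{estimate_distance(u1, u2):.3f} m")  # ~1.00 m
```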

Lennard-Jones potential function for controlled dispersion

During dispersion, Bluebots calculate next moves on the basis of the weighted average of all force contributions from neighboring robots (Eq. 2 and Fig. 4B). Such individual forces, derived from the Lennard-Jones potential, are repulsive (<<0) if robots are closer than a target distance dt and attractive (>0) otherwise (Fig. 4C). The individual force magnitudes scale nonlinearly with distance between robots with an emphasis on nearby neighbors that exert extreme repulsive forces to avoid collisions or substantial attractive forces to maintain cohesion. The stronger the final averaged force, the higher the oscillation frequencies of the actuated fins to swim toward the corresponding direction.

Bluebots move continuously to minimize the average of all forces, thereby achieving dispersion with controllable density. The robots are not, however, moving into particular and stable formations. NNDs, a metric for the spacing of robots, are generally lower-bounded by dt because repulsive forces outweigh attractive forces.

Dynamic circle formation

Here, we provide a simplified model overview to derive the radius of the dynamic circle formation; a more detailed model with explicit bounds on all parameter values is presented in section S3. Assume that we are given a number of robots N, with an approximately circular body of radius ρ and with a binary sensor whose field of view is defined by the half-angle α in the plane and not restricted in the vertical dimension. We can compute a formula for the size of a “perfect” circle (Fig. 7A), where each robot is placed equidistant along a circle, oriented in a clockwise direction such that each robot’s field of view is empty but just on the edge of detecting the robot in front of it. In this configuration, each robot’s binary sensor will detect a zero, and the robot will turn clockwise on the next time step. If we consider two adjacent robots and the triangle formed between them (Fig. 7B), then we can use trigonometry to derive the formula for the radius of this perfect circle. In the uppermost triangle in Fig. 7B, we have

$$\tan\alpha = \frac{R - R\cos\theta - \rho\cos\alpha}{R\sin\theta + \rho\sin\alpha}$$

which can be rearranged to give

$$R = \frac{\rho}{\cos\alpha - \cos\theta\cos\alpha - \sin\theta\sin\alpha}$$

Fig. 7 Dynamic circle formation geometries.

(A) A regular polygon configuration with field-of-view sensors. Only two robots are shown for simplicity; additional robots lie on each vertex of the polygon. (B) Geometry for calculating the milling radius R with field-of-view sensors (γ, interrobot distance). (C) Scaling intuition based on Eq. 3 for circle radii R with nominal parameters (blue) and doubling of robots N (red), robot size ρ (yellow), and viewing angle α (purple), respectively.

By simplifying the denominator using trigonometric identities and noting that θ = 2π/N for equidistant robots, we obtain Eq. 3 in the main text

$$R = \frac{\rho}{\cos\alpha - \cos\!\left(\frac{2\pi}{N} - \alpha\right)}$$

Given N robots in a perfect circle, we can show that this circle rotates stably under certain assumptions. Full proofs and assumptions are provided in section S3; here, we describe the intuition behind the stability. One key assumption is that the robots’ clockwise turning radius when the sensor reports no other robots, r0, must be smaller than or equal to the milling radius: r0 ≤ R. Intuitively, we can see that if the turning radius is perfect, i.e., r0 = R, then all the robots will simply rotate in that circle without ever being perturbed; milling is smoothest if the two values are similar. If r0 is less than R, then in the next step, each robot will rotate slightly into the circle and immediately observe their front neighbor intersect their field of view. This will cause an immediate response to rotate counterclockwise, once again putting them in the perfect circle. In addition to bounds on turning radii, there are also bounds on α, ρ, and response times, which are theoretically derived in section S3.

In our setting with a maximum of seven robots, we chose α to be π/12, which results in an expected circle radius of 496 mm (tank planar dimension is 1.78 m by 1.78 m). We tuned pectoral fin actuation frequencies to 6 Hz for clockwise and counterclockwise turning and ran the caudal fin at 3 Hz to swim such circles. Experiments during which robots drifted and bumped into the tank walls were discarded and repeated.
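As a numerical sanity check of Eq. 3, the sketch below evaluates the radius for N = 7 and α = π/12; the body radius ρ = 0.08 m is an illustrative assumption (not a value stated here in the paper), chosen to show that the formula lands in the reported range.

```python
import math

def milling_radius(n_robots, rho, alpha):
    """Milling circle radius per Eq. 3: R = rho / (cos(alpha) - cos(2*pi/N - alpha))."""
    return rho / (math.cos(alpha) - math.cos(2 * math.pi / n_robots - alpha))

# N = 7 robots, half-angle alpha = pi/12, assumed body radius 0.08 m:
# prints roughly 0.49 m, on the order of the 496 mm reported for the
# 1.78 m x 1.78 m tank.
print(milling_radius(7, 0.08, math.pi / 12))
```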

Search operation

This composite behavior introduces signaling: a robot flashes its LEDs at 15 Hz, and the other robots detect this flashing. Flash detection is achieved by an algorithm that is designed and tuned to be robust to noise and robot motion. The Bluebot captures a rapid sequence of 30 images (in about 0.5 s) from each of its cameras. The two sequences of images are analyzed separately for the presence of flashes. The flash detection algorithm proceeds in three phases. First, blob identification is performed on each image in the sequence. Second, outlier blobs are identified between each two successive images. Outliers are blobs that appear in some location in the first image and are not present in the second anywhere within a small radius of that location. Third, streaks of outliers are identified. A streak is a sequence of outliers such that each outlier occurs within a small radius of the previous one. A flash detection is declared if a streak is sufficiently long. The threshold radii for outlier and streak detection and the minimum streak length for a flash detection were tuned empirically in the tank under experimental conditions to give a good balance between reliably identifying flashes and minimizing false-positive errors.
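A schematic Python version of the three-phase procedure is sketched below; the thresholds, data layout, and function name are placeholders (the actual thresholds were tuned empirically in the tank), and phase 1 (blob identification) is assumed to have been run already.

```python
import numpy as np

def detect_flash(blob_sequence, outlier_radius=5.0, streak_radius=5.0, min_streak=4):
    """Three-phase flash detection (sketch; thresholds are placeholders).

    blob_sequence: one array of blob centroids (shape (k_i, 2)) per image of
    the 30-frame burst captured by a single camera.
    """
    frames = [np.asarray(f, dtype=float).reshape(-1, 2) for f in blob_sequence]

    # Phase 2: outliers -- blobs that appear in image t but have no blob within
    # outlier_radius of the same location in image t + 1.
    outliers = []  # list of (frame_index, centroid)
    for t in range(len(frames) - 1):
        nxt = frames[t + 1]
        for c in frames[t]:
            if nxt.size == 0 or np.min(np.linalg.norm(nxt - c, axis=1)) > outlier_radius:
                outliers.append((t, c))

    # Phase 3: streaks -- chains of outliers, each within streak_radius of the
    # previous one; a sufficiently long streak is declared a flash.
    streak = 1
    for (t0, c0), (t1, c1) in zip(outliers, outliers[1:]):
        if t1 > t0 and np.linalg.norm(c1 - c0) <= streak_radius:
            streak += 1
            if streak >= min_streak:
                return True
        else:
            streak = 1
    return False
```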

In addition, the robots must also detect a target that emits red light. This detection was tuned to restrict the detection radius to be small (within 3 BL) so that robots would need to search the tank before finding the target. Red-light source detection works by comparing the red and blue channels of the image. For source detection, blob identification is performed not on the blue channel of the image but on the average of the channels (i.e., a grayscale version of the image). After blob identification, every blob larger than two pixels is considered a candidate to be the source. Smaller blobs are discarded to avoid noise and to ensure that Bluebots can only detect the source when they are sufficiently close to it. For each candidate blob, the blue and red values of all the pixels within a Chebyshev distance of two of the blob’s centroid are summed separately, and the ratio of the red total to the blue total is calculated. A blob is considered to be red (i.e., the source) if this ratio is larger than 1.2.
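A compact sketch of the red/blue ratio test for one candidate blob is shown below; the array layout, border handling, and function name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def is_red_source(image, centroid, blob_size, ratio_threshold=1.2, window=2):
    """Red-light source test for one candidate blob (sketch).

    image: H x W x 3 uint8 array (R, G, B channels); centroid: (row, col) of a
    blob found on the grayscale (channel-averaged) image. Blobs of two pixels
    or fewer are rejected so that the source is only detected at close range.
    """
    if blob_size <= 2:
        return False
    r, c = int(centroid[0]), int(centroid[1])
    # All pixels within a Chebyshev distance of `window` of the centroid.
    patch = image[max(r - window, 0):r + window + 1,
                  max(c - window, 0):c + window + 1, :].astype(float)
    red_total = patch[:, :, 0].sum()
    blue_total = patch[:, :, 2].sum()
    return blue_total > 0 and red_total / blue_total > ratio_threshold
```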

The search operation exploits cooperativity to be efficient. We can approximate the expected time for red-light source detection if Bluebots do not collaborate by using a Markov chain to model a random walk at an average speed of 0.5 BL/s on an undirected graph G. The vertices of G are the integers 0, …, n and represent distance to the red-light source. The source is at vertex n = 40, the center of the water surface (the initial Bluebot location) at vertex j = 24, and the location farthest from the source at vertex 0. For 1 ≤ i ≤ n − 1, vertex i is connected to vertex i − 1 and vertex i + 1 at a distance of 0.5 BL (41 × 0.5 BL = 20.5 BL ≡ tank diagonal). In each 1-s-long time step, a robot is assumed to move toward or away from the source with equal probability. The expected time to find the source is (n² − j²) s = 1024 s (proof using linearity of expectation in section S4.1; cf. the runtime of the randomized algorithm for 2-SAT). Although a random walk is not the most efficient search pattern for an unknown space, our Bluebots are simple robots that so far lack the sensing/motion complexity necessary for more sophisticated methods (e.g., simultaneous localization and mapping).
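The 1024-s figure can also be checked numerically by solving the first-step equations for the expected hitting time of this walk (a sketch assuming the walk reflects at vertex 0 and is absorbed at vertex n; the formal proof is in section S4.1).

```python
import numpy as np

def expected_hitting_time(n=40, start=24):
    """Expected number of 1-s steps for a symmetric random walk on vertices
    0..n (reflecting at 0, absorbing at n) to first reach n from `start`.
    Solves E[i] = 1 + 0.5*E[i-1] + 0.5*E[i+1], with E[0] = 1 + E[1], E[n] = 0."""
    A = np.zeros((n, n))                   # unknowns E[0..n-1]
    b = np.ones(n)
    A[0, 0], A[0, 1] = 1.0, -1.0           # reflecting boundary at vertex 0
    for i in range(1, n):
        A[i, i] = 1.0
        A[i, i - 1] = -0.5
        if i + 1 < n:
            A[i, i + 1] = -0.5             # E[n] = 0 drops out of the system
    return np.linalg.solve(A, b)[start]

print(expected_hitting_time())  # ~1024.0, matching n**2 - j**2 = 40**2 - 24**2
```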

SUPPLEMENTARY MATERIALS

robotics.sciencemag.org/cgi/content/full/6/50/eabd8668/DC1

Section S1. The Bluebot 3D vision system.

Section S2. Controlled dispersion.

Section S3. Dynamic circle formation and milling.

Section S4. Multibehavior search experiments.

Section S5. 3D tracking.

Fig. S1. Additional components of the Bluebot platform.

Fig. S2. Example scenarios for the blob detection algorithm.

Fig. S3. Time complexity of rapid LED blob detection at an image resolution of 256 × 192.

Fig. S4. Camera and robot coordinate systems.

Fig. S5. x and y error in the Bluebot’s position estimation for 24 cases.

Fig. S6. Error in the z direction as a percentage of distance.

Fig. S7. Overall (x,y,z) error as a function of distance.

Fig. S8. Interrobot distances during single dispersion/aggregation.

Fig. S9. Interrobot distances during repeated dispersion/aggregation.

Fig. S10. Number of visible robots during controlled dispersion.

Fig. S11. Target distance and number of robots influence interrobot distances during dispersion.

Fig. S12. Seven robots arranged in a regular polygonal formation with line-of-sight sensors tangential to each other.

Fig. S13. Expanded view of the red quadrilateral in fig. S12.

Fig. S14. Example configuration of two robots to show that the regular polygon formation of Theorem 1 is the only stable formation.

Fig. S15. A pathological initial configuration for the case r0 > R.

Fig. S16. A regular polygon configuration with field-of-view sensors (cf. fig. S12 for line-of-sight sensors).

Fig. S17. Geometry for calculating the milling radius with field-of-view sensors.

Fig. S18. Tank setup.

Fig. S19. Schematic side view of the tank showing the overhead camera and the salient parameters.

Fig. S20. Schematic of the tank view as seen from the overhead camera.

Movie S1. Introduction to Blueswarm.

Movie S2. Collective organization in time.

Movie S3. Collective organization in space.

Movie S4. Dynamic circle formation.

Movie S5. Multibehavior search operation.

Movie S6. Summary and highlights.

Acknowledgments: We thank J. Dusek for early discussions and contributions to the development of Bluebot sensing and the testing environment. The camera system, including cameras, lenses, and duplexer boards, was custom made to satisfy the needs of the Blueswarm platform. We thank Arducam (www.arducam.com) for collaborating in their design and fabrication. Funding: This research was supported by the Office of Naval Research (ONR award no. N00014-20-1-2320), the Wyss Institute for Biologically Inspired Engineering, and an Amazon AWS Research Award. Author contributions: R.N., F.B., and M.G. proposed the research and conceptual design of the Blueswarm. F.B. and M.G. led the hardware (custom propulsion and vision) and electronics (custom PCB) design, respectively, and led the software control system jointly. F.B. led synchronization and dispersion. M.G. led milling, and both jointly led search. F.B. and M.G. processed data. All authors analyzed and visualized data and contributed to the writing of the paper. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials.