Research Article | Industrial Robots

Effortless creation of safe robots from modules through self-programming and self-verification


Science Robotics  19 Jun 2019:
Vol. 4, Issue 31, eaaw1924
DOI: 10.1126/scirobotics.aaw1924

Abstract

Industrial robots cannot be reconfigured to optimally fulfill a given task and often have to be caged to guarantee human safety. Consequently, production processes are meticulously planned so that they last for long periods to make automation affordable. However, the ongoing trend toward mass customization and small-scale manufacturing requires purchasing new robots on a regular basis to cope with frequently changing production. Modular robots are a natural answer: Robots composed of standardized modules can be easily reassembled for new tasks, can be quickly repaired by exchanging broken modules, and are cost-effective by mass-producing standard modules usable for a large variety of robot types. Despite these advantages, modular robots have not yet left research laboratories because an expert must reprogram each new robot after assembly, rendering reassembly impractical. This work presents our set of interconnectable modules (IMPROV), which programs and verifies the safety of assembled robots themselves. Experiments show that IMPROV robots retained the same control performance as nonmodular robots, despite their reconfigurability. With respect to human-robot coexistence, our user study shows a reduction of robot idle time by 36% without compromising on safety using our self-verification concept compared with current safety standards. We believe that by using self-programming and self-verification, modular robots can transform current automation practices.

INTRODUCTION

Robotics has transformed the manufacturing sector in recent years. However, small and medium-sized enterprises have not yet reaped the benefits. Reasons for this are manifold: (i) Experts for programming and ensuring correct functionality of robotic solutions are lacking. (ii) Purchasing specialized robots is uneconomical for small-scale manufacturing and/or reacting flexibly to changing demands. Although the concept of physically assembling different robots from a collection of modules is not new (1), each new robot composition must be programmed by experts, which is time-consuming and expensive. (iii) Many production processes require that humans and robots work closely together; ensuring safe human-robot coexistence today requires a large separation of humans and robots (2, 3), making state-of-the-art solutions impractical. Still, 13,000 injuries and 60 deaths were caused by contact with machinery between 2014 and 2018 in the United Kingdom alone (4). If machines were able to verify each occurring situation by self-verification, i.e., only carry out safe actions in any given situation, then human error could be avoided.

We propose to untangle the abovementioned issues in a holistic way through the concept of interconnectable modules for self-programming and self-verification (IMPROV), providing three major improvements:

(1) Self-programming. After assembling standardized modules, the created robot programmed itself based on standardized information stored in each module (see Fig. 1A). This allowed us to provide out-of-the-box functionality for a given reference trajectory by generating a model of the robot on the fly. Self-programming of high-level tasks was not considered in this work. The created models were used for automatically synthesizing model-based controllers, as well as for the following two aspects.

Fig. 1 Self-programming, self-verification, and optimal composition of the robot.

(A) Self-programming before deployment: (1) Each module contains its own characterization. (2) The robot is assembled from standardized modules. (3) The information from each module is collected to automatically build a kinematic, dynamic, and geometric model. (4) The robot fulfills a given task without programming by a user. (B) Self-verification during deployment: (1) An intended trajectory is planned on the basis of the most likely movement of the human. (2) A fail-safe trajectory is created such that the robot never intersects the reachable set predicted until the robot reaches the end of the fail-safe trajectory. (3) At the next point in time tk+1, the robot updates its intended trajectory. (4) In case the updated intended trajectory combined with a new fail-safe trajectory can be verified on time, the updated intended trajectory is executed; otherwise, the fail-safe trajectory is engaged. (C) Optimal module composition before deployment: Given (1) modules and (2) a task, we (3) compute possible compositions and (4) select the optimal composition.

(2) Self-verification. To account for dynamically changing environments, the robot formally verified, by itself, whether any human could be harmed through its planned actions during its operation. A planned motion was verified as safe if none of the possible future movements of surrounding humans leads to a collision, as presented in Fig. 1B. Because uncountable possible future motions of surrounding humans exist, we bound the set of possible motions using reachability analysis as explained in more detail in Results. Our inherently safe approach renders robot cages unnecessary in many applications.

(3) Optimal module composition. We automatically chose the best composition of modules for given robot tasks through optimization (see Fig. 1C). The main goals were to minimize cycle time and to reduce energy consumption of the modular robot.

We believe that IMPROV enables users to swiftly change automation solutions without having to purchase new robots, to reprogram underlying controllers, and to redesign the safety concept; thus, flexibility is particularly improved for automation solutions that are frequently redesigned. Below, we review previous work on ensuring out-of-the-box functionality, human safety, and optimality for robotic solutions.

Ensuring out-of-the-box functionality

Previous approaches for controlling modular robots are decentralized; i.e., each module is controlled independently from other modules. A classical approach in decentralized robotic control is the use of proportional-integral-derivative (PID) control as presented in section 8.3 of (5); however, the resulting closed-loop robot dynamics is not necessarily stable. Approaches in (6, 7) enhance simple PID controllers for global stability, but their tuning is nontrivial because knowledge of norm bounds of model terms is necessary. Given the same hardware, centralized control can always be made to perform at least as well as decentralized control because a decentralized approach is a special case. Despite the degraded performance of decentralized control, many attempts have been made to mitigate its issues: fuzzy gain tuning (8), adaptive decentralized control (9, 10), incorporation of joint torque sensing (11), robust control (12), and virtual decomposition (13, 14).

To overcome the shortcomings of decentralized control, our self-programming concept automatically generates a kinematic, dynamic, and geometric model of a newly assembled robot from modules. This not only benefits the motion control of the robot but also is needed for the self-verification concept presented later. So far, only isolated solutions for (semi-)automatic model generation have been developed. One of the seminal works in modular robotics using data stored in modules for obtaining the gravity vector of a robot model is (15), but it did not present details for automatically obtaining the dynamics of the assembled arm. Models have been derived in (16–18) based on Lie groups and the product-of-exponentials formulation (19); however, these methods have not shown seamless applicability to arbitrarily shaped modules (20). On the other hand, our approach builds on the standard Denavit-Hartenberg convention (21) and is applicable to arbitrary modules. Extensions of the Denavit-Hartenberg notation have been proposed previously, but no complete approach for out-of-the-box operation exists: In (22), only revolute joints are considered; in section 2.8.2 of (5), only special cases of consecutive joint axes are presented; and (20, 23) have not addressed the problem of non-uniqueness of the Denavit-Hartenberg convention.

Ensuring human safety

To fully use the benefits of modular robots, robots must no longer be caged, thus requiring new methods to ensure safety. Properties of an impact that could cause serious harm to a human were characterized in (24). Reactive control methods have been proposed to reduce the time of impact and the risk of clamping (25). Other control methods limited forces (26) or impact energy of the end effector (27); the performance of the robot was then limited to the low speeds that such methods require. Another approach to reducing the impact energy was to build soft robots (28). These robots can be made either from rigid links with inherent compliance in the joints or actuators (29) or from deformable materials (30–32).

Our approach is different: Instead of reducing the effects of impacts, we provide a formal technique that proves collisions are not possible, except when the robot is at rest. Because the exact future behavior of surrounding humans is unknown, we computed the entire set of possible behaviors using reachability analysis (33–36). Inevitable collision states (37, 38) are an alternative formal technique, but they cannot yet be extended to the high-dimensional problem of robotic manipulators.

Alternative approaches for collision avoidance exist, but none can prove the absence of collisions, which is particularly important due to the apparent safety risks. A simple concept for collision avoidance is that of potential fields (39), where obstacles exert repulsive virtual forces on the manipulator. This concept has been extended in many ways: Flacco et al. (40) accounted for the incomplete sensing of the environment and also for the velocity of the obstacle when generating repulsive virtual forces as introduced earlier in (41). Safety fields (42) created repellent forces that depended not only on the relative position but also on the relative velocity of humans and robots. The approach in (43) used quadratic programming to maximize adherence to the desired task while minimizing this measure of danger.

Ensuring optimality

Another important challenge for adopting modular robots is determining the optimal combination of modules for given tasks. Very few previous works considered heterogeneous modules and their dynamic properties for the automatic synthesis of modular robots. We do not survey works on identical modules because robots assembled from these are typically not aimed for industrial use (1, 44, 45).

In (46), an algorithm was presented that enumerated all possible compositions of a robot and eliminated kinematically equivalent assemblies. A minimized degrees-of-freedom approach (47) found the task-based optimal composition of modules using the enumeration algorithm in (46). A composition synthesis methodology was proposed in (48), but only for kinematic task requirements, such as workspace volume or maximum reach, and not for dynamic task requirements, such as maximum payload or maximum joint velocities. To reduce the search space, Farritor et al. (49) presented a hierarchical synthesis approach, which grouped useful combinations of basic modules; however, no dynamic task requirements were considered. Composition synthesis approaches based on genetic algorithms were presented in (49–53), also without dynamic task requirements.

In contrast to previous works, we consider heterogeneous modules, dynamic task requirements, and an individual solution for each possible composition and present a computationally efficient composition synthesis algorithm, which eliminates failed compositions step by step.

RESULTS

Following Fig. 1, we first present our results for self-programming and then our self-verification procedure. Last, we present how we obtained optimal compositions for a given task.

Self-programming of the robot

A key concept of our approach is that the assembled robot is aware of the parts it consists of. For this reason, we stored the information characterizing each module in itself as illustrated in Fig. 1A. After assembling the robot, the central control unit collected the module data to build a kinematic, dynamic, and geometric model of the assembled robot. Last, the control unit automatically generated code, so that the motion control worked out of the box after assembly. This work considers serial kinematics and builds on our previous work in (54) for controller synthesis; we extended it for collision checking, self-verification, and composition synthesis in this work. Because of the previous work in this area, we kept this section concise, and refer to the Supplementary Materials for detailed information.

A module is considered to be a rigid object that serves as a building block for composing a serial robot arm by means of standardized connectors. We distinguished between joint modules and link modules as shown in Fig. 1A, whose schematic illustration is shown in Fig. 2; joint modules add degrees of freedom to the robot, and link modules do not add degrees of freedom. The two parts of a joint module connected by a joint are referred to as the proximal and distal part, as illustrated in Fig. 2. After assembly, we grouped all modules connecting two subsequent joints and refer to this group as an arm link. Note that one joint module can have more than one degree of freedom.
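To make this concrete, each module could carry a record along the lines of the following sketch. The field names and layout are our own assumptions rather than the actual IMPROV data format, which, for example, distinguishes the proximal and distal parts of joint modules:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ModuleData:
    """Hypothetical per-module record read out by the central control unit after assembly."""
    module_type: str   # "joint" or "link"
    dof: int           # degrees of freedom added by the module (0 for link modules)
    mass: float        # mass in kg
    com: np.ndarray    # center of mass expressed in the module's input frame [m]
    inertia: np.ndarray  # 3x3 inertia tensor about the input frame [kg m^2]
    kinematics: dict   # transform data between connectors (e.g., extended DH data)
    geometry: list = field(default_factory=list)  # primitives/meshes for collision checking
```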

Fig. 2 Extended Denavit-Hartenberg notation for the ith arm link of a robot assembled from modules.

Fixed connections are shown in gray, and rotating connections (i.e., joints) are shown in white. The distal part of joint module j − 1, k link modules, and the proximal part of joint module j + k constitute the ith arm link of the robot.

One typically derives the forward kinematics for serial robot manipulators by multiplying the homogeneous transformation matrices relating consecutive reference frames fixed at arm links from the base to the end effector (5). The Denavit-Hartenberg convention provides a systematic method for describing these link-fixed frames (21); however, this convention is not unique, rendering an automatic computation of the kinematics after assembly impossible. To ensure a nonambiguous solution, we added additional conventions and two parameters (pi and ni) to resolve ambiguity as shown in Fig. 2:

(1) When two consecutive z axes intersect, the xi unit vector is obtained from their cross product.

(2) When two consecutive z axes are parallel, the xi unit vector is set along the common normal between them, and the origin oi is set at the joint connection PJi.

(3) When two consecutive z axes overlap, the xi unit vector is aligned with xi−1, and the origin oi is set at the joint connection PJi.

The parameter pi is the z coordinate of the point PJi−1 measured from oi, and the parameter ni is the z coordinate of the point PJi measured from oi. Using these values, the previously ambiguous values di and θi are now uniquely defined as

$$
d_i = \begin{cases} n_{i-1} - p_i, & \text{(revolute joint)} \\ n_{i-1} - p_i + q_i, & \text{(prismatic joint)} \end{cases}
\qquad
\theta_i = \begin{cases} \gamma_i + q_i, & \text{(revolute joint)} \\ \gamma_i, & \text{(prismatic joint)} \end{cases}
$$

where γi is an angular offset between consecutive x axes when the joint angle qi is zero.

To obtain the extended DH parameters (ai, αi, γi, pi, and ni) of the assembled robot automatically, the central control unit gathers kinematic data between joint connections (i.e., for the generic link i, from PJi−1 to PJi as shown in Fig. 2). For generating the geometric model for collision checking of the assembled robot, we collected the geometry of each module and applied the same homogeneous transformations as for the kinematic model. When an object is grasped, the shape of the grasped object is added to the collision model. The object geometry may be known beforehand or can be reconstructed from sensor data. Note that our extended Denavit-Hartenberg convention is compatible with the standard convention so that our approaches can also be applied to existing robot platforms.
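For illustration, the homogeneous transform of one arm link can be assembled from the extended parameters roughly as in the following sketch. This is a simplified reading of the convention above; the function names, the flat parameter table, and the zero initialization of n for the base are our own assumptions:

```python
import numpy as np

def link_transform(a, alpha, gamma, p, q, joint_type="revolute", n_prev=0.0):
    """Homogeneous transform of one arm link from extended DH parameters (sketch).

    Resolves d_i and theta_i from (gamma_i, p_i, n_{i-1}) and the joint value q_i,
    then applies the standard DH transform Rz(theta) Tz(d) Tx(a) Rx(alpha).
    """
    if joint_type == "revolute":
        d, theta = n_prev - p, gamma + q
    else:  # prismatic joint
        d, theta = n_prev - p + q, gamma
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_table, q):
    """Chain the link transforms from base to end effector.

    dh_table: list of dicts with keys a, alpha, gamma, p, n, joint_type (one per link).
    """
    T, n_prev = np.eye(4), 0.0   # assumption: n of the base frame is zero
    for params, qi in zip(dh_table, q):
        T = T @ link_transform(params["a"], params["alpha"], params["gamma"],
                               params["p"], qi, params["joint_type"], n_prev)
        n_prev = params["n"]     # n_i of this link feeds d_{i+1} of the next link
    return T
```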

Next, we derived the dynamic model, which requires some notation. Superscripts of matrices and vectors identify in which frame they are defined. Input and output frames at the connection interface of modules are denoted by in and out, respectively, as shown in Fig. 2. When a connection is established, the in and out frames match, and we denote the common frame by io. We demonstrate how to automatically obtain the dynamic parameters of an arm link composed of one or several modules. Obtaining the dynamic model of the fully assembled robot from arm links is well known (5).

To obtain the dynamic model of the assembled arm, we used the recursive Newton-Euler (N-E) algorithm (55–57), which requires the dynamical parameters of each arm link. Let us introduce for the ith arm link the mass mi, its inertia tensor Ii, and its center of mass ri. We first consider the connection of a link module to a joint module. Using basic mechanics of rigid bodies, the dynamical parameters of the rigid body resulting from this auxiliary connection are

$$
m_{a,j} = m_{dl,j-1} + m_{l,j}, \qquad
I^{io}_{a,j} = I^{out}_{dl,j-1} + I^{in}_{l,j}, \qquad
r^{io}_{a,j} = \frac{m_{dl,j-1}\, r^{out}_{dl,j-1} + m_{l,j}\, r^{in}_{l,j}}{m_{a,j}}
$$

where dl refers to the distal part of the joint module, l refers to the link module, and a refers to the auxiliary connection (see Fig. 2). The inertia tensor expressed in the output frame using Steiner's theorem is

$$
I^{out}_{a,j} = \big(R^{io}_{out,a,j}\big)^T \Big(I^{io}_{a,j} - m_{a,j}\, S^T\big(r^{io}_{a,j}\big)\, S\big(r^{io}_{a,j}\big)\Big) R^{io}_{out,a,j} + m_{a,j}\, S^T\big(r^{out}_{a,j}\big)\, S\big(r^{out}_{a,j}\big)
$$

where S(α) returns a skew-symmetric matrix so that the multiplication with a vector β equals the cross product [S(α)β = α × β] and R^c_d denotes a rotation matrix of frame d with respect to frame c. When another link module is added, IMPROV applies the above operations analogously. The last connection to be considered for an arm link is between the auxiliary distal part described above and the proximal part. This last connection establishes the final parameters of the arm link similarly to m_{a,j} and I^{out}_{a,j}:

$$
m_i = m_{dl,j-1+k} + m_{pl,j+k}, \qquad
I^{io}_i = I^{out}_{dl,j-1+k} + I^{in}_{pl,j+k}, \qquad
r^{io}_i = \frac{m_{dl,j-1+k}\, r^{out}_{dl,j-1+k} + m_{pl,j+k}\, r^{in}_{pl,j+k}}{m_i}
$$

and

$$
I_i = \big(R^{io}_i\big)^T \Big(I^{io}_i - m_i\, S^T\big(r^{io}_i\big)\, S\big(r^{io}_i\big)\Big) R^{io}_i
$$
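A minimal numerical sketch of this parameter fusion is given below; the helper names are ours, and the actual IMPROV implementation may organize these steps differently:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix S(v) such that S(v) @ b == np.cross(v, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def fuse_bodies(m1, r1, I1, m2, r2, I2):
    """Combine two rigid bodies whose parameters are expressed in the same (common io) frame."""
    m = m1 + m2
    r = (m1 * r1 + m2 * r2) / m     # mass-weighted center of mass
    I = I1 + I2                     # inertia tensors about the same frame simply add
    return m, r, I

def inertia_in_output_frame(m, r_io, I_io, R_out_io, r_out):
    """Re-express the fused inertia tensor in the output frame via Steiner's theorem."""
    S_io, S_out = skew(r_io), skew(r_out)
    I_com = I_io - m * (S_io.T @ S_io)          # remove the offset about the io frame
    return R_out_io.T @ I_com @ R_out_io + m * (S_out.T @ S_out)
```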

To obtain the dynamical model, IMPROV uses the obtained data for kinematics and dynamics of all arm links; the Denavit-Hartenberg parameters are stored in the table DH, and the dynamic parameters are stored in the table DynPar. After introducing q, q̇, and q̈ as the vectors of joint positions, velocities, and accelerations, respectively, we obtain the dynamical model using the recursive N-E algorithm NEA_g(⋅) (55):

$$
M(q)\,\ddot{q} + c(q,\dot{q}) + f(\dot{q}) + g(q) = \mathrm{NEA}_g(q, \dot{q}, \ddot{q}, \mathrm{DH}, \mathrm{DynPar})
$$

We have implemented several motion control concepts using the automatically derived dynamic model. The following controllers can be automatically implemented by IMPROV: inverse-dynamics control [see section 8.5.2 of (5)], passivity-based control [see, e.g., section 8.4 of (58)], and passivity-based control with adaptive friction compensation based on (59). To compute a reference trajectory in joint space from a desired trajectory in task space, we used the automatically generated kinematic model. We solved a second-order inverse kinematics scheme with quaternion feedback based on (60) by a damped least-squares approach (61) to circumvent singularity issues. In addition, we damped floating null-space motions (62) arising from redundant robot assemblies as shown in (63).
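As an example of one ingredient, a damped least-squares step can be sketched as follows; this is a generic first-order variant of the technique in (61), whereas the actual implementation uses a second-order scheme with quaternion feedback (60):

```python
import numpy as np

def dls_ik_step(J, task_error, damping=0.05):
    """One damped least-squares step: joint velocities that reduce the task-space error.

    Solves (J^T J + lambda^2 I) dq = J^T e, which stays well conditioned near
    singularities at the cost of a small tracking error.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + damping**2 * np.eye(n), J.T @ task_error)
```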

For validating the accuracy of the automatically obtained models and the performance of the controllers, we have conducted several experiments whose details and plots are presented in the Supplementary Materials. In all cases, the same performance levels could be replicated even after changing the composition. Because of the self-programming of modern control concepts, the performance was superior to a classical PID control scheme for both tracking performance and stability.

Self-verification of the robot

Because various robots can be assembled from IMPROV modules, it is not feasible to manually verify collision avoidance for each new composition. Furthermore, future motions of humans in the vicinity of the robot are unknown. To consider these two aspects, we have developed self-verification capabilities that allow the robot to verify online whether its currently planned action is safe. The robot verifies by itself that it comes to a stop before any human can touch it. Next, we explain our concept as it applies to a single human; it can be trivially extended to arbitrarily many humans. To ensure that we capture all possible behaviors of humans, we use reachable sets. After introducing a possible trajectory of a human as χ(t; x(0), u(⋅)) for time t, initial state x(0), and input trajectory u(⋅), we define the reachable set for a set of initial states X0 and a set of possible input values U as

$$
\mathcal{R}(t) = \big\{ \chi\big(t; x(0), u(\cdot)\big) \,\big|\, x(0) \in \mathcal{X}_0,\ \forall t: u(t) \in \mathcal{U} \big\}
$$

The reachable set of a time interval [t0, t1] is defined as

$$
\mathcal{R}([t_0, t_1]) = \bigcup_{t \in [t_0, t_1]} \mathcal{R}(t)
$$

For subsequent discussions, we also introduce the area operator A(x(t)), returning the occupancy of the robot for a given state vector x(t), and the reachable occupancy as

$$
\Gamma([t_0, t_1]) = \big\{ A\big(\chi(t; x(0), u(\cdot))\big) \,\big|\, \chi\big(t; x(0), u(\cdot)\big) \in \mathcal{R}(t),\ t \in [t_0, t_1] \big\}
$$

Our self-verification concept uses the principle of induction. We first present the base case (prove that the specification holds initially) followed by the inductive step (prove that, if the statement holds for step k, then the statement holds for k + 1).

(1) Base case for the first time interval [t0, t1]. The robot can only be started if it is initially at rest. Before time t0, we plan an intended trajectory, plan a fail-safe maneuver one time interval into this intended trajectory (i.e., at time t1), and verify that the robot intersects the reachable occupancy of the human neither during the first part of the intended trajectory nor during the fail-safe trajectory, as presented in Fig. 1B, 1 and 2. The trajectory is only executed at t0 if this is verified. Because the fail-safe maneuver is constructed so that the robot is stationary at its end, this ensures that the robot is not moving when touched by a human.

(2) Inductive step from time interval [tk–1,tk] to [tk,tk+1]. The next step ([tk, tk+1]) of the intended trajectory is only executed if it, together with a fail-safe maneuver starting at tk+1, can be verified as safe before tk (see Fig. 1B4). If so, the intended trajectory is followed at tk. Otherwise, because the fail-safe maneuver starting at tk of the previous time interval ([tk-1,tk]) has already been verified in the previous time step, this fail-safe maneuver is executed.
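A minimal sketch of this induction-based loop is given below; the planner, reachability-prediction, and occupancy interfaces are hypothetical placeholders, not the IMPROV API:

```python
def self_verification_loop(robot, planner, human_predictor, dt):
    """Inductive online verification (sketch): a step of the intended trajectory is only
    executed if that step plus a freshly planned fail-safe maneuver avoids the human's
    reachable occupancy; otherwise the previously verified fail-safe maneuver is engaged."""
    fail_safe = planner.plan_fail_safe(robot.current_state())   # base case: robot starts at rest
    while not planner.task_done():
        intended = planner.next_intended_step(robot.current_state(), dt)
        new_fail_safe = planner.plan_fail_safe(intended.end_state())
        horizon = intended.duration + new_fail_safe.duration
        human_occupancy = human_predictor.reachable_occupancy(horizon)
        robot_occupancy = robot.occupancy_along(intended, new_fail_safe)
        if not robot_occupancy.intersects(human_occupancy):
            robot.execute(intended)        # verified in time: follow the intended trajectory
            fail_safe = new_fail_safe      # becomes the verified fallback for the next step
        else:
            robot.execute(fail_safe)       # not verified: engage the previously verified fail-safe
            fail_safe = fail_safe.remainder_after(dt)
```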

To focus on the interaction between motion planning and reachability analysis, we assumed that a long-term plan for the robot considering the most likely movement of the human exists [see, e.g., (64–67)]. To improve efficiency, one can use planners that anticipate the future movement of surrounding humans (68–70). The fail-safe motions are computed on much shorter time scales and require very efficient implementations. We used path-consistent fail-safe trajectories because they are expected to be more predictable for the user, and predictability increases trust (71).

A remarkable property of our approach is that only the intended trajectory and fail-safe maneuver must be stored in the robot memory. In addition, even if the computer hardware for computing new trajectories breaks down, the robot remains safe because the default path is the fail-safe maneuver. In the event that a fail-safe trajectory is triggered, one need not follow the fail-safe maneuver until the robot is stationary. During the fail-safe trajectory, one can already compute a recovery maneuver branching off a fail-safe maneuver to recover to the original plan.

To ensure that the fail-safe maneuver is rarely engaged, we required very fast motion planning and verification. Considering the motion planner, we used the approach in (72); because no fast algorithm for predicting over-approximative human occupancy has been developed elsewhere, we present our approach below. In trials, an update frequency of 500 Hz was found to be practical. To meet these high computational demands, we used two quick-to-compute approaches in parallel, as shown in Fig. 3A. The joint-space approach considers that distances between joints are constant because of the bones of the human body. However, this approach is subject to singularities so that we additionally predict the occupancy in task space. Predictions of both methods are over-approximative; i.e., the obtained predictions account for all possible movements (73). Thus, even if only one prediction is verified as safe, the actual movement of the human will not intersect with the occupancy of the robot, and the robot is verified as safe. This parallelization concept is important because none of the presented approaches is superior in all cases. We first present the occupancies for a single human arm and later extend the results for the entire human body.

Fig. 3 Online verification using reachable occupancies.

(A) Two verification concepts are used in parallel: The upper part shows the verification procedure in joint space, and the lower one directly computes results in the task space. (B) Occupancy of the human arm using the task-space method.

(3) Self-verification in joint space. Our joint-space approach computes the reachable set of joint angles for a kinematic model of the human and maps these to occupancies in task space. The initial set of joint angles X0 is obtained by fitting a kinematic model to the human and adding an uncertain set capturing measurement uncertainties (as shown in the upper half of Fig. 3A). To obtain tight over-approximations, we exploited the fact that joint values, joint velocities, and joint accelerations of humans are limited. These constraints can be modeled by hybrid systems (74); however, computing reachable sets of hybrid systems is computationally demanding (35). Instead, we accepted moderate conservativeness by computing, in parallel, models obtained from basic kinematics for maximum joint angles, joint velocities, and joint accelerations and then intersecting the results:

(1) Zeroth-order model of maximum joint position:

$$
\mathcal{R}_q^{(1)} = [q_{\inf},\, q_{\sup}]
$$

(2) First-order model of maximum joint velocity (Minkowski sum: A ⊕ B = {a + b | a ∈ A, b ∈ B}):

$$
\mathcal{R}_q^{(2)}(t) = \mathcal{Q}(0) \oplus [\dot{q}_{\inf},\, \dot{q}_{\sup}]\, t
$$

where Q(0) denotes the set of initial joint angles.

(3) Second-order model of maximum joint acceleration [conv(⋅) returns the convex hull]:

$$
\mathcal{R}_q^{(3)}(t) = \mathrm{conv}\!\left(\mathcal{Q}(0),\ \mathcal{Q}(0) \oplus \dot{\mathcal{Q}}(0)\, t \oplus [\ddot{q}_{\inf},\, \ddot{q}_{\sup}]\, \frac{t^2}{2}\right)
$$

where Q̇(0) denotes the set of initial joint velocities.

The input bounds qinf, qsup, q̇inf, q̇sup, q̈inf, and q̈sup have been validated using an established database of human motion and a high-fidelity simulator, as described later. Intersecting the partial results returns an over-approximation of the reachable set

$$
\mathcal{R}_q(t) = \bigcap_{i=1}^{3} \mathcal{R}_q^{(i)}(t)
$$
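As a simplified interval-arithmetic sketch (treating each joint independently with scalar bounds, which is our own simplification), the three models and their intersection could be computed as:

```python
def joint_reachable_interval(q0, dq0, t, q_lim, dq_lim, ddq_lim, eps_q=0.01, eps_dq=0.05):
    """Over-approximative interval of one human joint angle at time t (sketch).

    q0, dq0: measured joint position/velocity; eps_q, eps_dq: measurement uncertainties;
    q_lim, dq_lim, ddq_lim: (lower, upper) physiological bounds on position, velocity, acceleration.
    """
    # (1) zeroth-order model: joint position limits
    r1 = q_lim
    # (2) first-order model: initial position set expanded by the maximum velocity
    r2 = (q0 - eps_q + dq_lim[0] * t, q0 + eps_q + dq_lim[1] * t)
    # (3) second-order model: convex hull of the initial set and the propagated set
    lo3 = min(q0 - eps_q, q0 - eps_q + (dq0 - eps_dq) * t + ddq_lim[0] * t**2 / 2)
    hi3 = max(q0 + eps_q, q0 + eps_q + (dq0 + eps_dq) * t + ddq_lim[1] * t**2 / 2)
    # intersect the three over-approximations
    lo = max(r1[0], r2[0], lo3)
    hi = min(r1[1], r2[1], hi3)
    return lo, hi
```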

On the basis of the reachable set in joint space ℛq(t), we obtain the reachable occupancy in task space Γ(t) using sphere-swept volumes (75). A sphere-swept volume is the Minkowski sum of the convex hull of a set of points P = {P1, P2, …, Pl} and a ball of radius r, denoted by ℬ(r): V = conv(P) ⊕ ℬ(r).

For the task-space approach described below, we also require capsules, which are a special case of a sphere-swept volume, where conv(P) is a line segment. At singularities of the human arm (5), we only use the self-verification in task space.

(4) Self-verification in task space. The task-space approach computes the reachable occupancies directly in the task space as sets of capsules, avoiding problems with singularities in the kinematic models. Computing the intersection of capsules as a set is computationally expensive; however, checking whether two capsules intersect is fast, as presented in section 4.5 of (76). We exploit this by only checking, for each model, whether its occupancy intersects that of the robot: If at least one model yields no collision, the trajectory of the robot is verified as safe.
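A self-contained sketch of such a capsule-to-capsule check, based on the standard segment-to-segment distance computation rather than on the IMPROV code base, is:

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between line segments p1-q1 and p2-q2 (standard closed-form approach)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0 or t > 1.0:                      # re-clamp s if t left the segment
        t = np.clip(t, 0.0, 1.0)
        s = np.clip((b * t - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def capsules_intersect(cap1, cap2):
    """cap = (endpoint_a, endpoint_b, radius); capsules intersect iff the segment
    distance is at most the sum of the radii."""
    return segment_distance(cap1[0], cap1[1], cap2[0], cap2[1]) <= cap1[2] + cap2[2]
```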

After introducing the initial position of a point on the human body as y(0), its measurement uncertainties δy and δẏ for position and velocity, respectively, and its maximum velocity vy,max and acceleration ay,max, the reachable position of a point mass using a first- and second-order model can be bounded using basic kinematics by

$$
\mathcal{R}_y^{(1)}(t) = y(0) \oplus \mathcal{B}(\delta_y) \oplus \mathcal{B}(v_{y,\max}\, t)
$$
$$
\mathcal{R}_y^{(2)}(t) = y(0) \oplus \mathcal{B}(\delta_y) \oplus \dot{y}(0)\, t \oplus \mathcal{B}(\delta_{\dot{y}}\, t) \oplus \mathcal{B}\!\left(\frac{a_{y,\max}}{2}\, t^2\right)
$$

The reachable set of a single point on the arm is used to assemble capsules enclosing possible future occupancies of the entire arm as shown in Fig. 3B. The reachable sets of the ith-order model of the shoulder ℛS(i)(t), the elbow ℛE(i)(t), and the wrist ℛW(i)(t) are obtained using ℛy(1)(t) and ℛy(2)(t). We further require the radius rS of the capsule enclosing the upper arm and forearm. The forearm, upper arm, and hand are enclosed by the following capsules:

(1) forearm: ℛF(t) = conv(ℛE(t), ℛW(t)) ⊕ B(rS),

(2) upper arm: ℛU(t) = conv(ℛS(t), ℛE(t)) ⊕ B(rS),

(3) hand: ℛH(t) = ℛW(t) ⊕ B(rH).

Combining the occupancies of the different body parts results in the overall occupancy

$$
\mathcal{R}_{acc}(t) = \mathcal{R}_F^{(2)}(t) \cup \mathcal{R}_U^{(2)}(t) \cup \mathcal{R}_H^{(2)}(t), \qquad
\mathcal{R}_{vel}(t) = \mathcal{R}_F^{(1)}(t) \cup \mathcal{R}_U^{(1)}(t) \cup \mathcal{R}_H^{(1)}(t)
$$

The occupancy of the zeroth-order model is simply a ball whose radius is that of an outstretched arm and whose center expands with the maximum velocity of the shoulder; using the notation from Fig. 3B,

$$
\mathcal{R}_{pos}(t) = x_S(0) \oplus \mathcal{B}\big(v_{S,\max}\, t + \|x_S(0) - x_E(0)\| + \|x_E(0) - x_W(0)\| + \delta_y + r_H\big)
$$
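The following sketch assembles the first-order (velocity-based) occupancy of one arm from measured shoulder, elbow, and wrist positions; the names, default values, and the assumption of equal point uncertainties are ours:

```python
import numpy as np

def point_growth(t, eps_pos, v_max):
    """Radius of the ball bounding a point's first-order reachable set around its measured position."""
    return eps_pos + v_max * t

def arm_occupancy_capsules(x_s, x_e, x_w, t, eps_pos=0.01, v_max=2.0, r_arm=0.07, r_hand=0.1):
    """First-order reachable occupancy of one arm as capsules (endpoint_a, endpoint_b, radius).

    Each body-point ball is folded into the capsule radius; r_arm and r_hand are assumed
    enclosing radii of the arm segments and the hand.
    """
    grow = point_growth(t, eps_pos, v_max)
    forearm  = (np.asarray(x_e), np.asarray(x_w), r_arm + grow)    # conv(R_E, R_W) + B(r_S)
    upperarm = (np.asarray(x_s), np.asarray(x_e), r_arm + grow)    # conv(R_S, R_E) + B(r_S)
    hand     = (np.asarray(x_w), np.asarray(x_w), r_hand + grow)   # R_W + B(r_H), ball as degenerate capsule
    return [forearm, upperarm, hand]
```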

Because we directly obtain the reachable sets in the task space, the reachable set equals the occupancy [ℛ(t) = Γ(t)] so that we do not have to transform the reachable states as presented for the joint-space approach.

The prediction of the full body is done analogously, with the only difference being that we do not use the joint-space approach for the other body parts. The reason is that the arms are typically more important than the other body parts when working close to a robot, so it suffices to use just the task-space approach for the rest of the body.

(5) Example. We demonstrate the self-verification in Fig. 4. Let us introduce the scaling parameter c, which is the ratio between actual and planned velocity and thus ranges between zero and one. As long as the human is sufficiently far away (until t = tA), all trajectories are verified as safe, and the robot runs at full speed (scaling parameter c = 1). At t = tB, the human is approaching the robot fast so that the self-verification (“verified?”-signal turns to “no”) initiates a fail-safe maneuver. While stopping, the robot repeatedly tries to recover the original trajectory, which is indicated by the verification signal chattering between “yes” and “no.” The recovery/fail-safe maneuvers are planned such that the jerk (derivative of acceleration) and acceleration at each joint are bounded and that acceleration, velocity, and position are continuous at the transition from fail-safe to recovery maneuver and vice versa. In practice, this results in the robot organically finding a safe equilibrium speed. At t = tC, the hand is in close proximity to the robot, not allowing the robot to move at all (c = 0). Once the human is moving away (t = tD), safe-verified recovery trajectories are found, and the robot starts to move again. A similar scenario is also demonstrated in movie S1, where one can also see the self-verification for a reconfigured robot. The computation time for both the computation of the human reachable occupancies and their verification against the intended trajectory and fail-safe maneuver is typically around 124 μs for the joint-space approach and 4 μs for the task-space approach (not including image processing from sensors and communication delays).

Fig. 4 Self-verification of the robot.

(A) time tA, (B) time tB, (C) time tC, and (D) time tD. The graphs show the axes positions, the trajectory scaling, the distances between the hand and the gripper, and the verification results. Snapshots of four interesting points in time are shown above and are indicated on the graphs below with dashed vertical lines.

User study

To demonstrate the usefulness of self-verification compared with static safety regions, we performed a user study in which participants worked alongside the robot. We chose 30 healthy individuals (14 male and 16 female) aged between 22 and 30 years. The task was to assemble a simple jigsaw puzzle on a table outside of the robot workspace, while the pieces for the puzzle were located in the robot workspace as shown in Fig. 5. None of the participants had worked with the robot of the user study before. To account for participants becoming accustomed to the robot (77, 78), participants had three sessions with the robot, spread over the course of up to 8 days; during each session, the participants completed the task alongside the moving robot four times. To focus on the benefits of self-verification, we did not reassemble the robot in between the trials.

Fig. 5 Setup of the user study.

(A) Beginning of the task. (B) The participant moves toward the robot to fetch a piece of the puzzle. (C) The participant picks a piece from the puzzle. (D) The participant places the piece on the other table. (E) Template movements of the robot: Away from the base in the y direction for 1.7 s and back again for 1.7 s, at three different starting positions, between which the robot moves randomly.

We compared our method with the safety-rated monitored stop from International Organization for Standardization (ISO) 13855 (2) and ISO 10218-1 (3), where the robot stops when the human enters the workspace of the robot; details are described in Materials and Methods. The participants were assigned at random to either the implementation using self-verification or the implementation using the static safety zone; the participants did not know which group they belonged to.

Whereas the humans were given a specific task, the goal for the robot was to follow its tasks with as little idle time as possible. Both the human time to completion and the idle time of the robot improved as the human became accustomed to the robot. In each case, the idle time of the robot was significantly lower using self-verification than with static safety regions, as shown in Fig. 6. The percentage reduction in idle time was 38% (P < 10−5; the P value is the probability that we mistakenly reject the null hypothesis) for the first four trials and 37% (P < 10−5) for the last four trials; over all trials, it was 36%.

Fig. 6 Results of the user study.

(A) The idle time of the robot is significantly reduced when using our self-verification scheme. The idle time remains rather constant over the different trials. (B) The time to completion for humans is not affected by the safety concept. It can be seen that humans became faster at solving the jigsaw puzzle due to training effects.

Determining optimal compositions

To fully exploit the potential of modular robots, users are interested in finding the best composition of modules to fulfill a desired task. A selection of our modules and a possible composition is shown in Fig. 7A. As an example, we consider a pick-and-place task, where the robot end effector should move as fast as possible from start (position S) to goal (position G) while avoiding surrounding obstacles. Our implementation returns to the user the order in which certain modules have to be assembled to obtain the optimal composition.

Fig. 7 Determining the optimal module composition out of available modules.

(A) Available modules and a possible composition. (B) Task description of tasks 1 and 2, which consists of start and goal points (S and G), the desired gripper orientation, and obstacles. (C) Optimal compositions for tasks 1 and 2.

Let us define the set of compositions as C = {C1, C2, …, CN}, where Ci is the ith composition and N is the number of considered compositions. Our cost function for finding the optimal composition considers the duration tf,i to reach the goal position and the required energy consumption of the motors. We denote the weight for time optimality by wt and the weight for energy optimality by we. To exclude effects from varying energy efficiency of the motors used in different robots, we assume that no energy losses occur in the motors. After introducing the motor torque vector ui(t), the joint velocity vector q̇i(t), and the time to reach the goal tf,i, all for composition Ci, we can formulate the optimization problem as

$$
\min_{C_i \in \mathcal{C}}\; w_t\, t_{f,i} + w_e\, E_i, \qquad E_i = \int_0^{t_{f,i}} u_i(t)^T\, \dot{q}_i(t)\, dt
$$

subject to, for all t ∈ [0, tf,i],

$$
q_i(t) \in [-\bar{q}_i,\, \bar{q}_i] \quad \text{(joint limits)} \tag{1}
$$
$$
u_i(t) \in [-\bar{u}_i,\, \bar{u}_i] \quad \text{(torque limits)} \tag{2}
$$
$$
A_i(q_i(t)) \subseteq \mathcal{F} \quad \text{(composition } C_i \text{ stays within the obstacle-free space } \mathcal{F}\text{)} \tag{3}
$$

where Ai(qi(t)) returns the occupancy of the robot as introduced before and ℱ is the space free of obstacles. The torques for each composition are evaluated using our self-programming approach and the N-E algorithm (55).
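A numerical sketch of evaluating this cost for one candidate composition, given sampled torque and joint-velocity trajectories (the sampling interface is hypothetical), is:

```python
import numpy as np

def composition_cost(t, torques, joint_vel, w_t=1.0, w_e=0.2):
    """Cost w_t * t_f + w_e * E_i for one composition, with E_i the integral of u(t)^T qdot(t) dt.

    t: (n,) sample times covering [0, t_f]; torques, joint_vel: (n, dof) sampled trajectories.
    """
    power = np.sum(torques * joint_vel, axis=1)                          # u(t)^T qdot(t) per sample
    energy = float(np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t)))  # trapezoidal integration
    return w_t * t[-1] + w_e * energy
```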

A brute force approach (46) computing the cost function over all possible compositions is subject to the curse of dimensionality because the number of compositions grows exponentially with the number of modules. In this work, we mitigated this problem by reducing the search space and through step-by-step elimination of assemblies that could not fulfill the given task.

Search space reduction

We reduced the search space by restricting certain combinations of modules; e.g., it does not make sense to first combine all joint modules and then all link modules. To alternate between joint and link modules in a reasonable manner, we restricted the number of link modules in between joint modules to at most three. In addition, we did not allow connections of joint modules solely established by link extensions because this typically results in distances between joints that are too small and thus limits the freedom of movement of each joint. Last, for a fair comparison with the other six-degree-of-freedom robots, we restricted the degrees of freedom of the assembled robot to six.

Step-by-step elimination

To reduce computation time, we discarded infeasible assemblies by performing simple satisfiability checks of constraints (1) to (3) first, followed by more computationally expensive satisfiability checks. As a consequence, the more complicated checks only have to be performed on the remaining feasible compositions. We performed feasibility tests in the following order:

(1) Can the composition reach the start and goal positions (S and G in Fig. 7B)?

(2) Are joint and torque limits met at the start and goal positions? This is checked by applying constraints (1) and (2) to positions S and G.

(3) Are joint and torque limits met between the start and goal positions? This is checked by applying constraints (1) and (2) to the entire trajectory.

(4) Does a collision-free path between the start and goal positions exist (including the absence of self-collisions)? This is checked by applying constraint (3) to the entire trajectory.

The third and fourth checks require a trajectory of the robot from the start (position S) to the goal (position G). Several techniques for motion planning exist, and all of them can be embedded in our approach. For the study presented here, we computed a trapezoidal trajectory in joint space as presented in section 4.2 of (5).
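A sketch of this step-by-step elimination as a filter chain is shown below; the predicate methods on the composition object are hypothetical wrappers around the checks listed above:

```python
def feasible_compositions(compositions, task):
    """Filter compositions with cheap checks first and expensive checks last (sketch)."""
    checks = [
        lambda c: c.can_reach(task.start) and c.can_reach(task.goal),        # (1) reachability of S and G
        lambda c: c.limits_ok_at(task.start) and c.limits_ok_at(task.goal),  # (2) joint/torque limits at S and G
        lambda c: c.limits_ok_along(c.plan_trajectory(task)),                # (3) limits along the trajectory
        lambda c: c.collision_free(c.plan_trajectory(task), task.obstacles), # (4) collision-free path
    ]
    remaining = list(compositions)
    for check in checks:
        remaining = [c for c in remaining if check(c)]   # eliminate failed compositions step by step
    return remaining
```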

Comparison

We show the advantage of using optimized module compositions for frequently changing tasks by comparing the IMPROV modules (see Fig. 7A) to a standard-configured Schunk LWA 4p, a KUKA LWR 4+, and a Stäubli TX90 in simulation. The robots have to fulfill two different tasks, in each of which the robot has to move from start point S to goal point G while avoiding an obstacle, as shown in Fig. 7B. A spherical obstacle is used because it is easier to understand and reproduce results when using simple geometries; however, our software uses a collision-checking library for arbitrary shapes (79). We assumed that the position of the robot base was fixed at x = 0, y = 0, z = 0 and that the base could be rotated about the z axis. Furthermore, we assumed q̇max = 1 rad/s and q̈max = 1 rad/s² for all joints of all robots. To demonstrate how the choice of the cost function affects the solution, we used three different settings:

(1) Balanced: wt = 1, we = 0.2

(2) Time-optimal: wt = 1, we = 0

(3) Energy-optimal: wt = 0, we = 1

Figure 7C shows the optimal compositions obtained using our approach, and Table 1 compares the results with the previously introduced robots. For task 1, we observed that the robot composed of IMPROV modules avoided the obstacle, whereas trapezoidal trajectories computed in joint space could not return collision-free trajectories for the other robots. For task 2, the KUKA robot is the fastest but only by a small margin compared with the fastest IMPROV composition, which, in turn, is vastly more energy efficient. Gravity has a large influence on the joint whose movement causes other joints to primarily move in the direction of gravity. For typical industrial robots, this is usually the second joint. In our solution, however, the axis primarily affected by gravity is the fourth axis (see Fig. 7C), so that fewer remaining parts of the robot are strongly affected by gravity. We also observe for task 2 that the IMPROV compositions are better in terms of overall cost. Simulation videos for this comparison are provided in movie S2.

Table 1 Results of comparison between IMPROV modules and other robots.

We abbreviate “no solution” by n. s. and “no collision-free trajectory” by n. c. t. Collisions include collisions with other obstacles and self-collisions. The best results in each category are shown in bold.


DISCUSSION

Effortless creation of robotic manipulators can only realize its full potential when combining self-programming and self-verification. When only realizing self-programming, the created robot would have to be caged, so that several benefits of a customized solution are lost: For example, designing a cage for a particular robot is not economical in small-scale manufacturing, which is exactly the targeted application scenario. The second lost benefit is that a customized robot can be optimally integrated into its environment, but a cage would separate the robot from the objects it should manipulate. Other approaches for ensuring safety require demonstrating a sufficiently low risk for the user in case of impact (e.g., considering the power and force limiting of ISO/TS 15066); this, however, requires a dedicated analysis of each different application so that flexibility is lost. We next discuss the advantages of our system regarding programming, conventional safety solutions, and performance compared with standard robots.

Comparison with standard programming approaches

Modular robots are not used in manufacturing today because reprogramming costs exceed installation costs by a large margin (80) despite many efforts toward simplifying the programming of robots (81). This issue becomes even more important when programming robots for safety-critical applications in human-robot coexistence, as discussed in this paper. According to Regan and Hamilton (82), around 50 software defects remain in 1000 lines of newly written uncommented code, around 10 defects can be found in thoroughly tested code, and still around 1 defect is found after extreme measures of additional testing. Considering the number of lines of code in software written today, this can lead to hazardous behavior. This was well documented during the Defense Advanced Research Projects Agency Robotics Challenge for disaster response applications, where a number of teams failed due to programming errors despite good engineering practices and extensive testing (83, 84). Thus, self-programming capabilities drastically reduce the probability of incorrectly working software.

Comparison with static safety zones

Our self-verification approach is more efficient than static safety zones because, with our approach, the robot only needs to alter its movement if the human is directly in danger of collision, whereas with static safety zones, the robot stops as soon as the human appears in its workspace. With practice, the human participants became more efficient and faster at their tasks and spent less time in the workspace of the robot as a result, explaining the improvement in time to completion shown in Fig. 6B over time. Because users gain confidence over time in the formally correct safety verification and because the idle time is significantly reduced compared with static safety zones, self-verification has great potential.

Comparison with non-modular robots

Modular robots have many obvious advantages with respect to flexibility, maintenance, and cost efficiency compared with standard robots. (i) Flexibility: Modular robots can be easily adapted to current needs and enable flexible manufacturing (44, 85, 86). (ii) Maintenance: Modular robots are easier to maintain because broken modules can be easily replaced (1, 44). (iii) Cost efficiency: Modular robots are more cost-effective for two reasons. First, one does not require general-purpose machines for flexible manufacturing, which create capital waste; instead, the robots are only assembled to meet their current purpose [see section 2 of (85)]. Second, modular robots can be more cost-effective because one only requires a few modules to assemble many different robots, so that modules can be mass-produced (1, 44).

Finding optimal assemblies

Providing custom robotic solutions for a given task can provide substantial advantages. As an illustrative example, consider a pick-and-place task without any obstacles. There always exists a point at the same distance r from the start and the goal positions. By choosing this point as the position of the base, it suffices to have a robot with a single degree of freedom and a link of length r. Obviously, for environments with obstacles and tasks where the orientation of the end effector is important, this simplest design is not possible. Typically, the number of possible assemblies is too vast to be evaluated by a human. By evaluating tens of thousands of assemblies, one obtains solutions that a human designer—with preconceptions of how a robot should look—might not have considered. An example of such a robot is the one found for task 2 in Fig. 7C, which has a nonstandard kinematic structure that is not available for purchase. This shows that there is an untapped potential for nonstandard kinematics, particularly for special tasks and obstacle-laden environments.

Realization of results

One of the main obstacles to bringing the presented approach into practice is the typically long process of safety certification. Although many industries have established formal methods to a certain degree—see, e.g., avionics (87), railway systems (88), and ground vehicles (89)—formal methods for robotic manufacturing systems are still in their infancy. However, this makes our proposed approach especially appealing to certification agencies: Instead of certifying each robot anew, one only has to certify the software for self-verification once to ensure safe operation with respect to human-robot contact. The software for self-verification could be verified using theorem proving, as it has already been done for reachability analysis (90).

Further applications

Our presented approach can be used in principle for many other applications. We could extend our approach to robots with parallel kinematics. Our approach also works for compliant joints by replacing the controller generation with those for compliant joints (91–93), which could be particularly useful in non-industrial settings. In addition, the safety concept could be used for service robots in households once human poses can be reliably tracked. Our online verification concept has also been applied to predict pedestrians for mobile robots (94) and automated vehicles (95).

MATERIALS AND METHODS

Validation of human models and their occupancy prediction

We evaluated the models of adults against the requirements that the occupancy prediction of these models must be over-approximative, tight, and quick to compute. Our validation did not consider children because they are not allowed to work in factories. The degree of conservatism was tested twofold: We performed extensive conformance checking (96), and we explored the space of physically possible movements using a biomechanical model (97).

The data we used for validation were from the publicly available motion-capture database of the Carnegie Mellon University Graphics Lab. Movements include everyday movements—e.g., construction work, machining work, manipulating objects, and stumbling—as well as sports-related, dance-related, and acrobatic movements. Because sports, dance, and acrobatic movements are forbidden in a factory, we excluded them from our analysis. To try to find extreme movements not present in recorded data for the joint-space models in (73), we used the high-fidelity biomechanical model in (97) to systematically explore the state space using rapidly exploring random trees (98), which explore previously unexplored regions more efficiently compared with random testing (99).

For each movement in the database, we computed the occupancy for the duration of the recorded movement starting from its initial state. All behaviors of the dataset and from (98) are enclosed by our predicted occupancy. To demonstrate that our results are not overly conservative, we reduced the parameters for maximum velocity and maximum acceleration by half, which resulted in behaviors that were not enclosed.

Setup of the robot in the user study

The predefined long-term plans of the robot were three template movements, where the robot moved outward (in the y direction) for 1.7 s and inward again for 1.7 s, at three different x-axis locations, as shown in Fig. 5. The robot executed these movements in random order, moving between the start positions of the three template movements randomly and continuously until the human had finished his or her task.

Two possible modes of safe operation permitted in ISO 10218-1 (3) are the safety-rated monitored stop and speed and separation monitoring. The response of the robot to the human entering its workspace was one of the following:

(1) Safety-rated monitored stop: The robot stops when any part of the human enters a static safety zone.

(2) Speed and separation monitoring: The robot performs self-verification as described above, reducing or recovering its speed as a result of being verified unsafe or safe, respectively.

Static safety zone

A static safety zone is a standard approach in industry: When a human enters the workspace of the robot, it stops immediately. The workspace is enclosed by an axis-aligned bounding box enclosing the entire possible occupancy of the robot during the programmed movements. This box is extended in each direction by vmax,human T + C, taken from ISO 13855 (2), where vmax,human is the maximum velocity of the human, taken as 2 m/s as in (2); T is the total time the robot requires to come to a stop, including the latency and communication delays in the sensing loop and the calculation time; and C is the penetration distance of the sensing technology before a human is detected (e.g., penetration due to the resolution of a light curtain; because we used infrared markers, C is zero).

Self-verification

The static safety zone is based on the assumption that the human moves no faster than a maximum speed vmax,human. For a fair comparison, the self-verification is based on the first-order model ℛy(1)(t) = y(0) ⊕ ℬ(δy) ⊕ ℬ(vy,max t) as introduced previously, with the maximum speed of the human also taken as 2 m/s, as stated in ISO 13855.

Statistical methods for user study

To test that the robot was significantly faster in the self-verification approach, we took the mean robot idle times for each participant over the last four trials, the first four trials, and overall trials and tested the data for normality with the Shapiro-Wilk test (100). The assumption of normality did not apply to the robot times of the last four trials (P < 0.05 that the data are normally distributed); thus, to ensure validity of the results, we used the Kruskal-Wallis H test (101).
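A minimal sketch of this analysis with SciPy (array contents are placeholders for the per-participant mean idle times) is:

```python
from scipy import stats

def compare_idle_times(idle_self_verification, idle_static_zone, alpha=0.05):
    """Shapiro-Wilk normality check per group, then Kruskal-Wallis H test (sketch)."""
    for name, sample in [("self-verification", idle_self_verification),
                         ("static safety zone", idle_static_zone)]:
        _, p_normal = stats.shapiro(sample)
        print(f"{name}: Shapiro-Wilk p = {p_normal:.3f}"
              + (" (normality rejected)" if p_normal < alpha else ""))
    h, p = stats.kruskal(idle_self_verification, idle_static_zone)
    print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3g}")
    return p
```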

SUPPLEMENTARY MATERIALS

robotics.sciencemag.org/cgi/content/full/4/31/eaaw1924/DC1

Text

Fig. S1. Experimental setup.

Fig. S2. Schematic diagram of the experimental setup.

Fig. S3. Selection of modules (left) and a possible assembly (right).

Fig. S4. Link module and its finite element analysis to test mechanical strength.

Fig. S5. Marker cluster positions on a human participant.

Fig. S6. Modular robot control architecture with self-programming capabilities.

Fig. S7. Characterization of the modules using simple modular units.

Fig. S8. Results of the friction identification procedure for all joint modules.

Fig. S9. Assemblies used for tests.

Fig. S10. Experimental verification of the automatically generated models.

Fig. S11. Tracking performance comparison when using inverse dynamics control (ID) and passivity-based control (PB) for different assemblies.

Fig. S12. Control torque commands required when using inverse dynamics control (ID) and passivity-based control (PB) for the trajectory tracking test.

Fig. S13. Demonstration of the adaptive friction compensation.

Fig. S14. Experimental comparison of different controllers when changing the assembly of the robot.

Fig. S15. Markers of the motion-capture database from Carnegie Mellon University Graphics Lab (photos taken from mocap.cs.cmu.edu, accessed 28 February 2019).

Fig. S16. Box and whisker plot of robot idle time.

Table S1. Technical data of Schunk Powerball Lightweight Arm LWA 4P.

Table S2. Maximum acceleration of different body parts.

Movie S1. Interplay of self-programming and self-verification.

Movie S2. Comparison of optimal IMPROV compositions with commercial robots.

Data file S1. Robot hardware.

Data file S2. Software.

Data file S3. User study evaluation.

References (102–106)

REFERENCES AND NOTES

Acknowledgments: We thank the team of graduate and undergraduate students who worked at the Cyber-Physical Systems Group at the Technical University of Munich for constructing the IMPROV modules. In particular, we thank M. Baumann and J. Gerstner for contributing to the design of the user study and carrying it out. We also thank D. Beckert for the software development. Funding: The work was supported by the People Programme (Marie Curie Actions) of the European Union’s Seventh Framework Program FP7/2007-2013/ under REA grant 608022 and by the Central Innovation Programme of the German Federal Government under grant ZF4086004LP7. Author contributions: M.A. led the research project and is the main author. A.G. developed the self-programming capabilities. S.B.L. realized the selection of optimal compositions. A.P. developed the self-verification capabilities. Competing interests: S.B.L. and M.A. have filed a European patent under application number EP18201371.4 for collision avoidance of reconfigurable modular robots. The other authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials. Other data for this study have been deposited in https://bitbucket.org/MatthiasAlthoff/improv/src/master.

