The human brain reveals resting state activity patterns that are predictive of biases in attitudes toward robots


Science Robotics  30 Sep 2020:
Vol. 5, Issue 46, eabb6652
DOI: 10.1126/scirobotics.abb6652


The increasing presence of robots in society necessitates a deeper understanding of the attitudes people have toward robots. People may treat robots as mechanistic artifacts or may consider them to be intentional agents. This might result in explaining robots’ behavior as stemming from operations of the mind (intentional interpretation) or as a result of mechanistic design (mechanistic interpretation). Here, we examined whether individual attitudes toward robots can be differentiated on the basis of default neural activity patterns during resting state, measured with electroencephalography (EEG). Participants observed scenarios in which a humanoid robot was depicted performing various actions embedded in daily contexts. Before they were introduced to the task, we measured their resting state EEG activity. We found that resting state EEG beta activity differentiated people who were later inclined toward interpreting robot behaviors as either mechanistic or intentional. This pattern is similar to the pattern of activity in the default mode network, which was previously demonstrated to have a social role. In addition, gamma activity observed while participants were making decisions about a robot’s behavior indicates a relationship between theory of mind and these attitudes. Thus, we provide evidence that individual biases toward treating robots as either intentional agents or mechanistic artifacts can be detected at the neural level, already in the resting state EEG signal.


As robots become increasingly present in the day-to-day environment, people develop various attitudes toward such artificial agents. These attitudes range from enthusiasm, acknowledging the potential of robots to assist in daily living (1), to fear and anxiety toward robots (2), and even to acts of brutality and aggression (3). In this context, it is important to examine in more detail the general attitudes that humans have toward robots. This is particularly relevant given the amount of effort currently being dedicated to developing robots for daily assistance, such as health care, elderly care, childcare, and general daily living (4–6). Several researchers have addressed attitudes toward robots, such as anthropomorphism (7) or prejudice and anxiety (8, 9), with questionnaires. However, a more detailed analysis of human attitudes toward robots with objective behavioral and neural measures alongside subjective reports is necessary. Specifically, it is important to understand how humans explain robots’ “reasons” for actions. Do we use our human mental models to understand and predict robot behaviors? Or do we frame their behavior in purely mechanistic schemes?

In the background of these considerations lies Daniel Dennett’s conceptualization of the strategies that humans use when they predict and explain various systems that they interact with (10). For example, a driver would predict that their car will slow down when the brake pedal is pushed. Dennett proposed three different strategies (or stances) for predicting different systems. The physical stance is a good strategy for predicting systems in chemistry and physics, such as the entropy of molecules under heat. However, this stance is not efficient for explaining more complex systems. In the car example, a design stance is the most successful because the best (or most efficient) predictions are made when one refers to how the system has been designed to behave. In contrast, in the case of other human agents, the intentional stance works best. When we adopt the intentional stance toward others, we refer to their mental states—such as beliefs, desires, or intentions—to explain and predict their behavior.

We distinguish the concept of the intentional stance from the process of mentalizing. Mentalizing refers to predicting a specific, current instance of behavior with reference to a specific mental state. In contrast, the intentional stance is more like a general attitude toward an agent—an assumption that the agent is an intentional entity rather than a simple mechanistic artifact. To use the example of the classic Sally-Anne experiments (11) addressing mentalizing skills, children are asked to infer the false belief that Sally should hold regarding the location of her toy, given that the toy was moved from a basket to a box while Sally was out of the room. As a consequence, they would expect her to look into the basket (or the box) for her toy upon her return, depending on whether they have developed the cognitive tools to take Sally’s perspective. However, even if they fail the mentalizing task by ascribing a wrong belief to Sally (because they have not yet developed theory of mind), they would still be adopting the intentional stance toward Sally by ascribing mental states to her in general.

What stance humans adopt toward humanoid robots is an intriguing question. As artifacts and machines, robots call for adopting the design stance. However, given their anthropomorphic appearance, they might elicit a tendency to use mentalizing to explain their behaviors, especially if they are involved in a human-like social context or display human-like behavior. Furthermore, because humans have a natural tendency to anthropomorphize even simple geometrical figures (12), it is not implausible to assume that humanoid robots are approached with the intentional stance and that their behavior is explained by attributing mental states to them.

Following this reasoning, Marchesi et al. (13) examined to what extent humans adopt the intentional stance toward the humanoid robot iCub (14). To probe the attitudes that humans have toward robots, the authors developed a tool, the InStance Test, which consists of sequences of photographs (Fig. 1) where iCub is depicted being involved in various activities.

Fig. 1 Example scenario from the InStance Test with response options.

One of the scenarios used [from (13)] with the two description options and a slider to make the decision (“mechanistic”/“design-stance” explanation on the right versus “intentional” explanation on the left). Note that we refer to mechanistic descriptions as design stance, although they could also be referred to as descriptions relating to the physical stance. However, given that the design stance relates to man-made artifacts rather than natural phenomena and offers descriptions at a higher level of abstraction than the physical stance, we categorize these descriptions as stemming from a design stance rather than a physical stance [Credit: figure 3A of (13)].

In the InStance Test, participants are asked to decide between two descriptions of the depicted scenarios: one being a more mentalistic description (cf. Fig. 1; “iCub is cheating” when it can be seen leaning toward the other player’s deck of cards) and the other being more mechanistic (e.g., “iCub is unbalanced”). In Marchesi et al.’s study (13), the entire sample was, on average, slightly biased toward design-stance descriptions of iCub’s actions, but there were many instances in which participants chose the intentional interpretation of the depicted behavior. Moreover, the acquired data allowed for identifying a group of participants who were more likely to choose intentional explanations of iCub’s behavior and a group of respondents who preferred the design-stance explanations. This suggests that people might have certain biases in attitudes toward robots, assuming either their intentional agency or their purely mechanistic functionality. Attitudes toward robots might vary depending on external factors (robot appearance, behavior, and specific context) as well as on internal predispositions (individual differences, experience, or even a particular state at a given point in time). Here, we examined whether one can identify neural underpinnings of such attitudes (intentional versus design stance) at the individual participant level, independent of whether those attitudes are a constant trait or a particular state at a given point in time. We kept external factors (robot identity and context in which it was presented) identical for all participants.

Neural substrates of the intentional stance

In the context of adopting various stances toward robots, it is important also to address the neural substrates underlying such biases. On the one hand, previous studies in cognitive neuroscience focused on theory of mind (ToM), defined as the ability to attribute mental states (i.e., beliefs, intents, and desires) to oneself and others (15, 16). These studies found specific correlates in gamma band neural oscillations (17) in tasks that require theory of mind and mentalizing. On the other hand, only a few studies directly addressed the neural correlates of adopting the intentional stance, a more general and higher-level process than theory of mind. However, an elegant study (18) found that activation of the default mode network (DMN) primes the adoption of the intentional stance when explaining human behaviors depicted in pictures—a paradigm that is, in a way, similar to the InStance Test, except that it concerns explanations of human behaviors, whereas the InStance Test focuses on attribution of intentionality toward robots.

The DMN is a broad bilateral and symmetrical neural network (19) that displays high activity during resting state, when the mind is not engaged in a specific task, and low activity when attentional resources are allocated to the external environment. For this reason, the leading hypothesis for DMN function postulates its engagement in self-referential processing (20), typically opposed to externally oriented goal-directed processes (21, 22). Because Spunt and colleagues (18) found that DMN activity is strongly implicated in adopting the intentional stance, they argued that the intentional stance involves self-referential processes more than goal-directed cognition, although the attribution is oriented toward external agents. In this respect, several studies documented an anatomical overlap between the social brain (23, 24) and the DMN (25, 26).

Although most studies of the DMN used functional neuroimaging, an increasing body of literature studies DMN function and its temporal (de)activation dynamics by using magneto-/electroencephalography (27, 28). In these studies, beta band oscillations (13 to 30 Hz) were shown to be a reliable index of spontaneous cognitive operations during conscious rest, strongly correlated with activation of cortical regions involved in the DMN (29–31), especially medial and lateral parietal regions (32).

Motivation for the study and hypotheses

The present study was designed to examine whether biases in attitudes toward robots can be predicted on the basis of individual default electroencephalogram (EEG) activity during resting state (without any experimental tasks). Resting state is typically measured during a period in which participants are not involved in any task and are instructed to rest and let their minds wander freely (33). As mentioned earlier, it is the DMN that is typically activated during resting state periods.

In the present study, the question of interest was whether we can observe EEG correlates of DMN activation during resting state that indicate whether a participant is more likely to adopt the intentional or the design stance when later exposed to robot stimuli during the experimental task. Considering the available literature, we focused our hypothesis on the beta frequency range of the resting state EEG signal, a postulated correlate of DMN activity (29). In addition to resting state EEG activity, we also examined whether we could predict intentional versus design-stance attribution during the experimental task itself. Here, we specifically focused on the period immediately preceding response execution, where responses consisted of choosing an interpretation of the observed robot actions (intentional versus design-stance interpretations). For task-related EEG analyses, we focused on neural activity in the gamma band, because the gamma band has been postulated to be involved in mentalizing (17).

Experimental design

Resting state

Before participants (N = 52) took part in the experimental task, we measured their EEG activity during resting state to examine whether resting state activity pattern in the beta frequency range would predict attitudes (intentional versus mechanistic) toward the robot stimuli presented later, during the experimental task. Resting state was measured during eyes-open and eyes-closed sessions, each lasting 30 s, presented alternately five times (2.5 min of recording for each condition). During eyes-open sessions, participants were instructed to keep their gaze on a fixation cross presented in the center of the screen. They were asked to relax and to avoid blinking as much as possible. During eyes-closed sessions, they were asked to avoid movements and to wait for a beep signaling the end of the session.

Experimental task

The experimental task consisted of completing the InStance Test [cf. (13); although in our adaptation of the test to the EEG study, the response options were presented auditorily; see Fig. 2 for the timeline of an experimental trial in the present study]. As participants were involved in the test, we measured their EEG activity. We were interested in whether the patterns of the EEG signal before response execution (i.e., during the decision-making process) can predict the bias that determines these decisions.

Fig. 2 An example experimental trial.

A trial started upon a spacebar press, which the participants were asked to keep pressed until they were ready to give a response. They heard both response options during the presentation of the sequence; the order of the response options was counterbalanced between participants. This was followed by a sliding scale, on which participants rated how well they thought the sentences described the visually presented scenarios. The epoch of interest for EEG analysis is marked with a red rectangle on the timeline, immediately preceding the spacebar release.


Behavioral responses

Mean InStance score was 43.3 (SD = 15.1; recoded so that 0 indicates extreme mechanistic responses and 100 corresponds to extreme intentional responses). This score did not differ significantly from Marchesi et al. (13) (t = 1.00, P = 0.32). Because our research question addressed individual differences in biased attitudes toward robots, we split the sample into the following two groups based on z scores from the mean: the design-stance group (scores < −0.5 SD from the mean, MInStance = 24.0, n = 15) and the intentional-stance group (scores > 0.5 SD, MInStance = 57.8, n = 18). For the descriptive statistics of the “undecided” group, see the Supplementary Materials.

EEG pattern in resting state

EEG resting state activity was recorded with eyes open and eyes closed. For the resting state period, we compared the EEG activity of the two groups of participants in an unpaired t test. Considering the literature on lateralization of beta in DMN activity (28–30), we specifically examined the average power spectrum at the C5 and C6 electrodes. These electrodes showed a distinct pattern in the beta range in the eyes-open state [effect of group, t(31) = 2.206, P = 0.035, Cohen’s d = 0.136; Fig. 3]. The intentional-stance group (MBeta-activity = −0.24) showed lower beta activity than the design-stance group (MBeta-activity = 0.61). In an exploratory analysis beyond the literature-based C5/C6 electrodes, we found that these differences were particularly spread out over a left temporoparietal cluster (five electrodes: T7, TP9, C5, P7, and TP7; P = 0.037) and a right frontotemporal cluster (seven electrodes: T8, F8, TP8, C6, FT8, F6, and AF8; P = 0.047); see Fig. 4.
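The group comparison above is a standard unpaired t test with a pooled-SD Cohen’s d. A minimal sketch follows; the data are synthetic placeholders (group means taken from the text, unit SD assumed), not the authors’ code or data.

```python
import numpy as np
from scipy import stats

def compare_groups(beta_intentional, beta_design):
    """Unpaired t test on per-participant beta power (C5/C6 average),
    plus Cohen's d computed with the pooled standard deviation."""
    t, p = stats.ttest_ind(beta_design, beta_intentional)
    n1, n2 = len(beta_design), len(beta_intentional)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(beta_design, ddof=1) +
                         (n2 - 1) * np.var(beta_intentional, ddof=1)) /
                        (n1 + n2 - 2))
    d = (np.mean(beta_design) - np.mean(beta_intentional)) / pooled_sd
    return t, p, d

# Synthetic z-scored beta power (n = 15 design, n = 18 intentional);
# group means 0.61 and -0.24 are taken from the text, SD = 1 is assumed
rng = np.random.default_rng(0)
design = rng.normal(0.61, 1.0, 15)
intentional = rng.normal(-0.24, 1.0, 18)
t, p, d = compare_groups(intentional, design)
```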

Fig. 3 Differences in beta activity during resting state.

Plot showing the differences between participants in the intentional-stance group and the design-stance group (on the x axis) in their resting state beta activity (13 to 27 Hz). For the y axis, the resting state beta activity with eyes open was computed for each participant, averaged across the C5 and C6 electrodes placed centrally on the scalp, and standardized in z scores. Z scores were obtained by subtracting the overall mean value from the raw values and dividing by the SD. The dots represent the average value for each group. Error bars represent the bootstrapped 95% confidence interval.

Fig. 4 Summary of results related to the resting state beta activity.

All topographies were obtained by calculating the average beta band power (13 to 27 Hz) by applying an FFT to the whole resting state recording (eyes open). Topographies show the activity displayed by participants in the design-stance group and the intentional-stance group, grand-averaged. The third topography shows a t values map of clusters where statistically significant differences (channels marked as “x”) between design-stance and intentional-stance participants were found by means of nonparametric cluster-based permutation tests. Z values indicate standardized beta activity, obtained by subtracting the overall mean value from the raw values and dividing by the SD. t values are defined as the ratio of the difference between the estimated mean values of two groups to its SE.
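The cluster statistics reported in Figs. 4 and 5 follow the nonparametric cluster-based permutation logic of FieldTrip. The sketch below is a deliberately simplified version: channels are treated as a 1D ordered array, so "clusters" are runs of adjacent suprathreshold channels, whereas the real analysis uses a 2D scalp adjacency structure; the threshold and permutation count are assumptions.

```python
import numpy as np
from scipy import stats

def cluster_permutation_test(group_a, group_b, t_thresh=2.0, n_perm=500, seed=0):
    """Simplified cluster-based permutation test (Maris & Oostenveld style).
    group_a, group_b: (n_subjects, n_channels) arrays of band power."""
    def max_cluster_mass(a, b):
        t, _ = stats.ttest_ind(a, b, axis=0)      # per-channel group t values
        mask = np.abs(t) > t_thresh
        best = mass = 0.0
        for above, tv in zip(mask, t):            # runs of adjacent channels
            mass = mass + abs(tv) if above else 0.0
            best = max(best, mass)
        return best

    observed = max_cluster_mass(group_a, group_b)
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_perm):                       # shuffle group labels
        perm = rng.permutation(len(pooled))
        if max_cluster_mass(pooled[perm[:n_a]], pooled[perm[n_a:]]) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)  # Monte Carlo p value

# Synthetic example: 15 vs 18 subjects, 20 channels, effect in channels 5-9
rng = np.random.default_rng(1)
a = rng.normal(0, 1, (15, 20))
b = rng.normal(0, 1, (18, 20))
a[:, 5:10] += 2.0                                 # strong group difference
mass, p = cluster_permutation_test(a, b)
```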

Task-related EEG pattern, before responses

We examined the task-related EEG activity in the 250-ms time window directly before response onset to examine whether differential patterns of neural activity in the ToM–related gamma band would be observed during the task itself between the groups. We found a distinct pattern in induced gamma activity: Design-stance participants showed a greater desynchronization than the intentional-stance group over an occipitotemporal cluster (P7, O1, Oz, O2, P5, PO7, PO3, and POz electrodes, P = 0.008; see Fig. 5).

Fig. 5 Summary of task-related gamma band activity (28 to 45 Hz) during the 250 ms before response.

The topographies were obtained by calculating the average power spectrum values obtained by means of Morlet wavelet transform on the selected time window and show the activity displayed by the design-stance group and intentional-stance group, grand-averaged. The third topography shows a t values map of clusters where statistically significant differences (channels marked as asterisks) between the groups were found by means of nonparametric cluster-based permutation tests. Z values indicate standardized gamma activity, obtained by subtracting trial-based mean value from raw values and dividing by the trial-based SD. t values are defined as the ratio of the difference between the estimated mean values of two groups to its SE.


The aim of this study was to examine whether patterns of individual default neural activity during resting state can predict different attitudes, intentional versus design stance, toward humanoid robots. To this aim, we analyzed resting state EEG before participants’ involvement in an experimental task. Our results differentiated between participants who were later (during the experimental task) inclined toward interpreting robot behaviors as either mechanistic or intentional. Specifically, participants who were later more likely to adopt the design stance showed a higher beta activity over a left temporoparietal and a right frontotemporal cluster compared with the other participants.

In line with the discussed literature, we postulate that these patterns correspond to previous findings related to DMN activity (31, 32, 34), which has been observed to be involved in mentalizing processes. The DMN activity has been found to predict the adoption of intentional stance in explaining the behavior of another human (18, 25). It seems that the more mentalizing processes were activated during resting state, the more likely participants were then to adopt the design stance toward the robot later during the task. Although, at first sight, this might seem to be a counterintuitive effect, it is actually quite plausible: If participants were involved in thinking about other people, and their intentions or mental states in general, before they took part in the task, the contrast with a robotic agent might have been larger, compared with those who were thinking about issues other than other people’s minds. Thus, those who thought more about other humans during resting state might have been more likely to adopt the design stance toward robots because of a more pronounced category boundary between the natural and artificial agents. However, independently of the exact direction of the effect, the most important result of this study is that we can pinpoint a pattern of default neural activity at rest that predicts how people approach embodied artificial agents, that is, whether they treat them as intentional systems or merely mechanical artifacts.

In addition, we also found differences in neural processing during decision-making in the task itself, illustrated by a greater gamma-activity desynchronization in an occipitotemporal cluster for the design-stance group compared with the intentional-stance group. This finding strongly indicates a relationship between theory of mind and the intentional stance. First, gamma activity over the left superior temporal sulcus, consistent with our topography, was shown to be a marker of mentalizing (17). Second, the left temporoparietal junction is a crucial region for the attribution of mental states (35, 36). Activation of this area, which might be related to our topography, was observed when attributing mental states, and patients with lesions over this area showed clear deficits in theory of mind attribution (37). These unique neuronal signatures, found exclusively in relation to the intentional stance adopted toward robots, suggest that theory of mind may be a consequence of adopting the intentional stance. The group of participants that was more engaged in mentalizing during the resting state (as indicated by resting state beta activity) was more likely to interpret robot behavior in mechanistic terms (as indicated by their preferred choices in the InStance Test) and also showed less theory-of-mind–related gamma activity before the response, as compared with the group that was more likely to adopt the intentional stance toward the robot.


This study showed that it is possible to predict attitudes that people have toward artificial agents, humanoid robots specifically, from EEG data recorded already in the baseline default mode of the resting state. This casts light on how a given individual might approach the humanoid robots that are increasingly occupying our social environments. That such a high-level cognitive phenomenon can be decoded from neural activity is quite remarkable and can be highly informative with respect to the mechanisms underlying the attitudes that people adopt. It might be that the intentional/mechanistic bias in attitudes toward robots relies on a mechanism similar to other biases (e.g., racial and gender biases). Therefore, future studies might address the question of whether the neural correlates of biases in attitudes toward robots generalize to other types of biases as well. The present study, however, does not address the issue of whether the observed differential effect across participants is related to the particular context in which they observe the robot, the particular robot appearance, or a general attitude that a given individual has toward robots. Future research should address the question of whether the neural correlates of the biases/attitudes observed here are signatures of a general individual trait or are rather related to a given state or context. In either case, it appears that there are detectable neural characteristics underlying the likelihood of treating robots as intentional agents or, rather, as mechanistic artifacts.



We recruited 53 healthy participants (25 M; mean age: 23.8 ± 3.71 years). One participant was excluded from the analyses because of technical problems related to data quality (i.e., high number of electrical bridges during recording and low signal-to-noise ratio). All participants gave written informed consent before enrollment in this study and were screened for contraindications to EEG. Our exclusion criteria comprised the presence of a history of any neurological or psychiatric disease, use of active drugs, abuse of any drugs (including nicotine within 2 hours preceding the study and alcohol within 24 hours preceding the study), and any skin condition that could be worsened by the use of the EEG cap (examined by checking for potential skin irritation after application of electrolyte gel). The study was approved by the local ethics committee (Comitato Etico Regione Liguria) and was conducted in accordance with the ethical standards laid out in the 1964 Declaration of Helsinki. All participants had normal or corrected-to-normal vision and were right-handed.


The InStance Test (13) consisted of 34 scenarios. Each scenario was composed of three pictures representing the iCub robot (14) performing an action, alone or with other human agents (cf. Fig. 1). Each scenario was associated with two sentences interpreting the behavior of the robot: in each pair, one sentence described the behavior mechanistically and the other described it intentionally. The sentences were synthesized by means of the Italian version of a vocal synthesizer (Oddcast text to speech) and presented to participants through in-ear headphones to avoid reading-related artifacts. The experiment was programmed in, and presented with, PsychoPy (38).

EEG apparatus

EEG data were recorded using Ag-AgCl electrodes from a 64-active-electrode system (actiCAP, Brain Products GmbH, Munich, Germany) referenced to FCz. Horizontal and vertical electrooculograms were recorded from the outer canthi of the eyes and from above and below the participants’ right eye, respectively. The EEG signal was amplified with a BrainAmp amplifier (Brain Products GmbH), digitized at a 5000-Hz sampling rate, and recorded. No filters were applied during signal recording. Electrode impedances were kept below 10 kilohms throughout the experimental procedure.


The experimental session took place in a dimly lit room. After fitting the EEG equipment and earphones, we seated the participants at about 100-cm distance from the screen. We commenced the session by recording the resting state activity (with open and closed eyes). Eyes-open and eyes-closed sections lasted 30 s each and were presented alternately five times (2.5 min of recording for each condition). During eyes-open sections, participants were instructed to keep their gaze on a fixation cross presented in the center of the screen. They were asked to relax and to avoid blinking as much as possible. During eyes-closed sections, they were asked to avoid movements and to wait for a beep signaling the end of the section.

Before starting with the InStance Test, participants read the experimental instructions on the screen, and the experimenter asked for any possible questions or uncertainties. Participants were then presented with a practice part, during which the same scenario was presented four times to familiarize them with the procedure. This scenario was not part of the 34-item test. Then, the participants started the experiment.

The InStance Test consisted of 34 trials presented in random order. Participants were asked to press the spacebar at the beginning of each trial and keep it pressed throughout the whole trial duration. Pressing the spacebar started the trial, beginning with the presentation of the scenario for 6000 ms. Scenarios were presented with a size of 800 pixels by 173.2 pixels. A small cross was presented below the scenario, centered on the x axis, at one-quarter of screen size on the y axis. Next, the cross below the scenario was replaced by the text “Sentence A,” and, 500 ms after its onset, the first sentence of the scenario was played in the in-ear headphones. The duration of 6000 ms was chosen to leave at least 1 s of silence after the longest sentence. Then, 6000 ms after the appearance of the text “Sentence A,” the text “Sentence B” was presented, and, 500 ms after its onset, the second sentence was played. The order of intentional versus mechanistic sentences was counterbalanced across trials. A male voice was used for half of the participants and a female voice for the remaining half (counterbalanced for participants’ gender) to avoid gender-related effects. Then, 6000 ms after the appearance of the text “Sentence B,” the scenario disappeared and was replaced by a slider with a rating scale, with “A” and “B” labels on the extremes. (To check whether positioning of sentences A and B on the left and right extremes, respectively, might have influenced participants’ choices, we analyzed the responses by coding the raw score as 0 when the response was on the extreme left and 100 when it was on the extreme right. The average score was 49.59, ruling out any bias toward left A or right B responses.) A reminder of the instructions, “Move the slider towards the explanation you think is more plausible,” was presented above the rating scale.
Participants were instructed to keep the spacebar pressed throughout the whole trial and to release it only after they had decided on their response. After releasing the spacebar, they were instructed to reach for the mouse and move the slider with the cursor as fast as possible. This specific instruction was given to ensure that the whole decision-making process was completed before the spacebar release. After the participants confirmed their response by clicking an “OK” button on the screen (no time-out), a buffer screen was presented with the text “Press and hold the spacebar to start the next trial” (cf. Fig. 2 for a trial example).

Data processing

To investigate biases toward the design stance and the intentional stance, we divided our sample into three groups for the analyses, according to participants’ overall scores in the InStance Test. Mean score and SD were calculated among 52 participants (M = 43.26, SD = 15.09). Participants with an average score below −0.5 SD from the mean value (corresponding to 35.71; Mscore = 24.0) were included in the design-stance group. Participants with an average score above 0.5 SD from the mean value (equal to 50.80; Mscore = 57.8) were included in the intentional-stance group. This categorization led to three groups with comparable numbers of participants: design stance = 15, intentional stance = 18, and undecided = 19. The three groups did not statistically differ in the demographic characteristics of age, gender, or reported field of study/occupation [design and architecture (n = 5); economics (n = 4); life and human sciences (n = 18); math, physics, and engineering (n = 19); and others (n = 6); see table S1]. In the Supplementary Materials, we additionally present our data including the undecided group.
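The ±0.5 SD split above can be sketched as follows; the scores in the example are synthetic, and whether the original analysis used the sample (ddof=1) or population SD is an assumption.

```python
import numpy as np

def split_by_stance(scores, half_sd=0.5):
    """Assign participants to design-stance / undecided / intentional-stance
    groups by +-0.5 SD around the sample mean, as described above.
    scores: per-participant mean InStance scores (0-100)."""
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)   # sample SD assumed
    return np.where(z < -half_sd, "design",
                    np.where(z > half_sd, "intentional", "undecided"))

# Synthetic example; with the reported M = 43.26 and SD = 15.09 the
# cutoffs fall at 35.71 and 50.80
groups = split_by_stance([20., 40., 45., 50., 70.])
```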

EEG data were analyzed using MATLAB version R2016a (The MathWorks Inc., 2016) and customized scripts as well as the EEGLAB (39) and FieldTrip toolboxes (40). Data were down-sampled to 250 Hz, and a band-pass filter (0.5 to 100 Hz) and a notch filter (50 Hz) were applied. Data were subsequently segmented into epochs (i.e., trials): Epoch extraction and baseline correction were based on different time windows to suit the different analyses specified in the next paragraph. After visual inspection, trials affected by prominent artifacts (i.e., major muscle movement and electric artifacts) were removed, and bad channels were deleted. On average, 33 trials per participant were included in the analysis. The signal was referenced to the common average of all electrodes (41), and independent component analysis (ICA) was applied to remove the remaining artifacts related to eye blinks, eye movements, and heartbeat. After we removed the remaining artifacts using ICA, noisy channels were spatially interpolated.
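The downsampling and filtering steps above can be sketched in Python as follows. The original analysis was run in MATLAB with EEGLAB/FieldTrip, and the filter orders here are assumptions; only the sampling rates and cutoff frequencies come from the text.

```python
import numpy as np
from scipy import signal

def preprocess(eeg, fs_in=5000, fs_out=250):
    """Downsample to 250 Hz, band-pass 0.5-100 Hz, notch at 50 Hz.
    eeg: (n_channels, n_samples) array. Filter orders are assumed."""
    # Downsample with an anti-aliasing polyphase filter
    x = signal.resample_poly(eeg, 1, fs_in // fs_out, axis=-1)
    # Zero-phase 4th-order Butterworth band-pass, 0.5-100 Hz
    sos = signal.butter(4, [0.5, 100], btype="bandpass", fs=fs_out, output="sos")
    x = signal.sosfiltfilt(sos, x, axis=-1)
    # 50-Hz line-noise notch (IIR, quality factor 30)
    b, a = signal.iirnotch(50, Q=30, fs=fs_out)
    return signal.filtfilt(b, a, x, axis=-1)

# Example: 2 s of two-channel noise at 5000 Hz with 50-Hz line noise added
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 5000)
raw = rng.normal(0, 1, (2, t.size)) + np.sin(2 * np.pi * 50 * t)
clean = preprocess(raw)
```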

Resting state data were analyzed by means of fast Fourier transform (FFT) frequency analysis. This analysis used Hanning windows and aimed to estimate oscillatory power spectra with eyes open and eyes closed for each participant. Frequencies from 2 to 60 Hz were considered when performing the FFT (frequency steps, 1 Hz), and the beta band (13 to 27 Hz) was subsequently analyzed. Power spectrum values in the beta band were extracted from channels C5 and C6 and averaged to obtain a measure of resting state lateral beta activity.
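A minimal sketch of the Hanning-windowed FFT band-power estimate, assuming 1-s segments at 250 Hz (the function and channel-index names are hypothetical):

```python
import numpy as np

def band_power(epoch, fs=250, band=(13, 27)):
    """Hanning-windowed FFT power spectrum of a 1-D resting-state segment,
    averaged within a frequency band (beta, 13-27 Hz, by default)."""
    win = np.hanning(len(epoch))
    power = np.abs(np.fft.rfft(epoch * win)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

# Lateral beta = mean of the C5 and C6 values (channel indices are assumptions):
# lateral_beta = (band_power(eeg[idx_C5]) + band_power(eeg[idx_C6])) / 2
```

With 1-s segments, the frequency resolution is 1 Hz, matching the 1-Hz frequency steps stated in the text.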

Regarding the experimental task, time-frequency representations (TFRs) of oscillatory power changes were computed separately for the two categories (intentional versus mechanistic trials). These categories were based on the participant's rating when analyzing activity before the spacebar release and on the sentence category when analyzing sentence-related activity. This individual trial classification was carried out to analyze representative trials in which participants manifested their bias, i.e., trials with a mechanistic choice for the mechanistically biased group and trials with an intentional choice for the intentionally biased group. Time-frequency power spectra were estimated using Morlet wavelet analysis based on 3.5 cycles at the lowest frequency (2 Hz), linearly increasing to 18 cycles at the highest considered frequency (60 Hz) (time steps, 10 ms; frequency steps, 1 Hz) (40). We performed single-trial normalization by z-transforming the TFR of each trial for each frequency (42). The z-transformation was based on the mean and SD derived from the full trial length. After the z-transformation, an absolute baseline correction was performed for each trial by subtracting the average of the time window of interest for each frequency, to ensure that z values represented a change from baseline (40). Subsequently, TFRs were averaged across trials per experimental condition. The result of this procedure is an event-related spectral perturbation measure that is robustly normalized at the single-trial level (43). Finally, TFRs were cropped to the period of interest (specified in the "EEG statistical analyses" section), removing time-frequency bins at the trial edges for which no values could be computed. Values were averaged across frequency bins to calculate power within the four major frequency bands, namely, theta (5 to 7 Hz), alpha (8 to 12 Hz), beta (13 to 27 Hz), and gamma (28 to 45 Hz).
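The single-trial normalization described above (per-frequency z-transform over the full trial, followed by absolute baseline subtraction) can be sketched as follows; the function name and the slice-based baseline argument are illustrative:

```python
import numpy as np

def normalize_tfr(tfr, baseline):
    """Single-trial TFR normalization: each frequency row of a freqs x times
    power matrix for one trial is z-scored against the mean and SD computed
    over the full trial length; the average over the baseline window (a slice
    of time indices) is then subtracted so values express change from baseline."""
    z = (tfr - tfr.mean(axis=1, keepdims=True)) / tfr.std(axis=1, keepdims=True)
    return z - z[:, baseline].mean(axis=1, keepdims=True)
```

Because the z-scoring uses the full trial rather than the baseline alone, a few noisy baseline samples cannot inflate the normalized values, which is what makes the resulting event-related spectral perturbation measure robust at the single-trial level.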
Segmentation time windows differed across analyses: for the resting state analysis, data were segmented into 1-s epochs to optimize noisy-segment removal and ICA. The FFT was then performed over whole trials.

When analyzing the activity before the spacebar release (decision-making related), data were segmented into 4-s epochs, starting 2 s before and ending 2 s after the spacebar release. Each trial was baseline-corrected by subtracting the values averaged over a 500-ms period (from 0 to 500 ms after the spacebar release). The TFR of this activity was then baseline-corrected over a 400-ms period (from 1000 to 600 ms before the spacebar release) to avoid the evoked low-frequency time-frequency activity that can appear some milliseconds after the spacebar release.

When analyzing sentence-related activity, data were segmented into 8-s epochs, starting 1.5 s before and ending 6.5 s after the start of the audio sentence presentation. Each trial was then baseline-corrected over a 1000-ms period (from 1500 to 500 ms before sentence start), and the TFR of this activity was baseline-corrected over the same time window. To analyze post-sentence activity specifically, these data were then re-segmented into 3-s epochs, starting 1 s before and ending 2 s after the end-of-sentence trigger, and the TFR of these data was baseline-corrected over a 1000-ms period (from 1000 to 0 ms before the sentence end).
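The epoching-plus-baseline scheme used across these analyses follows one pattern: cut a window around an event trigger and subtract the per-channel mean of a baseline window. A minimal sketch, with hypothetical function and variable names:

```python
import numpy as np

def extract_epochs(data, events, fs, tmin, tmax, baseline):
    """Cut a channels x samples array into epochs around event sample indices
    and subtract the per-channel mean of a baseline window. tmin/tmax and the
    baseline bounds are in seconds relative to the event."""
    s0, s1 = int(tmin * fs), int(tmax * fs)
    b0, b1 = int(baseline[0] * fs), int(baseline[1] * fs)
    epochs = []
    for ev in events:
        segment = data[:, ev + s0 : ev + s1]
        bl = data[:, ev + b0 : ev + b1].mean(axis=1, keepdims=True)
        epochs.append(segment - bl)
    return np.stack(epochs)

# Sentence-related epochs as described in the text (-1.5 to +6.5 s around
# sentence onset, baseline -1.5 to -0.5 s):
# epochs = extract_epochs(eeg, sentence_onsets, fs=250,
#                         tmin=-1.5, tmax=6.5, baseline=(-1.5, -0.5))
```

The same helper would cover the decision-related epochs (tmin=-2, tmax=2 around the spacebar release) and the post-sentence re-segmentation (tmin=-1, tmax=2 around the end-of-sentence trigger), with the baseline windows given in the text.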

EEG statistical analyses

To compare resting state lateral beta activity, FFT power spectrum values were averaged across channels C5 and C6. These values were then compared among the three groups (undecided versus intentionally biased versus mechanistically biased participants; a three-level factor) via analysis of variance (ANOVA). Post hoc multiple comparisons across these three levels were performed using Tukey's post hoc correction. Resting state data were then compared between intentionally and mechanistically biased participants only, across all channels, by means of a nonparametric cluster-based permutation analysis.
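A sketch of the one-way ANOVA with Tukey-corrected post hoc comparisons, using SciPy and simulated group values (the group sizes come from the text; the beta-power values are made up for illustration):

```python
import numpy as np
from scipy import stats

# Simulated C5/C6 resting-state beta power per group (illustrative values).
rng = np.random.default_rng(4)
mechanistic = rng.normal(1.0, 0.1, 15)
undecided = rng.normal(1.5, 0.1, 19)
intentional = rng.normal(2.0, 0.1, 18)

# One-way ANOVA over the three-level group factor.
f_stat, p_anova = stats.f_oneway(mechanistic, undecided, intentional)
# Tukey-HSD-corrected pairwise comparisons (scipy >= 1.8).
posthoc = stats.tukey_hsd(mechanistic, undecided, intentional)
```

`posthoc.pvalue` holds the 3 × 3 matrix of corrected pairwise p values, one entry per pair of groups.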

To compare sensor-level EEG data, nonparametric cluster-based permutation analyses (using a Monte Carlo method based on paired t statistics) were performed (44). This method has been shown to be highly accurate in solving the multiple-comparisons problem in M/EEG data and compares favorably with other widely used approaches (i.e., bootstrap-based and Bayesian approaches) (45). Considering data separated by frequency range and time window, t values exceeding an a priori threshold of P < 0.05 were clustered on the basis of neighboring electrodes. Cluster-level statistics were calculated by summing the t values within each cluster, and inference was based on the maximum of the summed t values. Using a permutation test (i.e., randomizing data across conditions and rerunning the statistical test 1500 times), we obtained a reference distribution of the maximum summed cluster-level t values against which to evaluate the statistic from the actual data. Clusters in the dataset were considered statistically significant at an alpha level of 0.05 if fewer than 5% of the permutations (N = 1500) used to construct the reference distribution yielded a maximum cluster-level statistic larger than the cluster-level value observed in the original data.
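The logic of this procedure can be sketched in simplified form. In this illustration, channels are treated as a 1-D chain (so "neighboring" means adjacent indices) and |t| values above threshold are clustered together; the actual FieldTrip analysis clusters over the true electrode neighborhood structure and handles positive and negative clusters separately:

```python
import numpy as np
from scipy import stats

def cluster_perm_test(cond_a, cond_b, n_perm=1500, alpha=0.05, seed=0):
    """Simplified cluster-based permutation test on subjects x channels data
    for two paired conditions. Returns the observed maximum cluster statistic
    and its Monte Carlo p value."""
    rng = np.random.default_rng(seed)
    n = cond_a.shape[0]
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

    def max_cluster(a, b):
        # Paired t per channel; sum |t| over runs of adjacent suprathreshold
        # channels and keep the largest such cluster sum.
        t = stats.ttest_rel(a, b).statistic
        best = run = 0.0
        for ti in np.abs(t):
            run = run + ti if ti > t_crit else 0.0
            best = max(best, run)
        return best

    observed = max_cluster(cond_a, cond_b)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Randomize data across conditions: swap the condition labels of a
        # random subset of subjects, then recompute the statistic.
        flip = rng.random(n) < 0.5
        a = np.where(flip[:, None], cond_b, cond_a)
        b = np.where(flip[:, None], cond_a, cond_b)
        null[i] = max_cluster(a, b)
    p_value = (null >= observed).mean()
    return observed, p_value
```

Because inference is performed on a single maximum cluster statistic per permutation rather than on each electrode separately, the family-wise error rate is controlled without a per-channel correction.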

Nonparametric cluster-based permutation tests were used (i) to compare resting state beta activity, (ii) to compare decision-making related activity (before the spacebar release), and (iii) to compare neural activity during sentence presentation and (iv) immediately after the sentences. For (ii), (iii), and (iv), all previously defined frequency bands were tested (theta, alpha, beta, and gamma). For (ii), two time windows were taken into account: an early time window (500 to 250 ms before the spacebar release) and a late one (250 to 0 ms). For (iii), a time window of 0 to 2500 ms (the mean duration of a sentence) after sentence onset was considered, whereas for (iv), we tested a time window of 1000 ms after sentence offset. In (i), resting state activity was compared between intentionally and mechanistically biased participants; in (ii), only trials in which participants manifested their bias were taken into account, i.e., trials with a mechanistic choice for mechanistically biased participants and trials with an intentional choice for intentionally biased participants; in (iii) and (iv), the analyses focused on sentence-related activity and therefore compared intentional versus mechanistic sentences across all participants.



Fig. S1. Plot showing the differences between participants in the intentional-stance group, undecided group, and design-stance group (on the x axis) in their resting state beta activity (13 to 27 Hz).

Table S1. Group demographic difference statistics.


Funding: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant awarded to A.W., titled “InStance: Intentional stance for social attunement.” G.A. no.: ERC-2016-StG-715058). Author contributions: F.B., C.W., and A.W. designed the experiment. F.B., C.W., and S.M. acquired the data. F.B., C.W., and J.C. analyzed the data. F.B., C.W., J.C., V.M., and A.W. wrote the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: The data and scripts are available at
