Universitas Psychologica

Print version ISSN 1657-9267

Univ. Psychol. vol.15 no.3 Bogotá July/Sept. 2016

https://doi.org/10.11144/Javeriana.upsy15-3.bafa 

Brain Activation Follows Adding-Type Integration Laws: Brain and Rating Responses in an Integration Task with Pairs of Emotional Faces*

La activación cerebral sigue leyes de integración de tipo aditivo: respuestas cerebrales y de calificación en una tarea de integración con pares de rostros emocionales

Telmo Pereira
University of Coimbra, Portugal telmo@estescoimbra.pt

Armando Oliveira
University of Coimbra, Portugal

Isabel B. Fonseca
University of Coimbra, Portugal

Notes
*Research article.

Received: 10 March 2016. Accepted: 22 June 2016.


To cite this article

Pereira, T., Oliveira, A., & Fonseca, I. (2016). Brain activation follows adding-type integration laws: Brain and rating responses in an integration task with pairs of emotional faces. Universitas Psychologica, 15 (3). http://dx.doi.org/10.11144/Javeriana.upsy15-3.bafa


Abstract

This study was designed to investigate the relation between rating responses and the patterns of cortical activation in an integration task using pairs of emotional faces. Participants judged, on a graphic rating scale, the overall affective intensity conveyed by two emotional faces, each presented to one of the two hemispheres via the Divided Visual Field (DVF) technique. While they performed the task, EEG was recorded from 6 scalp locations. Three discrete emotions were considered (Joy, Fear, and Anger) and varied across three levels of expression intensity. Some face pairs portrayed the same emotion (same-emotion pairs), others two different emotions (distinct-emotions pairs). The patterns of integration of the two sources of information were examined both at the level of the ratings and of the brain response (event-related α-desynchronization: ERD) recorded at each EEG lead. Adding-type rules were found for the ratings of both same-emotion and distinct-emotions pairs. Adding-type integration was also commonly found when α-ERD was taken as the response. Outcomes are discussed in relation to the lateralization of emotional processing and to the relations between the observable R (e.g., ratings) and possible implementational aspects of the implicit r posited by Information Integration Theory (IIT).

Keywords: divided visual field, facial expressions of emotion, functional measurement, cerebral organization.


Resumen

Este estudio fue diseñado para investigar la relación entre las respuestas de calificación y los patrones de activación cortical en una tarea de integración con pares de rostros emocionales. Los participantes juzgaron, en una escala gráfica de calificación, la intensidad afectiva global transmitida por dos rostros emocionales, cada uno presentado a uno de los dos hemisferios mediante la técnica de Divided Visual Field (DVF). Mientras realizaban la tarea, se registró el EEG en 6 localizaciones del cuero cabelludo. Se consideraron tres emociones discretas (Alegría, Miedo y Rabia), variadas en tres niveles de intensidad de la expresión. Algunos pares de rostros mostraban la misma emoción y otros dos emociones diferentes. Los patrones de integración de las dos fuentes de información fueron examinados tanto en las calificaciones como en la respuesta cerebral (desincronización α relacionada con el evento: ERD) registrada en cada derivación del EEG. Se encontraron reglas de tipo aditivo para las calificaciones de los pares con la misma emoción y de los pares con emociones distintas. La integración de tipo aditivo también se observó comúnmente cuando el α-ERD fue tomado como respuesta. Los resultados se discuten en relación con la lateralización del procesamiento emocional y con las relaciones entre la R observable y los posibles aspectos de implementación de la r implícita postulada por la Teoría de la Integración de la Información (IIT).

Palabras clave: campo visual dividido, expresiones faciales de emoción, medición funcional, organización cerebral.


Information Integration Theory (IIT: Anderson, 1981, 1982) investigates how subjects produce global judgments based on the combination of several pieces of information. According to IIT, information integration takes place within a processing chain comprising two types of external entities —an observable array of stimuli (always more than one, at a minimum S1 and S2) and an observable response (R)— and two corresponding unobservable entities: (1) the subjective counterparts of the stimuli, y1 and y2, and (2) the internal resultant of the integration of these subjective representations, r. The transition between the two external poles of the chain, from Si to R, is credited to three unobservable functions: Valuation, which transforms Si into yi; Integration, which converts the several yi into a unified implicit response r; and the Action or Response operator, which maps r onto an external, observable R. Solving the “problem of the three unobservables” (Anderson, 1981, 1982, 1996, 2001) means, in each concrete case, specifying these three functions. Contrasting with externalist approaches, IIT takes Integration, at the innermost part of the chain, as the basis of an operational solution to this problem. This stems from the fact that the Integration function lawfully embodies psychological structure (via a cognitive algebra), and that this structure contains implicit metrics of both yi and r, which can then be properly measured (functional measurement, hereafter FM) (Anderson, 1981, 1982).
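
In compact form, using the symbols introduced above, the chain can be summarized as follows (a notational restatement of the preceding description, not an addition to the theory):

\[
S_i \xrightarrow{\;V\;} y_i , \qquad (y_1, y_2, \ldots) \xrightarrow{\;I\;} r , \qquad r \xrightarrow{\;A\;} R ,
\]

where V (Valuation), I (Integration), and A (Action/Response) are the three unobservable functions to be specified in each concrete case.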

Ratings and nonverbal responses

Being for the most part a theory of everyday judgment, the R in the IIT chain most typically brings to mind verbal or verbally translatable responses (e.g., numerical or graphical ratings). However, nothing in principle precludes behavioral responses (central in such fields as animal and infant psychology) and physiological responses (common, e.g., in sensory and emotion research) from being used equivalently (Anderson, 1989, 1996). One complication arising in those cases is that the benefits of a sound rating methodology no longer apply. Time and again, the IIT program has shown that, under suitable methodological constraints, ratings can afford linear scales of r, meaning that the action operator is a linear (non-distorting) transformation: R = ar + b, with a and b constant. Behavioral and physiological responses do not afford similar guarantees as regards linearity. Nevertheless, if they keep a monotonic relation to r, and a valid psychological integration rule is known to apply, they can still be transformed into linearity by using the rule as a criterion (Anderson, 1981, 1982, 1996). Considering the criterion role of the integration law, situations where nonverbal responses can be associated with linear ratings appear as the most convenient. Isomorphism between the patterns obtained from ratings and from, say, physiological responses (of particular concern henceforth) would then help in validating them as linear. As with ratings, graphic parallelism associated with adding-type patterns (Anderson, 1981, 1982) would provide the simplest and most favorable situation for assessing the linearity of physiological responses.
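
The criterion role of adding-type rules rests on the parallelism property of functional measurement. A minimal sketch in the notation above (with R_ij the response to the cell combining level i of the first stimulus and level j of the second):

\[
r_{ij} = y_{1i} + y_{2j}, \qquad R_{ij} = a\,r_{ij} + b = a\,y_{1i} + a\,y_{2j} + b ,
\]
\[
R_{ij} - R_{i'j} = a\,(y_{1i} - y_{1i'}) \quad \text{for every column } j ,
\]

so the row curves in the factorial plot are vertically equidistant across columns (parallel), and the interaction term of the factorial ANOVA is zero in expectation. Observed parallelism thus jointly supports the adding-type integration rule and the linearity of the response, whereas a merely monotonic (nonlinear) response would generally distort it.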

Cognitive algebra and lower level processes

To be sure, physiological variables can be used in IIT in ways other than as responses. When used as stimuli (with ratings as responses) they can be treated on an equal footing with other variables, and be attributed functional, psychological values as informers (Anderson, 1989). A particular feature of using them as responses, however, concerns the potential insights to be gained over the knitting of higher (more molar) and lower (more molecular) levels pertaining to cognitive algebra. One common critique of integration rules is that they correspond to disembodied mathematical structures, bearing no consequence for the study of lower level processes (Anderson, 1982). This overlooks the fact that, once established, algebraic rules enforce boundary conditions to which molecular mechanisms are bound to conform (Anderson, 1981, 1982, 1996). In any case, documenting the operation of algebraic rules at chiefly implementational levels (e.g., neurocortical) might assist in highlighting (1) the material embodiment of cognitive algebra (Anderson, 1996) and (2) the recurring function of integration rules as structural elements of processing across distinct levels, from more molecular ones to the phenomenology of everyday cognition (Anderson, 1982). Regarding the r in the IIT chain, some appreciation of more implementational aspects related, for example, to its neural representation, plausibly in the brain, might perhaps be envisaged in certain situations.

The present study

In the present study brain and rating responses were used in parallel in the context of an integration task requiring the combination of emotional information conveyed by two faces. The rated dimension was overall emotion intensity communicated by the face pairs. Among the several possible measures of brain response, event-related-α-desynchronization (ERD) was the one selected. ERD provides a general index of brain activation, defined in the method section.

The topic addressed by the study is the hemispheric lateralization of emotions, particularly in its application to the perception of emotional faces. Three major hypotheses compete in this terrain: (1) the right hemisphere theory, arguing for an overall privilege of the right hemisphere (RH) in emotional processing (Borod et al., 1998); (2) the valence theory, positing a preferential processing of positive emotions by the left hemisphere (LH) and of negative emotions by the RH (Reuter-Lorenz & Davidson, 1981; Davidson, 1995); and (3) the approach-withdrawal theory, proposing that approach-inducing emotions are LH-lateralized and withdrawal-inducing emotions RH-lateralized (Davidson, 2004; Demaree, Everhart, Youngstrom, & Harrison, 2005). The two latter models are sometimes considered as variants and treated indistinctly, but they invoke conceptually distinct lateralization principles (valence versus goal approach/avoidance) and diverge on the predicted lateralization of anger, a negative-valence but approach-inducing emotion.

Although the balance of evidence in recent years has come to favor the approach-withdrawal hypothesis (Davidson, 2004; Demaree et al., 2005), there are data supporting all models (Alves, Aznar-Casanova, & Fukusima, 2009; Anes & Kruer, 2004; Bourne, 2008), so that the overall picture remains one of inconsistency. Much of the behavioral evidence on hemispheric lateralization in non-brain-injured individuals was obtained with the Divided Visual Field (DVF) paradigm, introduced as early as 1952 (Mishkin & Forgays, 1952). The characteristic feature of the DVF method is the ability to selectively stimulate each brain hemisphere, taking advantage of the anatomical arrangement of the visual system. Given the projection of the nasal and temporal hemiretinas to the contralateral and ipsilateral hemispheres, respectively, if a stimulus (e.g., an emotional face) is presented at an adequate eccentricity angle in the left visual field (LVF), while the perceiver fixates ahead, the information is received first by the right hemisphere (RH). A symmetrical result is obtained by presenting the same stimulus in the right visual field (RVF). Contrasting presentations in the LVF and in the RVF in terms of judgment accuracy (e.g., % of correct recognition of the expressed emotion) or response time (RT) then allows straightforward inferences regarding hemispheric asymmetries in information processing.

The current study incorporates the DVF technique in a version adapted to integration studies, first proposed in Anderson (1989) as a way of investigating cerebral organization (Anderson, 2008). The procedure differs from the standard paradigm in the following ways: (a) stimulation is bilateral rather than unilateral, meaning that specific pieces of information are selectively presented to each hemisphere all at once; (b) information is factorially manipulated across channels; (c) the assigned tasks require joint (i.e., integrated) use of the information given to each hemisphere. This allows turning the two hemispheres into factors independently varied in a factorial design, and applying the common IIT and FM methodologies.

As argued in Anderson (1989), this integration approach should allow embedding the study of asymmetries into a broader study of interhemispheric interactions (manifested in the integration patterns) and of the division of labor within a distributed two-fold brain system (reflected, for example, in the relative importance of each hemisphere to the task). More than twenty years after this proposal, it is fair to say that lateralization studies have recognized the need to evolve along these lines. The notion of relative dominance has replaced the earlier notion of absolute hemispheric dominance (Compton, 2002; Tamietto, Corazzini, Gelder, & Geminiani, 2006; Wager, Phan, Liberzon, & Taylor, 2003), and several methods have been advanced for examining inter-hemispheric cooperation with DVF methodology (Bourne, 2006). However, while they all embrace bilateral selective stimulation, they lack all other essential features of the integration approach (neither an analysis of integration operations nor any functional quantification can be based on them). In this way, the 1989 proposal has not lost validity and awaits fulfillment.

One possible limitation noted from the start (Anderson, 1989) concerns the exposure durations associated with the DVF methodology. Tachistoscopic visual presentations of between 150 and 180 ms (Bourne, 2006) are considered a defining trait of the DVF technique, meant to prevent saccades towards the stimulus. These are too short for typical integration tasks, which require somewhat complex judgments and are usually performed without time constraints. However, as remarked by Anderson, who refers here to Klatzky and Atkinson (1971), any laterality effects found should be diagnostic in the end, irrespective of presentation times (Anderson, 1989, p. 179). The chimeric faces test, a more recently developed visual field technique, seems to illustrate just that, by deriving laterality indices without constraints imposed on exposure durations or ocular movements (Bourne, 2010). Meanwhile, the possibility of obtaining sound laterality results with emotional faces bilaterally presented in free viewing conditions (no time or viewing constraints) has been demonstrated by Jansari, Tranel, and Adolphs (2000) and soon after replicated by Rodway, Wright, and Hardie (2003). In the current study, 1-second exposures (several times longer than the tachistoscopic range, which extends to a maximum of about 200 ms) were found convenient for the task and were used.

The described integration approach to cerebral organization was originally meant for use with ratings. As indicated, brain EEG responses are here additionally collected. One advantage sought thereby, identified above, involves the exploration of implementational aspects of the little r as (possibly) a neural representation in the brain. A second advantage is allowing analyses at a more specific, less coarse level than the whole hemisphere. Complex patterns of brain asymmetry have been found for specific brain regions (e.g., the prefrontal cortex) which do not emerge at the level of the entire hemispheres (Wager et al., 2003). The comparison of integration patterns arising from ratings and brain responses can be done for EEG data collected at different cortical sites, thus increasing the analytical bearing of the integration approach. Thanks to this possibility, the inclusion of physiological responses can be looked upon as a straightforward refinement of the approach.

Method

Participants

Thirty volunteer graduate students (20 F, 10 M; mean age: 21 ± 1.7 years) were enrolled in the experiment. All were right-handed (as assessed with the Edinburgh Handedness Inventory: Oldfield, 1971), naïve about the topic under study, and had normal or corrected-to-normal vision.

Stimuli

Stimuli were photographs of emotional faces from the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) and the Japanese and Caucasian Neutral Faces (JACNeuF) databases (Matsumoto & Ekman, 1988). The JACNeuF includes photos of all subjects featured in the JACFEE showing neutral facial expressions. Displays of fear, happiness, and anger with the highest mean intensity ratings (provided by the accompanying documentation) were selected from the JACFEE, and the corresponding neutral faces from the JACNeuF. Each emotional expression and its neutral counterpart were then digitally morphed into each other in equal steps of 33%. This yielded, for each targeted emotion, additional low and intermediate levels of expression intensity, corresponding to the first and second morphs starting from neutral. Facial expressions were further combined into pairs, with one face located at each side (left, right) of the resulting image (see Figure 1).

These pairings were made to embed the full combination of intensity levels for emotions taken two by two, so that for every two emotions (e.g., fear and joy) there were 3 (e.g., intensities of fear) × 3 (e.g., intensities of joy) = 9 pairs, doubled to 18 by exchanging the sides where the faces were presented. Due to limitations of the JACFEE, distinct emotions were illustrated by distinct models, which prevented keeping facial physiognomy constant. Gender, however, was kept constant within each pair. In addition, the full combination of intensities for every emotion (same-emotion pairs) was also implemented, resulting in 27 additional images. In this case, both faces in a pair always belonged to the same model. The entire set of stimuli comprised 81 images.
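
As an illustration of the stimulus bookkeeping (a hypothetical Python sketch; names are purely illustrative and this is not the authors' preparation code), the set of 81 images can be enumerated as follows:

from itertools import product

emotions = ["fear", "joy", "anger"]
intensities = [1, 2, 3]  # 33%, 66%, and 100% morph steps away from neutral

# Distinct-emotions pairs: 3 x 3 intensity combinations per pair of emotions,
# doubled by swapping the side (left/right) on which each face appears.
distinct_pairs = []
for e1, e2 in [("fear", "joy"), ("fear", "anger"), ("joy", "anger")]:
    for i1, i2 in product(intensities, intensities):
        distinct_pairs.append({"left": (e1, i1), "right": (e2, i2)})
        distinct_pairs.append({"left": (e2, i2), "right": (e1, i1)})  # side swap

# Same-emotion pairs: full 3 x 3 combination of intensities for each emotion
# (both faces from the same model, so no side swap is added).
same_pairs = [{"left": (e, i1), "right": (e, i2)}
              for e in emotions for i1, i2 in product(intensities, intensities)]

print(len(distinct_pairs), len(same_pairs), len(distinct_pairs) + len(same_pairs))
# prints: 54 27 81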

Design and procedure

A lateralized emotion perception task with exposure durations of 1 s (cf. Jansari, Tranel, & Adolphs, 2000; Rodway, Wright, & Hardie, 2003) was used, with bilateral selective stimulation of the brain hemispheres (DVF technique). To achieve that, the two emotional faces in a pair were presented in opposite visual hemi-fields, with their inside edges 5º from a fixation marker at the center. The viewing distance at which this angle was obtained was 50 cm, kept constant by means of a chin rest.
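
For reference, the on-screen offset implied by these values follows from simple trigonometry (a check on the reported geometry, not data from the study):

\[
\text{offset} = d \tan\theta = 50\ \text{cm} \times \tan 5^{\circ} \approx 4.4\ \text{cm} ,
\]

that is, the inner edge of each face sat roughly 4.4 cm to either side of the fixation marker at the 50 cm viewing distance.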

Each of the three sets of same-emotion pairings (fear-fear, joy-joy, anger-anger) obeyed a 3 (intensity at the LVF) × 3 (intensity at the RVF) full factorial repeated measures design. Each of the three sets of distinct-emotions pairings (fear-joy, fear-anger, joy-anger) obeyed a 3 (emotion 1) × 3 (emotion 2) × 2 (visual hemi-field: LVF and RVF) repeated measures factorial design. Stimuli were presented interspersed within a single randomized block of 81 trials.

Participants were run individually, after a variable number of training trials. Instructions asked them to judge the overall emotional intensity conveyed by each pair of expressions, while keeping their eyes on the fixation point. They sat in a recliner in a dimly lit room, in front of a VGA monitor, with their head positioned on a chin rest. Answers were given on a graphic rating scale which appeared on the screen 1 second after the offset of the stimulus. All aspects of stimulus presentation and response registration (responses automatically converted to a 0-40 scale) were controlled with SuperLab 4.07, which also triggered the recording of EEG data.

EEG montage, data collection, and analysis

Six EEG leads (locations F3, F4, T3, T4, P3, and P4 of the 10-20 International System) were used, all referenced to Cz. Data were collected at a sampling rate of 150 Hz with a band-pass filter of 0.1-35 Hz. Waves were edited offline according to the experimental conditions defined by the factorial design. Each time epoch included a 2 s baseline period and extended for 10 s after stimulus onset. A spectral analysis was performed via a Fast Fourier Transform (FFT) over the baseline interval and the first second after stimulus onset, and α-power was estimated (mean value in the α band, 8.0-13.0 Hz, expressed in mV²).

Event-related α-desynchronization (α-ERD) was then calculated for each epoch as the post-stimulus decrease in α-power (relative to the pre-stimulus interval) multiplied by 100 and divided by the α-power at baseline. Since brain activation is associated with decreases in α-power, this ERD index expresses the percentage of brain activation at each lead location following the presentation of the stimulus. To ease the reading and interpretation of plots, ERD is always presented in modulus, resulting in a positive scale of brain activation (higher values corresponding to higher activation).
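
A minimal sketch of this computation (illustrative Python/NumPy; the function and variable names are assumptions, not the authors' processing pipeline):

import numpy as np

FS = 150                  # sampling rate in Hz, as reported above
ALPHA_BAND = (8.0, 13.0)  # alpha band limits in Hz

def alpha_power(segment, fs=FS, band=ALPHA_BAND):
    """Mean spectral power in the alpha band for a 1-D EEG segment."""
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(segment)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].mean()

def alpha_erd(epoch, fs=FS, baseline_s=2.0, post_s=1.0):
    """Percentage alpha-ERD: post-stimulus decrease in alpha power relative
    to the pre-stimulus baseline, returned in modulus (positive = activation)."""
    n_base = int(baseline_s * fs)
    baseline = epoch[:n_base]                       # 2-s pre-stimulus interval
    post = epoch[n_base:n_base + int(post_s * fs)]  # first second after onset
    p_base, p_post = alpha_power(baseline), alpha_power(post)
    return abs((p_base - p_post) / p_base * 100.0)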

Graphical and statistical analyses

As is characteristic of IIT/FM, visual inspection of factorial plots aided by repeated-measures ANOVAs constitutes the basic tool of analysis. Similar analyses were conducted on ratings and on brain activation (α-ERD) at each of the cortical sites, including each of the hemispheres at frontal, temporal, and parietal regions.

For convenience, same-emotion and distinct-emotions pairs are dealt with in separate results sections. In the case of same-emotion pairs, analyses were performed for each emotion, with intensity on the LVF and intensity on the RVF as within-subjects factors. For distinct-emotion pairs, each pair of emotions was the subject of analysis, with emotion 1, emotion 2, and visual hemi-field as within-subjects factors. The Greenhouse-Geisser correction for df was used whenever the sphericity condition was violated, and the Bonferroni adjustment procedure adopted when follow-up multiple comparisons were performed to locate significant differences between the levels of a factor (Keselman & Keselman, 1988; Keselman, 1998; Bagiella, Sloan, & Heitjan, 2000).
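
By way of illustration only (the authors do not report their analysis software), a repeated-measures ANOVA of the same-emotion 3 × 3 type can be sketched with statsmodels; the Greenhouse-Geisser correction and Bonferroni-adjusted follow-ups described above would be applied on top of this minimal call. File and column names are hypothetical:

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long-format data: one row per subject x design cell, with columns
# 'subject', 'lvf' (intensity presented to the LVF), 'rvf' (intensity
# presented to the RVF), and 'rating' (the 0-40 graphic-scale response).
df = pd.read_csv("same_emotion_joy.csv")

res = AnovaRM(data=df, depvar="rating", subject="subject",
              within=["lvf", "rvf"]).fit()
print(res.anova_table)  # main effects of LVF and RVF plus the LVF x RVF interaction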

Results

Same-Emotion pairs

Ratings. The factorial plots obtained from ratings of same-emotion pairs are presented in Figure 2, with the emotional expressions presented to the LVF-RH on the abscissa and those presented to the RVF-LH as the curve parameter. Near-parallelism of lines is a feature common to all plots, with only a slight deviation trend in the fear-fear graph (mild narrowing of the vertical spacing at the middle). This was buttressed by statistical analyses with RM ANOVAs, which revealed significant main effects (all ps < 0.001) of both factors (LVF and RVF) and a non-significant LVF × RVF interaction in all cases: Joy, F(4, 116) = 0.143, p = 0.996; Anger, F(3.04, 88.26) = 0.214, p = 0.889; Fear, F(2.72, 79.06) = 1.127, p = 0.340.

These results (visual parallelism and absence of significant interactions) signal an adding-type integration of the emotional information conveyed to each hemisphere, which might correspond to adding or, more probably, averaging with equal weighting (Anderson, 1981, 1982). Granting that some of the displayed emotions are differently lateralized in the cerebral hemispheres (which follows from both the valence and the approach-withdrawal hypotheses), this directly illustrates the embedment of cerebral asymmetries into the unified operation of a distributed bilateral brain system.
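
The reason parallelism cannot by itself separate these two rules can be stated briefly: with equal weights w, the averaging model reduces to an additive form,

\[
R_{ij} = a\,\frac{w\,y_{1i} + w\,y_{2j}}{2w} + b = \frac{a}{2}\,y_{1i} + \frac{a}{2}\,y_{2j} + b ,
\]

so it predicts exactly the same parallel factorial pattern as strict adding; the two rules diverge only under manipulations that change the weights or the number of informers (Anderson, 1981, 1982).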

Also, since algebraic integration models afford the basis for functional measurement (Anderson, 1981, 1982), a quantitative appreciation of the relative contribution of each hemisphere becomes possible (Anderson, 1989). Particularly useful in the context of adding-type models is the relative range index (RRI), which provides an overall measure of relative importance (Anderson, 1981). The RRI corresponds to the ratio of the range of one factor (its effect on the response scale) to the range of the other(s). It can be lawfully used when, besides the finding of parallelism in the data, factors have been manipulated so as to cover some natural (non-arbitrary) range of variation, which was the case here (see the Stimuli section).

For each type of same-emotion pairing, the RRI was computed as the mean range of LVF divided by the sum of the mean ranges of LVF and RVF. Under this form, the RRI can be expressed as a percentage, after multiplying by 100. Since the LVF selectively stimulates the RH, percentages > 50% indicate a larger contribution of the RH, and < 50% a larger contribution of the LH. The obtained values were RRIJoy = 46.5%, RRIAnger = 46.8%, and RRIFear = 49.2%, pointing to a predominance of the LH in joy and anger, and a close to even contribution of both hemispheres in fear. These percentages, however, did not differ significantly from each other (p = 0.535, associated with a one-way RM ANOVA with type of emotion as a within-subjects factor), nor from a reference value of 50% (highest value of t(29) = 1.118, found for anger, with an associated p = 0.074).
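
A sketch of the index for one 3 × 3 pairing (illustrative Python; the cell means below are placeholders, not the study's data):

import numpy as np

# cell_means[i, j]: mean rating for intensity level i on the LVF (rows) and
# intensity level j on the RVF (columns); placeholder values for illustration.
cell_means = np.array([[10.0, 16.0, 22.0],
                       [14.0, 20.0, 26.0],
                       [18.0, 24.0, 30.0]])

# Mean range of a factor: average, over the levels of the other factor, of the
# spread that the factor produces on the response scale.
lvf_mean_range = (cell_means.max(axis=0) - cell_means.min(axis=0)).mean()
rvf_mean_range = (cell_means.max(axis=1) - cell_means.min(axis=1)).mean()

rri = 100.0 * lvf_mean_range / (lvf_mean_range + rvf_mean_range)
print(f"RRI = {rri:.1f}%")  # > 50%: larger RH contribution; < 50%: larger LH contribution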

Brain activation. The factorial patterns obtained by using α-ERD values (percentage of brain activation) in place of the phenomenological ratings can be seen in Figure 3. As brain response was recorded at several sites, plots are presented for the frontal, temporal, and parietal regions and, within each region, for sites located on the right and the left hemispheres (signaled by the even number 4 and the odd number 3, respectively). In every graph, LVF presentations are on the abscissa and RVF presentations are the curve parameter (the same convention adopted with ratings). Visual inspection of the graphs reveals three main characteristics, relating to (1) differences between left and right EEG sites, (2) the LVF × RVF integration patterns, and (3) the suggested relations between the two hemispheres.

Differences between left and right EEG leads. All plots illustrate marked differences between brain activation recorded at left and right locations, varying with emotion type and with the cerebral region. The most salient differences occur at the frontal region, which has been reported to present the largest emotion processing asymmetries (Davidson, 1995, 2003, 2004; Demaree et al., 2005; Fox & Davidson, 1986; Jones & Fox, 1992; Wager et al., 2003). The profile of these frontal differences agrees with the approach-withdrawal lateralization principle, showing much higher activation in F3 (LH) for joy and anger (approach-inducing), and in F4 (RH) for fear (withdrawal-inducing). RM ANOVAs conducted for each emotion type with LVF, RVF, and EEG lead location (right, left) as factors revealed a significant effect of lead location in all cases: Joy, F(1, 29) = 296.551, p < 0.001; Anger, F(1, 29) = 357.633, p < 0.001; Fear, F(1, 29) = 292.833, p < 0.001.

The parietal region exhibits the next clearest differences, consistent this time with the valence lateralization principle. Higher brain activation is displayed in P3 for joy (positive valence) and in P4 for anger and fear (negative valence). These differences were all significant in RM ANOVAs equivalent to those performed over the frontal data (all ps < 0.001, with lowest F = 80.039, found for anger). The finding of lateralization according to valence is at odds with Davidson's early findings (Davidson, Schwartz, Saron, Bennett, & Goleman, 1979; Davidson, 1992) of an absence of emotion-related asymmetries at parietal sites. However, it is partly harmonizable with a variant of the valence theory positing a right parietal mediation of the perceptual processing of emotions, especially negative ones (Hellige, 1993; Killgore & Yurgelun-Todd, 2007).

Finally, temporal sites display higher brain activation at the right EEG lead (T4) for every emotion. These differences were all found significant (highest p = 0.003, associated with joy), concurring with the hypothesis of RH dominance for the perception of all emotions. More specifically, they are consistent with indications of a general involvement of right temporal structures in the perception of facial emotion (Narumoto, Okada, Sadato, Fukui, & Yonekura, 2001; Sato, Kochiyama, Yoshikawa, Naito, & Matsumura, 2004) and in the modulation, transversal to all emotions, of behavioral and autonomic arousal, with higher activation reflecting higher arousal (Heilman & Valenstein, 2012; Hellige, 1993). In this last view, processing of the more arousing aspects of emotion perception might be subserved by right-lateralized temporal circuits.

The overall picture is thus a complex one. The largest activations and activation asymmetries, recorded at frontal leads, support both a prime role of the frontal regions and the approach-withdrawal model of lateralization. However, other regions appear to take charge of distinct dimensions of face emotion processing (valence, possibly arousal-intensity), complying with different lateralization principles. Rather than irreconcilable alternatives, the three major models of cerebral lateralization of emotion perception might actually highlight distinct facets of a complex distributed (within- and across-hemispheres) cortical processing system.

LVF × RVF integration patterns. While some of the plots in Figure 3 suggest an absence of effects of one or both factors (LVF and RVF), manifest as flat horizontal and/or vertically collapsed lines (e.g., joy P4, anger T3, fear T3 and P3), several others involve some visual suggestion of parallelism, despite irregularities. For convenience, results are discussed below for each brain region.

Frontal. RM ANOVAs separately performed by emotion and lead location (left and right) in the frontal region revealed significant main effects of both factors (highest p = 0.015, associated with factor LVF in anger F3) and a non-significant LVF × RVF interaction (lowest p = 0.630, found for joy F4) in all cases except fear F3. In the latter case, there were no effects of either factor (lowest p = 0.121, for RVF). For the frontal region, thus, an adding-type rule for the integration of information conveyed to the LVF and the RVF is also generally found when brain activation is used as a response. Since this algebraic integration is established at each EEG lead, it constitutes strong evidence for a role of inter-hemispheric transfer of information. This is also true for fear F4 (information conveyed to the RVF must have flowed from the left to the right hemisphere to allow integration in F4), despite the absence of effects in F3. Isomorphism with the rule found with ratings is striking, but it should not make us forget that ratings are plausibly expressing the entire contribution of both hemispheres and all their distinct (within-hemispheric) regions. The notion of a hierarchy of integrations is perhaps in order here, with those at the level of specific EEG sites positioned at some intermediate level, not directly subserving the final unified percept. Nevertheless, they document the use of similar (isomorphic) integration rules at distinct levels. A final noteworthy feature is that the preferential lateralization of emotions revealed by asymmetries in activation levels (see the preceding section) is also apparent in the magnitude of the effects of the factors when compared across right and left leads. In accordance with the approach-withdrawal model, both factors have larger effects in F3 than in F4 for joy and anger, and in F4 than in F3 for fear.

Temporal. The RM ANOVAs conducted over data obtained at the temporal leads revealed significant main effects (highest p = 0.019) and a non-significant LVF × RVF interaction (lowest p = 0.067, associated with anger) for anger T4 and fear T4. Only one of the factors had a significant main effect in joy T4 (factor LVF) and joy T3 (factor RVF), and neither factor presented significant effects in anger T3 and fear T3. Despite the smaller number of proper integration patterns (anger T4 and fear T4), the same considerations applying to the frontal region still hold: adding-type (or nearly so) integration rules were still observed, and larger effects of both factors at the hemisphere showing the highest activation (the RH, in this case) were invariably found.

Parietal. In the RM ANOVAs carried out on parietal data, significant main effects of both factors were disclosed for joy P3, anger P4, and fear P4 (highest p value = 0.019, found for RVF in anger), along with non-significant interactions (lowest observed p = 0.453, associated with anger). Only LVF had a main effect in joy P4, and there were no effects associated with any of the factors in anger P3 and fear P3. All three proper integration patterns (main effects of both factors) were thus, as before, of the adding-type. Additionally, integration was only observed at the more strongly activated hemisphere, supporting further a preferential lateralization according to valence at parietal sites.

Relative importance of LVF- and RVF-conveyed information. Just as with ratings, the general finding of adding-type integrations allows using the RRI as an index of the relative importance of the factors (LVF and RVF). It should be noted that, differently from ratings (which plausibly reflect the overall contribution of both hemispheres), the integrations obtained from brain activation are taking place at a particular hemisphere. The RRI thus expresses the relative contribution of the information presented at each visual hemi-field (bilateral stimulation) to the integrated response at a given hemisphere (unilateral integration). The RRI was calculated as before and expressed as a percentage. Values > 50% indicate a larger contribution of LVF-conveyed information. RRI values obtained are presented in Table 1.

With one exception (joy F4), all values indicate a larger contribution of the LVF-presented information. These percentages differed significantly from a 50% reference value for anger F3 (p < 0.001), T4 (p = 0.023), and P4 (p = 0.031), and for joy P3 (p = 0.024). This privilege of the information presented first to the RH has to be distinguished from the lateralization effects clearly documented for distinct emotions by the activation asymmetries and the increased effect of both factors in the preferentially activated hemisphere. It might conjecturally be related to a leading role of the RH in the modulation of the inter-hemispheric activation balance, which has been suggested by Hellige (1993). Or, more cautiously, it might be taken as a sign of the remaining complexities of the distributed processing occurring within and across hemispheres. In any case, it underlines the distinction between the overall integration reflected in the ratings of emotion intensity (showing a slight trend for a larger contribution of the RVF in joy and anger) and the integrations occurring at specific EEG sites (exhibiting a trend for an overall privilege of LVF-conveyed information). As suggested before, these results are generally consistent with a hierarchy of integrations, with each integration organizing and simplifying at every level the underlying processing complexities (see Anderson, 1981, pp. 8-9). The striking feature here is that these consecutive integrations seem to operate mainly according to an adding-type rule.

Relations between the two hemispheres. A third characteristic of the brain activation patterns concerns the relations between contralateral EEG sites. Whenever the effects of one or both factors can be appreciated, increased intensity of expressions has opposite effects at contralateral leads. This is particularly clear for joy and anger at frontal leads (and, less distinctly, at temporal leads): while both LVF- and RVF-conveyed information produce increased brain activation in F3, both factors exhibit decreasing effects in F4. These findings reinforce the already signaled preferential lateralization of joy and anger at F3 (left), but add to that the indication of a symmetric functioning at the two hemispheres. As for fear at all leads, and joy and anger at parietal leads, only the preferential hemisphere shows an (increasing) effect of both factors, along with very low activation and no impact of any factor on the contralateral one. It is not possible to empirically determine the exact meaning of this reversed contralateral functioning, and debate remains unsettled in the literature about the mechanisms of inter-hemispheric coordination (e.g., reciprocal or unilateral inhibition/excitation, suppression, insulation, interference; see Chiarello & Maxfield, 1996). Anyway, given the suitability of averaging operations for integrating opposing, possibly conflicting, tendencies into a single resultant (Anderson, 1981), an arising suggestion is that the found adding-type integration rules correspond to averaging (with equal weighting), rather than adding.

Different-Emotions pairs

Ratings. The plots for the distinct-emotions pairs are presented in Figure 4. The two rows of graphs correspond to the two possible ways of presenting each pair of emotions, with emotion 1 on the LVF and emotion 2 on the RVF and vice-versa (flipping of sides). Suggested near-parallelism is the dominant note, despite some trend for a convergence of lines to the right in several graphs (e.g., the center and rightward graphs in the bottom row). This was supported in all cases by non-significant LVF × RVF interactions in the associated RM ANOVAs (highest F-value = 1.654, p = 0.166, found for the anger-joy pair), along with significant main effects of both factors (all ps < 0.001, lowest F-value = 12.974, found for anger in the joy-anger pair). These results are once again consistent with an adding-type integration rule, applying this time to qualitatively distinct emotional information selectively input to each of the hemispheres.

Given the hypothesis of a differential lateralization of distinct emotions and the support given to it by the analyses of same-emotion pairs (particularly as regards the frontal and parietal regions), a natural question to ask is whether swapping the visual hemi-fields where the emotions are presented has an effect on ratings. To that end, RM ANOVAs were performed for each pairing of emotions with emotion 1, emotion 2, and side of presentation (emotion 1 on the left or on the right) as within-subjects factors. The single significant main effect of side of presentation was found for the pairing of fear and joy, with a higher mean rating (23.19, compared to 21.76) when fear expressions were on the LVF (RH) and joy expressions on the RVF (LH). This difference agrees with what could be expected from the robust right-lateralization of fear at all cortical regions, and the predominant left-lateralization of joy at frontal and parietal sites. No significant interactions of presentation side with any of the other factors (emotion 1 and 2) were disclosed in the ANOVAs, suggesting that, concerning the effects of emotions, there was no detectable impact of lateralization. This was further checked by comparing the F-ratios associated with each emotion on each type of presentation (left or right) with the Schumann-Bradley test (Weiss, 2006), which provided non-significant results in all cases.

Based on the findings of parallelism (graphical and statistical), RRI values were obtained as previously, expressing the relative contribution to the integrated judgment of the emotion presented on the LVF (coming first in the compound designation of each pair): RRIJoy-Fear = 52.4%; RRIFear-Joy = 49.2%; RRIJoy-Anger = 57.3%; RRIAnger-Joy = 40.5%; RRIAnger-Fear = 45.9%; RRIFear-Anger = 52.4%. As before, RRIs > 50% signal a larger contribution of the right hemisphere and < 50% a larger contribution of the left hemisphere. In all three types of pairings, flipping the presentation sides induced a reversal of the relative importance of the two hemispheres. This indicates an effect specific to the emotion categories, such that, irrespective of the preferential lateralization of joy and fear, the hemisphere to which joy is conveyed contributes more than the one stimulated by expressions of fear. Similarly, the hemisphere stimulated by joy (be it the RH or the LH) always contributed more than the one stimulated by anger. Finally, in the anger-fear pairings, whichever hemisphere fear was conveyed to invariably contributed more to the unified judgment than the one selectively stimulated by anger. These emotion-dependent changes in the relative importance of the two hemispheres were found significant for the joy-anger, t(29) = 3.421, p = 0.002, and the anger-fear pairings, t(29) = 2.213, p = 0.039 (two-tailed paired-samples t-tests).

Brain activation. Figure 5 presents the factorial plots obtained from the percentage of brain activation (α-ERD) at the frontal sites for the three pairings of emotions. For reasons of space and economy, temporal and parietal data will not be discussed here. Compared to the ones obtained with same-emotion pairs, one differentiating feature of these plots is the noticeable departure from parallelism in several of them. These departures were reflected in statistically significant interactions for the pairs fear-joy (first emotion presented to the LVF) at F3, F(2.59, 75.118) = 6.383, p < 0.001, and anger-joy at F3, F(2.485, 72.075) = 64.946, p < 0.001.

The reason for the first interaction (fear-joy) is a decreasing effect of fear in F3, more pronounced at the less intense levels of joy, whereas joy operates, in turn, by increasing brain activation. This adds to the finding of a reversed contralateral functioning in some of the same-emotion pairs, by illustrating now an opposing functioning, at the same region of the same hemisphere, of the processing of two distinct emotions. Of note, both fear and joy expressions in this pairing were selectively conveyed to their preferred hemispheres, respectively the RH and the LH (according to the results obtained with same-emotion pairs). Fear processing (which, with same-emotion pairs, had no detectable effects on the LH) is thus exerting an effect on brain activation at the preferred processing site of joy, with the highest level of joy better resisting the depleting effects of fear. Again, the exact significance of this opposing functioning cannot be ascertained. However, the fact that this is but one form of inter-hemispheric interaction among others can be seen by contrasting this pattern with the one for joy-fear at F3. When conveyed to their non-preferred hemispheres, these same two emotions integrate at F3 according to an adding-type rule (joy × fear interaction: F(2.95, 85.60) = 2.952, p = 0.936), and both emotions have a significant increasing effect on brain activation (highest p = 0.006, associated with fear). Also instructive is the comparison with the pattern for fear-joy (again, both kinds of expressions conveyed to their preferred hemispheres) at F4. Contrary to the effect of fear in F3, joy in F4 (the preferred site for the processing of fear) significantly increases, rather than decreases, brain activation (associated p < 0.001). Thus, no symmetry exists in the way fear and joy integrate in each hemisphere or interact across hemispheres. Meanwhile, a robust confirmation of the preferred lateralization of fear and joy is given by the differences in brain activation between joy-fear and fear-joy presentations. The latter form of presentation (fear to the LVF, joy to the RVF) results in significantly higher α-ERD values at both F3, F(1, 29) = 197.410, p < 0.001, and F4, F(1, 29) = 3064.074, p < 0.001, consistent with a preferential right-lateralization of fear and left-lateralization of joy.

A similar analysis applies to the pair fear (LVF) - anger (RVF) (see the two rightward graphs in the bottom row of Figure 5). Even if it did not manifest as a statistically significant interaction, a depleting effect of anger, more pronounced at the lower levels of fear, can be seen in F4, while fear works conversely to increase brain activation. Since both anger and fear in this pairing were sent to their preferred hemispheres, this provides an analogue of the previous situation, with anger behaving in relation to fear as fear did in relation to joy. As before, this no longer happens when the emotion expressions are sent to their non-preferred hemispheres: in anger (LVF)-fear (RVF) at F4, both emotions have a significant positive effect on brain activation (highest p = 0.003, found for fear). Finally, no effects of fear symmetrical to those of anger occur at F3 for the fear-anger pair (fear has no significant effect at this site, p = 0.661).

Significantly larger activations at both F3 and F4 are associated with the pair fear (LVF)-anger (RVF), as compared to anger-fear (highest p = 0.003, for F3), consistent with a preferential lateralization of fear on the RH and of anger on the LH. The other significant interaction, concerning anger-joy at F3, illustrates a distinct situation. Since both anger and joy are left-lateralized, one of the emotions (anger) is first conveyed to its non-preferred hemisphere (RH) and the other (joy) to its preferred one (LH). The found interaction mainly reflects a fanning of lines to the right, associated with a significant linear × linear component, F(1, 29) = 147.48, p < 0.001. Rather than opposite effects, thus, both emotions now appear to reinforce each other at F3 (while they both act to decrease brain activation at F4). Consistently higher activation in F3 as compared to F4, irrespective of the side of presentation of each emotion (associated p < 0.001, both in joy-anger and anger-joy), further supports the preferential lateralization of both emotions on the left frontal hemisphere.

The finding of both competing and reinforcing effects of distinct emotional information may be broadly relatable to the suggested involvement of both excitatory and inhibitory processes in the coordination of the cerebral hemispheres (van der Knaap & van der Ham, 2011). However, the integration approach provides an explicit analysis of the dependencies of these regulatory mechanisms on both functional lateralization and the variable relations (often asymmetric) holding between emotions, which any candidate account in terms of neural inhibition/excitation is bound to accommodate. Also, by illustrating new instances of conflicting tendencies in the processing of emotional informers, the analysis of distinct-emotions pairs further contributes to the notion that averaging, not adding, plausibly underlies the often observed adding-type rule. Indeed, even the non-additive pattern disclosed for fear-joy at F3 (just as the similar pattern for fear-anger at F4) can be interpreted as differential-weighting averaging (Anderson, 1981, 1982). Under this view, integration by averaging affords one important way for the two hemispheres to work out their differences (Chiarello & Maxfield, 1996).
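
In the notation used earlier, the differential-weighting case can be written as

\[
r_{ij} = \frac{w_{1i}\,y_{1i} + w_{2j}\,y_{2j}}{w_{1i} + w_{2j}} ,
\]

where the weights w vary with the level of the informer; because the denominator then changes from cell to cell, the factorial curves converge or fan out rather than remaining parallel, while equal weights recover the parallel, adding-type pattern as a special case (Anderson, 1981, 1982).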

RRI values were calculated as before (relative contribution of LVF-conveyed information to the response) for patterns where an adding-type rule was established (significant main effects of both factors and a non-significant interaction term). They are presented in Table 2, including for comparison, in the first row, the RRI values previously obtained for ratings. Values > 50% indicate a larger contribution of information arriving first at the RH; values < 50%, conversely, a greater contribution of information selectively transmitted to the LH.

No exact correspondences can be made between the RRIs for ratings and frontal brain responses. As discussed above, the RRIs for ratings reflected in all cases a privilege of one of the emotions in a pair (the side of presentation of joy determines the inter-hemispheric balance when combined with fear or with anger; the side of presentation of fear determines the relative importance of the two hemispheres when combined with anger). This is not manifest in the RRIs for brain activation, even if activation differences in F4 within each type of pairing match the directional changes observed for ratings (a decrease from joy-fear to fear-joy and from joy-anger to anger-joy; an increase from anger-fear to fear-anger). Summarizing an earlier suggestion, this should be taken as a reminder that the integrations established at specific EEG sites do not directly subserve the final unified percept, which plausibly requires a higher order integration of all distributed contributions (including temporal and parietal) across both hemispheres.

Discussion

Results obtained in this study are fourfold, concerning (a) the debate over the functional lateralization of perceived emotional faces; (b) the patterns of inter-hemispheric cooperation and their relative importance; (c) the cortical underpinnings of the little r; and (d) the viability and potential of the integration approach to cerebral organization. They are all linked to each other and ultimately converge on (d).

As regards cerebral lateralization, unambiguous converging evidence was provided for a prime role of frontal asymmetries in face emotion perception, and for the approach-withdrawal model at the level of the frontal regions. Additional robust evidence was found for distinct lateralization strategies at other brain regions, with valence as an organizing principle at parietal sites, and an overall predominance of the RH at temporal sites. According to these data, all the major lateralization models of emotion perception to date (RH dominance, valence-dependent asymmetries, and the approach-withdrawal hypothesis) ultimately express some particular aspect of the processing of emotion across a broadly and bilaterally distributed brain network.

Concerning the relations between the two hemispheres, direct support for a role of cross-hemisphere transfer of information was obtained with both ratings and brain activation at distinct EEG sites. Inter-hemispheric inhibition has been commonly invoked as a regulatory mechanism of cortical processing (Chiarello & Maxfield, 1996). Among the several varieties of inhibition proposed, hemispheric isolation (understood as the suspension of cross-lateral information transfer) is clearly disavowed by the present results. As for other varieties of inhibition (e.g., reciprocal/unilateral inhibition, interference), the findings of reversed effects of one and the same emotion at contralateral sites, and of conflicting functioning of distinct emotions at the same hemisphere, are still compatible with their operation. However, the results also strongly limit the generality of any of these mechanisms in different ways. First, opposing/competing functioning among distinct emotions was only observed when both emotional informers were conveyed to their preferred hemispheres, not otherwise. Second, the finding of competing processing tendencies for two emotions at a given hemisphere did not entail a symmetric opposing functioning at the contralateral site (as illustrated with the fear-joy and the fear-anger pairs at F3 and F4). Finally, seeming instances of positive reinforcement (co-activation) between the effects of two emotions on a same hemisphere were also observed (anger-joy pair at F3). These constraints should be accounted for by any comprehensive explanation of inter-hemispheric coordination in terms of inhibition/activation balances.

Adding-type integration rules were predominant with both ratings and brain activation. Given the conflicting tendencies documented in the cortical processing of emotion informers, this rule likely corresponds to averaging with equal weighting, as an easy means of conflict resolution. Despite the isomorphism in the rules, several aspects of the patterns of brain activation are not directly reflected in the ratings. This is perhaps most apparent in those few cases where patterns of activation deviated from linearity (fear-joy at F3, anger-joy at F3), finding no counterpart in the steady parallelism of the ratings. However, the divergence between RRI values (which index the relative contribution of LVF-conveyed information) computed for ratings and for brain responses affords the most widespread evidence for a distinction between integration at the probed brain sites and integration as manifested in the ratings. The notion of a hierarchy of integrations, with the one subserving the final ratings at the highest level and those localizable at specific brain regions at lower/intermediate levels, is perhaps the most in line with the evidence gathered. This being the case, one entailed consequence is that any suitable cortical representation of the inner r can only be found at the level of the entire distributed (within- and across-hemispheres) underlying processing network.

Compared to extant alternative approaches, the obtained results were particularly tidy as regards functional asymmetries and unusually encompassing as regards outwardly competing lateralization principles. This should be credited to the quantitative, analytical character of the integration approach, establishing it not only as practically feasible but also as comparatively advantageous. In this last regard, further contributions can legitimately be expected from the approach. As an illustration, supplementary results were obtained with selective unilateral (in lieu of bilateral) presentations of pairs of emotional faces to a single hemisphere (see Anderson, 1989, for the associated rationale). Besides an independent confirmation of the lateralization patterns unveiled at distinct brain regions (frontal, temporal, parietal), additional support for the distributed nature of the cortical representation of r was thereby acquired (for a partial presentation of these results, see Pereira, Oliveira, & Fonseca, 2012).

Neurophysiological measures other than α-ERD can moreover be tried out and used, such as event-related potentials (ERPs). Targeting integration patterns at the level of specific ERP components seems an attractive prospect when theory or prior evidence suggests them as a plausible locus for integration. Earlier attempts at implementing the integration approach with ERPs as responses (in parallel with phenomenological ratings) can be found in Oliveira, Fonseca, Teixeira, and Santos (2003) and Fonseca, Oliveira, Teixeira, Santos, and Simões (2005). These studies, which addressed the integration of valence and arousal at a late component (500-700 ms) of the P300, met with difficulties in achieving an orthogonal manipulation of the experimental factors and ended up inconclusive. However, for distinct problems and independent variables, the ERP-based approach might still prove valuable. Similar considerations apply to autonomic physiological variables, which could as well be explored as integration responses. A study coming close to that goal, employing heart sinus arrhythmia and galvanic skin response (GSR), was conducted by Pereira, Fonseca, and Oliveira (2005). However, RT and percentage of correct emotion recognition were used instead of ratings, which heavily limits the integration approach. One still unexplored avenue involving the parallel use of physiological responses (particularly EEG-based) and ratings concerns time-dependent changes in the cortical dynamics of the integration. Examining the unfolding of brain responses at distinct time epochs might shed new light on the mechanisms of both intra-hemispheric (across regions within each hemisphere) and inter-hemispheric integration, while still keeping a reference to the overall integration patterns obtained from ratings.

Acknowledgement

This work was supported by Grant PTDC/PSI/73406/2006 from the Portuguese Foundation for Science and Technology.


References

Alves, N. T., Aznar-Casanova, J. A., & Fukusima, S. S. (2009). Patterns of brain asymmetry in the perception of positive and negative facial expressions. Laterality , 14 (3), 256-272.         [ Links ]

Anderson, N. H. (1981). Foundations of information integration theory . New York: Academic Press.         [ Links ]

Anderson, N. H. (1982). Methods of information integration theory . New York: Academic Press.         [ Links ]

Anderson, N. H. (1989). Information integration approach to emotions and their measurement. In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience. Volume 4: The measurement of emotions (pp. 133-186). San Diego, CA: Academic Press.         [ Links ]

Anderson, N. H. (1996). A functional theory of cognition . Mahwha, NJ: Lawrence Erlbaum Associates.         [ Links ]

Anderson, N. H. (2001). Empirical directions in design and analysis . Mahwah, NJ: Lawrence Eralbaum Associates.         [ Links ]

Anderson, N. H. (2008). Unified social cognition. New York: Psychology Press.

Anes, M. D., & Kruer, J. L. (2004). Investigating hemispheric specialization in a novel face-word Stroop task. Brain and Language, 89(1), 136-141.

Bagiella, E., Sloan, R., & Heitjan, D. (2000). Mixed-effects models in psychophysiology. Psychophysiology, 37, 13-20.

Borod, J. C., Cicero, B. A., Obler, L. K., Welkowitz, J., Erhan, H. M., Santschi, C., & Whalen, J. R. (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12(3), 446-458.

Bourne, V. (2006). The divided visual field paradigm: Methodological considerations. Laterality, 11(4), 373-393.

Bourne, V. (2008). Examining the relationship between degree of handedness and degree of cerebral lateralization for processing facial emotion. Neuropsychology, 22(3), 350-356.

Bourne, V. (2010). How are emotions lateralised in the brain? Contrasting existing hypotheses using the chimeric faces test. Cognition and Emotion, 24, 903-911.

Chiarello, C., & Maxfield, L. (1996). Varieties of interhemispheric inhibition, or how to keep a good hemisphere down. Brain and Cognition, 30, 81-108.

Compton, R. J. (2002). Inter-hemispheric interaction facilitates face processing. Neuropsychologia, 40(13), 2409-2419.

Davidson, R. J., Schwartz, G. E., Saron, C., Bennett, J., & Goleman, D. J. (1979). Frontal versus parietal EEG asymmetry during positive and negative affect. Psychophysiology, 16, 202-203.

Davidson, R. J. (1992). Emotion and affective style: Hemispheric substrates. Psychological Science, 3, 39-43.

Davidson, R. J. (1995). Cerebral asymmetry, emotion, and affective style. In R. J. Davidson (Ed.), Brain asymmetry (pp. 735). Cambridge, MA: MIT Press.

Davidson, R. J. (2003). Affective neuroscience and psychophysiology: Toward a synthesis. Psychophysiology, 40(5), 655-665.

Davidson, R. J. (2004). What does the prefrontal cortex "do" in affect: Perspectives on frontal EEG asymmetry research. Biological Psychology, 67, 219-233.

Demaree, H. A., Everhart, D. E., Youngstrom, E. A., & Harrison, D. W. (2005). Brain lateralization of emotional processing: Historical roots and a future incorporating dominance. Behavioral and Cognitive Neuroscience Reviews, 4, 3-20.

Fonseca, I., Oliveira, A., Teixeira, M., Santos, E. J. R., & Simões, F. (2005). Comparing oddball and free context ERP paradigms in evaluative tasks. In J. Monahan, J. Townsend & S. Sheffert (Eds.), Fechner Day 2005 (pp. 97-100). Mt. Pleasant, MI: The International Society for Psychophysics.

Fox, N. A., & Davidson, R. J. (1986). Taste-elicited changes in facial signs of emotion and the asymmetry of brain electrical activity in human newborns. Neuropsychologia, 24(3), 417-422.

Heilman, K. M., & Valenstein, E. (2012). Clinical neuropsychology (5th ed.). New York, NY: Oxford University Press.

Hellige, J. (1993). Hemispheric asymmetry. Cambridge, MA: Harvard University Press.

Jansari, A., Tranel, D., & Adolphs, R. (2000). A valence-specific lateral bias for discriminating emotional facial expressions in free field. Cognition and Emotion, 14, 341-353.

Jones, N. A., & Fox, N. (1992). Electroencephalogram asymmetry during emotionally evocative films and its relation to positive and negative affectivity. Brain and Cognition, 20, 280-299.

Keselman, H. (1998). Testing treatment effects in repeated measures designs: An update for psychophysiological researchers. Psychophysiology, 35, 470-478.

Keselman, H., & Keselman, J. (1988). Comparing repeated measures means in factorial designs. Psychophysiology, 25, 612-618.

Killgore, W., & Yurgelun-Todd, D. (2007). The right-hemisphere and valence hypotheses: Could they both be right (and sometimes left)? Social Cognitive and Affective Neuroscience, 2(3), 240-250.

Klatzky, R. L., & Atkinson, R. C. (1971). Specialization of the cerebral hemispheres in scanning for information in short-term memory. Perception & Psychophysics, 10(5), 335-338.

Matsumoto, D., & Ekman, P. (1988). Japanese and Caucasian facial expressions of emotion (JACFEE). San Francisco, CA: San Francisco State University, Intercultural and Emotion Research Laboratory.

Mishkin, M., & Forgays, D. G. (1952). Word recognition as a function of retinal locus. Journal of Experimental Psychology, 43, 43-48.

Narumoto, J., Okada, T., Sadato, N., Fukui, K., & Yonekura, Y. (2001). Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Cognitive Brain Research, 12(2), 225-231.

Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97-113.

Oliveira, A. M., Fonseca, I. B., Teixeira, M., & Santos, E. J. R. (2003). Do valence and arousal prompt the same or different ERP components? Pointing towards an inner psychophysics. In B. Berglund & E. Borg (Eds.), Fechner Day 2003 (pp. 217-222). Stockholm: International Society for Psychophysics.

Pereira, T., Fonseca, I., & Oliveira, A. (2005). Influence of emotion category and intensity on affective brain processing: CNS and ANS indices. In J. Monahan, J. Townsend & S. Sheffert (Eds.), Fechner Day 2005 (pp. 267-272). Mt. Pleasant, MI: The International Society for Psychophysics.

Pereira, T. S., Oliveira, A. M., & Fonseca, I. B. (2012). The observable R and the unobservable r: Brain and rating responses in a free viewing task with pairs of emotional faces. In C. Leth-Steensen (Ed.), Proceedings of the 28th Annual Meeting of the International Society for Psychophysics (pp. 209-213). Ottawa, Canada: International Society for Psychophysics.

Reuter-Lorenz, P., & Davidson, R. J. (1981). Differential contributions of the two cerebral hemispheres to the perception of happy and sad faces. Neuropsychologia, 19(4), 609-613.

Rodway, P., Wright, L., & Hardie, S. (2003). The valence-specific laterality effect in free viewing conditions: The influence of sex, handedness, and response bias. Brain and Cognition, 53, 452-463.

Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., & Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research, 20(1), 81-91.

Tamietto, M., Corazzini, L., de Gelder, B., & Geminiani, G. (2006). Functional asymmetry and interhemispheric cooperation in the perception of emotions from facial expressions. Experimental Brain Research, 171, 389-404.

van der Knaap, L. J., & van der Ham, I. J. (2011). How does the corpus callosum mediate interhemispheric transfer? A review. Behavioural Brain Research, 223(1), 211-221.

Wager, T. D., Phan, K. L., Liberzon, I., & Taylor, S. F. (2003). Valence, gender, and lateralization of functional brain anatomy in emotion: A meta-analysis of findings from neuroimaging. NeuroImage, 19, 513-531.

Weiss, D. (2006). Analysis of variance and functional measurement: A practical guide. Oxford: Oxford University Press.
