
Avances en Psicología Latinoamericana

Print version ISSN 1794-4724 / Online version ISSN 2145-4515

Av. Psicol. Latinoam. v.26 n.1 Bogotá Jan./June 2008

 

Colamus humanitatem: Nurturing human nature

Mauricio R. Papini*

* Texas Christian University, U. S. A. Please correspond with: Mauricio R. Papini. Department of Psychology, Texas Christian University. Box 298920, Fort Worth, TX 76129, USA. E-mail: m.papini@tcu.edu.

Received: September 2007
Accepted: March 2008


Abstract

In an essay on anger, the ancient philosopher Seneca warns of the futility of harboring negative emotions given the imminence of death—the ultimate human equalizer. Ancient philosophers like Seneca believed that emotions are based on cognitions (beliefs) and are therefore modifiable through spiritual exercises. Modern research shows that the emotional and cognitive aspects of human psychology are malleable (nurture), but also require gene expression (nature). A parallel between individual behavior and socio-political forces suggests a framework for the current environmental crisis—another human equalizer. Two critical questions are suggested: Is the amassed experience of the last few centuries sufficient to lead to corrective measures that would avoid environmental degradation? Or would a catastrophic event with significant long-term environmental degradation have to occur before corrective measures reach consensus at the socio-political level?

Key words: emotions, cognition, epigenesis, us-versus-them view, environmental crisis.

Resumen

En un ensayo sobre la ira, el filósofo antiguo Séneca advierte sobre la inutilidad de albergar emociones negativas, dada la inminencia de la muerte, condición que, en últimas, nos hace iguales como humanos. Los antiguos filósofos, como Séneca, creían que las emociones estaban basadas en cogniciones y que por eso eran modificables a través de ejercicios espirituales. Las investigaciones actuales demuestran que los aspectos cognitivos y emocionales de la psicología humana son maleables (crianza) pero que también requieren de expresión genética (naturaleza). Un paralelo entre el comportamiento individual y las fuerzas socio-políticas sugiere un marco para la crisis ambiental actual, otro “ecualizador” humano. Dos preguntas críticas son sugeridas: ¿es suficiente la experiencia acumulada de los dos últimos siglos para conseguir medidas correctivas que puedan impedir la degradación ambiental? o ¿es necesario que ocurra un evento catastrófico de degradación ambiental significativa a largo plazo para que las medidas correctivas puedan alcanzar consenso en el nivel socio-político?

Palabras clave: emociones, cognición, epigénesis, visión de nosotros- versus-ellos, crisis ambiental.


Seneca’s Plea

Seneca ended his essay On Anger with an expressive plea on the ephemeral character of life and on the futility of harboring contempt for others. According to Seneca, angry men look like the bull and the bear, tied together at the arena, fighting with each other while the slayer patiently waits for the right moment. What, then, is the point of their anger? Similarly, human beings are tied by their mortality, which makes anger unnecessary and irrelevant. Would we not benefit more from a peaceful attitude toward ourselves and others? Seneca (III, 43, 5):

Soon shall we spew forth this frail spirit. Meanwhile, so long as we draw breath, so long as we live among men, let us cherish humanity (colamus humanitatem). Let us not cause fear to any man, nor danger; let us scorn losses, wrongs, abuse, and taunts, and let us endure with heroic mind our short-lived ills. While we are looking back, as they say, and turning around, straightaway death will be upon us.

Seneca’s plea is powerful because it touches on some of the most basic aspects of human nature. First, the entire plea crumbles unless we assume that Seneca’s reader is capable of imagining death and, more precisely, his or her own death. What would be the point of this passage if it were directed at a being incapable of understanding, at some cognitive level, that life has a discrete dimension—that it starts and ends at definite points in time? Perhaps African elephants are “thinking” about these very issues when they gather to touch the bones of their predecessors (Moss, 1988). Likewise, it seems possible that a young chimpanzee is grieving when it falls into a state of quiescence for many days after the death of its mother, showing signs strikingly similar to those of bereavement (Goodall, 1986). But are these just examples of grief, or do they also imply the cognitive capacity to think about death? Whereas we are uncertain about the extent to which the ability to think about death is uniquely human or shared with other species, there is no doubt that this cognitive or intellectual capacity is part of our mental experience and endowment. The concept of death may be seen as the pinnacle of what is a sophisticated cognitive capacity for representing the physical, social, and technological worlds in which humans live.

Despite the equalizing razor of death and the capacity to be aware of their own finitude, humans engage in seemingly irrational surges of feeling that dominate behavior, decisions, and daily experience. Such emotions and affective states seem irrational in the sense that they often lead to consequences that are against the person’s own interest and that they are immune to intentional control. Ancient philosophers, from Socrates to Seneca, may have held a different view, allowing for emotions to be based upon sets of beliefs (Hadot, 1995; Nussbaum, 1994). To Seneca, anger cannot be aroused without the “approval of the mind,” that is, without a person’s conscious belief that somebody has willfully made him or her the object of important and undeserved wrongdoing. Seneca (II, 1, 5) summarizes his definition of anger by arguing that the mind has grasped something, has become indignant, has condemned the act, and now tries to avenge it.

But is anger reducible to these beliefs? Or do the beliefs merely release anger? It would seem that the reductionist argument that turns emotions into a special case of cognitions like beliefs is like arguing that the entire event of a gunshot can be reduced to the gun’s triggering mechanism, without reference to the gunpowder and the bullet. Seneca himself would seem to be suggesting that the control, or even eradication, of anger (and of all the emotions, by implication) can be achieved by educating the person about the factors that activate the triggering mechanism, so that the “irrational surges” never occur (see below).

The second assumption behind Seneca’s opening plea is, then, that humans have an enormous capacity to experience emotions, some as extreme as anger, and to fall under their influence, rightly or not, in an almost ballistic manner. Indeed, folk psychology paints a picture according to which the self is at the mercy of its emotions: A person can experience emotions, but cannot hope to control them in any fundamental way. Moreover, Western culture seems to be imbued with the notion that not only are emotions uncontrollable, but also that attempts to repress them may lead to even worse consequences, including mental disorder and physical pathology.

Extending Seneca’s plea to build a more general view, I suggest that it includes two major, irreducible capacities that characterize human psychology: the cognitive capacity to represent a vast variety of phenomena, including the person’s own death in the future, and the ability to be aroused by emotions, affections, and feelings, including anger. The distinction between the cognitive and emotional aspects of mental experience is supported by modern psychology. In contemporary learning theory, for example, a stimulus associated with an important event, such as food or pain, not only has cognitive value (i.e., it predicts the important event), but also acquires emotional value (i.e., it becomes important in its own right). The same stimulus can, therefore, activate both cognitive and emotional processes, each experimentally dissociable (Rescorla, 1980). Furthermore, the brain develops separate representations of these processes. Indeed, the mammalian brain seems to be especially equipped to learn both about the external world, or allocentric learning, and about the organism’s own emotional reaction to events in the environment, or egocentric learning (Papini, 2003). For example (Bechara et al., 1995), human patients with lesions in the amygdala (a structure located deep within the temporal lobe of the brain) can acquire knowledge about a simple association between a visual stimulus and a startling loud noise, but show no evidence of emotional arousal to the visual stimulus; conversely, patients with lesions in the hippocampus (also located deep within the brain hemispheres) acquire the emotional arousal, but cannot describe the facts of such training.

However dissociable, cognitions and emotions are simultaneous in terms of subjective experience. Imagine that you have recently experienced a house burglary and, one night, you hear a sudden noise inside your house. The noise triggers two brain events that, while felt as a unitary phenomenon, actually occur independently. One event is the cognition that something dangerous is impending, whereas the other is an emotional state of fear; both events originate in your previous experience of the burglary. Despite the unified introspective experience that follows confrontation with an important event about which a person acquires both allocentric and egocentric information, the brain proceeds to analytically separate these two kinds of information into different streams of processing. On this basis, the distinction between cognition and emotion remains viable, despite their interdependence.

There is a third and final component in Seneca’s opening plea that he, as a Stoic, shared with other ancient philosophers (e.g., the Epicureans), and that relates to the interdependence just mentioned. If anger arises with “the approval of the mind,” if it is based on beliefs, then anything powerful enough to change a person’s belief should just as powerfully modify that person’s emotions, no matter how intense. And since beliefs are essentially forms of cognition, it follows that emotions can be influenced with arguments, self-inspection, evaluation, and other forms of intellectual activity. Thus, just as one becomes angry if one believes oneself to be the target of somebody’s significant and willful wrongdoing, anger would dissipate if one were persuaded that the damage was trivial or unintentional (Nussbaum, 1994). Similarly, the night fear prompted by a sudden noise may recede immediately if the noise is believed to be caused by the falling of a framed picture, one that has fallen before and that poses no threat.

The ancient conception that human emotions are based upon beliefs provides justification for the claim that classic thinkers thought of philosophy as a “way of life,” a practice aiming at the “good life,” a life free from disturbance, serene, and emancipated from the oppression of anger, fear, frustration, greed, anxiety, hate, and other similarly degrading emotions. According to Hadot (1995), the philosophers of ancient Greece and Rome were fundamentally interested in developing and teaching a series of exercises designed to develop control over these aversive states of mind. Even the most fundamental worry, the fear of death, was conceptualized as being vulnerable to training, provided the disciple was willing to undergo radical changes in lifestyle through the practice of spiritual exercises. For the present purpose, the major point derived from ancient philosophy is that the fundamental properties of human nature, the cognitive and emotional capacities, are essentially malleable, modifiable, and subject to the force of experience. On this basis, I argue both that fundamental elements of human nature can be nurtured and that, as a consequence, nurture is actually required for human beings to become completely human in psychosocial terms.

Pliocene Park

Human capacities for cognition and emotion are inseparable from the shaping influence of experience. According to the view encapsulated in Seneca’s plea, nurture is an inseparable part of human nature. Then “colamus humanitatem” implies both cherishing and cultivating humanity as well as learning to be human—human in the common sense of the word. The diversity of views on human nature is more than just the result of philosophical traditions in Western culture. Rather, it is the result of the malleability of higher cognitive and emotional capacities, as they influence individual thinkers during their lives. Consider the scenario of Michael Crichton’s Jurassic Park, only imagine that the scientists are part of an extraterrestrial culture studying human DNA extracted from fossil material. Imagine, furthermore, that humans disappeared long ago and that these alien scientists have no clue about them, except for fragmentary remains of bones and technology. As humans are cloned and begin to function in their surroundings, would they behave as we do today, or would their behavior be archaic? Would they acquire some language and speak to each other, or just stare with empty minds? There is even reason to question whether these individuals would adopt the bipedal posture at all, for although their bones would have the capacity to support such a gait, social input may be required for its development.

There is one condition that comes close to this fantastic scenario—that of children who, after an accidental separation from their parents, are lost in a forest and are reared by nonhuman surrogate parents, such as wolves or monkeys (Candland, 1993). It is the sad but interesting story of children whose biological nature is fully human but whose nurturing, because of their unusual developmental conditions, did not originate in others like them. The most striking results are observed in cases in which the child was lost at a very early age and found years later. Such cases reveal otherwise biologically normal children who behave more in tune with the behavior of their adoptive parents than with that of their own species. In some cases, these children show resistance to learning even some basic motor skills (such as using utensils and sitting at a table during a meal), and may learn to understand and use language only with great difficulty. Most interestingly, a typical observation is that they move around using all four limbs, just as their surrogate parents do. Although in some cases feral children learn to walk in the human fashion, this happens only after explicit training. In other words, walking upright is not engraved in the human genome. Given that bipedalism is generally taken to be the single most important novel trait distinguishing members of the hominid family from all other primates since the early Pliocene (e.g., Willoughby, 2005), the difficulty some feral children have in learning to walk erect is particularly revealing.

Although we do not know for sure that the Jurassic Park scenario would apply to dinosaurs, as described in Crichton’s novel (i.e., would dinosaur “normal” behavior arise in their cloned relatives in the absence of continuity across generations?), we can be reasonably confident that, if the scenario were applied to humans, the properties of the resulting product would be impossible to predict—particularly without knowing the biological and social characteristics of the alien species to which these humans would be exposed during their early development. Human nature is designed to respond to nurturing inputs that are not themselves encoded in the DNA hardware. Perhaps the paradigmatic example of just such a lock-and-key disposition is provided by Edgar R. Burroughs’ novel Tarzan of the Apes.

Tarzan’s parents, the noble Lord Greystoke and Lady Alice, died when he was a baby, leaving him to be adopted by an African ape. The ape mother had found in this strange white infant a substitute for her own, who had been brutally killed by the band’s leader. She protected and educated Tarzan against the counsel of other apes that saw nothing of value in this small, weak creature. Burroughs struggled to strike a balance between nature and nurture, showing Tarzan thinking like the ape that he was raised to be, but responding to some nurturing opportunities as no ape around him would. Thus, when Tarzan discovered his father’s books, he became intrigued by the “little bugs” that accompanied every picture and managed to remain motivated for years to study them in detail. He eventually developed the ability to read, for the little bugs were nothing but words. Like an archeologist evaluating Egyptian hieroglyphs, Tarzan could read and write English, but had no idea how written words related to speech. That is, his brain responded to just the type of input available, and it was this input that triggered his natural abilities, allowing them to be expressed. Just as the pictures and words awakened Tarzan’s human cognitive skills, a different type of input, a young American girl named Jane Porter, served to arouse love. In an interesting passage, Tarzan gently places a pendant around Jane’s neck as a gift; surprised by this gesture, Jane kisses the pendant. Tarzan does not fully understand her action, but guesses that it must be her way of acknowledging the gift. Thus, Tarzan

rose, and taking the locket in his hand, stooped gravely like some courtier of old, and pressed his lips upon it where hers had rested. It was a stately and gallant little compliment performed with the grace and dignity of utter unconsciousness of self. It was the hall-mark of his aristocratic birth, the natural outcropping of many generations of fine breeding, an hereditary instinct of graciousness which a life-time of uncouth and savage training and environment could not eradicate. […]  Contact with this girl for half a day had left a very different Tarzan from the one on whom the morning’s sun had risen. Now, in every fiber of his being, heredity spoke louder than training. (Burroughs, 1914/2003, p. 179.)

What seems utterly obvious is how Tarzan’s “heredity” and “aristocratic birth” would have been silenced forever, had the appropriate nurturing experience of love never presented itself in the African jungle.

Breaking the Dichotomy

One may ask: What do the literary example of Tarzan and the scientific study of feral children bring to the discussion that was not already there? The distinction is both subtle and important, and the best way to appreciate the difference is to present it against a background provided by the two traditional approaches to understanding human nature: nativism and empiricism. I will not claim that the view presented here is novel; in fact, its ties to the epigenetic view of development are easily recognizable to the expert. As I will argue below, however, some of the elements of this epigenetic view can now be defined more clearly and some broader implications explored in more detail (e.g., the connection between views of human nature and the environmental crisis).

Traditionally, human nature has been conceptualized in terms of two extreme positions, depending on whether heredity (nativism) or experience (empiricism) is considered the most fundamental source. When pitted against each other, these positions constitute what is also known as the nature-nurture dichotomy. Nativism was introduced into psychology by Francis Galton, who suggested that even the most complex psychological functions are inherited. Galton was influenced by the evolutionary ideas of his cousin, Charles Darwin, and devoted the last years of his life to providing support for the hypothesis that inheritance is a major factor in human psychology. To demonstrate this point, he studied psychological traits in families, among twins, and in individuals who had been raised by adoptive parents, thus laying the foundation of human behavior genetics. As a summary of his findings, Galton (1883, p. 241) suggested that individual differences in behavior among people living in broadly similar environments are attributable mainly to genetic variability, rather than to personal experience:

There is no escape from the conclusion that nature prevails enormously over nurture when the differences of nurture do not exceed what is commonly to be found among persons of the same rank of society and in the same country.

In the opposite camp, John B. Watson argued that experience can produce not only the astonishing flexibility of complex human behavior, but also the stereotypical habits of everyday life. He maintained that, apart from some simple innate reflexes, the psychological traits of an individual are fundamentally determined by life experiences. Watson’s (1924, p. 104) famous challenge is a paradigm of his empiricist view of human nature:

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.

Despite their extremism, both Galton and Watson avoid a strong version of their respective positions. A strong version of Galton’s view would demand no concessions to experience; but for heredity to work its way into the psychological profile of adult individuals, the conditions of nurture must not be unusual, as Galton pointed out in the previous citation. Similarly, a strong empiricist view would demand a mental tabula rasa; yet Watson requests healthy and well-formed infants to fulfill his challenge, as if these provided some critical mental precursors. Because these extremes cannot work in isolation, compromise positions have always enjoyed a certain degree of popularity, even if their lack of flamboyance and closer adherence to reality make them less attractive to the public. Somehow, Jurassic Park would not be as spectacular if the velociraptors were unable to “spontaneously” connect with their predatory nature and know exactly how to group-hunt their prey with amazing sophistication.

It would seem obvious that both nativism and empiricism must bear some proportion of the truth. On the nativist side, it is hardly surprising that one usually gets a human baby out of the mating of two human parents—not a chimpanzee, not a horned frog, not a sage plant. The same degree of specificity applies, of course, to every other sexually reproducing species. Thus, genetic information must be setting the boundaries of the developmental process, outside of which no viable outcome is possible. On the empiricist side, unless one believes that there is a little heart, or brain, or metabolic process, or behavior, or idea embedded in the DNA molecule (as a preformationist view of development would suggest), it is a patent truth that genes only code for the building of proteins. Thus, something other than merely genes is required to go from a fertilized egg to a mature organism, and that “something” must necessarily come from outside of the genome. However, simply recognizing that there is a bit of truth in nativism and in empiricism does not automatically yield the epigenetic view.

Strictly speaking, epigenesis refers to the development of structural and functional differentiation from a relatively less differentiated previous stage. Beginning with a fertilized egg, multiple cells are created, they differentiate into various types, these form tissues, which, in turn, give rise to organs and, finally, to a complete organism. Because none of the outcomes resembles the components of the previous stage (e.g., differentiated cells have properties not contained in their precursors), they are described as “emergent properties” of the developing organism and assumed to be able to influence the emergence of components of the next level of organization. Epigenesis, then, views developmental outcomes as the result of complex interactions at all levels of organismic structure and function. Moltz (1965, p. 44) provided a clear definition:

An epigenetic approach holds that all response systems are synthesized during ontogeny and that this synthesis involves the integrative influence of both intraorganic processes and extrinsic stimulative conditions. It considers gene effects to be contingent on environmental conditions and regards the genotype as capable of entering into different classes of relationships depending on the prevailing environmental context. In the epigeneticist’s view, the environment is not benignly supportive, but actively implicated in determining the very structure and organization of each response system.

A key aspect of this characterization of epigenesis is the emphasis placed on interactions. But it may be misleading to think that only the external environment (external with respect to the organism) interacts with genetic information. As Lehrman (1953, p. 345) put it:

At any stage of development, the new features emerge from the interaction within the current state and between the current stage and the environment.

Interactions “within the current state” refer to influences occurring within a given level of organization, such as is the case, for example, when transcription of a structural gene is initiated by a protein produced by a regulatory gene (i.e., gene-gene interaction). Gottlieb (1992) referred to such interactions as horizontal coactions, that is, the influences that occur among elements at the same biological level (perhaps the most notable instance of horizontal coaction is that which occurs among neurons in the central nervous system). Importantly, there are also vertical coactions, that is, processes occurring at one level may influence the development of the system at other levels. A fascinating example of a vertical coaction is the effect of early experience on the development of cortical neurons (Greenough, 1987). Infant rats exposed to a complex environment exhibit a larger number of synaptic sites (i.e., sites where information flows from one neuron to another) than rats reared in standard laboratory cages. Early stimulation also leads to more efficient performance in a variety of learning tasks. Similarly, the formation of a stable, long-term memory through training and experience (organism-environment interaction) requires changes in neural networks in the brain (neuron-neuron interactions and neuron-behavior interactions), which, in turn, require gene transcription (neuron-DNA interaction) that modifies the transmission of neural potentials at malleable or plastic synapses (Nguyen et al., 1995; Stork et al., 2001). It is such long-term memories that probably underlie the development of the very beliefs of nativists, empiricists, epigeneticists, and any others who have thought about these issues. Such vertical coactions have important consequences for understanding the relationship between nature and nurture, which can no longer be viewed through the Galtonian glass of a dichotomy. Clearly, fundamental components of nurture (such as the acquisition of long-term memories) require the intervention of fundamental components of nature (such as gene expression), or, put in different terms, nature is “prepared” to take care of nurture. In epigenetic terms it makes perfect sense to refer to the “genetics of learning” or to “experience-activated genes,” since nature and nurture are no longer viewed as opposites. Vertical coactions are at the heart of Gottlieb’s (1992, pp. 159-160) concept of probabilistic epigenesis:

Individual development is characterized by an increase of complexity of organization—i.e., the emergence of new structural and functional properties and competencies—at all levels of analysis (molecular, subcellular, cellular, organismic) as a consequence of horizontal and vertical coactions.

The qualifier “probabilistic” implies that coactions make it impossible to conceptualize development as a simple serial process whereby one stage invariably leads to the next. In contrast, because properties emerge as a result of inputs that can vary dramatically across individuals, there is a substantial degree of unpredictability about the final outcome of development, even within the boundaries set by genetic information. So, for example, it is unlikely that any set of inputs could be devised that would result in the development of a human being out of a fertilized frog or mouse egg. Nonetheless, a look at human diversity around the world—both physical and cultural—shows an impressive degree of plasticity (Lewontin, 1982). Phenotypic plasticity, defined as an organism’s ability to adjust to environmental pressures in terms of morphology or function, is not only a human characteristic, but also a general feature of the animal and plant worlds (West-Eberhard, 2003).

Whether one believes that nature and nurture are dichotomous entities, as pure nativist and empiricist views suggest, or the extremes of a single dimension, as suggested by probabilistic epigenesis, the view correlates with one’s particular ethical stance. Although the relationship between a person’s view of the nature-nurture issue and the facts considered relevant to that view’s evaluation is suspiciously similar to the chicken-and-egg paradox, we can certainly point to some significant, broad correlations. For example, working closer to a nativist viewpoint, Herrnstein and Murray (1994) concluded that the proportion of human intelligence that depends on genetic factors was large enough to explain such social problems as unemployment, homelessness, and school dropout. An implication they pursued—and one that generated substantial controversy (see Jacoby & Glauberman, 1995)—indicated that government spending on social programs aimed at solving some of these problems was doomed to fail, precisely because of the presumed lack of plasticity of genetically determined characters, such as—they presumed—human intelligence. At the other extreme, an empiricist view may suggest that such human traits as selfishness, a religious drive, or criminal behavior are simple results of social experience. Communist governments appear to have acted on the belief that, for example, a generation educated under the principle of social altruism (e.g., individuals working for the benefit of the community) would eliminate self-centered individualism. The corruption that ensued in such countries as the Soviet Union, at all levels of government, is a clear indication that selfishness is not a simple product of the capitalist system, but a deeply rooted aspect of human nature.

Just as nativist and empiricist viewpoints are consistent with socially relevant ideas, so is the case with epigenesis. Consider the relationship between nutrition and brain development. Few would doubt that the type and quality of nutrients a person consumes affect that person’s brain physiology. An epigenetic approach, however, warns about the possibility that nutrition may play an organizing role during brain development and, more importantly, that these effects are likely to be more dramatic depending on whether the appropriate nutrients acted on brain development during a sensitive period in early infancy. Sensitive periods reflect a unique point during development when a system is ready to produce some emergent property. This is consistent with data on the effects of malnutrition in both experimental animal models (Bedi, 2003) and human populations (Lukas & Campbell, 2000). In other words, the effects of a nurturing factor on brain development depend on a complex interaction with internal constraints, such as the presence of a limited window of opportunity for action.

Many serious implications stem from this fact, including some related to the social consequences of widespread malnutrition. When large numbers of people are subjected to deficient nutrition, as is the case in some developing countries, the psychological and neurological sequelae become one of the most important obstacles to socioeconomic progress. A nation populated by a critical mass of adults who lacked exposure to an adequate diet as infants may face such a low psychological ceiling that many reforms are simply impossible to implement for lack of human resources. Given present-day technology, the situation could be easily solved in terms of food supply; but a resolution of this problem requires a kind of long-term political commitment that seems to be difficult to envision, even for well-nurtured leaders.

Us vs. Them

All conceptions of human nature have an ethical dimension. Individuals imbued with nativism, empiricism, or epigenesis will, as a result of their views, find comfort in certain social events, support certain forms of social change, and fight against ideals that they consider false, unjust, or dangerous. These views of human nature are not passive adornments of the mind, but actively shape a person’s beliefs about how human beings should relate to each other both within and across cultures. Most importantly, perhaps, while the human brain exhibits neural plasticity throughout a person’s lifetime, such plasticity is mainly concerned with the acquisition of information about specific events: who did what and when. In contrast, there seems to be much less plasticity devoted to changing attitudes, beliefs, and even motor habits. Psychologists have coined different labels for these types of memory: representational (for events and episodes) and dispositional (for beliefs and habits; Squire, 1987), and it seems reasonable to extend these labels to the brain and think in terms of representational and dispositional plasticity. Beliefs crystallize during early development and also, possibly, after exposure to extreme circumstances that may induce dispositional plasticity (e.g., imprisonment, family tragedy, sudden wealth). Once crystallized, the resulting brain networks are difficult to modify. Incoming information is therefore forced to accommodate to preexisting frames of reference or, if this is not possible, the information is simply ignored or dismissed as an exception. This provides stability to a person’s behavior, even if such stability is achieved at the expense of the objective assessment of reality and self-scrutiny. This characterization implies, therefore, two stages: first, an early developmental stage (or an adult stage under special circumstances) characterized by dispositional brain plasticity, during which nature-nurture views crystallize; and second, an adult stage characterized by the relative absence of dispositional changes, during which nature-nurture beliefs can affect a person’s interpretation of social events and act as causes of cultural change or stasis.

Self-inclusion in a social group and the evaluation of other groups are processes guided by particular nature-nurture beliefs. I resort again to Tarzan of the Apes (Burroughs, 1914/2003) for an extreme case that illustrates this point. Tarzan found it natural to include himself among the apes with which he was raised. As such, he had no option but to fight for leadership according to the rules of the ape group. Only contact with the human archetypes provided by his father’s books slowly instigated the notion that he was different. He no longer sought affinity among the apes, but elsewhere, leaving the leadership of the ape group to search for his own place in the world. In other words, Tarzan’s initial “ape nature” allowed for choices that his new “human nature” precluded, and vice versa (e.g., leadership of the ape group and interest in the “little bugs,” respectively). As Burroughs showed in his novel, Tarzan could never quite escape from being a sort of “hybrid,” an outcome that, in fact, nicely fits his epigenetic disposition to integrate nurture into nature.

When Tarzan shifted from an ape to a human nature view, his social beliefs of belongingness were shaken to their core. Such a crisis also shows the extent to which his life was guided by acceptance of an “us” world and rejection of a “them” world. Ape nature gradually became less a part of “us” and more a part of “them.” But Tarzan is a very unusual character. Regular human beings generally live their entire lives as part of the same social group; only a minority experiences something analogous to Tarzan’s worldview shift (e.g., immigrants, instant celebrities, and the newly bankrupt, among others). As a result of the stability of lifestyles, it is possible for most people to partition the human world into those who are like them versus everybody else. Most, if not all, of human history appears to have been dominated by this “us-versus-them” view, which social psychologists refer to as the ingroup-outgroup tension. Lewin (1948) first pointed out that groups are formed when an aggregate of individuals finds a common fate and a common task. Such interdependence transforms a collection of people into a cohesive group capable of a type of coordinated activity that would not result from individual actions alone. For example, people traveling in an airplane would hardly constitute a group; but a threat posed by hijackers can transform the aggregate into a group of hostages with a common interest and facilitate the development of strong bonds of cohesion (Jacobson, 1973).

The potential adaptive significance of such an ingroup bias based on interdependence can hardly be overemphasized, particularly in regard to sparsely distributed groups of early hominins in the African savanna. This is, after all, the common pattern for many social species. Spotted hyenas, for example, use clan odors to mark a wide territory and exclude conspecifics from outside the clan (Kruuk, 1972). In rodents, attacked intruders rapidly learn to fear the smell of colony odors (Williams, Worland, & Smith, 1990). A sort of basic ethics derives from this biological fact, one that could be summarized in the proposition “accept those carrying the ingroup signal and reject anybody else.” It thus seems plausible that ethical principles, as known to philosophers, theologians, and laypersons, ultimately derived from this basic biological fact, as traits that promote social behavior were passed on across generations from remote ancestors. Sociality evolved many times independently in the animal kingdom (e.g., in primates and hymenopteran insects) and studies demonstrate that many factors correlate with the presence of relatively complex forms of grouping, including the distribution of food resources, strategies for food procurement, predatory pressures, and breeding opportunities (for a review, see Papini, in press). Undoubtedly, some of these pressures have contributed to the evolution of human sociality by a conventional process of natural selection acting at the individual level. Additionally, Wilson (1987) has argued that a process of group selection may also have contributed significantly to the evolution of social traits, including altruistic behavior. Group competition similar to that seen among human societies occurs in chimpanzees, the species most closely related to humans as far as fossil and DNA evidence suggest. Chimpanzees exhibit a great deal of hostility toward members of other groups, engaging in take-over tactics, infanticide, and even a sort of warfare activity that can result in killing among adult males (Goodall, 1986). There is no question that similar, although more sophisticated, behavior is present in modern humans, as described in any world history textbook, and it seems plausible that violent behavior sprouting from an us-versus-them worldview has been the rule during much of primate evolution.

According to the epigenetic view, the type of social cognitive skills described by psychologists should originate from a set of complex interactions between genetic and cultural influences acting upon individual human beings. Whatever their origin, these social-cognitive biases are patently responsible for social conflicts of large proportions. At this point, moreover, it is a self-evident truth that the us-versus-them worldview is not only a propensity of human development, but also a purposely implemented tactic of domination of one group over another. Practically every war, whether ancient or modern, is associated with the dehumanization and abuse of the enemy. During the past few centuries, the same Western societies that developed the ideals of freedom and human rights were also responsible for slavery, apartheid, colonialism, and the support of countless dictatorships in developing countries.

What are the cultural pressures that promote the us-versus-them tension? Eidelson and Eidelson (2003) identified five group-level beliefs that contribute to social conflicts: superiority, injustice, vulnerability, distrust, and helplessness. When such beliefs spread among a substantial number of individuals within a society, they promote conflict with other similar societies. Consider superiority as an example, that is, the view according to which “we” are morally superior, chosen, and entitled, whereas “they” are inferior, contemptible, and immoral. One major drawback of a culture afflicted by a superiority belief is its inability to compromise and apologize, both of which are needed for a lasting resolution of conflicts. Group-level beliefs can be conceptualized as yet another emergent level of epigenetic organization, linking individuals (who are still required to hold and express a particular view for it to become a group-level belief) to their culture. Cultural pressures affecting the developing child, adolescent, and young adult could be particularly important in shaping beliefs that will influence adult behavior. Such pressures are exerted through family interactions, school, religious institutions, government and local authorities, law-enforcement agents, role models, mass media, and, perhaps most importantly, what peers think, like, and do.

When a person passionately wants to change the world, the means available may range from involvement in community activities to politics and military careers, depending on the culture in which the person is embedded. Despite our ignorance about the fundamental causes that shape the beliefs of regular people, those who shape group-level beliefs, including politicians, educators, soldiers, artists, and entertainers, may sometimes have a surprisingly high level of effectiveness. This is because, while social processes are complex, people can still sometimes influence them. This is obviously not true for every individual who would like to exercise some control over his or her own social environment, but only for some selected set of people. But the point is still a valid one: Human beings have a great deal of experience in influencing social processes, they have some knowledge of basic factors, and they know that social change of large magnitude can sometimes be unleashed in relatively short periods of time.

Over the past several centuries, a growing fraction of the world’s nations has also experienced an impressive degree of technological change. Social changes (e.g., the spread of democracy) and the products of impressive human cognitive abilities (e.g., the industrial revolution) have led to the emergence of a new factor: the environmental crisis. It would seem natural for a culture based on the us-versus-them worldview to substitute “environment” for “them” and to continue operating under the same basic assumption, namely, that the “us” is detached from the “them” and fundamentally independent. But in so doing, the us-versus-them worldview has been extended to a domain regulated by principles far more powerful than any politician or general can imagine or hope to control on a short-term basis with campaigns, fund raising, or military interventions.

Wishful Thinking

Nature-nurture views are not purely theoretical issues. How a person conceptualizes his or her place in society has practical implications not just for social views, but also for the larger problem of the place of humans in nature. If the products of human activity are viewed from an evolutionary perspective, there can be little doubt as to the amazing achievements of our species. There is no other living creature, as far as is known, that has generated a multiplicity of schemes for government, judicial systems, technology, forms of entertainment, art, libraries, and scientific research, among so many other achievements. Even an updated knowledge of animal behavior shows that nothing of this sort exists outside of human societies. Other species show precursors of many of these products, and perhaps it would be unfair at this point to discard the parsimonious assumption that the differences are only of degree, not kind. But the degree of sophistication characterizing human behavior, both individual and social, is just unparalleled in nature. In addition, urban life has effectively broken daily contact with natural forces. For most city-dwelling people, “antipredator defense” is more related to traffic than to large carnivores, and “foraging for food” refers to grocery shopping rather than to hunting-gathering activities. As a result, it is not that surprising that many find it difficult to see themselves (and the entire human species) as part of the intricate system of nature.

However sophisticated humans may have become in their recent cultural evolution, everything they do ultimately requires natural resources. But nature is so large and its forces so powerful that societies have been operating under the assumption that natural processes will take care of the byproducts of human activity. So, for example, household trash is dumped in landfills, industrial waste into the waters of rivers and oceans, and automobile exhaust gases into the atmosphere. Somehow, most people expect that the land, water, and air will take care of these byproducts of human activity without losing their quality. To some extent and within certain limits, such waste can be naturally recycled without affecting the environment. However, there is an increasing amount of substances that are stressing the environment to a point that many scientists consider dangerously close to a serious breakdown of major world ecosystems. This is not a new phenomenon, something unique to contemporary industrialized societies, but appears to be a constant of human cultures. The difference between the current environmental situation and others that have been documented in the past is the scale: The entire planet is now under threat. Consider the example of the Polynesian culture of Easter Island (Hunt, 2007).

Easter Island (with a surface of about 116 square kilometers) is located in the south Pacific, some 3,600 kilometers away from any other land. It is thought that the first inhabitants arrived on the island about 2,000 years ago, from the west, bringing their culture as well as Pacific rats (Rattus exulans) in their canoes. The main remnants of the Polynesian culture that flourished until the 17th century are giant stone monuments, called moai, that have attracted considerable attention because of their size and number. The mystery of these monuments lies in the fact that, in its current barren state, the island could not sustain a population capable of building the moai. Studies show, however, that the island was covered with forests when the first inhabitants arrived, and their culture developed on the basis of the resources provided by those forests. Deforestation may have been the result of human overexploitation or of the effect of the Pacific rat on the seeds of giant palms and other trees endemic to the island. Whether Pacific rats were introduced on purpose or accidentally by the early colonizers of Easter Island, their effect appears to have been catastrophic. Extensive deforestation beyond the environment’s capacity to replenish trees led to instability in water supply and land erosion. Violence may have been the result of an increasing shortage of resources and, in turn, contributed to the further breakdown of the island’s economy, leading, eventually, to a complete cultural disintegration. The culture came to an end around 1680, when warfare between two factions resulted in the virtual extermination of one of them, the collapse of economic activities, and widespread disease. Europeans arrived in the 18th century, bringing new diseases (such as smallpox) and slavery, and introduced a number of exotic species, including sheep and rats (Rattus norvegicus), all of which resulted in further decimation of the population and the environment. By 1900, the population had reached a low of 111 people. The case of Easter Island highlights the interplay of ecological and cultural factors in the demographic collapse of an entire society, thus providing a plausible small-scale scenario of what may be happening with contemporary industrial societies on a planetary scale.

Economic growth can be sustained only up to a point. When the resulting waste byproducts and the depletion of natural resources stress ecosystems beyond their ability to remain stable, growth (either in economic terms or in population terms) is no longer possible. The maximum population that an ecosystem can support without degradation is referred to as the system’s carrying capacity. The main point of the Easter Island case is that economic systems can grow beyond their carrying capacity for a limited amount of time; eventually, however, the system may collapse to a point of no recovery. Of course, the question of interest is whether contemporary economic growth is below or above the carrying capacity of the planetary ecosystem. There are signals suggesting that the ecological stress caused by human activity is reaching, or has already surpassed, the earth’s carrying capacity. Some such signals include air, water, and land pollution (e.g., high marine levels of mercury and heavy metals), a drop in biodiversity (e.g., extinction of rain forest species caused by deforestation), atmospheric changes (e.g., ozone depletion), overexploitation of natural populations (e.g., overfishing), and global warming (e.g., caused by carbon dioxide, methane, and other greenhouse gases that disrupt heat dissipation in the atmosphere).
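The notion of carrying capacity can be given a minimal formal sketch with the standard logistic model of population ecology; the equation below is offered only as an illustration of the concept, not as a model of the Easter Island case or of any data discussed here:

\[
\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)
\]

where \(N\) is the size of the population (or, by analogy, the scale of economic activity), \(r\) its intrinsic growth rate, and \(K\) the carrying capacity of the supporting ecosystem. Growth slows as \(N\) approaches \(K\) and becomes negative whenever \(N\) exceeds \(K\); if, in addition, overexploitation erodes the resource base so that \(K\) itself declines over time, a system that overshoots a shrinking \(K\) may collapse rather than merely stabilize.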

A resolution of the environmental crisis will require more than just political determination, sound economic decisions, and reliable scientific information. The process can only produce timely results if a critical mass of citizens, including political and corporate leaders, embraces the idea that human cultures and the ecological systems that sustain them are in a fundamentally trophic relationship that can be disrupted or perturbed only at the cost of extreme consequences for both. A new consensus on the value of environmentalism is emerging (e.g., Belshaw, 2001), but political decisions seem to be lagging behind, as they are vulnerable to the lobbying capacity of corporate leaders who continue to operate in a wishful-thinking mode. The us-versus-them principle appears to be exhausted by the very success that it generated. But are these signs of environmental crisis sufficient to induce corrective measures?

Conclusions

If the rules of human behavior underlying the current environmental crisis are viewed as part of human nature, then the ability to devise corrective measures based on experience would reflect human nurture capacities. If human behavior is naturally guided by such rules as “treat nature as a them,” “assume the renewability of natural resources,” “pursue immediate gratification,” and so on, then the effects generated by such rules (e.g., water contamination, global warming, climate change) are the experiential factors that feed human nurture. Thus, drawing a parallel between the individual level of human behavior and the socio-political level of social change, one may ask: What type of experiential factors would induce the needed corrective measures?

In the film Groundhog Day (1993, directed by Harold Ramis, written by Danny Rubin), the main character undergoes a deep change of personality as a mysterious power forces him to live the same day hundreds of times. This is a man who despises his job, treats coworkers with an air of superiority, looks at women as sexual objects, and displays his sarcastic view of affairs with pride. This is an arrogant, pompous, and egotistical man. Above all, he loathes Punxsutawney, a little town in Pennsylvania where people gather every February 2nd to look at a woodchuck named Punxsutawney Phil that, according to tradition, predicts whether the winter will last for an additional six weeks. As a basically selfish and manipulative person, he soon realizes that he can use this strange turn of events to take advantage of people. Thus, he proceeds to steal money and to convince a beautiful, trusting woman that they know each other from high school so as to take advantage of her. Eventually, he takes on the task of seducing his coworker by memorizing her favorite drink, poetry, ice cream, and a number of other details so as to, again, take advantage of her. But he fails miserably. In desperation, he even tries suicide, only to learn that everything starts back at 6:00 a.m. the next morning. As his hypocritical disposition brings no change to his situation, he develops a deeper sense of what is important in life. Out of this process emerges an altruistic man, one who provides help to others at no personal gain, worries about his coworkers, learns the pleasure of playing the piano, struggles unsuccessfully to save a homeless man from dying, and becomes genuinely poetic. Furthermore, in trying to make his mark without ulterior motives, he finally gains the love of the same woman who had persistently rejected his advances when they were faked and insincere.

When evaluating the implications of this story, one cannot help but conclude that a change of character really requires a fantastic set of circumstances. In the film, this fantastic component is the fact that the man is trapped in time, forced to go through this particular day of his life whatever number of times is required until he changes from the inside. In real life, similarly extraordinary circumstances may be required to induce the sort of dispositional plasticity needed to change a person’s character. But crystallization may be a property displayed by any stable system. For example, economic and social systems also have rules that remain stable while change occurs at a more superficial level. Once the system is in place, it would tend to remain stable until a crisis of sufficient proportion disrupts it. Without the conditions afforded by a “fantastic” set of circumstances, social systems may also lack something akin to dispositional plasticity. Thus, when posing the question “are these signs of environmental crisis sufficient to induce corrective measures?” the answer would have to be something along the lines of “hopefully yes, but probably not.” Extraordinary circumstances may be needed to reintroduce “dispositional cultural plasticity,” so that the system can undergo the changes that are required to correct the problem. In the case of the current environmental crisis, perhaps nothing major will be done by the socio-political establishment of industrialized societies until an episode of far-reaching consequences ensues, with devastating effects in terms of human lives and long-term environmental damage. It took several thousand deaths on September 11, 2001, to mobilize the United States against terrorism. The alarming possibility is that an environmental degradation that never produces a catastrophic event, but emerges gradually, may delay corrective measures until it is too late to avoid long-term environmental damage, as may have occurred on Easter Island—except that moving to another “island” is not viable in this case.

Just as there are reasons to be concerned with the current environmental situation, there are also reasons to be moderately optimistic about the human capacity to respond to social conflicts. A recent example is provided by the threat of nuclear war during the Cold War era, which, despite all indications (or, perhaps, because of them), fortunately never materialized. Groups can be brought together by the emergence of a superordinate goal—a common problem. Clearly, the current environmental crisis constitutes an example of an event that affects everybody, and solving it could easily become a superordinate goal. Like the bull and the bear at the arena, all human beings are tied by the prospect of a worldwide environmental collapse, which has come to play a role much like that of mortality in Seneca’s opening plea. Thus, hope lies in the possibility that the amassed experience of the last few centuries can nurture human natural capacities with ideals of respect for, and protection of, one another and the environment, so as to avoid the environmental degradation and the collapse of civilization that would almost certainly follow.


References

1. Bechara, A., Tranel, D., Damasio, H., Adolphs, R., Rockland, C., & Damasio, A. R. Double dissociation of conditioning and declarative knowledge relative to the amygdala and hippocampus in humans. Science, 269, (1995), 1115-1118.

2. Bedi, K. S. Nutritional effects on neuron numbers. Nutritional Neuroscience, 6, (2003), 141-152.

3. Belshaw, C. Environmental philosophy: Reason, nature, and human concern. Montreal, Canada: McGill-Queen’s University Press, (2001).

4. Burroughs, E. R. Tarzan of the apes. New York: Modern Library, (1914/2003).

5. Candland, D. K. Feral children and clever animals: Reflections on human nature. Oxford, UK: Oxford University Press, (1993).

6. Eidelson, R. J., & Eidelson, J. I. Dangerous ideas: Five beliefs that propel groups toward conflict. American Psychologist, 58, (2003), 182-192.

7. Galton, F. Inquiries into human faculty and its development. London, UK: Macmillan, (1883).

8. Goodall, J. The chimpanzees of Gombe: Patterns of behavior. Cambridge, MA: Harvard University Press, (1986).

9. Gottlieb, G. Individual development and evolution: The genesis of novel behavior. New York: Oxford University Press, (1992).

10. Greenough, W. T. Experience effects on the developing and the mature brain: Dendrite branching and synaptogenesis. In N. A. Krasnegor, E. M. Blass, M. A. Hofer, & W. P. Smotherman (Eds.), Perinatal development: A psychobiological perspective (pp. 195-221). Orlando, FL: Academic Press, (1987).

11. Hadot, P. Philosophy as a way of life (M. Chase, Trans.). Malden, MA: Blackwell Publishing, (1995).

12. Herrnstein, R. J., & Murray, C. The bell curve: Intelligence and class structure in American life. New York: Free Press, (1994).

13. Hunt, T. L. Rethinking Easter Island’s ecological catastrophe. Journal of Archaeological Science, 34, (2007), 485-502.

14. Jacobson, S. R. Individual and group responses to confinement in a skyjacked plane. American Journal of Orthopsychiatry, 43, (1973), 459-469.

15. Jacoby, R., & Glauberman, N. (Eds.). The bell curve debate: History, documents, opinions. New York: Random House, (1995).

16. Kruuk, H. The spotted hyena: A study of predation and social behavior. Chicago, IL: University of Chicago Press, (1972).

17. Lehrman, D. S. A critique of Konrad Lorenz’s theory of instinctive behavior. Quarterly Review of Biology, 28, (1953), 337-363.

18. Lewin, K. Resolving social conflicts. New York: Harper & Row, (1948).

19. Lewontin, R. C. Human diversity. San Francisco, CA: Freeman, (1982).

20. Lukas, W. D., & Campbell, B. C. Evolutionary and ecological aspects of early brain malnutrition in humans. Human Nature, 11, (2000), 1-26.

21. Moltz, H. Contemporary instinct theory and the fixed action pattern. Psychological Review, 72, (1965), 27-47.

22. Moss, C. Elephant memories: Thirteen years in the life of an elephant family. New York: Morrow, (1988).

23. Nguyen, P. V., Alberini, C. M., Huang, Y.-Y., Ghirardi, M., Abel, T., & Kandel, E. R. Genes, synapses and long-term memory. In D. Ottoson (Ed.), Challenges and perspectives in neuroscience (pp. 213-237). Oxford, UK: Elsevier, (1995).

24. Nussbaum, M. C. The therapy of desire: Theory and practice in Hellenistic ethics. Princeton, NJ: Princeton University Press, (1994).

25. Papini, M. R. Comparative psychology: Evolution and development of behavior (2nd ed.). New York: Psychology Press, (in press).

26. Papini, M. R. Comparative psychology of surprising nonreward. Brain, Behavior and Evolution, 62, (2003), 83-95.

27. Rescorla, R. A. Higher-order conditioning. Hillsdale, NJ: Erlbaum, (1980).

28. Seneca, L. A. Moral essays, Vol. I (J. W. Basore, Trans.). Cambridge, MA: Harvard University Press, (1928).

29. Squire, L. R. Memory and brain. New York: Oxford University Press, (1987).

30. Stork, O., Stork, S., Pape, H.-C., & Obata, K. Identification of genes expressed in the amygdala during the formation of fear memory. Learning and Memory, 8, (2001), 209-219.

31. Watson, J. B. Behaviorism. New York: Norton, (1924).

32. West-Eberhard, M. J. Developmental plasticity and evolution. New York: Oxford University Press, (2003).

33. Williams, J. L., Worland, P. D., & Smith, M. G. Defeat-induced hypoalgesia in the rat: Effects of conditioned odors, naltrexone, and extinction. Journal of Experimental Psychology: Animal Behavior Processes, 16, (1990), 345-357.

34. Willoughby, P. R. Paleoanthropology and the evolutionary place of humans in nature. International Journal of Comparative Psychology, 18, (2005), 60-90.

35. Wilson, D. S. Altruism in Mendelian populations derived from sibling groups: The haystack model revisited. Evolution, 41, (1987), 159-187.
