Saturday, January 14, 2017

August 30, 2015

Written by Maximus Peperkamp, M.S. Verbal Engineer

Dear Reader, 
This is my thirteenth response to Chapter 5.4, “Vocalizations as tools for influencing the affect and behavior of others,” by Rendall and Owren (2010). How interesting it is that “words that convey “smallness” are disproportionately characterized by high-front vowels whose spectral density is biased toward higher frequencies, and words that convey “largeness” are characterized by low-back vowels whose spectral density is biased toward lower frequencies (Hinton et al., 1995).” As someone who has played flutes of different sizes, this is nothing new to me. There is a lawful relationship between size and sound, and it has nothing to do with language differences. A similar relationship exists between emotion and sound. However, we underestimate the importance of this relationship. We are biased toward semantics and tone-deaf to emotion, as we fixate on the words and don’t listen to how we sound while we speak. In every language of the world there is SVB and NVB. Only during SVB does the sound of the speaker’s voice “facilitate semantic learning.”
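To make that size-sound relationship concrete, here is a rough sketch of my own using the idealized open-pipe approximation f = v / (2L): the longer the resonating body, the lower its fundamental. The flute-like lengths below are ballpark figures for illustration, not measurements from the chapter.

```python
# A rough sketch of the "size-sound" relationship using the idealized open-pipe
# approximation f = v / (2 * L): doubling the length roughly halves the pitch.
# The lengths below are ballpark, illustrative values, not data from the chapter.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius


def fundamental_hz(length_m: float) -> float:
    """Approximate fundamental frequency of an idealized open pipe of length_m."""
    return SPEED_OF_SOUND / (2.0 * length_m)


for label, length_m in [("piccolo-like, 0.32 m", 0.32),
                        ("concert-flute-like, 0.66 m", 0.66),
                        ("bass-flute-like, 1.30 m", 1.30)]:
    print(f"{label}: ~{fundamental_hz(length_m):.0f} Hz")
```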

“This phenomenon occurs cross-linguistically and represents the semantic extension in languages of the natural sound-symbolic relationship that exists in the wider world as noted above. Hence, a young infant’s affective familiarity with these basic sound-symbolic relationships could subsequently facilitate semantic learning, at least of words that obey the cross-cultural pattern of semantic diminution and augmentation.” Not much learning is going to occur as long as the listener’s “affective familiarity with these basic sound-symbolic relationships” is based on fear of punishment. Unfortunately, much, if not most, of our cultural conditioning is based on coercion. Our cultural conditioning is as strong as it is because it is based on NVB, which dominates in every culture. However, there are cultures in which there is more SVB than in others. I am certain that in Holland there is more SVB than in the United States. “And these linguistic patterns in turn are likely to have arisen naturally but unintentionally through historical processes of cultural selection, which favored the use and survival of word forms that convey their meaning more “naturally” in the sense that they are easier to learn, recall and deploy, precisely because they exploit biologically preprepared sound–meaning relationships.” Cultures change. It seems to me that the national dialogue in Holland, my country of origin, has in recent years shifted toward NVB, while conversation in the USA is slowly beginning to change toward SVB. I like to think of myself as helping that process along. This gives the saying ‘don’t ask what your country can do for you, but ask what you can do for your country’ an entirely new meaning.

If we engage in SVB, we will be exploring what the authors have called “affective semantics.” In SVB we say different things because of how we sound. We will say things we couldn’t say before, because our sound wouldn’t let us. The authors describe SVB: “Accepting this possibility suggests even wider scope for semantic constructs to be grounded in the perceptuo-affective “impacts” of sound structure – what might be termed affective semantics.” Our nervous system prefers certain sounds over others. If we were given the choice, we would choose SVB over NVB 100% of the time. The fact is, however, that we are not given the choice and we don’t even realize that we have one. The choice only becomes apparent when the distinction is made between SVB and NVB. As stated, we have a semantic bias; we don’t listen to ourselves while we speak, and so we engage in NVB by default.

In every textbook on the scientific method we are told that we must start with observation and description before we formulate a hypothesis. From the get-go there is a visual bias. Auditory data, pertaining to how we sound while we speak, are given short shrift in psychology. Even when they are mentioned, since everything in science has to be defined and written down, the auditory data are lost in translation, because written and spoken language are two entirely different things. Experiencing the affective effects of how we sound while we speak and listen isn’t the same as reading and writing about it.


“Kohler (1947) famously reported a bias for human subjects to match particular nonsense words, such as naluma and takete, with unfamiliar objects whose form was rounded or jagged, respectively. This bias has been confirmed in other populations and in young children, and it has been extended to include other objects and other nonsense words, such as bouba and kiki (e.g., Westbury, 2005; Maurer et al., 2006). One explanation for this implicit semantic bias is that it reflects reciprocal linkage between the visuo-sensory processing of a rounded object and activation of motor areas responsible for coordinating the articulation of the round-mouth vowels both in bouba and naluma, but not kiki or takete” (Ramachandran and Hubbard, 2001). As is clear from this line of research, the interpretation offered is a “reciprocal linkage” involving “visuo-sensory processing.” However, another interpretation is needed.

“A related alternative is that the semantic bias reflects the differential affective quality of sounds with different spectral density and signal onsets, as reviewed earlier for animal vocalizations. In this case, the consonants /k/ and /t/ are unvoiced and have relatively plosive onsets and noisy spectra. Therefore, the words kiki and takete might more naturally conjure “harsh/fractured” and similar semantic constructs, and so be matched to jagged objects preferentially. In contrast, the consonants /b/, /n/, /l/ and /m/ are all either voiced, or nearly so, and therefore have smoother onsets and more patterned spectral structures. Hence, the words bouba and naluma might more naturally conjure “smooth/connected” and similar semantic constructs, and so be matched to rounded objects. Recent experiments using words that both replicate and cross the original consonant and vowel contexts (i.e., bouba-kiki; kouka-bibi) provide some support for this account (Rendall & Nielsen, unpublished data). By extension, a vastly larger set of affective semantic effects might await future discovery and might ultimately be shown to account for the form of at least some real words.”
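To get a feel for the quoted contrast between the “noisy spectra” of unvoiced plosives and the “more patterned spectral structures” of voiced sounds, here is a small toy sketch of my own (not the authors’ analysis): it compares the spectral flatness of a harmonic tone, standing in for a voiced sound, with that of a white-noise burst, standing in for an unvoiced, plosive-like one.

```python
import numpy as np

# Toy illustration (mine, not the authors'): a harmonic tone stands in for a
# voiced, "patterned" sound and a white-noise burst for an unvoiced, "noisy" one.
# Spectral flatness (geometric mean / arithmetic mean of the power spectrum) is
# markedly higher for noise-like signals than for tonal ones.

SAMPLE_RATE = 16_000                       # Hz (assumed)
t = np.arange(0, 0.2, 1.0 / SAMPLE_RATE)   # 200 ms of signal

# "Voiced-like": a 120 Hz fundamental with a few decaying harmonics.
voiced_like = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 6))
# "Plosive-like": a burst of white noise.
noisy_like = np.random.default_rng(0).normal(size=t.size)


def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean over arithmetic mean of the power spectrum (near 0 = tonal, near 1 = flat)."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # small offset avoids log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))


print(f"harmonic 'voiced-like' tone:      flatness ~{spectral_flatness(voiced_like):.3f}")
print(f"white-noise 'plosive-like' burst: flatness ~{spectral_flatness(noisy_like):.3f}")
```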

Since they are animal researchers, these authors are on the right track when they state that “the semantic bias reflects the differential affective quality of sounds.” I think the “vastly larger set of affective semantic effects” that “awaits future discovery and might ultimately be shown to account for the form of at least some real words” will remain unknown as long as we continue to have NVB and don’t discover SVB and learn how to maintain it. SVB holds the key to many discoveries. Certain things can only be said and understood during SVB, but as long as NVB reigns, we are not going to hear them, as we are not listening to ourselves. Even if we overcame our semantic bias and focused more on how we sound, it remains to be seen (pun intended) whether we would be willing to be concerned with how we ourselves sound. Our willingness to pay attention to how we sound as speakers depends on the circumstances under which we talk. If, because of threat, those circumstances make us focus on each other instead of on ourselves, we again engage in NVB. If others aversively influence us as speakers or as listeners, our communication becomes a struggle between speaker and listener, based on our outward orientation and our fixation on words.

SVB requires the absence of aversive stimulation, and only if we arrange for such a circumstance are we able to achieve and maintain it. “Vocalizations can exert direct and indirect influences on listener affect and behavior. Some of the effects are taxonomically widespread, evolutionarily conserved and very difficult for listeners to control or resist.” Both SVB and NVB are “evolutionarily conserved,” as we have a survival need to express safety and wellbeing, but also fear and threat. “Of course, as functional as such affective effects of vocalizations can be, they do not undercut the role of cognition, nor do they preclude the possibility of more complex communicative processes and outcomes in many species.” I disagree with this general statement as far as human vocalizations are concerned. NVB not only “undercuts the role of cognition”, but it also “precludes the possibility of more complex communicative processes and outcomes.”

Only in SVB do vocal signals “scaffold communicative complexity.” Only in SVB can there be “the complementary and integrated nature of affective and cognitive systems.” Only in SVB is there a “semantic complexity of human language” based on positive affect and on the ensuing “sound–meaning relationships.” Only in SVB do communicators experience that “such pre-prepared, or early acquired, sound–sense relationships represent a form of intrinsic (i.e., original) meaning that provides a natural foundation from which to construct increasingly complex semantic systems.” None of these ontogenetic effects will be achieved with negative affect, that is, with NVB. NVB creates a negative motivation for our actions: with NVB we are not for something; we are always against something. “The corollary is that the communicative importance of the affective influence of vocal signals does not disappear when brains get larger and the potential for cognitive, evaluative control of behavior increases. Rather, complex communicative processes might often specifically exploit and build on the phylogenetically ancient and widespread affective effects of vocal signals.” NVB doesn’t and can’t provide us with “cognitive, evaluative control of behavior”; instead, it makes us justify and downplay our actions, specifically our way of talking, because of our negative emotions.
