AN ANTHOLOGY OF THOUGHT & EMOTION... Un'antologia di pensieri & emozioni

Saturday, 17 February 2018

MUSICAL NOSTALGIA

Neural Nostalgia


Why do we love the music we heard as teenagers?

As I plod through my 20s, I’ve noticed a strange phenomenon: The music I loved as a teenager means more to me than ever—but with each passing year, the new songs on the radio sound like noisy nonsense. On an objective level, I know this makes no sense. I cannot seriously assert that Ludacris’ “Rollout” is artistically superior to Katy Perry’s “Roar,” yet I treasure every second of the former and reject the latter as yelping pablum. If I listen to the Top 10 hits of 2017, I get a headache. If I listen to the Top 10 hits of 2003, I get happy.

Why do the songs I heard when I was a teenager sound sweeter than anything I listen to as an adult? I’m happy to report that my own failures of discernment as a music critic may not be entirely to blame. In recent years, psychologists and neuroscientists have confirmed that these songs hold disproportionate power over our emotions. And researchers have uncovered evidence that suggests our brains bind us to the music we heard as teenagers more tightly than anything we’ll hear as adults—a connection that doesn’t weaken as we age. Musical nostalgia, in other words, isn’t just a cultural phenomenon: It’s a neuronic command. And no matter how sophisticated our tastes might otherwise grow to be, our brains may stay jammed on those songs we obsessed over during the high drama of adolescence.

To understand why we grow attached to certain songs, it helps to start with the brain’s relationship with music in general. When we first hear a song, it stimulates our auditory cortex and we convert the rhythms, melodies, and harmonies into a coherent whole. From there, our reaction to music depends on how we interact with it. Sing along to a song in your head, and you’ll activate your premotor cortex, which helps plan and coordinate movements. Dance along, and your neurons will synchronize with the beat of the music. Pay close attention to the lyrics and instrumentation, and you’ll activate your parietal cortex, which helps you shift and maintain attention to different stimuli. Listen to a song that triggers personal memories, and your prefrontal cortex, which maintains information relevant to your personal life and relationships, will spring into action.

But memories are meaningless without emotion—and aside from love and drugs, nothing spurs an emotional reaction like music. Brain imaging studies show that our favorite songs stimulate the brain’s pleasure circuit, which releases an influx of dopamine, serotonin, oxytocin, and other neurochemicals that make us feel good. The more we like a song, the more we get treated to neurochemical bliss, flooding our brains with some of the same neurotransmitters that cocaine chases after.

Music lights these sparks of neural activity in everybody. But in young people, the spark turns into a fireworks show. Between the ages of 12 and 22, our brains undergo rapid neurological development—and the music we love during that decade seems to get wired into our lobes for good. When we make neural connections to a song, we also create a strong memory trace that becomes laden with heightened emotion, thanks partly to a surfeit of pubertal growth hormones. These hormones tell our brains that everything is incredibly important—especially the songs that form the soundtrack to our teenage dreams (and embarrassments).

On their own, these neurological pyrotechnics would be enough to imprint certain songs onto our brains. But there are other elements at work that lock the last song played at your eighth-grade dance into your memory pretty much forever. Daniel Levitin, the author of This Is Your Brain on Music: The Science of a Human Obsession, notes that the music of our teenage years is fundamentally intertwined with our social lives.

“We are discovering music on our own for the first time when we’re young,” he told me, “often through our friends. We listen to the music they listen to as a badge, as a way of belonging to a certain social group. That melds the music to our sense of identity.”

Petr Janata, a psychologist at University of California–Davis, agrees with the sociality theory, explaining that our favorite music “gets consolidated into the especially emotional memories from our formative years.” He adds that there may be another factor in play: the reminiscence bump, the phenomenon whereby we remember the events of our adolescence and early adulthood more vividly than those of any other years, and these memories last well into our senescence. According to the reminiscence bump theory, we all have a culturally conditioned “life script” that serves, in our memory, as the narrative of our lives. When we look back on our pasts, the memories that dominate this narrative have two things in common: They’re happy, and they cluster around our teens and early 20s.

Why are our memories from these years so vibrant and enduring? Researchers at the University of Leeds proposed one enticing explanation in 2008: The years highlighted by the reminiscence bump coincide with “the emergence of a stable and enduring self.” The period between 12 and 22, in other words, is the time when you become you. It makes sense, then, that the memories that contribute to this process become uncommonly important throughout the rest of your life. They didn’t just contribute to the development of your self-image; they became part of your self-image—an integral part of your sense of self.

Music plays two roles in this process. First, some songs become memories in and of themselves, so forcefully do they worm their way into memory. Many of us can vividly remember the first time we heard that one Beatles (or Backstreet Boys) song that, decades later, we still sing at every karaoke night. Second, these songs form the soundtrack to what feel, at the time, like the most vital and momentous years of our lives. The music that plays during our first kiss, our first prom, our first toke, gets attached to that memory and takes on a glimmer of its profundity. We may recognize in retrospect that prom wasn’t really all that profound. But even as the importance of the memory itself fades, the emotional afterglow tagged to the music lingers.

As fun as these theories may be, their logical conclusion—you’ll never love another song the way you loved the music of your youth—is a little depressing. It’s not all bad news, of course: Our adult tastes aren’t really weaker; they’re just more mature, allowing us to appreciate complex aesthetic beauty on an intellectual level. No matter how adult we may become, however, music remains an escape hatch from our adult brains back into the raw, unalloyed passion of our youths. The nostalgia that accompanies our favorite songs isn’t just a fleeting recollection of earlier times; it’s a neurological wormhole that gives us a glimpse into the years when our brains leapt with joy at the music that’s come to define us. Those years may have passed. But each time we hear the songs we loved, the joy they once brought surges anew.

Wednesday, 14 February 2018

MURDERING OUR BRAIN

Is the internet killing our brains?

The web gives us access to endless information. What impact does this have on our memory, and our attention spans?

An article by Dean Burnett (The Guardian, October 2016)


Throughout history, people have always worried about new technologies. The fear that the human brain cannot cope with the onslaught of information made possible by the latest development was first voiced in response to the printing press, back in the sixteenth century. Swap “printing press” for “internet” and you have the exact same concerns today, regularly voiced in the mainstream media, and usually focused on children.
But is there any legitimacy to these claims? Or are they just needless scaremongering? There are several things to bear in mind when considering how our brains deal with the internet.
First, don’t forget that “the internet” is a very vague term, given that it contains so many things across so many formats. You could, for instance, develop a gambling addiction via online casinos or poker sites. This is an example of someone’s brain being negatively affected via the internet, but it would be difficult to argue that the internet is the main culprit, any more than a gambling addiction obtained via a real-world casino can be blamed on “buildings”; it’s just the context in which the problem occurred. However, the internet does give us far more direct, constant and wide-ranging access to information than pretty much anything else in human history. So how could, or does, this affect us and our brains?

Information overload
It’s important to remember that the human brain is always dealing with a constant stream of rich information; that’s what the real world is, as far as our senses are concerned. Whether staring at a video being played on a small screen or watching people playing in a park, the brain and visual system still have to do the same amount of work, as both provide detailed sensory information.
It’s too detailed, if anything. The brain doesn’t actually process every single thing our senses present to it; for all its power and complexity, it just doesn’t have the capacity for that. So it filters things out and extrapolates what’s important based on experiences, calculation and a sort of “best guess” system. The point is, the brain is already well adapted to prevent damaging information overload, so it’s unlikely that the internet would be able to cause such a thing.
Is Google destroying my memory?
Another concern is that the constant access to information stored online is atrophying or disrupting our memories. Why bother to remember anything when you can just Google it, right?
Memory doesn’t quite work that way. The things we experience that end up as memories do so via unconscious processes. Things that have emotional resonance or significance in other ways tend to be more easily remembered than abstract information or intangible facts. The latter have always required more effort to remember in the long term, needing to be rehearsed repeatedly in order to be encoded as memories. Undeniably, the internet often renders this process unnecessary. But whether this is harmful for the development of the brain is another question.
Doing something often and becoming good at it is reflected in the brain’s structure. For example, the motor cortex of an expert musician, proficient in fine hand movements, differs from that of non-musicians. An argument could be made that constantly committing things to memory rather than just looking them up as and when needed would enhance the brain’s memory system. On the other hand, some evidence suggests that a more stimulating, varied environment aids brain development – so maybe the constant, interesting information found online is better for you than rehearsing dry facts and figures.

But, counter to this, other evidence suggests that the detailed presentation of even simple web pages provides too many features for the human brain’s small-capacity short-term memory to handle, which could have knock-on effects for the memory system. It’s a mixed picture overall.
What about my attention span? 
Does the internet impact on our ability to focus on something, or does having 24/7 access to so many things prove too much of a distraction?
The human attention system is complicated, and so again, it’s an unclear picture. Our two-layer, bottom-up and top-down attention system (meaning there’s a conscious aspect that enables us to direct our attention, and an unconscious aspect that shifts attention towards anything our senses pick up that might be significant) is already something that can make focusing 100% on something quite a challenge. It’s for this reason that a lot of people prefer to have music playing while they work: it occupies part of the attention system that would otherwise look for distractions while we’re trying to do something important.
The internet, however, provides a very quick and effective distraction. We can be looking at something enjoyable within seconds, which is a problem, given that much work in the modern world is done on the same device we use to access the internet. It is such a concern that apps and companies have sprung up specifically to address this.
But it would be unfair to say that the internet is responsible for distracting us from work. The brain’s attention system and preference for novel experiences existed long before the internet did; the internet is just something that makes these aspects particularly irksome.
Competing for likes
Social interactions with other people are a major factor in how we develop, learn and grow at the neurological level. Humans are a very social species. But now the internet has allowed social interactions and relationships to occur between vast numbers of people over great distances, and for them to occur all day, every day.
This means that everything we do can be shared with others at the press of a button, but this has consequences. The positive feelings gained from social media approval are said to work on the same neurological basis as drugs do, providing rewards via the dopamine system. Thus, social network addiction is slowly becoming an issue. By creating a situation where we’re constantly trying to impress and being judged by others, perhaps the internet isn’t doing our brains much good after all.
But, as with most things, the actual problem comes down to other people, not the net.


Dean Burnett

Dean Burnett is a doctor of neuroscience, but moonlights as a comedy writer and stand-up comedian. He tutors and lectures at Cardiff University, and is the author of The Idiot Brain (2016).

Tuesday, 13 February 2018

UMBERTO ECO ON HUMAN VALUES

What follows is a lecture on Human Values (the Tanner Lectures) delivered by Umberto Eco at Clare Hall, Cambridge, on March 7 and 8, 1990
Rubens: Head of Medusa
Interpretation and Overinterpretation:
World, History, Texts

by Umberto Eco


I. INTERPRETATION AND HISTORY
(reference notes omitted)

In 1957 J. M. Castellet wrote a book entitled La hora del lector (The time of the reader). He was a prophet, indeed. In 1962 I wrote my Opera aperta. In that book I advocated the active role of the interpreter in the reading of texts endowed with aesthetic value. When those pages were written, my readers mainly focused on the “open” side of the whole business, underestimating the fact that the open-ended reading I was supporting was an activity elicited by (and aiming at interpreting) a work. In other words, I was studying the dialectics between the rights of texts and the rights of their interpreters. I have the impression that, in the course of the last decades, the rights of the interpreters have been overstressed.

In my more recent writings (A Theory of Semiotics, The Role of the Reader, Semiotics and the Philosophy of Language) I elaborated on the Peircean idea of unlimited semiosis. In my presentation at the Peirce International Congress at Harvard University (September 1989) I tried to show that the notion of unlimited semiosis does not lead to the conclusion that interpretation has no criteria. To say that interpretation (as the basic feature of semiosis) is potentially unlimited does not mean that interpretation has no object and that it “riverruns” merely for its own sake. To say that a text has potentially no end does not mean that every act of interpretation can have a happy end.

Some contemporary theories of criticism assert that the only reliable reading of a text is a misreading, that the only existence of a text is given by the chain of responses it elicits, and that, as maliciously suggested by Tzvetan Todorov (quoting Georg Christoph Lichtenberg apropos of Jakob Boehme), a text is only a picnic where the author brings the words and the reader brings the sense.

Even if that were true, the words brought by the author are a rather embarrassing bunch of material evidences that the reader cannot pass over in silence, or in noise. If I remember correctly, it was in this country [England] that somebody suggested, years ago, that it is possible to do things with words. To interpret a text means to explain why these words can do various things (and not others) through the way they are interpreted. But if Jack the Ripper told us that he did what he did on the grounds of his interpretation of the Gospel according to Saint Luke, I suspect that many reader-oriented critics would be inclined to think that he read Saint Luke in a pretty preposterous way. Non-reader-oriented critics would say that Jack the Ripper was deadly mad — and I confess that, even though feeling very sympathetic with the reader-oriented paradigm, and even though I read David Cooper, Ronald Laing, and Felix Guattari, much to my regret I would agree that Jack the Ripper needed medical care.

I understand that my example is rather farfetched and that even the most radical deconstructionist would agree (I hope, but who knows?) with me. Nevertheless I think that even such a paradoxical argument must be taken seriously. It proves that there is at least one case in which it is possible to say that a given interpretation is a bad one. In terms of Karl Popper’s theory of scientific research, this is enough to disprove the hypothesis that interpretation has no public criteria (at least statistically speaking).

One can object that the only alternative to a radical reader-oriented theory of interpretation is the one extolled by those who say that the only valid interpretation aims at finding the original intention of the author. In some of my recent writings I have suggested that between the intention of the author (very difficult to find out and frequently irrelevant for the interpretation of a text) and the intention of the interpreter who (to quote Richard Rorty) simply “beats the text into a shape which will serve his own purpose,” there is a third possibility. There is an intention of the text.

In the course of my second and third lectures I shall try to make clear what I mean by intention of the text (or intentio operis, as opposed to — or interacting with — the intentio auctoris and the intentio lectoris). During the present lecture I would like, on the contrary, to revisit the archaic roots of the contemporary debate on the meaning (or the plurality of meanings, or the absence of  any transcendental meaning) of a text. Let me, for the moment, blur the distinction between literary and everyday texts, as well as the difference between texts as images of the world and the natural world as (according to a venerable tradition) a Great Text to be deciphered.

Let me, for the moment, start an archaeological trip which, at first glance, would lead us very far away from contemporary theories of textual interpretation. You will see at the end that, on the contrary, most so-called postmodern thought will look very pre-antique.

In 1987 I was invited by the directors of the Frankfurt Book Fair to give an introductory lecture, and they proposed to me (probably believing that this was a really up-to-date subject) a reflection on modern irrationalism. I started by remarking that it is difficult to define irrationalism without having some philosophical concept of reason. Unfortunately, the whole history of Western philosophy serves to prove that such a definition is rather controversial. Any way of thinking is always seen as irrational by the historical model of another way of thinking, which views itself as rational. Aristotle’s logic is not the same as Hegel’s; ratio, ragione, raison, reason, and Vernunft do not mean the same thing.

One way of understanding philosophical concepts is often to come back to the common sense of dictionaries. In German I find that the synonyms of irrational are unsinnig, unlogisch, unvernünftig, sinnlos; in English they are senseless, absurd, nonsensical, incoherent, delirious, farfetched, inconsequential, disconnected, illogic, exorbitant, extravagant, skimble-skamble. These meanings seem too strong or too weak to define respectable philosophical standpoints. Nonetheless, they indicate something going beyond a limit set by a standard. One of the antonyms of unreasonableness (according to Roget’s Thesaurus) is moderateness. Being moderate means being within the modus — that is, within limits and within measure.

The word reminds us of two rules we have inherited from the ancient Greek and Latin civilizations: the logic principle of modus ponens and the ethical principle formulated by Horace: “est modus in rebus, sunt certi denique fines quos ultra citraque nequit consistere rectum [There is a measure for everything. There are precise limits one cannot cross].”

At this point I understood that the Latin notion of modus was rather important, if not for determining the difference between rationalism and irrationalism, at least for isolating two basic interpretative attitudes, that is, two ways of deciphering either a text as a world or the world as a text.

For Greek rationalism, from Plato to Aristotle and others, knowledge meant understanding causes. In this way, defining God meant defining a cause, beyond which there could be no further cause.

To be able to define the world in terms of causes, it is essential to develop the idea of a unilinear chain: if a movement goes from A to B, then there is no force on earth that will be able to make it go from B to A. In order to be able to justify the unilinear nature of the causal chain, it is first necessary to assume a number of principles: the principle of identity (A = A), the principle of noncontradiction (it is impossible for something both to be A and not to be A at the same time), and the principle of the excluded middle (either A is true or A is false and tertium non datur). From these principles we derive the typical pattern of thinking of Western rationalism, the modus ponens: “if p then q; but p: therefore q.”
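
(A blogger’s aside, not part of Eco’s lecture: the principles he lists are compact enough to be stated and machine-checked. Below is a minimal sketch in Lean 4; the `example` declarations are my own rendering, and the excluded middle is taken as a classical axiom rather than proved.)

```lean
-- A minimal sketch (my rendering, not Eco's): the principles of Greek
-- rationalism as Lean 4 statements, each verified by the compiler.

-- Principle of identity: A = A
example (A : Prop) : A = A := rfl

-- Principle of noncontradiction: nothing is both A and not-A
example (A : Prop) : ¬(A ∧ ¬A) := fun ⟨h, hn⟩ => hn h

-- Excluded middle: A or not-A (a classical axiom; tertium non datur)
example (A : Prop) : A ∨ ¬A := Classical.em A

-- Modus ponens: "if p then q; but p: therefore q"
example (p q : Prop) (hpq : p → q) (hp : p) : q := hpq hp
```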

Even if these principles do not provide for the recognition of a physical order to the world, they do at least provide for a social contract. Latin rationalism adopts the principles of Greek rationalism but transforms and enriches them in a legal and contractual sense. The legal standard is modus, but the modus is also the limit, the boundaries.

The Latin obsession with spatial limits goes right back to the legend of the foundation of Rome: Romulus draws a boundary line and kills his brother for failing to respect it. If boundaries are not recognized, then there can be no civitas.

Horatius becomes a hero because he manages to hold the enemy on the border — a bridge thrown up between the Romans and the Others. Bridges are sacrilegious because they span the sulcus, the moat of water delineating the city boundaries: for this reason, they may be built only under the close, ritual control of the Pontifex. The ideology of the Pax Romana and Caesar Augustus’s political design are based on a precise definition of boundaries: the force of the empire is in knowing on which borderline, between which limen, or threshold, the defensive line should be set up. If the time ever comes when there is no longer a clear definition of boundaries, and the barbarians (nomads who have abandoned their original territory and who move about on any territory as if it were their own, ready to abandon that too) succeed in imposing their nomadic view, then Rome will be finished and the capital of the empire can just as well be somewhere else.

Julius Caesar, in crossing the Rubicon, not only knows that he is committing sacrilege but knows that, once he has committed it, then he can never turn back. Alea iacta est. In point of fact, there are also limits in time. What has been done can never be erased. Time is irreversible. This principle was to govern Latin syntax. The direction and sequence of tenses, which is cosmological linearity, makes itself a system of logical subordinations in the consecutio temporum. That masterpiece of factual realism which is the ablative absolute establishes that, once something has been done, or presupposed, then it may never again be called into question.

In a Quaestio quodlibetalis, Thomas Aquinas (5.2.3) wonders whether “utrum Deus possit virginem reparare” — in other words, whether a woman who has lost her virginity can be returned to her original undefiled condition. Thomas’s answer is clear. God may forgive and thus return the virgin to a state of grace and may, by performing a miracle, give her back her bodily integrity. But even God cannot cause what has been not to have been, because such a violation of the laws of time would be contrary to his very nature. God cannot violate the logical principle whereby “p has occurred” and “p has not occurred” would appear to be in contradiction. Alea iacta est.

This model of Greek and Latin rationalism is the one that still dominates mathematics, logic, science, and computer programming. But it is not the whole story of what we call the Greek legacy. Aristotle was Greek but so were the Eleusinian mysteries. The Greek world is continuously attracted by apeiron (infinity). Infinity is that which has no modus. It escapes the norm.

Fascinated by infinity, Greek civilization, alongside the concept of identity and noncontradiction, constructs the idea of continuous metamorphosis, symbolized by Hermes. Hermes is volatile and ambiguous; he is father of all the arts but also god of robbers — young and old at the same time. In the myth of Hermes we find the negation of the principle of identity, of noncontradiction, and of the excluded middle, and the causal chains wind back on themselves in spirals, the after precedes the before, the god knows no spatial limits and may, in different shapes, be in different places at the same time.

Hermes is triumphant in the second century after Christ. The second century is a period of political order and peace, and all the peoples of the empire are apparently united by a common language and culture. The order is such that no one can any longer hope to change it with any form of military or political operation. It is the time when the concept of enkyklios paideia, of general education, is defined, the aim of which is to produce a type of complete man, versed in all the disciplines. This knowledge, however, describes a perfect, coherent world, whereas the world of the second century is a melting pot of races and languages, a crossroads of peoples and ideas, one where all gods are tolerated. These gods had formerly had a deep meaning for the people worshiping them, but when the empire swallowed up their countries, it also dissolved their identity: there are no longer any differences between Isis, Astarte, Demeter, Cybele, Anaitis, and Maia.

We have all heard the legend of the caliph who ordered the destruction of the library in Alexandria, arguing that either the books said the same thing as the Koran, in which case they were superfluous, or they said something different, in which case, they were wrong and harmful. The caliph knew and possessed the truth and he judged the books on the basis of that truth. Second-century Hermetism, on the other hand, is looking for a truth it does not know, and all it possesses is books. Therefore, it imagines or hopes that each book will contain a spark of truth and that they will serve to confirm each other. In this syncretistic dimension, one of the principles of Greek rationalist models, that of the excluded middle, enters a crisis. It is possible for many things to be true at the same time, even if they contradict each other.

But if books tell the truth, even when they contradict each other, then each and every word of theirs must be an allusion, an allegory. They are saying something other than what they appear to be saying. Each one of them contains a message that none of them will ever be able to reveal alone. In order to be able to understand the mysterious message contained in books, it was necessary to look for a revelation beyond human utterances, one which would come announced by divinity itself, using the vehicle of vision, dream, or oracle. But such an unprecedented revelation, never heard before, would have to speak of an as yet unknown god and of a still-secret truth. Secret knowledge is deep knowledge (because only what is lying under the surface can remain unknown for long). Thus truth becomes identified with what is not said or what is said obscurely and must be understood beyond or beneath the surface of a text. The gods speak (today we would say: the Being is speaking) through hieroglyphic and enigmatic messages.

By the way, if the search for a different truth is born of a mistrust of the classical Greek heritage, then any true knowledge will have to be more archaic. It lies among the remains of civilizations that the fathers of Greek rationalism had ignored. Truth is something we have been living with from the beginning of time, except that we have forgotten it. If we have forgotten it, then someone must have saved it for us and it must be someone whose words we are no longer capable of understanding. So this knowledge must be exotic. Carl Jung has explained how it is that once any divine image has become too familiar to us and has lost its mystery, we then need to turn to images of other civilizations, because only exotic symbols are capable of maintaining an “aura” of sacredness.
For the second century, this secret knowledge would thus have been in the hands either of the Druids, the Celtic priests, or wise men from the East, who spoke incomprehensible tongues.

Classical rationalism identified barbarians with those who could not even speak properly (that is actually the etymology of barbaros — one who stutters). Now, turning things around, it is the supposed stuttering of the foreigner that becomes the sacred language, full of promises and silent revelations. Whereas for Greek rationalism a thing was true if it could be explained, a true thing was now mainly something that could not be explained.

But what was this mysterious knowledge possessed by the barbarians’ priests? The widespread opinion was that they knew the secret links that connected the spiritual world to the astral world and the latter to the sublunar world, which meant that by acting on a plant it was possible to influence the course of the stars, that the course of the stars affected the fate of terrestrial beings, and that the magic operations performed on the image of a god would force that god to follow our volition. As here below, so in heaven above. The universe becomes one big hall of mirrors, where any one individual object both reflects and signifies all the others.

It is possible to speak of universal sympathy and likeness only if, at the same time, the principle of noncontradiction is rejected. Universal sympathy is brought about by a godly emanation in the world, but at the origin of the emanation there is an unknowable One, who is the very seat of the contradiction itself. Neoplatonic Christian thought will try to explain that we cannot define God in clear-cut terms on account of the inadequacy of our language. Hermetic thought states that our language, the more ambiguous and multivalent it is, and the more it uses symbols and metaphors, the more it is particularly appropriate for naming a Oneness in which the coincidence of opposites occurs. But where the coincidence of opposites triumphs, the principle of identity collapses.

As a consequence, interpretation is infinite. The attempt to look for a final, unattainable meaning leads to the acceptance of a never-ending drift or sliding of meaning. A plant is not defined in terms of its morphological and functional characteristics but on the basis of its resemblance, albeit only partial, to another element in the cosmos. It is vaguely like part of the human body; it has meaning because it refers to the body. But that part of the body has meaning because it refers to a star, and the latter has meaning because it refers to a musical scale, and this in turn because it refers to a hierarchy of angels, and so on ad infinitum.

(continues...)

Umberto Eco in his studio