

Reading begins with the knowledge and understanding of words: word recognition and the ability to apply sound and meaning to a word. Understanding of words then contributes to the comprehension of text. Many outside factors can influence word recognition; one factor examined in this study is musical training.
Although music and language differ in several respects, they also share a number of similarities (Slevc & Okada, 2015; McMullen & Saffran, 2004). These similarities have prompted examination of the relationship between music and language, and it is well documented that musical training affects reading (Benz, Sellaro, Hommel, & Colzato, 2016). Although previous research has suggested that music affects reading, a gap remains concerning what aspects of word recognition musical training specifically influences, and what the neural underpinnings of that influence are.

Three components of word reading are phonology, the sounds of a word; orthography, its visual form; and semantics, its meaning (Price, 1998). There are thought to be two routes to identifying a word: lexical and sublexical. The lexical route is thought to access a mental lexicon of whole words, without the need to sound out words phonologically. When a whole word is accessed in the lexicon, its semantic representation may or may not be accessed as well. In contrast, the sublexical route involves activating the pronunciation of a word (whether a regular word or a nonword) through grapheme-to-phoneme conversion (Joubert et al., 2004), and thus the activation of sub-word units (i.e., letters and sound components).


Cross-linguistic studies have also suggested that reading strategies (e.g., lexical vs. sublexical) may differ across orthographies because of design features of the writing systems, one example being grain size (Ziegler & Goswami, 2005). Grain size refers to the size of the phonological unit that maps onto the smallest orthographic unit in a writing system (Ziegler & Goswami, 2005). Different sized phonological units are thought to influence phonological skills (Muter, Hulme, Snowling, & Stevenson, 2004) and the ability to manipulate both larger phonological units and individual sound units (Dege & Schwarzer, 2011).
For example, Chinese has a large grain size: characters map directly onto whole syllables or words, and words are processed at the whole-word level (Yum, Law, Su, Lau, & Mo, 2014). As a morphosyllabic language, Chinese is composed of characters that do not map onto smaller sound units but usually represent a whole word (Nelson, Liu, Fiez, & Perfetti, 2009). This whole-word processing is thought to occur at the lexical level (Joubert et al., 2004; Kim, Taft, & Davis, 2004; Monsell, Patterson, Graham, Hughes, & Milroy, 1992). In a syllabic writing system, the orthographic units map onto syllables (a medium grain size). In contrast, in an alphabetic writing system like English, the orthographic units (letters) map onto phonemes, the smallest units of sound that distinguish meaning in a language, which constitutes a small grain size. In English, a reader can learn a word as a whole or by sounding it out (Joubert et al., 2004); thus, English can be processed either lexically or sublexically.

Cross-linguistic studies have found support for differences in reading styles across writing systems in the neural pathways used to process different languages (Nelson, Liu, Fiez, & Perfetti, 2009). In English, the left hemisphere, specifically the visual word form area (VWFA), is activated when reading words, whereas in Chinese there is more bilateral activation of the VWFA (Nelson, Liu, Fiez, & Perfetti, 2009). One study created two artificial orthographies to compare laterality between a syllabic and an alphabetic writing system; the results suggested that the artificial syllabic writing system elicited more bilateral activity than the alphabetic one (Hirshorn et al., 2016). Another study (Yoncheva, Blau, Maurer, & McCandliss, 2010) used artificial orthographies to examine differences in the laterality of the N170, a component of the event-related potential (ERP) that responds to visual stimuli, particularly words (Maurer, Zevin, & McCandliss, 2008). Comparing words trained as whole words with words trained through grapheme-to-phoneme rules, the results suggested stronger left-lateralization for training focused on phonemes and stronger right-lateralization for training focused on whole words (Yoncheva, Blau, Maurer, & McCandliss, 2010).

While it is clear that reading styles differ between writing systems, individual differences are also possible within English readers, owing to the flexibility of the different grain sizes within English. However, there is currently no widely used behavioral marker of reading style that distinguishes English readers who use one reading style from those who use the other. One measure proposed to identify a more lexical-level (or holistic) way of reading is word inversion sensitivity. Inversion sensitivity has its roots in the face inversion effect, which posits that faces are processed holistically (Farah, Tanaka, & Drain, 1995). The face inversion effect describes the finding that faces, unlike other objects, are processed holistically rather than part by part, and that face recognition is therefore orientation sensitive (Farah, Tanaka, & Drain, 1995). This effect has been extended to words to examine differences in reading styles (Pae et al., 2016; Pae & Lee, 2014; Hirshorn et al., in prep). Word inversion sensitivity has been shown to distinguish reading styles across writing systems, such as Chinese vs. Korean (Pae et al., 2016), in Chinese-English bilinguals (Ben-Yehuda, Hirshorn, Simcox, Perfetti, & Fiez, submitted), and across individual differences within English (Hirshorn et al., in prep). Furthermore, there is evidence that inversion sensitivity is related to VWFA laterality (Carlos, Hirshorn, Durisko, Fiez, & Coutanch, submitted).

An important question remains outstanding: where do individual differences in reading style in English come from? One intriguing potential source is musical training. It is well documented that music positively influences reading (Proverbio, Manfredi, Zani, & Adorni, 2013; Benz, Sellaro, Hommel, & Colzato, 2016; Besson, Schon, Moreno, Santos, & Magne, 2007; Anvari, Trainor, Woodside, & Levy, 2002; Magne, Schon, & Besson, 2006; Gromko, 2004; Moritz, Yampolsky, Papadeli, Thomson, & Wolf, 2013; Marie, Besson, & Magne, 2011; Mongelli, Dehaene, Vinckier, Peretz, Bartolomeo, & Cohen, 2017; Bouhali, Mongelli, & Cohen, 2017). This influence is thought to arise from the similarities between music and language and their underlying cognitive processes. Both are complex: music encompasses pitch, melody, rhythm, and harmony; language encompasses phonology, semantics, syntax, morphology, and pragmatics (Besson, Chobert, & Marie, 2011). Shared properties include an array of different pitches and sound structures (prosodic cues), and parallel basic units: language has single phonemes and music has single notes (McMullen & Saffran, 2004). Furthermore, both music and language are hierarchical and governed by rules (Slevc & Okada, 2015). Finally, the individual units of both music (notes) and language (phonemes) can be identified even when they change in pitch, volume, tempo, or timbre, indicating perceptual constancy (Dowling & Harwood, 1986, as cited in Anvari et al., 2002).

Finally, there is evidence that musical training affects the neural underpinnings of reading (Bouhali, Mongelli, & Cohen, 2017; Mongelli et al., 2017; Proverbio et al., 2013). In one study comparing musicians and nonmusicians, the N170 was recorded during note-selection and letter-selection tasks (Proverbio et al., 2013). The results suggested that musicians showed bilateral activation during both tasks, whereas nonmusicians showed left-lateralized activity during the letter-selection task. In two other studies comparing musicians and nonmusicians, fMRI was recorded during a picture repetition task (stimuli included faces, houses, and music); the results showed that musicians had more left-hemisphere activation than nonmusicians when looking at words (Mongelli et al., 2017; Bouhali, Mongelli, & Cohen, 2017). The conflicting evidence on laterality in musicians compared to nonmusicians raises further questions about how musical expertise could affect reading styles in English readers.

