How Information Gets Processed In Long Term Memory With Example Of Baddeley's Working Memory Model


Baddeley once said that humans are a species that can only survive by learning (Baddeley, 2009). Without learning, there would be no spoken language and no education, and we would have a very limited role in society. Learning and memory are inextricably intertwined. The capacity to learn necessitates the ability to retain knowledge acquired through experience, while memory stores the background knowledge against which new learning takes place (Kihlstrom, Dorfman & Park, 2007). This is reiterated by Sternberg (1999), who defined memory as the means by which we draw on our past experiences in order to use that information in the present.

Learning is a complex activity that involves many processes, such as attention and organization, before information is properly encoded in long-term memory. This essay therefore focuses on long-term memory: how different kinds of information get processed, and the best ways to learn and encode them so that we can use them in the future.

Long-term memory (LTM) is a 'repository for more permanent knowledge and skills and includes all things in memory that are not currently being used but which are needed to enable understanding' (De Bruyckere, Kirschner & Hulshof, 2015).

To understand how information becomes encoded in the long term, we first need to know how information is transferred from short-term memory (STM) to LTM. Baddeley and Hitch's (1974) multicomponent model played an important role in the study of cognition by providing a structure and a set of techniques applicable to the wide range of activities in which working memory (WM) might be important (Baddeley, 2010). Working memory is the set of mechanisms capable of retaining a small amount of information in an active state for use in an ongoing task such as reasoning or learning (Baddeley & Hitch, 1974).

Verbal information from the phonological loop and visual information from the visuospatial sketchpad are managed and manipulated with the help of the central executive, an attentional system. The episodic buffer acts as a link connecting the various subsystems to LTM and perception (Baddeley, 2009). It is capable of holding multidimensional chunks which may combine visual and auditory information, possibly also with smell and taste (Baddeley, 2010). Some researchers (Cowan, 1999; Oberauer, 2002) believe that WM is the activated part of LTM, with a subset of this activated memory corresponding to the focus of attention (Camos & Barrouillet, 2014). The importance of attention can be seen throughout the transfer of information through the multicomponent model. According to Pashler (1998), central processes can only take place in succession, so that subsequent processes are postponed. As soon as the focus of attention leaves the memory traces, their activation suffers a time-related decay (Camos & Barrouillet, 2014). Neisser's (1964) experiments showed that if stimuli are only partially processed, their record in memory is extremely fleeting. This is reiterated by other experiments (Baddeley, Chincotta & Adlam, 2001; Saeki & Saito, 2004) showing that truly learning something takes not only rehearsal but also attention.

Squire's (1992) classification of LTM proposed two types: explicit memory, which is open to intentional retrieval, and implicit memory, in which information is retrieved through performance (Baddeley, 2009). The techniques for explicit learning are quite different from those for implicit learning, as explicit learning involves an intention to learn. Attention, as discussed earlier, is essential to explicit learning. Other factors that affect our rate of learning are captured by Ebbinghaus's (1885) total time hypothesis and the importance of practice (Astin, 1993). The total time hypothesis holds that the more time is invested in learning, the greater the amount of material stored. 'Practice makes perfect' is heard countless times, and it has some truth behind it: the more distributed the practice (Ebbinghaus, 1885), and the more deliberate the practice (Ericsson et al., 1993), the more efficiently information is stored. The learning technique of expanding retrieval (Landauer & Bjork, 1978) combines different learning methods on the assumption that, as an item becomes better learned, the practice-test interval can be gradually extended to the longest interval at which the item can still reliably be recalled. The value of repeated studying and testing was also shown by Roediger & Karpicke (2006), who found that repetition enhances organizational processes and supports the use of effective mediators during recall, since persistent errors are eliminated when direct feedback is given (Pashler et al., 2007). All of these methods and techniques help us learn information that will be stored in LTM. But if this information is stored in LTM, why is it that we remember some pieces of information better than others?
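The expanding-retrieval idea lends itself to a short sketch in code. The following is a toy illustration of the general scheduling principle, not Landauer and Bjork's actual procedure; the base interval and growth factor are arbitrary assumptions chosen for clarity.

```python
# Toy sketch of an expanding-retrieval schedule: each successful recall
# lengthens the practice-test interval, while a failed recall resets it
# to a short base interval. All numeric values are illustrative only.

def next_interval(current_interval: float, recalled: bool,
                  growth: float = 2.0, base: float = 1.0) -> float:
    """Return the next practice-test interval in days."""
    if recalled:
        # Item is better learned, so wait longer before the next test.
        return current_interval * growth
    # Recall failed, so restart with a short interval.
    return base

def schedule(outcomes, base: float = 1.0):
    """Given a sequence of recall outcomes, list the interval used before each test."""
    intervals = []
    interval = base
    for recalled in outcomes:
        intervals.append(interval)
        interval = next_interval(interval, recalled, base=base)
    return intervals

# Four successful recalls: intervals expand 1 -> 2 -> 4 -> 8 days.
print(schedule([True, True, True, True]))  # [1.0, 2.0, 4.0, 8.0]
```

The doubling rule here is just one convenient choice; the key property is that intervals expand while recall succeeds and contract after a failure, keeping each test near the longest gap the learner can still bridge.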

Many learning techniques involve the repetition of information, which is what Craik and Lockhart (1972) describe as Type 1 processing; Type 2 processing, by contrast, involves deeper analysis of the stimulus. Massaro (1970) suggested that memory for an item is related to the amount of perceptual processing it receives, and deeper analysis usually involves longer processing times (Craik & Lockhart, 1972). This substantiates Ebbinghaus's total time hypothesis: the subject engages in Type 2 processing when more time is taken to comprehend and understand the information. However, the rehearsal must be elaborative rehearsal rather than maintenance rehearsal, which merely processes information at the same level (Baddeley, 2009). Elaborative rehearsal involves linking the rehearsed material to other material in memory, which enhances long-term learning (Craik & Lockhart, 1972; Glenberg, Smith & Green, 1977). For example, when the biology fact 'mitochondria is the powerhouse of the cell' was turned into a joke that spread across the internet in songs and posts, many people came to remember the phrase permanently because it was associated with other memories, turning it into semantic memory.

This can be explained by Craik and Lockhart's (1972) Levels of Processing (LOP) framework, in which stimuli undergo different depths of processing during encoding, producing different levels of memory performance (Ekuni, Vaz & Bueno, 2011). Greater 'depth' implies a greater degree of semantic or cognitive analysis, and the deeper the processing, the better the subsequent memory. After a stimulus has been recognized, it may undergo further processing by enrichment or elaboration (Craik & Lockhart, 1972), during which the information can be matched against stored abstractions from previous learning. Tulving & Madigan (1970) termed this 'elaboration coding': once the information is recognized, it may trigger associations, images, or stories based on one's experience with the word, semantically coding the information. Relatedly, Bartlett's (1932) 'effort after meaning' experiments found that the stories participants remembered were always shorter, more coherent, and tended to fit more closely with the participant's viewpoint than the original story (Baddeley, 2009). This is due to socially and culturally developed schemas, which are used to help people make sense of new material and eventually store and recall it. Schemas allow our knowledge of words and concepts to interact successfully and flexibly with the world around us (Baddeley, 2009). For instance, researchers (Deese, 1950; Jenkins & Russell, 1952) found that highly associated words and lists are easier to recall, since some words have strong interword associations (like bread and butter), and this could be due to our schemas. Another explanation is Collins and Loftus's (1975) spreading activation model, which holds that semantic memory is organized by semantic relatedness, as measured by relatedness ratings of word pairs. Bread and butter are therefore always associated together because they are highly semantically related. Bower et al. (1975) reinforce this by stating that memory is aided whenever contextual cues arouse appropriate schemata: activating a concept in the model spreads activation through the network, and the most semantically related concepts are activated most strongly. It has also been found that words that evoke a visual image are better remembered, because imageable words offer two routes to retrieval, visual and verbal, increasing the chances of recall (Paivio, 1969; Bourassa & Besner, 1994). Semantically coded information is therefore advantageous to learning, as it allows a richer and more elaborate code that is more readily retrievable (Craik & Tulving, 1975). Another study (Morris, Bransford & Franks, 1977) showed that retrieval is best after semantic processing, but only when the task requires memory for meaning, confirming that deep and meaningful learning usually leads to long-term encoding.
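The spreading-activation idea can be made concrete with a small sketch: a toy semantic network in which activating one concept passes activation to its neighbours in proportion to their relatedness, decaying with each hop. The network, the weights, and the decay factor below are invented for illustration and are not taken from Collins and Loftus's model.

```python
# Toy semantic network: edge weights stand in for semantic relatedness ratings.
# These values are illustrative assumptions, not empirical data.
network = {
    "bread":  {"butter": 0.9, "flour": 0.6},
    "butter": {"bread": 0.9, "milk": 0.5},
    "flour":  {"bread": 0.6},
    "milk":   {"butter": 0.5},
}

def spread(start: str, decay: float = 0.5, hops: int = 2) -> dict:
    """Spread activation outward from `start` for a fixed number of hops."""
    activation = {start: 1.0}   # the attended concept is fully active
    frontier = {start: 1.0}     # nodes activated on the previous hop
    for _ in range(hops):
        newly_activated = {}
        for node, act in frontier.items():
            for neighbour, weight in network.get(node, {}).items():
                # Activation passed on is scaled by relatedness and decay.
                passed = act * weight * decay
                if passed > activation.get(neighbour, 0.0):
                    newly_activated[neighbour] = passed
        for node, act in newly_activated.items():
            activation[node] = max(activation.get(node, 0.0), act)
        frontier = newly_activated
    return activation

acts = spread("bread")
# Strongly related "butter" ends up more active than the weaker "flour",
# which in turn is more active than the two-hops-away "milk".
```

The point of the sketch is the ordering, not the numbers: concepts closer in relatedness to the activated node receive more activation and should therefore be the easiest to retrieve, matching the bread-and-butter example above.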

However, one criticism of the LOP framework comes from the principle of transfer-appropriate processing, which shows that deeper processing does not always lead to better performance. For a test to reveal prior learning, the processing requirements at retrieval should match the processing conditions at encoding (Baddeley, 2009), and the LOP model makes no reference to retrieval conditions (Ekuni, Vaz & Bueno, 2011). A student might study hard for a test but still perform badly at retrieval because they studied the wrong type of knowledge. In the recall experiments discussed earlier, participants were poor at recalling words about which they had made visual or phonological judgments but good at remembering words about which they had made meaning-based judgments, which might reflect a bias in the way the items were tested. The efficiency of a learning method should therefore be judged only in the context of how memory is subsequently tested (Morris et al., 1977; Fisher & Craik, 1977).

As discussed earlier, when we remember past information and experiences, we typically have a conscious awareness of the events and knowledge involved. However, memory for some aspects of the past can be expressed without any awareness that we are 'remembering'; this is what is called implicit memory (Schacter, 1998). Like explicit memory, implicit memory involves a form of learning, but it is reflected in performance rather than in overt remembering (Baddeley, 2009). Another difference is that it is non-episodic learning of complex material in an incidental manner, without awareness of what has been learned (Seger, 1994). This shows in many of our daily activities, such as walking properly without knowing the mechanical rules our bodies follow, or speaking without grammatical errors even though most of us cannot state the grammatical rules involved. Implicit learning illustrates that deep and meaningful learning is not the only route to durable long-term encoding. Among our most deeply encoded memories are our motor skills, which we acquire through procedural learning, such as how to ride a bicycle. Our ability to acquire motor skills quickly was shown by Nissen and Bullemer (1987) through decreasing reaction times as participants implicitly picked up a repeating sequence. Such serial reaction time task (SRTT) experiments also show that implicit learning is less affected by age than learning based on explicit tasks (Frensch & Rünger, 2003). On occasion, however, our motor skills can fail under stress (Masters, 1992), as when skilled sportsmen 'choke' under pressure or we have a 'brain fart' during an exam, temporarily hindering retrieval of information from LTM. Priming, another category of implicit learning, influences the subsequent perception or processing of an item that was presented earlier. Performance on certain tasks is better after the relevant prior experience than in its absence, as shown in experiments by many researchers (Graf et al., 1984; Srinivas & Roediger, 1990). Indeed, Tulving, Schacter, and Stark (1982) studied the durability of explicit and implicit verbal learning and found that information recalled without priming was lost after an hour, whereas primed information was retained when participants were given a fragment of the word.

In some cases of brain injury, deep and meaningful learning will not determine how well information is encoded in the long term. The hippocampus is known to be important for episodic memory (Baddeley, 2009). Experiments (Morris et al., 1982; Davis & Butcher, 1990) show that damage to the hippocampus inhibits long-term potentiation (LTP), the process by which long-term learning establishes links within cell assemblies (Hebb, 1949). The N-methyl-D-aspartate (NMDA) receptor also plays an important role in LTP and is necessary for the synaptic change assumed to underpin learning (Shors & Matzel, 1997). Any damage to or inhibition of these areas therefore disrupts learning, no matter how deep and meaningful it is. Amnesic patients have severe deficits in recall and recognition but can exhibit normal priming effects (Shimamura & Squire, 1984). Experiments (Milner, 1962; Cohen, 1981) have shown that their skill learning remains intact while their explicit memory is impaired, so they cannot make any conscious recollection. Hence, it again depends on the type of test the participants take whether they appear to have deeply encoded the learned material: amnesic patients perform well in paired-associate learning for related word pairs (Shimamura & Squire, 1984) but may perform poorly in free recall. These are just a few of the brain areas known to affect certain types of learning and retrieval, showing that long-term encoding can be impaired by brain injury, in which case deep and meaningful learning will not help. Many other factors affect long-term encoding, such as the role of sleep in memory consolidation (Baars & Gage, 2010), stress (Luethi, Meier & Sandi, 2009), depression (Roediger & McDermott, 1992), and even our diet (Kanoski et al., 2010).

Updated: Oct 11, 2024

How Information Gets Processed In Long Term Memory With Example Of Baddeley's Working Memory Model. (2024, Feb 21). Retrieved from https://studymoose.com/how-information-gets-processed-in-long-term-memory-with-example-of-baddeleys-working-memory-model-essay
