Behaviorism – 1. Historical Progress


A number of discoveries and theoretical developments have contributed to the progress of behaviorism from its dawn to the present day. The following are the most widely recognized milestones.



1. PAVLOV’S DOGS (Pavlov, 1927)

Pavlov’s experiments with dogs are among the best known in the history of science. They laid the foundation for the concept of CONDITIONING, which is now a significant method in behavioral research and in applied fields that adopt behaviorism.

The discovery occurred in the course of Pavlov’s investigations of dogs’ reflex actions, such as salivating at food. Pavlov noticed that the dog sometimes salivated as soon as he entered the room, even when no food was presented.


Image 1. A simplified visualization of Pavlov’s conditioning experiment using the sound of the tuning fork as the CS.
a) Food was fed to the dog, to familiarize it with the food.
b) Food was presented, to ensure its sight evoked the UR (salivation).
c) Food was presented to the dog while the tuning fork was struck. The UR occurred. These combined acts were repeated several times.
d) The tuning fork was struck without the food presented. The CR occurred. Conditioning was achieved.
e) After several repetitions of (d), the dog no longer salivated at the sound. Extinction was achieved.

To investigate the phenomenon, Pavlov brought food to the dog while at the same time presenting a factor to which the dog did not particularly respond (e.g. a light, the sound of a tuning fork). After several repetitions of this combination, Pavlov presented only the additional factor, and the dog salivated at it as well. This process was defined by Pavlov as conditioning. The food was called an unconditioned stimulus (US), the salivation at the food an unconditioned response (UR), the additional factor a conditional/conditioned stimulus (CS), the dog’s original attitude toward the CS a neutral response (NR), and the salivation at the mere presentation of the CS a conditioned response (CR).

The basic procedure of Pavlov’s conditioning can be formulated as follows:

(1)       US               ———->    UR
(2)       CS + US     ———->    UR         (repeated several times)
(3)       CS               ———->    CR          (conditioning achieved)

It was noticed that, at a later point following repetitions of Step (3), the dog stopped salivating at the sound, reverting to its NR toward the CS. This phenomenon is termed extinction.

Pavlov’s conditioning process is now called classical conditioning, as distinguished from the operant conditioning coined and demonstrated by Skinner (see Section 5 below).
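The acquisition–extinction pattern described above can be illustrated with a small simulation. The sketch below uses the Rescorla–Wagner update rule, a later formal model of Pavlovian conditioning (not part of Pavlov’s own work); the learning rate and trial counts are illustrative assumptions, not experimental values.

```python
# A minimal sketch of classical conditioning and extinction using the
# Rescorla-Wagner update rule: on each trial the associative strength V
# of the CS moves toward lambda (1 when the US is present, 0 when absent).

def rescorla_wagner(trials, v=0.0, alpha=0.3):
    """Return the associative strength V after each trial.

    trials: list of booleans, True if the US (food) accompanies the CS.
    alpha:  learning rate (illustrative value).
    """
    history = []
    for us_present in trials:
        lam = 1.0 if us_present else 0.0
        v += alpha * (lam - v)   # delta-V = alpha * (lambda - V)
        history.append(round(v, 3))
    return history

# Acquisition: CS + US paired for 10 trials (Steps 1-3 in the text).
acquisition = rescorla_wagner([True] * 10)
# Extinction: CS presented alone, starting from the acquired strength.
extinction = rescorla_wagner([False] * 10, v=acquisition[-1])

print(acquisition)  # V climbs toward 1: the CR (salivation at the sound) forms
print(extinction)   # V decays toward 0: extinction
```

The same two-phase loop reproduces the qualitative shape of Pavlov’s observations: a negatively accelerated rise in responding during paired trials, and a gradual decline once the US is withheld.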


2. PUZZLE BOX (Thorndike, 1898)

The Puzzle Box was an instrument designed by Thorndike for studying learning in animals, mainly cats and dogs. It was built as a cage with a simple locking mechanism (see Image 2).


Image 2. A simplified visualization of the Puzzle Box based on Thorndike’s model (1898).


Image 3. A simplified visualization of Thorndike’s Puzzle Box study on a cat.
a) A fish was used to tempt the cat into escaping.
b) The cat pushed the bar, unlocking the gate on its side.
c) The cat pulled the string, unlocking the gate on its top.
d) The cat pushed down the gate and escaped the box.

The test animal was locked in the box, while a tempting piece of food was presented to it beyond its reach. While trying to get out of the box for the food, the animal would accidentally release the locks. After every escape, the animal was rewarded with the food and then placed into the cage again for another attempt. According to Thorndike’s report (1898), the time it took the animal to escape gradually decreased, and its movements became more focused over attempts, suggesting that the animal actually learned, little by little, how to get out of the cage.


3. LAW OF EXERCISE & LAW OF EFFECT (Thorndike, 1905; Thorndike, 1912)

Drawing on his studies of animal learning, Thorndike introduced two laws of habit formation in learning: the Law of Exercise and the Law of Effect.

In simple words, the Law of Exercise states that the more repeatedly or strongly a behavior is connected to a situation, the more likely that behavior is to be performed in response to the same situation in the future. Though it may seem like common sense, this law describes the role of practice in learning. Take as an example the animal’s improvement in unlocking Thorndike’s Puzzle Box (1898) over many attempts: the animal became familiar with the movements that linked most consistently to escape, and concentrated on those movements in later attempts, which made it quicker and quicker at escaping.

The Law of Effect, meanwhile, refers to consequences: the more satisfying a consequence associated with a behavior is, the more likely that behavior is to be performed again; likewise, the more dissatisfying a consequence associated with a behavior is, the less likely that behavior is to be performed again. Briefly put, pleasant consequences of a behavior encourage it, while unpleasant ones discourage it. This law was reflected in Thorndike’s studies with cats (1898) in how the animals stopped making ineffective movements toward escape, such as trying to squeeze themselves through the gap between the bars, after these movements left them frustrated. As emphasized by Thorndike (1912), the Law of Effect is significant in education as a fundamental principle for building desired habits and eliminating undesired behaviors.
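As an illustration only (the action names and all numbers are hypothetical, not Thorndike’s data), the Law of Effect can be sketched as a trial-and-error learner whose action tendencies are strengthened by satisfying outcomes and weakened by annoying ones:

```python
# A toy cat-in-a-box learner: each action has a "connection strength",
# satisfying outcomes strengthen it, annoying outcomes weaken it.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

ACTIONS = ["push bar", "squeeze through bars", "bite bars"]
EFFECTIVE = "push bar"  # only this action opens the box in our toy setup

def run_trials(n_trials=500, lr=0.1):
    # Start with equal tendencies toward every action.
    strength = {a: 1.0 for a in ACTIONS}
    for _ in range(n_trials):
        # Choose an action with probability proportional to its strength.
        action = random.choices(ACTIONS, weights=[strength[a] for a in ACTIONS])[0]
        reward = 1.0 if action == EFFECTIVE else -0.5  # satisfying vs. annoying
        # Law of Effect: adjust the connection strength by the outcome
        # (floored at a small positive value so every action stays possible).
        strength[action] = max(0.05, strength[action] + lr * reward)
    return strength

s = run_trials()
print(max(s, key=s.get))  # the effective action comes to dominate with practice
```

The ineffective actions shrink toward the floor while the rewarded one keeps growing, mirroring how Thorndike’s cats abandoned futile movements and settled on the ones that worked.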


Image 4. A simplified visualization of Watson and Rayner’s Little Albert experiment.
a) The rat was presented to Little Albert.
b) As the baby moved to touch the rat, the loud noise was made.
c) The baby jumped and hid his face in the mattress.
d) The repeated combination of the rat and noise eventually made Albert cry.
e) At one point, the baby cried at the mere sight of the rat.


4. LITTLE ALBERT (Watson & Rayner, 1920)

Watson and Rayner’s experiment with Little Albert was one of the earliest demonstrations of classical conditioning in humans. The experiment further explored the effect of associating a neutral factor with a response-evoking stimulus. While Pavlov’s conditioning dealt with a reaction of desire (the dog’s salivation), Watson and Rayner’s looked into a reaction of fear.

Little Albert was the alias the researchers used for the participant in this experiment, an 11-month-old boy at the hospital where the laboratory was located. Prior to the study, when Little Albert was 9 months old, the experimenters administered emotional tests to him to see whether he would react in fear when confronted with certain stimuli, including a white rat, a rabbit, a dog, a monkey, hairy and non-hairy masks, cotton wool, burning newspaper, and a loud noise. His mother and the hospital attendants reported from their casual observations that the boy had never displayed fear or anger, or cried. The test results confirmed that the boy showed no fear reaction to the presented stimuli – except the loud noise, created by striking a hammer against a steel bar. On hearing the sound for the first time, Little Albert was startled and raised his arms, and on the third time, he burst out crying.

When Little Albert reached 11 months of age, the experiment began, with a white rat as the CS and the loud noise as the US.

First, the rat was shown to Little Albert, and he moved his hand to touch it. At that moment, the hammer was struck against the bar. Little Albert jumped and hid his face in the mattress he was seated on.

One week after the first trial, the rat was shown to Little Albert again. The baby withdrew his hand as the rat touched him. The combination of rat and noise was then presented again and again, at intervals. After every trial the baby’s fear response intensified a little – falling over to one side, whimpering, and eventually crying. By the end of the trials with the rat, Little Albert cried at the mere sight of the animal.

The experiment continued until shortly after Little Albert turned one year old, revealing that he had also developed a fear of things resembling the rat: a rabbit, a dog, a fur coat, cotton wool and a Santa Claus mask. The intensity of the response varied between these stimuli, but the general reaction was to avoid them – something the boy had never displayed in the earlier emotional tests. Dissimilar objects, such as toy blocks or newspaper, meanwhile, did not provoke Little Albert at all; the boy touched and played with them normally.

Little Albert was discharged from the hospital before an extinction process could be applied to his conditioned fear. Because of this, further examination of Little Albert’s case was never completed, and the study remains controversial to this day for its ethical problems.



5. OPERANT CONDITIONING (Skinner, 1938)

In 1938, B. F. Skinner introduced the concept of operant conditioning (sometimes referred to as instrumental conditioning). In his book The Behavior of Organisms: An Experimental Analysis, he detailed the distinctions between Pavlovian-type conditioning – now classical conditioning – and operant conditioning. Basically, the two can be distinguished by the type of behavior they manipulate, the reinforcing stimuli they employ, and the laws of conditioning and extinction they follow.


   Image 5. A simplified visualization of Skinner Box Study with rats.

Firstly, Pavlovian-type conditioning works with respondent behaviors, which are elicited by prior, specific stimuli and governed by static laws. For instance, the dog’s salivation at the sight of food is driven by the natural instinct of eating. The behaviors operant conditioning is concerned with (termed operant behaviors by Skinner), by contrast, appear spontaneous and are not tied to any known stimuli.

The second difference between the two types of conditioning – the reinforcing stimuli – concerns when the reinforcing event occurs relative to the conditioned behavior. In Pavlovian-type conditioning, the reinforcing stimulus is presented prior to the behavior. In operant conditioning, the reinforcing stimulus is introduced following the behavior.

Thirdly, classical conditioning increases the strength of the reflex underlying the behavior, causing the behavior to be produced in response to other stimuli besides the reinforcing one, while operant conditioning increases the strength of the behavior itself, causing it to be repeated to obtain the reinforcing stimulus.

The theory and application of operant conditioning have been expanded throughout the history of behavioral research, including many studies by Skinner himself.

6. SKINNER BOX (Skinner, 1938)

The operant conditioning chamber, also known as the Skinner Box, was a device designed by Skinner for testing the learning capacity of rats as part of his research on operant conditioning. It had the overall form of a cage (much like Thorndike’s Puzzle Box) with an integrated mechanism that supplied the caged animal with food when it managed to perform a certain behavior.

In a typical study, a rat placed in the Skinner Box (see Image 5), after running around looking for a way to escape, would accidentally push a lever fixed on a wall. Pressing the lever operated the food mechanism set outside the box, which delivered food pellets through a hole into a small tray inside the box. After repeated food deliveries, the rat gradually formed the habit of lever pressing. This habit formation was the result of operant conditioning, with food used as the reinforcing stimulus.
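The lever-pressing loop can be sketched as a toy simulation, under the assumption of a simple probabilistic response model (all parameters are illustrative, not values from Skinner, 1938): a response-contingent reinforcer strengthens the press, and withholding it produces operant extinction.

```python
# A toy Skinner Box: the rat presses the lever with some probability;
# food after a press raises that probability, no food lowers it.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulate(n_steps, reinforce, p_press=0.05, lr=0.08):
    """Return lever presses per 100-step block and the final press probability.

    reinforce: whether a press is followed by food (True) or not (False).
    """
    presses_per_block, presses = [], 0
    for step in range(1, n_steps + 1):
        if random.random() < p_press:          # the rat emits a lever press
            presses += 1
            if reinforce:                      # food follows the press...
                p_press = min(0.95, p_press + lr * (1 - p_press))
            else:                              # ...or is withheld (extinction)
                p_press = max(0.01, p_press - lr * p_press)
        if step % 100 == 0:
            presses_per_block.append(presses)
            presses = 0
    return presses_per_block, p_press

acquisition, p_end = simulate(500, reinforce=True)
extinction, _ = simulate(500, reinforce=False, p_press=p_end)
print(acquisition)  # press counts climb as the habit forms
print(extinction)   # press counts fall once food stops
```

The contrast with the classical-conditioning sketch is the key point of Skinner’s distinction: here the reinforcer arrives after the behavior and strengthens the behavior itself, rather than preceding it and strengthening a reflex.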


7. BOBO DOLL EXPERIMENT (Bandura, 1965)

The 1965 study by Albert Bandura shed light on another typical behavior, also significant in learning, among animals and humans alike: imitation. The hypothesis was that one would imitate a behavior more readily if one was shown, or directly given, positive reinforcement for that behavior.


          Image 6. A simplified visualization of Bobo Doll Experiment

66 kindergarten children were selected as participants for the experiment. Randomly divided into three groups in separate rooms, they all watched a clip showing an adult violently punching a Bobo doll. At the end of the clip shown in Room 1, the adult was praised as a “strong champion” and given sweets and drinks by another adult. At the end of the clip in Room 2, the adult was reprimanded and spanked by another adult for being “a bully”. The clip in Room 3 ended without any consequence to the adult.

Each child was then led into a playroom where a range of toys was displayed, including a Bobo doll, balls, cars, dolls, plastic animals, and so on. The children were instructed to play with the toys as they pleased, and then left alone in the room. Observed by the experimenters through a one-way mirror, each child’s activities were assessed with respect to how closely they matched the adult’s behaviors in the clip.

After that, the experimenter entered the room again, bringing attractive-looking drinks and sticker books as incentives, and told the children that they would be given a sticker and a drink treat every time they successfully imitated one of the adult’s behaviors in the clip. The children’s responses were again observed and assessed.

The results revealed that Groups 1 and 3, in both the incentive session and the non-incentive playroom, performed significantly more behaviors matching the adult’s in the clip than the children in Group 2. This suggested that the consequence to the model influenced the watchers’ imitation: those who saw the model rewarded, or facing no consequence, imitated the model more than those who saw the model punished. The fact that the levels of imitation recorded were higher in the incentive sessions than in the non-incentive playroom also fitted the prediction that rewards would encourage imitation.


8. SOCIAL LEARNING THEORY (Bandura, 1969; Bandura, 1971)

Along with classical conditioning and operant conditioning, social learning theory forms an important theoretical underpinning of behaviorism. Generalized from studies of modeling and imitation, including Bandura’s Bobo Doll experiment, the theory postulates that learning can take place in social contexts, in which the learner picks up a modeled behavior by attentively observing, remembering and reproducing it. Unlike conditioning, social learning does not result from the association between the behavior and reinforcing stimuli, but rather from the guidance of symbolic representations which the learner detects and absorbs from the model. An example is how some participants in the Bobo Doll experiment imitated the violent behaviors of the model adult even before they were encouraged to do so with incentives – or, more familiarly, how teenagers emulate the celebrity idols they see on television.

Although specific reinforcing stimuli are not essential for social learning to occur, such stimuli – namely, rewards and motivations – can help bring the learned behavior into overt expression, while discouraging factors inhibit it. A teenager who enjoys copying the styles of celebrity idols may restrain such behaviors at home in the presence of her disapproving parents, but will boldly exhibit them when she is among like-minded peers.



Bandura, A. (1965). Influences of models’ reinforcement contingencies on the acquisition of imitative responses. Journal of Personality and Social Psychology, 1 (6), 589 – 595.
Bandura, A. (1969). Social learning theory of identificatory processes. In D. A. Goslin (Ed.), Handbook of Socialization Theory and Research (pp.213-262). Chicago, IL: Rand McNally.
Bandura, A. (1971). Social Learning Theory. New York, NY: General Learning Press.
Pavlov, I. P. (1927). Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex (G. V. Anrep, Trans.). London, UK: Oxford University Press.
Skinner, B. F. (1938). The Behavior of organisms: An experimental analysis. New York, NY: Appleton-Century.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Monographs: General and Applied, 2 (4), i-109.
Thorndike, E. L. (1905). The Elements of Psychology. New York, NY: A. G. Seiler.
Thorndike, E. L. (1912). Education, a First Book. New York, NY: The Macmillan Company.
Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1–14.