Current applications of behaviorism are based on the three discoveries mentioned in Part 1: classical conditioning, operant conditioning, and social learning.
1. CLASSICAL CONDITIONING
1.1. Basic Concepts
- Classical conditioning is defined as a learning process in which an involuntary response (e.g., fear) is formed by the association of two stimuli. Of the two stimuli, one is able to elicit the target response before conditioning (hence called the unconditioned stimulus – US), while the other is not (hence called the conditioned stimulus – CS). The target response is referred to as the unconditioned response (UR) when it follows the US, and as the conditioned response (CR) when it follows the CS after conditioning. The process is formulated simply as follows:
(1) US —–> UR
(2) CS —–> no response
(3) US + CS —–> UR (x times)
(4) CS —–> CR
- Extinction is the reverse process of conditioning: when the CS is repeatedly and consistently presented without the US, the association between the two stimuli weakens, until eventually the presence of the CS no longer elicits the CR.
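The acquisition and extinction dynamics above can be sketched numerically. The toy sketch below uses the Rescorla–Wagner learning rule, a later formal model of conditioning that is not part of Pavlov's original account; the function name and all parameter values are illustrative assumptions.

```python
# Toy Rescorla-Wagner sketch: the associative strength V of the CS rises
# during conditioning trials (CS paired with US) and decays during
# extinction trials (CS presented alone). Parameters are illustrative.

def rescorla_wagner(trials, v0=0.0, alpha=0.3, lam_paired=1.0, lam_alone=0.0):
    """Return associative strength after each trial.

    trials: sequence of booleans, True = CS+US pairing, False = CS alone.
    """
    v = v0
    history = []
    for paired in trials:
        lam = lam_paired if paired else lam_alone
        v += alpha * (lam - v)   # move toward this trial's asymptote
        history.append(v)
    return history

# 10 conditioning trials followed by 10 extinction trials
strengths = rescorla_wagner([True] * 10 + [False] * 10)
print(f"after conditioning: {strengths[9]:.2f}")   # near 1.0
print(f"after extinction:  {strengths[-1]:.2f}")   # back near 0.0
```

The same function reproduces both halves of the formula in (1)–(4): strength grows over the paired trials in step (3) and, once the US is withheld, decays back toward zero, which is the extinction process described above.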
1.2. Temporal Paradigms
Temporal paradigms in classical conditioning concern the order and timing of stimulus presentation. There are five paradigms: simultaneous (1-s delay), simultaneous (exact), delayed, trace, and backward, as illustrated in Figure 1.
- Simultaneous (1-s delay): The US is presented around one second after the CS, while the CS still remains. This paradigm is considered the most efficient at producing an excitatory effect (the UR is elicited at the sight of the CS).
- Simultaneous (exact): The US and CS are presented at exactly the same time. According to Pavlov (1927), the US gradually comes to be taken as a signal for the end of the CS, and hence causes an inhibitory effect (the UR stops at the sight of the CS). Some other studies, however, reported that this paradigm causes a weak excitatory effect.
- Delayed: The US is presented seconds or even minutes after the CS, while the CS is still present. In this case, the CR occurs only after the CS has been presented for a while, and continues with increasing intensity until the US appears.
- Trace: The CS appears and then disappears, and a while later the US is presented. As in delayed conditioning, the CR occurs only after the CS has been gone for a while, and continues with increasing intensity until the US appears. In this case, the CR is associated with a memory trace of the CS.
- Backward: The US is presented before the CS. This paradigm was generally found to cause inhibition.
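The five paradigms can be expressed compactly as rules over stimulus onset and offset times. The helper below is a hypothetical illustration: the function name and the exact one-second cutoff separating the two simultaneous variants are assumptions drawn from the descriptions above.

```python
# Hypothetical helper that labels the temporal paradigm of a single
# CS/US presentation from onset/offset times (in seconds), following
# the five paradigms described in the text.

def classify_paradigm(cs_on, cs_off, us_on):
    if us_on < cs_on:
        return "backward"               # US precedes the CS
    if us_on == cs_on:
        return "simultaneous (exact)"   # both start together
    if us_on > cs_off:
        return "trace"                  # CS already ended when US starts
    # US starts while the CS is still present
    if us_on - cs_on <= 1.0:
        return "simultaneous (1-s delay)"
    return "delayed"

print(classify_paradigm(0.0, 5.0, 1.0))   # simultaneous (1-s delay)
print(classify_paradigm(0.0, 5.0, 3.0))   # delayed
print(classify_paradigm(0.0, 2.0, 4.0))   # trace
```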
1.3. Applications
- Aversion therapy is an application of classical conditioning that eliminates unwanted behaviors by associating the target behavior with an unpleasant experience. The approach is typically employed in treating addictions (alcoholism, smoking, drug dependence, gambling), bad habits (overeating, self-harm) and violent behaviors. Its methods include chemical aversion therapy (administering nausea-inducing drugs to patients together with their addictive substances), covert sensitization (showing patients aversive images along with pictures of the unwanted behaviors), and even electric shock.
- Counterconditioning applies inhibition and extinction to eliminate a negative feeling associated with a stimulus. It is typically used in treating phobias and anxiety disorders. There are two counterconditioning techniques.
- Systematic desensitization. Developed by Wolpe (1958, 1961), the technique comprises three steps. First, the patient and the psychologist work together to identify all the stimuli that provoke the anxiety. Second, the patient is taught relaxation methods to practice, including deep breathing and muscle relaxation. Third, the patient is exposed to each of the anxiety-provoking stimuli, from the least to the most severe, while practicing relaxation. Relaxation inhibits the anxiety, which in turn extinguishes the sense of fear.
- Flooding. An application of extinction, flooding exposes the subject directly and for a prolonged period to the anxiety-provoking stimulus at full intensity, with no actual unpleasant experience occurring, until the anxiety subsides. The exposure can be either real (Kimble and Randall, 1953; Polin, 1959; as cited in Rachman, 1965) or imaginal (Rachman, 1965).
2. OPERANT CONDITIONING
Operant conditioning, or instrumental conditioning, is defined as a learning process where a voluntary behavior is shaped by the association of that behavior with a consequence.
2.1. Basic Concepts
- Types of consequences.
- Positive reinforcement: the act of rewarding a behavior by presenting a pleasant stimulus, to encourage that behavior to be repeated. The typical reinforcing stimulus in animal studies such as Skinner's (1938) is food; in real-life situations it is compliments or material gifts.
- Negative reinforcement: the act of rewarding a behavior by removing a discomfort the individual has had to endure, to encourage that behavior to be repeated. In Skinner's studies (1938), rats were electrically shocked after a light signal appeared in the cage; when they accidentally pressed the lever while jumping around, the current was turned off. This reinforced the response of pressing the lever to avoid the shock whenever the light signal appeared. Turning off the electric current is an example of negative reinforcement.
- Punishment: the act of presenting an unpleasant stimulus after a behavior, to discourage it from being repeated. Punishment is common in real life – we have all experienced it in childhood when we were caught misbehaving. Skinner (1950) considered it less effective than the other types of consequence, and advised using it only intermittently.
- Shaping. In operant conditioning, shaping refers to the process of helping an individual form a desired behavior through step-by-step guidance and reinforcement. A striking example is Skinner's experiment with pigeons (1938), in which he wanted to train the pigeons to peck a certain spot in the cage. Obviously the researcher would not wait for a pigeon to initiate the behavior on its own, which might take forever, but had to create a calculated procedure to guide it toward the exact behavior. In the first step, he administered food whenever the pigeon turned slightly toward the chosen spot, encouraging the bird to turn in that direction again and again. After a while, reinforcement was withheld until the pigeon made a slight movement toward the spot. In this second step, food was administered only when the pigeon moved its head closer to the spot. As it moved closer and closer, its beak would eventually touch the spot, and from then on, reinforcement was given only when the bird struck its beak at the spot.
- Extinction: the process whereby the association between the learned behavior and its consequence weakens, until eventually the subject stops responding. Extinction typically occurs after the behavior has long failed to produce the expected consequence.
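The shaping procedure described above can be sketched as a toy simulation: a "pigeon" wanders near a target spot, and each stage reinforces a progressively closer approximation by tightening the criterion distance. The function name, the random-walk behavior, and all numbers are illustrative assumptions, not Skinner's data.

```python
import random

# Toy sketch of shaping by successive approximations: reinforcement is
# given whenever the simulated pigeon (a 1-D random walker) comes within
# the current criterion distance of the target spot, and the criterion
# then tightens for the next stage. All values are illustrative.

def shape(target=10.0, start=0.0, criteria=(8.0, 4.0, 2.0, 0.5), seed=0):
    rng = random.Random(seed)
    pos = start
    reinforcements = 0
    for criterion in criteria:               # each stage demands a closer approximation
        while abs(pos - target) > criterion:
            direction = 1.0 if pos < target else -1.0
            pos += direction * rng.uniform(-0.5, 1.0)  # drift biased toward the spot
        reinforcements += 1                  # food delivered for meeting this criterion
    return pos, reinforcements

final_pos, count = shape()
print(f"final distance from spot: {abs(final_pos - 10.0):.2f}")  # within 0.5
print(f"stages reinforced: {count}")                             # 4
```

The tightening `criteria` tuple plays the role of Skinner's withheld reinforcement: behavior that was good enough at one stage no longer earns food at the next, pushing the subject toward the exact target response.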
2.2. Schedules of Reinforcement
Schedules of reinforcement refer to the planning of reinforcing stimuli in specific orders and timings for operant conditioning. The following are some of the schedules discussed in Skinner (1950) and Ferster and Skinner (1957).
- Continuous reinforcement. Reinforcement is administered every time the subject makes a desired response.
- Fixed ratio. Reinforcement is administered every time the subject completes a fixed number of responses, counted from the last reinforcement. The timing and frequency of the reinforcing stimuli vary with the timing and frequency of the subject's responses.
- Variable ratio. Reinforcement also depends on the number of responses completed; however, this number varies from time to time and is randomized from a series of values.
- Fixed interval. Reinforcement is administered to the first desired response made after a certain interval of time.
- Variable interval. Reinforcement also depends on the first response made after an interval of time; however, the length of this interval varies from time to time and is randomized from a series of values.
- Alternative. Reinforcement is scheduled both by ratio (the required number of responses) and by interval of time, and is delivered when either requirement is fulfilled, whichever comes first.
- Conjunctive. Reinforcement is administered only after both a required ratio and an interval of time are met.
- Interlocking. Reinforcement depends on the number of responses completed, and this number varies from time to time. However, it is not randomized but tied to the response rate. For instance, if the subject responds very quickly, they must complete a large number of responses to be reinforced; but if they make no response for a long interval, their first response after that is reinforced right away.
- Tandem. Reinforcement is administered only after both a required ratio and an interval of time are met, in a fixed order. Given a required ratio of 10 responses and an interval of 10 minutes, for instance, conjunctive reinforcement is administered whenever both requirements are met, no matter which occurred first, while tandem reinforcement occurs only when the ratio is completed first and the interval then passes.
- Chained. Reinforcement depends on both ratio and interval of time, but the reinforcing stimulus changes after each required component is completed.
- Adjusting. The value of the interval or ratio is modified systematically after the reinforcement, depending on the most recent performance of the subject.
- Multiple. The schedule consists of several component schedules occurring in random order, each associated with a distinct reinforcing stimulus.
- Mixed. Like the multiple schedule, this one consists of several component schedules occurring in random order, but the reinforcing stimulus is randomized.
- Interpolated. A short schedule is inserted into another, longer schedule. For example, a fixed-ratio block of 2 reinforcements per 20 responses is placed within a four-hour fixed interval.
While it is the simplest paradigm, the continuous schedule is found to be less effective than the other schedules, as it leads to slow response rates and quick extinction. The pros and cons of all the mentioned reinforcement schedules, according to research findings, will be elaborated in a coming entry of PsychPics.
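The four basic schedules above can be sketched as small decision rules: given the time of each response, decide whether that response earns reinforcement. The class names and the `reinforce(t)` API below are illustrative assumptions, not taken from Ferster and Skinner (1957).

```python
import random

# Minimal sketches of the four basic reinforcement schedules. Each class
# exposes reinforce(t), called at the moment of a response at time t
# (seconds); it returns True when that response earns reinforcement.

class FixedRatio:
    def __init__(self, n):
        self.n, self.count = n, 0
    def reinforce(self, t):
        self.count += 1
        if self.count >= self.n:        # every n-th response is reinforced
            self.count = 0
            return True
        return False

class VariableRatio:
    def __init__(self, values, seed=0):
        self.values, self.rng = values, random.Random(seed)
        self.target = self.rng.choice(values)   # required count drawn from a series
        self.count = 0
    def reinforce(self, t):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = self.rng.choice(self.values)
            return True
        return False

class FixedInterval:
    def __init__(self, interval):
        self.interval, self.last = interval, 0.0
    def reinforce(self, t):
        if t - self.last >= self.interval:      # first response after the interval
            self.last = t
            return True
        return False

class VariableInterval:
    def __init__(self, values, seed=0):
        self.values, self.rng = values, random.Random(seed)
        self.interval, self.last = self.rng.choice(values), 0.0
    def reinforce(self, t):
        if t - self.last >= self.interval:
            self.last = t
            self.interval = self.rng.choice(self.values)
            return True
        return False

fr3 = FixedRatio(3)
print([fr3.reinforce(t) for t in range(6)])
# [False, False, True, False, False, True]

fi10 = FixedInterval(10.0)
print([fi10.reinforce(t) for t in (5.0, 12.0, 15.0, 25.0)])
# [False, True, False, True]
```

Note that the ratio schedules ignore `t` entirely while the interval schedules ignore the response count; the compound schedules described above (alternative, conjunctive, tandem) can be read as boolean combinations of these two rules.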
2.3. Applications
- Behavior modification. Behavior modification is the generic term for applying operant conditioning concepts – reinforcement, punishment, shaping, and extinction – to altering behaviors or forming new habits. It can be designed in multitudinous ways, based on the variety of schedules and techniques one may use or combine, and applied in numerous settings, from formal educational and clinical programs to casual personal training and household management.
Behavior modification in mental health settings is referred to as contingency management (Higgins & Petry, 1999; Kellogg, Stitzer, Petry, & Kreek, 2007). Due to the complexity and severity of its target behaviors, the practice of contingency management is more stringent and systematic than behavior modification in other contexts.
A similarity between behavior modification and classical-conditioning-based therapies is that they work on forming outward responses without intervening in the person's thoughts – in line with the principles of behaviorism. These interventions are grouped under Applied Behavior Analysis (ABA).
- Cognitive behavior therapy (CBT) is the common term for psychotherapeutic techniques that address practical problem-solving by – unlike ABA – changing both behaviors and the patterns of thinking underlying the problem. CBT combines the operant conditioning methods of behaviorism with the introspective methods of cognitive psychology, creating a bridge between the two branches of psychology.
3. SOCIAL LEARNING THEORY
Social learning theory establishes the third learning process – learning through observing and imitating others. As formulated by Bandura (1969, 1971), social learning comprises four interlocking sub-processes:
- Attention. The learner attends to and recognizes the significant details of the model's behavior. This process is greatly influenced by how much the learner identifies with the model, how clearly the functional value of the observed behavior is demonstrated, and how attractively the model presents themselves.
- Retention. The learner retains the observed behavior in memory. The memory does not record the entire event but condenses it into essential elements – significant imaginal and verbal patterns – and preserves only these.
- Motoric reproduction. When the time comes to repeat what has been learned, the learner does not recall the exact observed behavior. Instead, patterns of the behavior come to mind, directing their actions.
- Reinforcement and motivation. This process strengthens the likelihood that a learned behavior is reproduced in overt performance.
The application of social learning theory in behavioral intervention is called social modeling. A traditional, featured method in education, it is now increasingly applied in treating mental disorders and criminal behavior as well. In recent decades, social modeling has also become an integral part of computer science and engineering, providing fundamental principles for building software and information systems that require ergonomic knowledge and techniques (Yu, 2009).
Figure 2 summarizes the methods and applications of behaviorism and their connections with other fields of psychology.
Bandura, A. (1969). Social learning theory of identificatory processes. In D. A. Goslin (Ed.), Handbook of Socialization Theory and Research (pp. 213-262). Chicago, IL: Rand McNally.
Bandura, A. (1971). Social Learning Theory. New York, NY: General Learning Press.
Ferster, C. B., & Skinner, B. F. (1957). Schedules of Reinforcement. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Higgins, S. T., & Petry, N. M. (1999). Contingency management: Incentives for sobriety. Alcohol Research and Health, 23(2), 122-127. Retrieved from http://pubs.niaaa.nih.gov/publications on January 11, 2016.
Kellogg, S. H., Stitzer, M. L., Petry, N. M., & Kreek, M. J. (2007). Contingency management: Foundation and principles. Unpublished manuscript. Retrieved from http://nattc.org on January 2, 2016.
Pavlov, I. P. (1927). Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex (G. V. Anrep, Trans.). London, UK: Oxford University Press.
Rachman, S. (1965). Studies in desensitization – II: Flooding. Behaviour Research and Therapy, 4(1), 1-6.
Skinner, B. F. (1938). The Behavior of Organisms: An Experimental Analysis. New York, NY: Appleton-Century.
Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57, 193-216. Retrieved from http://psychclassics.yorku.ca/Skinner/Theories/
Skinner, B. F. (1953). Science and Human Behavior. New York, NY: Macmillan.
Skinner, B. F. (1974). About Behaviorism. New York, NY: Alfred A. Knopf.
Watson, J. B. (1913). Psychology as the behaviorist views it. Psychological Review, 20, 158–177.
Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3(1), 1–14.
Wolpe, J. (1958). Psychotherapy by Reciprocal Inhibition. Stanford, CA: Stanford University Press.
Wolpe, J. (1961). The systematic desensitization treatment of neuroses. Journal of Nervous and Mental Disease, 132, 189-203.
Yu, E. S. (2009). Social modeling and i*. Unpublished manuscript. Retrieved from http://www.cs.toronto.edu/ on January 5, 2016.