Instrumental Conditioning in Psychology: Meaning With Examples
Instrumental Conditioning (also referred to as Operant Conditioning) and Classical Conditioning are the two types of learning that have been studied intensively.
Learning is a process that brings about a relatively permanent change in human behavior, or behavioral potential, as a result of experience.

Instrumental Conditioning and Classical Conditioning have a powerful impact on human behavior.
Besides this, these types of conditioning also give deeper insights into how various learning processes work.
In this article, we discuss what Instrumental Conditioning is and how it helps in shaping behavior.
What is Instrumental Conditioning?
Instrumental Conditioning is the other name of operant conditioning. It is a form of learning that associates a behavior with the occurrence of a significant event.
In other words, instrumental conditioning or operant conditioning theory of learning is the process that involves changes in human behavior depending upon the consequences of a significant event.
If an event produces positive outcomes that lead to a positive change in behavior, the individual learns to repeat such behaviors.
However, if an event generates negative outcomes that lead to negative changes in behavior, the individual learns to avoid or escape such behaviors.
Skinner’s Theory
B.F. Skinner, an American psychologist, proposed the Operant Conditioning Theory, also known as Skinner’s Theory. For this reason, Skinner is regarded as the Father of Operant Conditioning.
Skinner defined Operant Learning Theory as the process that results in operant behavior. As per Skinner’s Theory, there are two types of behaviors: respondent and operant behavior.
Respondent Behavior refers to a spontaneous response evoked when an individual is exposed to a stimulus in the environment.
Operant Conditioning Examples
For example, your eyes shut automatically when you get exposed to too much sunlight. This is a reflexive behavior and it is evoked by the environment directly.
Whereas, many of our behaviors are not generated by the environment. Rather, they are generated by us, humans. By showcasing such behaviors, we operate upon the environment. These behaviors are referred to as Operant Behaviors.
For example, eating, talking, dancing, singing, reading, writing, etc are all behaviors that are not forced upon us by the environment. Instead, these behaviors are emitted by us.
What is an Example of Instrumental Conditioning?
According to B.F. Skinner, we humans learn as a result of rewards and punishments. Every habit and attitude that makes us who we are is learned from the responses the surrounding environment gives us.
Thus, nothing comes to us simply because we are born human. We have to traverse the path of learning at every stage of life, and we do so through rewards and punishments.
This means Instrumental conditioning is at work all around us. The following are a few of the Instrumental Conditioning examples picked up from our daily life:
- In order to make your child work hard and perform well in exams, you promise your child tickets to a live soccer match if he performs well. The promise of tickets positively reinforces your child to work harder. Moreover, your child would probably continue working hard in order to receive such rewards in the future. This is an example of positive reinforcement in operant conditioning.
- You have severe back pain, and to avoid that pain you take pain-relieving tablets. These tablets provide relief from the pain, and it is likely that you will take these tablets in the future as well if you suffer from backache again. Hence, the tablets act as negative reinforcers in this case, as they help you avoid or escape an unpleasant or undesirable condition. This is an example of negative reinforcement in operant conditioning.
History of Operant Conditioning
Early thinking in psychology focused on the importance of instincts in guiding human behavior. Psychologists like William James proposed that human beings possessed a wide range of instincts that guide their behavior.
However, in the 1920s, the behaviorists came to view experience as the major determinant of human actions and thus moved away from the instinct-based theory of human behavior.
I. Behaviorism
Behaviorism is an ideology that emphasizes the role of experience in governing human behavior. On this view, the significant processes influencing our behavior are learned.
In other words, we learn the drives leading to certain behaviors, and we also learn the specific behaviors stimulated by those drives.
So, the next step after establishing this ideology was to determine the laws governing such a learning process in humans.
One of the very first ideas that contributed to the behavioral view was that of ‘Associationism’ proposed by the Greek philosopher Aristotle.
II. Associationism
Associationism refers to the process of associating two ideas. This means that we humans associate two events. Thus, when we think of one event, we automatically recall the other event.
Aristotle proposed that for such an association to develop, two events must be temporally paired and either similar or opposite to each other.
Later, John Locke, a British empiricist, proposed that we humans form ideas as a consequence of our experiences. He distinguished between simple ideas and complex ideas.
Simple Ideas refer to the passive impressions received by our senses. And complex ideas refer to the combination of simple ideas or association of ideas.
Then David Hume, a Scottish philosopher, proposed that three basic principles combine simple ideas into complex ideas. These include:
- Resemblance
- Contiguity in time or place
- Cause and Effect
Both Locke and Hume were philosophers. Therefore, the task of evaluating the validity of the principle of association was left to later scientists.
III. Law of Effect
One of these scientists, Edward Lee Thorndike, contributed to the behaviorist view further. He published one of his works in 1898 that proposed that animal behavior could change as a result of experience.
Thorndike developed his ideas of motivation and learning based on his research using a Puzzle Box. He used 15 different Puzzle Boxes and tested 13 kittens and young cats.
Thorndike’s Law of Effect Experiment
In his study, Thorndike locked a hungry cat inside a box and placed food outside the box, beyond the cat’s reach. It was possible for the cat to escape the box and grab the food.
However, the cat had to exhibit a sequence of responses in order to trigger a release mechanism and escape from the box.
For instance, pulling a string and pressing a pedal were two of the effective behaviors the cat had to perform to escape the confinement.
Thus, while conducting this experiment, Thorndike observed that the cat would showcase a number of behaviors like clawing, rubbing, meowing, and biting.
However, over a period of time, the cat would respond in a way that triggered the release mechanism and she could open the door to the puzzle box.
Finally, the cat would escape the puzzle and hence grab the food placed outside the puzzle box.
Thorndike Experiment Observations
Thus, Thorndike observed that the cats succeeded in escaping the puzzle box, and that the time required to trigger the release mechanism decreased with each successive trial.
In addition, Thorndike observed a decline in the time the cats spent on other, ineffective behaviors. This decline occurred only when the effective behavior actually triggered the release mechanism.
As a result of performing this experiment, Thorndike concluded that the cats formed an association between a stimulus (the box) and an effective response.
He proposed that learning happened when the cats formed an association between stimulus and response.
Thus, as a result of learning, the specific stimulus evokes the appropriate response. Thorndike also asserted that the cat was not conscious of such an association; it was in fact showcasing a mechanistic habit in response to a particular stimulus.
Now, the association between stimulus and response developed when the cat received the reward. That is, an appropriate response would help the cat in escaping the puzzle box and obtaining the food placed outside the box.
Therefore, such a reward generated a feeling of satisfaction in the cat and hence enhanced the association between the Stimulus and the Response.
Thorndike called this strengthening of the association between stimulus and response by a satisfying event or reward the Law of Effect.
Conclusion
This means that the Law of Effect chooses an appropriate response and associates it with the environment, thus replacing a ‘chance act’ with learned behavior.
Now, Edward Thorndike did not believe that the Law of Effect applied to animals only. He believed that such a law also governs the learning process in humans.
Therefore, in the year 1932, Thorndike presented his human subjects with a concept to learn. He then told them when they had responded correctly, which enabled the subjects to learn the appropriate response.
Law of Readiness
So, Thorndike’s views on the nature of the learning process were quite specific. However, his ideas pertaining to motivational processes that determine human behavior were quite vague.
According to Thorndike, an animal or a human needs to be ready when it wants to learn a new behavior or exhibit previously learned behaviors.
This is termed Thorndike’s Law of Readiness. As per this law, something needs to motivate an animal or human to develop an association between two ideas or to exhibit a previously learned habit.
While proposing the Law of Readiness, Thorndike did not take into consideration the nature of the motivation mechanism.
However, many psychologists challenged Thorndike’s views on the Law of Effect and the Law of Readiness, on the basis of the following observations.
- Trial and error applied only to a very restricted form of problem-solving. The conditions of the experiment left little room for insight-based problem solving.
- The cats in Thorndike’s experiment did not always reach the solution gradually by trial and error. In some cases, the cats reached the solution practically on the very first trial.
- Learning was possible without any effect. To support this view, psychologists conducted an experiment in which rats received no food reward for running a maze. When a food reward was later introduced, the rats ran the maze correctly, as they had learned the correct route during their unrewarded explorations.
IV. Operant Conditioning Theory
In the year 1930, the American psychologist Burrhus Frederic Skinner (B.F. Skinner) published a short paper in the Proceedings of the National Academy of Sciences. It was then that Thorndike’s contribution began to receive full appreciation.
This paper described the eating behavior of white rats observed in the so-called Skinner Rat Experiment. In it, B.F. Skinner, also referred to as the Father of Operant Conditioning, described a slightly modified form of the experiment involving a mini-laboratory for studying the operant behavior of white rats. This mini-laboratory was the Skinner Box.
Skinner Box or Operant Conditioning Chamber
Skinner Box is also termed the Operant Conditioning Chamber. It is a simple box in which animals are conditioned for operant learning. This box usually consisted of:
- A lever, which when pushed, opened the swinging door of the food bin at one end of the Skinner Box and released a mechanism to deliver a small pellet of food to the hungry white rat.
- A Cumulative Recorder that generated a graphical record of the subject’s responses. These records acted as primary data that Skinner, as well as his colleagues, used to determine the impact on response rates of various reinforcement schedules.
Thus, rats quickly learned to press the lever once they received food for doing so.
Operant Conditioning Principles
Thus, the rats learned almost instantaneously, compared with the gradual and irregular progress observed in the behavior of Thorndike’s cats.
So, Skinner signified through this experiment that the consequences of behavior determine learning and not the antecedents (stimulus events).
The antecedents or the stimulus events that occur before the operant response just give a context to the whole thing.
But, it is the consequence of specific behavior that actually decides whether a specific behavior would be conditioned or not.
Therefore, Skinner showed via the rat experiment that appropriate reward and punishment lead to an increase in the occurrence of conditioned operant behavior. This is Operant Learning, which also came to be known as the B.F. Skinner Theory.
Reinforcement and Punishment: How Operant Conditioning Works
Operant Conditioning is a process that involves changes in human behavior depending upon the consequences that follow a specific behavior.
That is to say, in B.F. Skinner’s Theory of Operant Conditioning, the likelihood that a particular behavior will occur changes depending upon the consequences that followed that behavior.
Thus, if the outcome of an event is positive and it leads to a positive change in human behavior, then there is an increased likelihood that such behavior would be repeated.
However, if the outcome of an event is negative and it leads to a negative change in human behavior, then there is an increased likelihood that an individual would avoid repeating such behaviors.
Now, four basic principles of operant conditioning determine the occurrence of a particular behavior.
Two of these procedures result in strengthening, enhancing, or increasing the rate of behavior. While the other two result in weakening or decreasing the rate of behavior.
Reinforcement refers to the principles of operant conditioning that strengthen or increase the rate of behaviors. Whereas, Punishment refers to the procedures that reduce the rate of a specific behavior.
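The four procedures differ along two axes: whether a stimulus is presented or removed, and whether the target behavior increases or decreases. As a quick illustration, this mapping can be sketched in a few lines of code (the function name is our own invention, not part of Skinner's terminology):

```python
def classify_procedure(stimulus_presented: bool, behavior_increases: bool) -> str:
    """Name the operant-conditioning procedure for a given outcome.

    Reinforcement (positive or negative) increases the rate of a behavior;
    punishment (positive or negative) decreases it. "Positive" means a
    stimulus is presented; "negative" means a stimulus is removed.
    """
    if behavior_increases:
        return "positive reinforcement" if stimulus_presented else "negative reinforcement"
    return "positive punishment" if stimulus_presented else "negative punishment"


# Promising match tickets for good grades: stimulus presented, behavior increases.
print(classify_procedure(True, True))    # positive reinforcement
# Pain relief from a tablet: aversive stimulus removed, behavior increases.
print(classify_procedure(False, True))   # negative reinforcement
```

This two-by-two view makes the later distinction between negative reinforcement and punishment easier to keep straight.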
Reinforcement
As per Skinner’s theory, reinforcement refers to stimulus events (reinforcers) that strengthen or increase the rate of the behavior occurring before them. Operant reinforcement can be of two types: positive reinforcement and negative reinforcement.
Positive Reinforcement
Operant conditioning positive reinforcement refers to the consequences or stimulus events that result in strengthening or increasing the rate of behavior that precedes them.
This means if the consequence of a specific behavior leads to an increase in the occurrence of such behavior in the near future, such a consequence or stimulus event acts as a positive reinforcer.
One example of positive reinforcement: your teacher appreciates your effort in reading books, which reinforces you to read more books in the future.
Negative Reinforcement
Negative Reinforcement in Operant Conditioning refers to negative reinforcers: unpleasant stimulus events whose removal strengthens the specific behaviors that allow an individual or organism to avoid or escape them.
Skinner’s negative reinforcement thus means that escaping or avoiding the unpleasant consequence reinforces the behavior that brought the escape about.
One example of negative reinforcement in operant conditioning is parents surrendering to their children’s demands, especially in public places or at events.
Typically, the parents’ behavior of buying their children gifts and chocolates or fulfilling their other demands in public increases because such behavior stops the children from shouting. And this, as a result, saves the parents from embarrassment.
In a nutshell, negative reinforcers are unpleasant events that increase the rate of behavior that results in escaping from or avoiding such stimulus events.
Primary Reinforcement
Primary reinforcement is also sometimes referred to as an Unconditioned Reinforcer or Unconditioned Stimulus. Such reinforcers are naturally reinforcing; their value does not have to be learned.
Primary reinforcers are associated with our basic needs and are biologically important for our survival. These reinforcers do not require any kind of learning in order to do their work.
For instance, food, air, water, and sleep are a few of the examples of primary reinforcers.
Thus, Primary Reinforcers help an individual to survive. Furthermore, whether a specific event or a thing acts as a Primary Reinforcer for an individual depends upon his experience as well as his genes.
For instance, one person may find wheat pleasing to consume, whereas another person may not like consuming wheat at all.
Secondary Reinforcement
Secondary Reinforcement is also referred to as Conditioned Reinforcement. It acquires its reinforcement value only when it is associated with a primary reinforcer.
Such reinforcement involves using a stimulus that becomes reinforcing after it is paired with a primary reinforcer.
For instance, money is not a primary reinforcer. Rather, it is a secondary reinforcer as you can utilize it to meet basic necessities such as food, shelter, and clothing.
Schedules of Reinforcement
Specific responses or actions may result in rewards each time they are performed. Sometimes, however, the same actions are not followed by rewards.
That is to say, under natural conditions, whether a behavior will be followed by a reward is uncertain.
Say, for instance, you give candy to your child every day after he finishes his homework. After a few days, your child does not get motivated by the candy and hence loses interest in the candy.
He is really not bothered about completing the homework nor does he care for the candy.
Therefore, to deal with such challenges, reinforcement must be scheduled so that operant learning is implemented properly.
Now, Scheduling of Reinforcement can happen basically in two ways:
Continuous Reinforcement Schedule
Continuous Reinforcement Schedule refers to the process of reinforcing the target behavior each time such behavior occurs.
This means that under continuous reinforcement in operant conditioning, reinforcement is offered each time after the desired behavior is exhibited.
Since the desired behavior is reinforced each time it occurs, it is easy for the organism to associate the reinforcer with the desired behavior. Thus, it is the quickest way to teach someone the desired behavior.
Furthermore, a Continuous Reinforcement Schedule is quite effective in training a new behavior. This is because it helps a person or an animal to create a strong association between the behavior and the response during the initial stages of learning.
As mentioned in the example above, you give your child candy every day after he finishes his homework.
But the major drawback of this kind of reinforcement schedule is that once the reinforcement is stopped, the desired behavior quickly vanishes.
Partial or Intermittent Reinforcement Schedule
Once the desired response is firmly developed in a person or an animal, one transitions from a Continuous Reinforcement Schedule to a Partial Reinforcement Schedule.
Unlike Continuous Reinforcement in Operant Conditioning, Partial or Intermittent Reinforcement Schedule does not reinforce a person or an animal each time they perform the desired behavior.
This means that the Partial Reinforcement Schedules reinforce the desired behavior occasionally rather than on a continuous basis.
Under such type of reinforcement, learning takes place quite slowly since initially, it is quite challenging to associate the reinforcer with the desired behavior.
Furthermore, Partial Reinforcement Schedules generate behaviors that are quite resistant to extinction.
Consider the earlier example where you were trying to develop in your child the habit of doing homework each day.
A Continuous Reinforcement Schedule works initially, but reinforcing the behavior on a continuous basis forever is simply unreasonable.
Therefore, once the desired behavior is established, or after a considerable period of time has passed, you would switch to a Partial Reinforcement Schedule.
Now, it is important to know that Partial or Intermittent Reinforcement Schedules are further subdivided into fixed and variable schedules, based on either time intervals or response ratios.
Fixed Interval Schedule
This is a schedule of reinforcement in which the incidence of reinforcement depends upon the interval of time, that is, the amount of time elapsed.
In other words, the first response that occurs after a specific amount of time has elapsed brings the reward or reinforcement.
When on such a schedule, people show a pattern of responding at lower rates immediately after the reinforcement occurs, and then responding more as the time for the next reward approaches.
For instance, children hardly study immediately after a big exam. However, the rate at which children study increases once the time for the next exam approaches.
Variable Interval Schedule
When on a variable interval schedule, the occurrence of reinforcement or reward also depends upon the amount of time elapsed.
However, the amount of time that must pass before a particular action will again lead to a reinforcement varies.
For instance, the students whose teachers check the homework at irregular intervals perform consistently since their work can be checked anytime.
Such students tend to do their homework consistently so that they can get positive outcomes such as appreciation and not negative outcomes such as scolding.
Thus, this depicts the behavior that is on a variable interval schedule.
Furthermore, people on a variable interval schedule perform at a consistent rate, without the pauses seen in the fixed interval schedule.
Fixed Ratio Schedule
Under the Fixed Ratio Schedule, reward or reinforcement occurs only after a fixed number of responses.
For instance, a tailor stitching suits gets rewarded after each suit is completed.
Variable Ratio Schedule
Under this schedule, reinforcement takes place after a variable number of responses are completed.
Thus, individuals witnessing variable ratio schedules do not know exactly how many responses need to be completed for the reinforcement to occur.
This leads them to perform at consistent and high rates since they cannot predict when the reinforcement would occur.
For instance, people buying lottery tickets do not know exactly how many tickets would result in a reward.
Thus, variable ratio schedules lead to behaviors that are highly resistant to extinction.
That is to say, such behaviors occur even when the reinforcement is not available.
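To make the ratio schedules concrete, here is a small simulation sketch (the helper names and numbers are illustrative assumptions, not part of Skinner's work): a fixed-ratio schedule reinforces every n-th response, while a variable-ratio schedule resets to a new, unpredictable target after each reinforcement.

```python
import random

def fixed_ratio_schedule(n):
    """Reinforce every n-th response, like a tailor paid per finished suit."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True   # reinforcement delivered
        return False
    return respond

def variable_ratio_schedule(mean_n, rng=None):
    """Reinforce after an unpredictable number of responses, like a lottery."""
    rng = rng or random.Random(0)
    state = {"count": 0, "target": rng.randint(1, 2 * mean_n - 1)}
    def respond():
        state["count"] += 1
        if state["count"] >= state["target"]:
            state["count"] = 0
            state["target"] = rng.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

fr5 = fixed_ratio_schedule(5)
outcomes = [fr5() for _ in range(10)]
print(outcomes.count(True))   # 2: reinforced on the 5th and 10th responses
```

Because the variable-ratio responder can never predict the next reinforced response, steady high-rate responding (and resistance to extinction) falls out of the simulation just as it does in the lottery example above.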
Punishments
Under Operant Conditioning, Punishment refers to an event or condition that reduces the likelihood of the occurrence of a specific response or behavior, provided the punishment consistently follows that response or behavior.
For instance, if you are unable to submit an assignment on time to your class teacher, your teacher scolds you or maybe refuses to accept your assignment.
By giving such a punishment, your teacher expects that you would submit your assignments on time in the near future.
So just like Reinforcement, Punishment can also be Positive or Negative.
Positive Punishment
Operant Conditioning Positive Punishment refers to a situation where the occurrence of a specific behavior is reduced because an unpleasant event, condition, or thing is presented as a consequence of exhibiting that behavior.
Say, if your child misbehaves and you scold him so that he doesn’t behave in such a way in the near future, this is what is called positive punishment.
Negative Punishment
Operant Conditioning Negative Punishment refers to a situation where the probability of a specific behavior is reduced because a pleasant event, condition, or thing is withdrawn or removed as a consequence of exhibiting that behavior.
Say, if your child misbehaves and you stop talking to him in your normal cheerful way so that he doesn’t misbehave in near future, this is what is referred to as Negative Punishment.
Now, you might be wondering: both negative reinforcement and punishment in operant conditioning involve the use of unpleasant stimuli.
So, does this mean that negative reinforcement and punishment are one and the same?
Well, negative reinforcement in operant conditioning is used to bring out the desired behavior. Whereas, Punishment is used to stop undesirable behavior.
This means that the negative reinforcer is given to make you act in the desired way so that you stop the unpleasant condition.
Whereas, a Punishment is given so that you associate such an unpleasant condition with the undesirable behavior that you exhibited before and thus stop repeating such behavior.
Now, B.F. Skinner considered Positive Reinforcement as the most effective technique for child-rearing over the traditional Punishment based approach.
He was of the view that Punishment had too many side effects, most prominent being a child becoming hardened to beatings and continuing the misbehavior.
Behavior Shaping
According to B.F. Skinner, the field of psychology was not yet ready to develop theories, because there was insufficient data to justify one.
Thus, he suggested undertaking Functional Analysis of Behavior instead of proposing a prefabricated theory of Personality.
Functional Behavioral Analysis
As per Skinner’s Theory, one should deeply observe the behavior of an organism and then conduct repeated experiments in order to develop the relationship of behavior with its antecedents and consequents.
Further, Skinner suggested avoiding hypothesizing any kind of “thinking” or “feeling” during the observation.
This means that one should only observe what is measurable in terms of rate of occurrence.
Therefore, there are only two things that you can observe while undertaking the functional analysis of behavior: (i) The first is the operant which is nothing but the behavior that can be observed, and (ii) Second is the consequence that lies outside the organism in the environment.
So, the possibility of dividing complicated chunks of behavior into smaller, relatively manageable units led Skinner to conduct the Functional Analysis of Behavior.
Functional Behavioral Analysis Example
For instance, your child is working on a math problem and finds a challenge in solving it. During childhood, he learned numbers and basic mathematical operations like addition, subtraction, multiplication, and division.
Also, he learned all these basics during childhood through the reward and punishment process. First, your child learned numbers.
Then, he learned the international system of numeration. Thus, all the techniques had been conditioned into your child by manipulating the consequences.
So, now if he finds a challenge in solving the given mathematical problem, the child’s mentor would try to analyze what part is problematic for the kid – numbers, numeration system, or basic mathematical operations – based on the functional analysis of behavior.
Then, using appropriate reinforcement, your child’s mentor would correct the problem and shape the behavior in the desired way.
So, this is exactly what therapists or psychologists do with a child who has a learning disorder.
Shaping Behavior
Have you ever observed a sculptor making a sculpture out of a piece of marble?
He first selects a piece of marble, draws on it the portrait of the sculpture he wants to make, and then uses various machines and tools to cut and carve the marble.
Initially, you may not recognize what the sculptor is working on. But eventually, you recognize the sculpture’s face and other specific body parts like legs, arms, hair, and even the creases of the clothes.
Finally, the Sculptor approximates the final shape of the sculpture by carving the small aspects of the sculpture.
Likewise, Skinner believed that the environment shaped the behavior of animals and human beings.
In Skinner Operant Conditioning Theory, Shaping means the process of applying the principles of Operant Conditioning to modify an organism’s behavior to the experimenter’s desired end.
B.F. Skinner’s theory suggests that Shaping does not take place in one go. It happens via successive approximations.
Shaping Behavior Examples
Say, for instance, your child often gets angry and you want to modify this aspect of his behavior. That is, you want to make him calmer and subtler in his behavior.
Now, you make your child showcase other behaviors considered as steps towards the final behavior. You do this before your child displays the final desired behavior.
And how do you engage him in these numerous other behaviors? By selectively rewarding the response that you desire.
So all these other behaviors that you want your child to perform are close to the target behavior, but they are not the target per se.
Therefore, if you reward your child for exhibiting these approximate target behaviors, then one can say that Shaping is facilitated.
Principle of Successive Approximation
B.F. Skinner discovered the Principle of Successive Approximation by chance.
Skinner Pigeon Experiment
In one of his experiments, Skinner was conditioning a pigeon to swipe a ball with its beak.
As a result of the Pigeon swiping the ball with its beak, a food pellet would be released.
So, Skinner decided to wait till the Pigeon accidentally succeeds in swiping the ball with its beak and accessing the food.
However, in between, Skinner got bored. So he decided that he would reward the pigeon for exhibiting any behavior that might lead towards the target behavior.
Say, for instance, even if the Pigeon glanced at the ball, he would reward the Pigeon for doing so.
So, as Skinner successively rewarded the pigeon for showcasing such approximate behaviors, he observed that the entire process of getting the pigeon to exhibit the target behavior sped up.
Very soon the pigeon was swiping the ball with its beak off the walls of the box.
Thus, it was observed that rewarding the Pigeon for showcasing simpler steps had automatically led the Pigeon to achieve the next higher step and so on.
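The pigeon episode can be caricatured as a loop: reinforcing each accidental step toward the target makes the next, closer step reachable. A toy sketch (the step names and the 10% "stumble" rate are invented for illustration, not taken from Skinner's experiment):

```python
import random

def shape(approximations, trials=1000, rng=None):
    """Toy model of shaping by successive approximation.

    On each trial the learner mostly repeats already-reinforced steps, but
    occasionally stumbles onto the next approximation; that stumble is
    immediately rewarded, which locks the new step into the repertoire.
    Returns the highest approximation reached after all trials.
    """
    rng = rng or random.Random(0)
    reached = 0                            # how many approximations are learned
    for _ in range(trials):
        if reached < len(approximations) and rng.random() < 0.1:
            reached += 1                   # new, closer approximation rewarded
    return approximations[reached - 1] if reached else None

steps = ["glance at ball", "approach ball", "touch ball", "swipe ball with beak"]
print(shape(steps))   # swipe ball with beak
```

The point of the sketch is that no single trial asks for the full target behavior; rewarding the simple steps carries the learner to it automatically, as the observation above describes.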
So, the question that now arises is: how would you know which behaviors are approximating the target?
Why Functional Analysis in Behavior Therapy Is Important
Well, Skinner suggested that you need to undertake functional behavioral analysis in order to understand and control the behavior of the subject.
This analysis would help you to determine not only the components of the ultimate behavior that you desire but also the probable successive steps to such behavior.
Also, for each step, you need to identify the antecedents (the stimulus) and the consequents.
The antecedents, that is, the stimulus would make the organism move on to the next step. Whereas, the consequent would reinforce such a step.
Therefore, the breaking down of behavior into antecedent – behavior – consequent order is the ABC Technique.
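The ABC breakdown is essentially a three-field observation record. A minimal sketch (the field values are a hypothetical classroom observation, not an example from the article):

```python
from dataclasses import dataclass

@dataclass
class ABCRecord:
    """One observation in an antecedent-behavior-consequence analysis."""
    antecedent: str   # the stimulus or context that precedes the behavior
    behavior: str     # the observable operant itself
    consequence: str  # what follows, which reinforces or punishes the operant

obs = ABCRecord(
    antecedent="mentor presents a multiplication problem",
    behavior="child skips the problem",
    consequence="mentor re-teaches the step, then rewards a correct attempt",
)
print(obs.behavior)
```

Collecting many such records over repeated observations is what lets the analyst spot which antecedents and consequences are maintaining a behavior, and where the next reinforced step should go.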
So, the learner moves on to the next step automatically when he or she is rewarded or punished at each step.
Thus, via successive approximation to the targeted behavior, the learner finally exhibits the desired behavior.
According to Skinner, this successive approximation to the targeted behavior is the Principle of Shaping.
Also, for shaping behavior in the desired way, one needs to use Reinforcements and Punishments in an intelligent manner.
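As an illustrative sketch of the ABC Technique described above, each step of a functional behavioral analysis can be recorded as an antecedent-behavior-consequence triple. The records below are hypothetical examples loosely modeled on the pigeon experiment, not data from an actual analysis:

```python
# Hypothetical sketch: recording a functional behavioral analysis as
# Antecedent-Behavior-Consequence (ABC) records.
from dataclasses import dataclass

@dataclass
class ABCRecord:
    antecedent: str   # the stimulus that prompts the step
    behavior: str     # the successive step toward the target behavior
    consequence: str  # what reinforces the step

steps = [
    ABCRecord("ball placed in view", "glances at ball", "food pellet"),
    ABCRecord("ball within reach", "touches ball with beak", "food pellet"),
    ABCRecord("ball within reach", "swipes ball with beak", "food pellet"),
]

for step in steps:
    print(f"{step.antecedent} -> {step.behavior} -> {step.consequence}")
```

Listing the steps this way makes explicit both the components of the target behavior and the probable successive steps toward it, which is exactly what the analysis is meant to uncover.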
Educational Applications
B.F. Skinner’s Operant Conditioning Theory has influenced educational practices to a great extent.
Children showcase behavior at all stages of their life. Both parents and teachers act as behavior modifiers.
This means that if, at the end of the academic year, there is no change in your child’s behavior, then as a teacher or a parent, you have not performed your job well.
Children must undergo a learning process when they are in school.
This means that there must be a permanent change in the behavior or behavior potential of your child as a result of the experiences he or she has in school or classroom setting.
There are certain principles that teachers follow in a classroom setting in order to facilitate both academic and social behavior.
These Principles are as follows:
Successive Approximation Principle
This Principle teaches a child to behave in a way he has never behaved before by rewarding his successive steps towards the targeted behavior.
Continuous Reinforcement Principle
This Principle helps a child in developing a new behavior by giving rewards to the child after every correct performance.
Negative Reinforcement Principle
The Negative Reinforcement Principle helps in increasing the child’s performance by changing his behavior in such a way that he is capable of avoiding or escaping unpleasant situations.
Modeling Principle
This Principle involves teaching a child to develop a new behavior by making him observe an esteemed person performing the desired behavior.
Cueing Principle
The Cueing Principle aims to enable a child to remember to behave in a specific way at a specific time.
This is achieved only if the child receives cues to act appropriately or correctly just before the performance is expected.
Discrimination Principle
This Principle teaches a child to act in a specific manner under one set of circumstances but not under another set of circumstances.
This means that as a teacher, you have to help the child identify the cues that distinguish between the two sets of circumstances, rewarding him only when his action suits the cues.
Decreasing Reinforcement Principle
This Principle motivates a child to continue performing the desired behavior by giving him a reward after a longer period of time or after a greater number of correct responses.
Variable Reinforcement Principle
The Variable Reinforcement Principle aims at improving or increasing a child’s performance of a specific activity by providing the child with intermittent rewards.
Substitution Principle
As the name suggests, the Substitution Principle aims at changing or substituting the rewards that were effective previously but are no longer controlling behavior with the new ones.
Satiation Principle
The Satiation Principle is used to stop a child from behaving in a specific manner by allowing him to perform the undesired behavior until he gets tired.
Extinction Principle
The Extinction Principle involves giving no rewards to the child following his undesired acts in order to stop him from exhibiting such undesirable acts.
Incompatible Alternative Principle
This Principle suggests that as a teacher, you should reward an alternative action that is either inconsistent with or the one that cannot be performed at the same time as the undesired act. This is done to stop the child from acting in a specific manner.
Punishment Principle
The Punishment Principle involves delivering unpleasant stimuli immediately after the child performs an undesirable action.
Such unpleasant stimuli are presented in order to stop the kid from acting in a certain way.
Now, there is one caveat before using this principle in a classroom setting. As per research, Punishment increases hostility and aggression in children. Therefore, it must be used infrequently and in association with reinforcement.
Avoidance Principle
The Avoidance Principle can be used in order to teach a child to avoid a specific type of situation.
This can be achieved by presenting to the child a situation that needs to be avoided or some representation of it and also by presenting some unpleasant condition or some representation of it.
Fear Reduction Principle
The Fear Reduction Principle is used in order to teach a child how to overcome the fear of a particular situation.
This can be achieved by gradually increasing the exposure of the child to situations that seem fearful to him.
Also, reward the child for exposing himself to such fearful situations.
Token Economy Psychology
The principles of Operant Conditioning or Instrumental Conditioning apply at two stages in Behavior Therapy.
- The first stage is the one where you are setting and defining the targets of behavior modification.
- The second stage refers to the one that involves implementing the targeted change process itself.
Now, the basic requirement for undertaking Behavior Therapy is to determine the short term as well as the long term targets of behavior modification.
You do this by identifying any maladaptive behavior in the person or child.
Example
Say, for instance, a child is experiencing anger outbursts and as a therapist, you need to treat him.
Therefore, as a part of controlling anger in kids, you need to make a detailed record of his or her behavior by putting questions to the child himself or herself as well as to his or her informants.
As a therapist, you need to understand the underlying context of his or her aggressive behavior.
Besides this, you also need to determine the feedback the child gets from his or her social environment.
Once you determine both the context of anger as well as feedback from the child’s social environment, you need to divide the entire behavior of the child into successive stages.
Separate the child’s total behavior into easy and difficult displays of anger.
Thus, you first deal with small and easy targets, emphasize the adaptive responses, and with the help of the Principle of Shaping Behavior, approach the final target of changing the behavior of the child.
Now, what type of reinforcements boost the desired behavior in this entire process of behavior modification?
Reinforcements in order to modify human behavior are of different types. These include verbal and non-verbal reinforcements.
However, Token Economy is the reinforcement type that is associated with the Skinnerian Principle.
Token Economy Therapy
Token Economy is a technique in Behavior Therapy that makes use of the Principles of Reinforcement.
Further, this technique is used mostly on children and patients who are under strict supervision in the hospital.
This means that this type of Behavioral Technique is applicable to those children and patients whose reinforcements can be managed within a structured environment.
So how does the Token Economy technique actually work?
In this technique, the child or the patient receives plastic or paper tokens for showcasing desirable behavior.
Token Economy Example
Say, for instance, a person is given 2 tokens for waiting for his turn to take a shower, 1 token for washing his own clothes, 4 tokens for bathing regularly, and so on.
Now, if the child or the patient does not show the patience to wait for his or her turn to take a bath and instead rushes straight for the bathroom, then 4 tokens are taken back.
So, when kids or patients earn tokens, they can buy things like candies, spend an hour with a close friend, and do various other things that they love doing.
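As an illustrative sketch, the token bookkeeping behind such a program might look like the following. The behavior names and token values are taken from the example above; the class and method names are invented for illustration:

```python
# Hypothetical token-economy ledger (names and values are illustrative).
class TokenEconomy:
    def __init__(self, rewards):
        # rewards: maps each desirable behavior to the tokens it earns
        self.rewards = rewards
        self.balance = 0

    def record(self, behavior):
        """Award tokens when a listed desirable behavior is shown."""
        self.balance += self.rewards.get(behavior, 0)

    def penalize(self, tokens):
        """Take tokens back after an undesirable act (response cost)."""
        self.balance = max(0, self.balance - tokens)

    def spend(self, cost):
        """Exchange tokens for a backup reinforcer, e.g. candy."""
        if self.balance >= cost:
            self.balance -= cost
            return True
        return False

ledger = TokenEconomy({
    "waits for turn to shower": 2,
    "washes own clothes": 1,
    "bathes regularly": 4,
})
ledger.record("waits for turn to shower")  # +2
ledger.record("bathes regularly")          # +4
ledger.penalize(4)                         # rushed the bathroom: -4
print(ledger.balance)                      # 2
print(ledger.spend(1))                     # True: buys a small treat
```

The key design point is that tokens only work as conditioned reinforcers because they can later be exchanged for things the child or patient actually values.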
J.M. Atthowe and L. Krasner conducted a study involving the application of Token Economy in an 86-bed closed ward in a Veterans Administration hospital. The patients were either schizophrenic or brain-damaged.
Such patients were given tokens for performing specified desirable behaviors such as interaction with others, attending different activities, etc.
These tokens could be exchanged for certain good things in life like movie tickets, passes, and well-located beds.
At the end of the year, this study revealed that there was a significant increase in the performance of reinforced desirable behaviors as a result of implementing the token economy technique.
Furthermore, there was a general improvement in the patients’ initiative, responsibility, and social interaction.
FAQs
What is Instrumental Conditioning in Psychology?
Via Instrumental Conditioning, an individual develops an association between a particular response and a consequence.
What is the Difference Between Operant and Instrumental Conditioning?
Under Instrumental Conditioning, the environment restricts reward-earning opportunities.
In the case of Operant Conditioning, however, there is no limit or restriction on the amount of reinforcement that can be earned by showcasing the desired behavior.
What is Instrumental Learning?
Instrumental Learning is another name for Operant Learning: a form of learning that associates a behavior with the consequences of a significant event.
What are the 4 Types of Operant Conditioning?
Positive Reinforcement occurs when a specific behavior is followed by a stimulus that is rewarding thereby increasing the occurrence of such behavior.
Negative Reinforcement occurs when a behavior is followed by the removal of an unpleasant stimulus thereby increasing the occurrence of the original behavior.
Positive Punishment occurs when a specific behavior is followed by an unpleasant stimulus which would further result in a decrease in that behavior.
And Negative Punishment occurs when an undesirable behavior is followed by the removal of a rewarding stimulus so that there is a decrease in undesirable behavior.
What is Operant Conditioning With Examples?
Examples of operant conditioning include a parent rewarding the child with new books after finishing the existing book, a trainer treating a dog when the dog performs the tricks correctly, a teacher praising a child for performing well in exams, thus motivating the child to perform even better in the near future, etc.
What are the Three Principles of Operant Conditioning?
Who is the Father of Operant Conditioning?
B.F. Skinner, the American psychologist who proposed the Operant Conditioning Theory, is regarded as the Father of Operant Conditioning.
What is an Example of Instrumental Conditioning?
What Are The Three Concepts of Albert Bandura?
Is Instrumental Conditioning Voluntary?
Why is Operant Conditioning Called Instrumental Conditioning?
What Type of Operant Conditioning is Most Effective?
Where is Operant Conditioning Used?
Can You Use Operant Conditioning On Yourself?
