When is a Schedule of Reinforcement Considered Fixed?

Key Takeaways:

  • A fixed schedule of reinforcement means the number of responses or time between rewards is constant.
  • Types of fixed schedules include fixed interval (FI) and fixed ratio (FR).
  • FI provides rewards after set time periods, while FR provides rewards after a fixed number of responses.
  • Fixed schedules lead to predictable response patterns like pause-and-burst responding.
  • Fixed reinforcement schedules are contrasted with variable schedules, where the criterion for reward varies.


Reinforcement schedules are important tools in behavioral psychology and applied behavior analysis. They involve providing rewards or reinforcers contingent on a target behavior in order to modify and strengthen that behavior. An important distinction can be made between fixed and variable schedules of reinforcement. So when exactly is a schedule considered fixed?

This comprehensive guide will analyze the key characteristics and types of fixed reinforcement schedules. It will delineate how they differ from variable schedules and detail the typical response patterns they engender. By evaluating relevant psychological research and principles, this article aims to provide a thorough understanding of when reinforcement criteria are deemed fixed. Gaining this insight allows the effective application of schedules to shape behaviors ranging from education to animal training.

Fixed reinforcement schedules offer a consistent, unchanging criterion for the delivery of rewards. This makes the timing of rewards predictable and helps establish clear associations between target behaviors and specific reinforcers. However, care must be taken to ensure fixed schedules do not inadvertently encourage dysfunctional behavioral patterns. Overall, an appreciation of fixed reinforcement parameters provides invaluable tools for behavior modification.

What Makes a Reinforcement Schedule Fixed?

The Criterion for Reward Does Not Vary

The central defining feature of a fixed reinforcement schedule is that the exact criterion for reward delivery remains constant. The number of responses required, or the time that must elapse before the next reinforcer, does not fluctuate. For example, in a fixed interval schedule rewarding a rat with a food pellet every 5 minutes, exactly 5 minutes must pass after the previous reward before the next one is given. Under a fixed ratio schedule delivering a toy car after every 8 lever presses, precisely 8 presses are required each time.

This differs from variable schedules, where the threshold for reinforcement changes unpredictably within prescribed limits. Under a variable interval schedule, the time between rewards may range randomly between 1 and 5 minutes; in a variable ratio schedule, the response requirement may vary between 4 and 12 lever presses. In fixed schedules the parameters stay rigidly constant, whereas in variable schedules they shift unpredictably.
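The fixed-versus-variable distinction can be sketched in a few lines of Python. This is an illustrative sketch only: the function names are ours, and the parameters (an 8-press ratio, a 4-12 press range) come from the examples above.

```python
import random

def fixed_ratio_rewards(responses, ratio=8):
    """Under FR-8, every 8th response earns a reward.
    The criterion never moves, so rewards are fully predictable."""
    return responses // ratio

def variable_ratio_requirements(n_rewards, low=4, high=12, seed=0):
    """Under a VR schedule, each reward's response requirement is
    drawn anew from the 4-12 range described above."""
    rng = random.Random(seed)
    return [rng.randint(low, high) for _ in range(n_rewards)]

print(fixed_ratio_rewards(24))         # → 3 rewards: the criterion is fixed
print(variable_ratio_requirements(3))  # the criterion shifts for each reward
```

Because the fixed ratio never changes, the number of rewards is a simple function of the number of responses; under the variable schedule, the requirement itself is a random quantity.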

Predictability and Consistency are Core Attributes

The unchanging criterion under fixed schedules allows prediction of when the next reinforcer will be delivered. For instance, if 20 pecks consistently earn a food pellet, the pigeon can anticipate the reward after every 20 pecks. This contrasts with variable schedules, where reward timing is unknown; the organism knows only that reinforcement will occur somewhere within a specified range. The predictability imparted by fixed schedules helps organisms learn the instrumental relationship connecting their behavior to specific outcomes.

Additionally, fixed schedules deliver consistency and avoid confusion. If the response requirement randomly fluctuated between 5 and 20 pecks, it would disrupt the pigeon’s ability to establish a clear association between its key pecking and the resultant food pellet. Fixed schedules strengthen behavior-reward contingency learning.

Types of Fixed Reinforcement Schedules

There are two primary types of fixed reinforcement schedules based on either temporal or response-based reward criteria:

Fixed Interval Schedules

In a fixed interval (FI) schedule, the first response is rewarded only after a set time has elapsed since the previous reinforcement. This time period remains fixed and consistent. For example, a child may earn 30 minutes of screen time after staying quiet for 5 minutes. Each subsequent 5-minute interval of quietness earns another 30 minutes of screen time. The fixed interval is always 5 minutes.

Some key attributes of FI schedules:

  • Rewards are delivered based on time duration, not number of responses
  • The time interval duration is always the same, e.g. 1 min, 5 min, 1 hour
  • The first response after the interval has passed is rewarded
  • Early responses during the interval are not rewarded

Research shows FI schedules often produce a “scalloped” pattern: response rates are low just after reinforcement and accelerate as the interval elapses. This pause-and-burst pattern conserves effort, since responses made early in the interval are never rewarded.
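The FI rule described in the bullets above (only the first response after the interval is rewarded, and early responses earn nothing) can be captured in a short sketch. This is our own illustrative code, not a standard implementation; the 5-unit interval echoes the rat example earlier.

```python
def fi_rewards(response_times, interval=5.0):
    """Fixed interval: only the FIRST response made after `interval`
    time units have elapsed since the last reward is reinforced.
    Responses emitted earlier in the interval earn nothing."""
    rewards = []
    last_reward = 0.0
    for t in sorted(response_times):
        if t - last_reward >= interval:
            rewards.append(t)   # first response past the interval
            last_reward = t     # the interval clock restarts here
    return rewards

# Early responses at t = 1..4 go unrewarded; the response at t = 6
# earns the reinforcer, and the 5-unit clock restarts from t = 6.
print(fi_rewards([1, 2, 3, 4, 6, 8, 12]))  # → [6, 12]
```

Note that responding faster during the interval changes nothing; only time, plus one response at the end, determines when reinforcement arrives.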

Fixed Ratio Schedules

In a fixed ratio (FR) schedule, a reward is provided after a set number of responses. This response requirement remains fixed and consistent for each reward. For example, a student may earn $5 for every 5 chores completed. Each subsequent $5 requires precisely 5 more chores. The fixed ratio is always 5 chores per reward.

Some key attributes of FR schedules:

  • Rewards are delivered based on number of responses, not time
  • The response requirement is always the same, e.g. 5 lever presses, 10 pecks
  • Reward is earned when the fixed response ratio is completed
  • Earlier incomplete ratios are unrewarded

Research shows FR schedules often lead to high steady response rates until the reward is delivered, followed by a pause in responding. This maximizes reward frequency.
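The FR mechanics above reduce to a simple counter that resets at each reinforcement. A minimal sketch, with names of our own choosing, using the 5-chores-per-reward figure from the example:

```python
class FixedRatioSchedule:
    """Fixed ratio: a reward is delivered on every `ratio`-th response.
    Illustrative sketch; the class and method names are ours."""
    def __init__(self, ratio=5):
        self.ratio = ratio
        self.count = 0

    def respond(self):
        """Register one response; return True if it earns the reward."""
        self.count += 1
        if self.count == self.ratio:
            self.count = 0   # the ratio count resets after reinforcement
            return True
        return False

fr5 = FixedRatioSchedule(ratio=5)   # the chores-for-$5 example above
outcomes = [fr5.respond() for _ in range(10)]
print(outcomes.count(True))         # → 2 rewards in 10 responses
```

Unlike the FI sketch, time plays no role here: reward frequency depends entirely on how fast responses are emitted, which is why FR schedules sustain high response rates.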

Differences Between Fixed and Variable Schedules

While both can powerfully reinforce behaviors, fixed and variable reinforcement schedules differ considerably:

  • Fixed schedules have consistent, unchanging criteria while variable schedules feature unpredictably fluctuating criteria
  • In fixed schedules, the timing/quantity of responses necessitated for reward are predictable, while in variable schedules they are unpredictable
  • Fixed schedules strengthen behavior-reward contingency learning while variable schedules sustain responding through uncertainty
  • Fixed ratio schedules lead to pause-and-burst responding, while variable ratio schedules engender steady responding
  • Fixed interval schedules generate scalloped patterns, while variable interval schedules generate moderate, steady pacing

So fixed schedules offer predictability and consistency, while variable schedules leverage uncertainty and randomness to promote responding. Both can be useful tools for modifying behaviors.

Patterns of Responding Under Fixed Schedules

Due to their consistent criteria, fixed reinforcement schedules lead to signature patterns of responding:

Post-Reinforcement Pause

After a reward is delivered, there is typically a pause in responding under both FI and FR schedules. This reflects the fact that in fixed schedules, additional premature responses will not be reinforced. Hence organisms inhibit further responses temporarily after securing the reward.

High Response Rates Before Reward

Towards the end of the interval in FI schedules, and as the ratio requirement nears completion in FR schedules, response rates accelerate dramatically. This ensures maximum density of rewards under FI, and faster attainment of the next reward under FR.

Burst-and-Pause Cycling

The combination of pauses after reward delivery, and accelerated responses as the criterion approaches, creates a burst-and-pause pattern. Responding cycles from low to high across each interval or ratio. This maximizes overall reward frequency.

Habituation to the Criteria

With sufficient experience, organisms become habituated to the fixed schedule criteria. For instance, rats press levers faster as the FI elapses because the interval duration becomes ingrained. Pigeons may complete FR ratios more efficiently by minimizing pausing. Predictability enables optimization.

Applications of Fixed Reinforcement Schedules

Fixed reinforcement schedules are commonly employed in diverse settings:

Education

  • Points earned after every 5 correct math problems
  • Sticker awarded for good behavior every half hour
  • Access to video games on Fridays after completing 5 days of homework

Animal Training

  • Dolphin performs trick for fish after every 4th correct performance
  • Guide dogs receive treat after calmly walking for duration of 2 traffic light cycles
  • Horses taught to jump obstacles by providing feed after every 3 successful jumps

Psychology Experiments

  • Rats and pigeons in Skinner boxes have lever pressing or key pecking reinforced on FI or FR schedules
  • Pigeons exhibit higher rates of key pecking when rewarded on FR rather than FI schedules
  • Humans’ button pressing shows predictable escalating rates under FI schedules

Workplace Performance

  • Employees earn bonuses after reaching weekly sales quotas
  • Commission provided for every 5 clients signed
  • Hourly workers paid each hour, an FI schedule that sustains performance

Key Considerations for Use of Fixed Schedules

While fixed reinforcement schedules offer useful applications for behavior modification, some considerations apply:

  • Fixed ratio requirements shouldn’t be excessively high to avoid frustration
  • Fixed intervals shouldn’t be overly lengthy to sidestep erosion of behavior-reward contingency
  • Can combine FI and FR schedules for optimal results – e.g. hourly pay plus sales commission
  • Avoid rewarding dysfunctional behaviors even on fixed schedules
  • Fade out rewards gradually as self-sustaining habits form
  • Implement mixed or variable schedules once behaviors are learned to maintain performance

So fixed reinforcement schedules provide consistency but should be deployed judiciously and phased out appropriately. Overall, they constitute powerful behavior modification tools when applied strategically.


In conclusion, fixed reinforcement schedules deliver rewards or reinforcers consistently after predetermined time intervals or numbers of responses. This distinguishes them from variable schedules, where the criterion fluctuates unpredictably within limits. Because the criterion remains rigidly fixed, fixed schedules allow anticipation of rewards, which strengthens behavior-reward contingency learning. However, they can also produce uneven pause-and-burst responding. The two main subtypes are fixed interval schedules, based on time elapsed, and fixed ratio schedules, based on the number of responses emitted. Each generates signature response patterns due to its unchanging reward criterion. While fixed schedules effectively instill behaviors, variable schedules are better suited to maintaining them long-term. Used judiciously and thoughtfully, fixed reinforcement schedules provide indispensable tools for behavior modification in humans and animals.

