How to Solve Sliding Window Problems
An Intro To Dynamic Programming
Sliding Window problems are a type of problem that frequently gets asked during software engineering interviews, and one we teach at Outco. They are a subset of dynamic programming problems, though the approach to solving them is quite different from the one used for tabulation or memoization problems. So different, in fact, that to a lot of engineers it isn't immediately clear that there is even a connection between the two at all.
This blog post aims to clear up a lot of confusion around solving this kind of problem and answer some common questions engineers typically have. Hopefully, it will show that the approach is actually relatively straightforward if you have the right thinking, and once you solve a few of these problems you should be able to solve any variation of them that gets thrown your way.
How do you identify them?
So the first thing you want to be able to do is to identify a problem that uses a sliding window paradigm. Luckily, there are some common giveaways:
- The problem will involve a data structure that is ordered and iterable, like an array or a string.
- You are looking for some subrange in that array/string, such as the longest, the shortest, or a target value.
- There is an apparent naive or brute force solution that runs in O(N²), O(2^N) or some other large time complexity.
But the biggest giveaway is that the thing you are looking for is often some kind of optimum, like the longest or shortest sequence of something that satisfies a given condition exactly.
And the amazing thing about sliding window problems is that most of the time they can be solved in O(N) time and O(1) space complexity.
For example, in Bit Flip, you are looking for the longest continuous sequence of 1s that you can form in a given array of 0s and 1s, if you have the ability to flip some number of those 0s to 1s.
Maximize number of 0s by flipping a subarray — GeeksforGeeks
In Minimum Window Substring, you are looking for the shortest sequence of characters in a string that contains all of the characters in a given set.
Minimum Window Substring — LeetCode
Why is this dynamic programming?
This search for an optimum hints at the relationship between sliding window problems and other dynamic programming problems. You are using the optimal substructure property of the problem to guarantee that an optimal solution to a subproblem can be reused to help you find the optimal solution to a larger problem.
Optimal Substructure Property in Dynamic Programming | DP-2 — GeeksforGeeks
You are also using the fact that there are overlapping subproblems in the naive approach, to reduce the amount of work you have to do. Take the Minimum Window Substring problem. You are given a string, and a set of characters you need to look for in that string. There might be multiple overlapping substrings that contain all the characters you are looking for, but you only want the shortest one. These characters can also be in any order.
Overlapping Subproblems Property in Dynamic Programming | DP-1 — GeeksforGeeks
Here's an example. Say the set you're searching for contains three characters. The naive way to approach this would be to first scan through the string, looking at ALL the substrings of length 3, and check whether they contain the characters you're looking for. If you can't find any of length 3, then try all substrings of length 4, and 5, and 6, and 7, and so on until you reach the length of the string.
If you reach that point, you know that those characters are not in there.
This is really inefficient and runs in O(N²) time. What's happening is that you're missing out on a lot of good information on each pass by constraining yourself to fixed-length windows, and you're re-examining parts of the string that don't need to be re-examined.
You're throwing out a lot of good work, and you're redoing a lot of unnecessary work.
This is where the idea of a window comes in.
Your window represents the current section of the string/array that you are "looking at," and usually there is some information stored along with it in the form of a few constant-space variables. At the very least it will have two pointers: one indicating the index corresponding to the beginning of the window, and one indicating the end of the window.
You usually want to keep track of the previous best solution you've found, if any, plus some other current information about the window that takes up O(1) space. I see a lot of engineers get tripped up by O(1) space, but all it means is that the amount of memory you use doesn't scale with the input size. So things like a current_sum variable, the number of bit flips remaining (in the Bit Flip problem), or even a count for each character you still need to find (since there is a fixed number of ASCII characters) all qualify.
But once you have figured out what variables you want to store, all you have to think about are two things: when do I grow this window, and when do I shrink it?
Different Kinds of Windows
There are several kinds of sliding window problems. The main one we’ll talk about today is the first kind, but there are a few others worth mentioning that will make their way into a different post.
The first kind has a fast pointer that grows your window under a certain condition. For Minimum Window Substring, you want to grow your window until it contains all the characters you're looking for.
It also has a slow pointer that shrinks the window. Once you find a valid window with the fast pointer, you start sliding the slow pointer up until you no longer have a valid window.
So in the Minimum Window Substring problem, once you have a substring that contains all the characters you're looking for, you start shrinking it by moving the slow pointer up until you no longer have a valid substring (meaning you no longer have all the characters you're looking for).
The second kind is very similar to the first, except that instead of incrementing the slow pointer one step at a time, you move it straight up to the fast pointer's location and then keep moving the fast pointer forward. The slow pointer sort of "jumps" to the index of the fast pointer when a certain condition is met.
This is apparent in the Max Consecutive Sum problem. Here you’re given a list of integers, positive and negative, and you are looking for a consecutive sequence that sums to the largest amount. Key insight: The slow pointer “jumps” to the fast pointer’s index when the current sum ends up being negative. More on how this works later.
For example, in the array: [1, 2, 3, -7, 7, 2, -12, 6] the result would be: 9 (7 + 2)
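Here is a minimal sketch of that "jumping" window in Python. The function name and the printed example are illustrative, not from the original post:

```python
def max_consecutive_sum(nums):
    # Kadane-style "jumping" window: the slow pointer leaps past the fast
    # pointer whenever the running sum goes negative.
    best = float("-inf")
    current_sum = 0
    slow = 0  # start of the current window, tracked only to show the jump
    for fast, value in enumerate(nums):
        current_sum += value
        best = max(best, current_sum)
        if current_sum < 0:
            # A negative running sum can't help any future window,
            # so the slow pointer "jumps" just past the fast pointer.
            current_sum = 0
            slow = fast + 1
    return best

print(max_consecutive_sum([1, 2, 3, -7, 7, 2, -12, 6]))  # 9
```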
Again, you're looking for some kind of optimum (i.e., the maximum sum).
The third kind is a little different: the slow pointer simply trails one or two indices behind the fast pointer, and it keeps track of some choice you've made.
In the House Robber problem you are trying to find the maximum amount of gold you can steal from houses, given that you cannot rob two houses that are next door to each other. Here the choice is whether or not you should steal from the current house, given that you could instead have stolen from the *previous* house.
House Robber — LeetCode
The optimum you are looking for is the maximum amount of gold you can steal.
The fourth kind is different because instead of having both pointers travel from the front, one starts at the front and the other at the back. An example of this is the Rainwater problem, where you calculate the maximum amount of rainwater you can capture in a given terrain. Again, you are looking for a maximum value, and though the logic is slightly different, you are still optimizing a brute-force O(N²) solution.
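As a rough illustration of the front-and-back pattern, here is one common two-pointer solution to the Rainwater problem (the function name and example terrain are mine, for illustration only):

```python
def trap_rainwater(heights):
    # One pointer starts at the front, one at the back. We always advance
    # the side with the lower wall, because the water trapped there is
    # bounded by that side's running maximum.
    left, right = 0, len(heights) - 1
    left_max = right_max = 0
    water = 0
    while left < right:
        if heights[left] <= heights[right]:
            left_max = max(left_max, heights[left])
            water += left_max - heights[left]
            left += 1
        else:
            right_max = max(right_max, heights[right])
            water += right_max - heights[right]
            right -= 1
    return water

print(trap_rainwater([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))  # 6
```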
These four patterns should come as no surprise. After all, there are only so many ways you can move two pointers through an array or string in linear time.
Look for “Key Insights”
One final thing to think about with these problems is the key insight that "unlocks" the problem. I talk about it a bit more in my other post on how to approach algorithm problems in general. It usually involves deducing some fact based on the constraints of the problem that helps you look at it in a different way.
For example, in the House Robber problem, you can't rob adjacent houses, but every house has a positive amount of gold (meaning you can never rob a house and end up with less gold afterwards). The key insight here is that you never need to skip more than two houses in a row: if you ever left three consecutive houses unrobbed, you could rob the middle one of the three and be guaranteed to increase the amount of gold you steal.
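Here is a minimal sketch of that lagging-pointer idea in Python, assuming the standard House Robber setup (the function name and example are illustrative):

```python
def rob(houses):
    # At each house we either take it plus the best total from two houses
    # back, or skip it and keep the best total from the previous house.
    prev_best = 0       # best total up to the previous house
    prev_prev_best = 0  # best total up to two houses back
    for gold in houses:
        prev_best, prev_prev_best = max(prev_best, prev_prev_best + gold), prev_best
    return prev_best

print(rob([2, 7, 9, 3, 1]))  # 12 (rob the houses holding 2, 9, and 1)
```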
For the Bit Flip problem, you don't need to actually mutate the array; you just need to keep track of how many flips you have remaining. As you grow your window, you subtract from that number until you've exhausted all your flips, and then you shrink your window until you encounter a zero and gain a flip back.
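A minimal sketch of that bookkeeping, assuming a variant of Bit Flip where you are allowed at most k flips (the function name, parameters, and example are illustrative):

```python
def longest_ones(bits, k):
    # Grow the window with `fast`; shrink it with `slow` whenever we've
    # spent more than k flips on zeros inside the window.
    slow = 0
    flips_remaining = k
    best = 0
    for fast, bit in enumerate(bits):
        if bit == 0:
            flips_remaining -= 1       # spend a flip to include this zero
        while flips_remaining < 0:
            if bits[slow] == 0:
                flips_remaining += 1   # passing a zero gives a flip back
            slow += 1
        best = max(best, fast - slow + 1)
    return best

print(longest_ones([1, 0, 1, 1, 0, 1, 0, 1], 2))  # 6
```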
Wrapping up Our Example
So let’s wrap up the Minimum Window Substring problem.
We established that we need to grow our window until we have a valid substring that contains all the letters we are looking for, and then shrink that window until we no longer have a valid substring.
The key insight here is that the smallest window will always be bounded by letters that we are searching for. If it weren't, we could always shorten our window by lopping off unused characters at the start or end.
Some other insights to consider: there may be repeats of certain characters within our window, and that’s okay. But it does hint that we need some kind of way of keeping track of the number of repeats we’ve seen within our window, and not just whether we’ve seen a character we’re looking for.
This should immediately imply the use of a hash map, where the keys are the characters, and the values are the number of times we’ve seen a character in our window.
We also need an integer to keep track of how many characters we’re missing to complete our set.
This would only decrement when we see a character in our window that belongs to the set, but that hasn’t been seen in that particular window.
So let’s summarize the algorithm:
1) A result tuple (or two-element array) that represents the start and end index of the shortest substring that contains all the characters. Initialized to the largest possible range (for example, [-Infinity, Infinity]).
2) A hashMap to keep track of how many letters in the set you've seen in the current window, initialized with all the characters in the set as keys and all the values as 0.
3) A counter to keep track of any time we see a new letter from the set when we grow the window, or lose a letter from the set when we shrink the window. Initialized to the number of characters we are looking for.
4) A fast and a slow pointer, both initialized to 0.
Then all we do is have a for loop where the fast pointer increments every round.
Within that for loop, if we see a character from the hashMap, we increment its value in the map.
If its value was 0 in the hashMap before, then we decrement the number of characters missing. But if we have repeats of a character we're searching for, we don't decrement the counter.
Once you've seen all the characters you're looking for, that counter will reach 0, which means the current window is valid.
Then we have a while loop within the for loop that only runs while the counter is 0.
Within that while loop, if the difference between our fast and slow pointers is smaller than the difference between the indices stored in our result tuple, then we replace that tuple with the current window's start and end. Because the result is initialized to the largest possible range, the first valid window we find will always update it.
Then all we do is increment our slow pointer. If we see a character in our set, then we need to decrement its value in the hashMap by 1, as it is moving out of our window.
If its value in the hashMap reaches 0, then the number of characters we are missing increments to 1, and we will break out of the while loop next round.
And that's the entire algorithm.
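Here is a minimal Python sketch of the summarized algorithm. The function name, the example string, and the assumption that each character in the set is needed exactly once are mine rather than part of the original walkthrough, and it returns the substring itself (an empty string if no valid window exists) instead of the index tuple:

```python
def min_window(s, chars):
    result = (float("-inf"), float("inf"))  # best (start, end) window so far
    counts = {c: 0 for c in chars}          # occurrences of each needed char in the window
    missing = len(counts)                   # needed characters not yet in the window
    slow = 0
    for fast, ch in enumerate(s):
        if ch in counts:
            if counts[ch] == 0:
                missing -= 1                # first copy of a needed character
            counts[ch] += 1
        while missing == 0:                 # window is valid: record it, then shrink
            if fast - slow < result[1] - result[0]:
                result = (slow, fast)
            left = s[slow]
            if left in counts:
                counts[left] -= 1
                if counts[left] == 0:
                    missing += 1            # we just lost a needed character
            slow += 1
    if result[1] == float("inf"):
        return ""                           # no valid window exists
    return s[result[0]:result[1] + 1]

print(min_window("ADOBECODEBANC", set("ABC")))  # "BANC"
```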
Sliding Window Algorithm
1. Overview
When dealing with problems that require checking the answer of some ranges inside a given array, the sliding window algorithm can be a very powerful technique.
In this tutorial, we’ll explain the sliding window technique with both its variants, the fixed and flexible window sizes. Also, we’ll provide an example of both variants for better understanding.
2. Theoretical Idea
The main idea behind the sliding window technique is to convert two nested loops into a single loop. Usually, the technique helps us reduce the time complexity from O(n²) to O(n).
The condition to use the sliding window technique is that the problem asks to find the maximum (or minimum) value for a function that calculates the answer repeatedly for a set of ranges from the array. Namely, if these ranges can be sorted based on their start, and their end becomes sorted as well, then we can use the sliding window technique.
In other words, the following must hold: if L1 ≤ L2, then R1 ≤ R2, where L1 and L2 are the left ends of two ranges, and R1 and R2 are the right ends of the same ranges.
Basically, the technique lets us iterate over the array holding two pointers, L and R. These pointers indicate the left and right ends of the current range. In each step, we either move L, R, or both of them to the next range.
In order to do this, we must be able to add elements to our current range when we move R forward. Also, we must be able to delete elements from our current range when moving L forward. Each time we reach a range, we calculate its answer from the elements we have inside the current range.
In case the length of the ranges is fixed, we call this the fixed-size sliding window technique. However, if the lengths of the ranges can change, we call this the flexible-size sliding window technique. We'll provide examples of both of these options.
3. Fixed-Size Sliding Window
Let’s look at an example to better understand this idea.
3.1. The Problem
Suppose the problem gives us an array of length n and a number k. The problem asks us to find the maximum sum of k consecutive elements inside the array.
In other words, first, we need to calculate the sum of every range of length k inside the array. After that, we must return the maximum sum among all the calculated sums.
3.2. Naive Approach
Let’s take a look at the naive approach to solving this problem:
First, we iterate over all the possible beginnings of the ranges. For each range that begins at index i, we iterate over its elements from i to i + k - 1 and calculate their sum. After each step, we update the best answer so far. Finally, the answer becomes the maximum between the old answer and the currently calculated sum.
In the end, we return the best answer we managed to find among all ranges.
The time complexity is O(n · k), which is O(n²) in the worst case, where n is the length of the array.
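A rough Python sketch of this naive approach (the function and variable names are illustrative):

```python
def max_sum_naive(arr, k):
    # Recompute the sum of every window of length k from scratch.
    n = len(arr)
    best = float("-inf")
    for start in range(n - k + 1):          # every possible beginning of a range
        window_sum = 0
        for i in range(start, start + k):   # sum its k elements
            window_sum += arr[i]
        best = max(best, window_sum)
    return best

print(max_sum_naive([5, -2, 3, 8, 1], 3))  # 12 (3 + 8 + 1)
```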
3.3. Sliding Window Algorithm
Let’s try to improve on our naive approach to achieve a better complexity.
First, let's find the relation between every two consecutive ranges. The first range is obviously [1, k]. However, the second range will be [2, k + 1].
We perform two operations to move from the first range to the second one: the first operation is adding the element with index k + 1 to the answer, and the second operation is removing the element with index 1 from the answer.
Every time, after we calculate the answer to the corresponding range, we just maximize our calculated total answer.
Let’s take a look at the solution to the described problem:
Firstly, we calculate the sum for the first range, which is [1, k]. Secondly, we store its sum as the answer so far.
After that, we iterate over the possible ends of the ranges, which lie inside the range [k + 1, n]. In each step, we update the sum of the current range. Hence, we add the value of the element at the new end i and delete the value of the element at index i - k.
Every time, we update the best answer we found so far to become the maximum between the original answer and the newly calculated sum. In the end, we return the best answer we found among all the ranges we tested.
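A minimal Python sketch of the described solution (names are illustrative):

```python
def max_sum_sliding(arr, k):
    # Maintain the sum of the current range: add the element entering the
    # window and remove the element leaving it at each step.
    window_sum = sum(arr[:k])     # sum of the first range
    best = window_sum
    for i in range(k, len(arr)):
        window_sum += arr[i]      # element entering the window
        window_sum -= arr[i - k]  # element leaving the window
        best = max(best, window_sum)
    return best

print(max_sum_sliding([5, -2, 3, 8, 1], 3))  # 12
```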
The time complexity of the described approach is O(n), where n is the length of the array.
4. Flexible-Size Sliding Window
We refer to the flexible-size sliding window technique as the two-pointers technique. We’ll take an example of this technique to better explain it too.
4.1. Problem
Suppose we have n books aligned in a row. For each book, we know the number of minutes needed to read it. However, we only have t free minutes to read.
Also, we should read some consecutive books from the row. In other words, we can choose a range from the books in the row and read them. Of course, the condition is that the sum of time needed to read the books mustn't exceed t.
Therefore, the problem asks us to find the maximum number of books we can read. Namely, we need to find a range from the array whose sum is at most t such that this range's length is the maximum possible.
4.2. Naive Approach
Take a look at the naive approach for solving the problem:
First, we initialize the best answer so far with zero. Next, we iterate over all the possible beginnings of the range. For each beginning, we iterate forward as long as we can read more books. Once we can’t read any more books, we update the best answer so far as the maximum between the old one and the length of the range we found.
In the end, we return the best answer we managed to find.
The complexity of this approach is O(n²) in the worst case, where n is the length of the array of books.
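A rough Python sketch of this naive approach, where minutes holds the reading time of each book and t is the number of free minutes (names are illustrative):

```python
def max_books_naive(minutes, t):
    # For each starting book, extend the range greedily until the total
    # reading time would exceed the t free minutes.
    best = 0
    n = len(minutes)
    for start in range(n):
        total = 0
        end = start
        while end < n and total + minutes[end] <= t:
            total += minutes[end]
            end += 1
        best = max(best, end - start)   # number of books read from this start
    return best

print(max_books_naive([3, 1, 2, 5, 1, 1, 2], 6))  # 3 (the books taking 3, 1, and 2 minutes)
```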
4.3. Sliding Window Algorithm
We’ll try to improve the naive approach, in order to get a linear complexity.
First, let's assume we managed to find the answer for the range that starts at the beginning of the array. The next range starts from the second index inside the array. However, the end of the second range is surely no earlier than the end of the first range.
The reason for this is that the second range doesn’t use the first element. Therefore, the second range can further extend its end since it has more free time now to use.
Therefore, when moving from one range to the other, we first delete the old beginning from the current answer. Also, we try to extend the end of the current range as far as we can.
Hence, by the end, we’ll iterate over all possible ranges and store the best answer we found.
The following algorithm corresponds to the explained idea:
Just as with the naive approach, we iterate over all the possible beginnings L of the range. For each beginning, we'll first subtract the value of the element at index L - 1 from the current sum, since the old beginning leaves the range.
After that, we'll try to move R as far as possible. Therefore, we continue to move R forward as long as the sum is still at most t. Finally, we update the best answer so far. Since the length of the current range is R - L + 1, we maximize the best answer with this value.
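A minimal Python sketch of the described two-pointer solution, where left and right play the roles of L and R from the text (names are illustrative):

```python
def max_books_sliding(minutes, t):
    # `right` never moves backwards, so even though there is a while loop
    # inside the for loop, the total work is linear.
    best = 0
    total = 0
    right = 0
    n = len(minutes)
    for left in range(n):
        if left > 0:
            total -= minutes[left - 1]          # old beginning leaves the range
        while right < n and total + minutes[right] <= t:
            total += minutes[right]             # extend the end as far as possible
            right += 1
        best = max(best, right - left)          # books in the range [left, right)
    return best

print(max_books_sliding([3, 1, 2, 5, 1, 1, 2], 6))  # 3
```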
Although the algorithm may seem to have O(n²) complexity, let's examine it carefully. The variable R keeps its value from one iteration of L to the next; it only ever moves forward until it reaches n. Therefore, the number of times we execute the while loop in total is at most n.
Hence, the complexity of the described approach is O(n), where n is the length of the array.
5. Differences
The main difference comes from the fact that in some problems we are asked to check a certain property among all ranges of the same size, while in other problems we are asked to check it among all ranges that satisfy a certain condition. In the latter case, the condition can make the ranges vary in length.
If the ranges have an already-known, fixed size (as in our consecutive-elements problem), we go with the fixed-size sliding window technique. However, if the sizes of the ranges can differ (as in our book-reading problem), we go with the flexible-size sliding window technique.
Also, always keep in mind the condition for using the sliding window technique that we covered in the beginning: we must guarantee that moving the L pointer forward will make us either keep R in its place or move it forward as well.
6. Conclusion
In this tutorial, we explained the sliding window approach. We provided the theoretical idea for the technique. Also, we described two examples of the fixed-size and flexible-size sliding window technique. Finally, we explained when to use each technique.