Hello and welcome. My name is William and today we're going to probe even further into network flow. We're going to be talking about a specific implementation of the Ford Fulkerson method, which is the Edmonds Karp algorithm. Edmonds Karp is another maximum flow algorithm which uses a different technique to find augmenting paths through the flow graph. Before we get started, let me give you a refresher on what we're trying to do. We are trying to find the maximum flow on a flow graph, because we know that finding the maximum flow is really useful for finding bipartite matchings and also for solving a whole host of other problems.
So far, we've looked at one other technique to find the maximum flow, which is to use the Ford Fulkerson method with a depth first search. At a high level, it says that all we want to do is repeatedly find augmenting paths from the source to the sink, augment the flow, and then repeat this process until no more paths exist. The key takeaway here is that the Ford Fulkerson method does not specify how to actually find these augmenting paths, so this is where we can optimize the algorithm. A few videos ago, we saw that the Ford Fulkerson method can be implemented with a depth first search to find the maximum flow. However, the pitfall with that technique was that the time complexity depended on the capacity values of the edges in the graph.
This is because the depth first search picks edges to traverse in such a way that we might only ever be able to push one unit of flow in each iteration. The classic example is a graph where two high capacity paths from the source to the sink are joined by a middle edge of capacity one: a depth first search that keeps crossing that middle edge pushes only one unit of flow per iteration. This is really bad and can kill the time complexity, and even though it's highly unlikely to happen in practice, it's absolutely something we want to avoid should it happen. Right now, the time complexity of Ford Fulkerson with a depth first search is big O of E times f, where E is the number of edges and f is the maximum flow. The idea behind Edmonds Karp says that instead of using a depth first search to find augmenting paths, we should use a breadth first search instead, to get a better time complexity. Big O of V times E squared may not look like a better time complexity, but it actually is. What's different is that the time complexity, while it might not look great, does not depend on the capacity value of any edge in the flow graph, which is crucial.
We call an algorithm whose running time doesn't depend on the actual input values a strongly polynomial algorithm, and that's exactly what Edmonds Karp is and why it was so revolutionary at the time. Edmonds Karp can also be thought of as an algorithm which finds the shortest augmenting path from s to t, that is, in terms of the number of edges used, in each iteration. Using a breadth first search during Edmonds Karp ensures that we find the shortest path. This is a consequence of each edge being unweighted. When I say unweighted, I mean that as long as an edge has a positive remaining capacity, we don't distinguish between one edge being any better or worse than another. Now, let's look at why we might care about using Edmonds Karp.
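Just to make the breadth first search step concrete, here's a minimal sketch in Java. This isn't the source code we'll cover in a later video; it assumes the residual graph is stored as a capacity matrix where cap[u][v] is the remaining capacity of the edge from u to v, and names like bfs and prev are my own.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class BfsStep {
  // Returns prev[v] = the node we came from to reach v, or -1 if v was
  // never reached. If prev[t] is still -1 afterwards, no augmenting
  // path exists and Edmonds Karp can terminate.
  static int[] bfs(int[][] cap, int s, int t) {
    int n = cap.length;
    int[] prev = new int[n];
    Arrays.fill(prev, -1);
    prev[s] = s; // mark the source as visited
    Queue<Integer> queue = new ArrayDeque<>();
    queue.add(s);
    while (!queue.isEmpty()) {
      int u = queue.poll();
      if (u == t) break; // stop as soon as the sink is reached
      for (int v = 0; v < n; v++) {
        // Only take an edge if its remaining capacity is greater than zero.
        if (prev[v] == -1 && cap[u][v] > 0) {
          prev[v] = u;
          queue.add(v);
        }
      }
    }
    return prev;
  }
}
```

Because the breadth first search explores nodes level by level and every usable edge counts the same, the first time it reaches the sink is guaranteed to be along a path with the fewest possible edges.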
Suppose we have this flow graph and we want to find what the maximum flow is. If we're using a depth first search, we might do something like this: start at the source and do a random depth first search forwards. So after a while of zigzagging through the flow graph, we are able to find the sink. As we just saw, a depth first search has the chance to cause long augmenting paths, and longer paths are generally undesirable because the longer the path, the higher the chance for a small bottleneck value, which results in a longer runtime. Finding the shortest path from s to t, again in terms of the number of edges, is a great approach to avoid the depth first search worst case scenario and reduce the length of augmenting paths. To find the shortest path from s to t, do a breadth first search starting at the source and ending at the sink while exploring the flow graph.
Remember that we can only take an edge if the remaining capacity of that edge is greater than zero. In this example, all edges outwards from the source have a remaining capacity greater than zero, so we can add all the neighbors to the queue when we're doing the breadth first search step. And then we keep going forwards, so add all reachable neighbors to the queue and continue. And now the breadth first search has reached the sink, so we can stop. In the real algorithm, we would stop as soon as any of the edges reaches the sink.
But just for symmetry, I show three edges here entering the sink, while in reality, we would stop as soon as one of them reaches the sink. If we assume that the bottom edge made it to the sink first and we retrace the path, we get the following augmenting path. But we didn't just find any augmenting path, we found a shortest length augmenting path. So to augment the flow, do the usual: find the bottleneck value by finding the smallest remaining capacity of all the edges along the path, then augment the flow values along the path by the bottleneck. So that was the first path. However, we're not done yet.
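In code, the bottleneck and augment steps might look something like the sketch below, reusing the prev array from the breadth first search sketch earlier. With the capacity matrix representation, subtracting from cap[u][v] and adding to cap[v][u] updates the forward edge and its residual edge in one step; again, these names are my own, not from the video.

```java
public class AugmentStep {
  // Walks the path backwards from t to s via prev[], finds the smallest
  // remaining capacity (the bottleneck), then pushes that much flow
  // along the path.
  static long augment(int[][] cap, int[] prev, int s, int t) {
    long bottleneck = Long.MAX_VALUE;
    for (int v = t; v != s; v = prev[v]) {
      int u = prev[v];
      bottleneck = Math.min(bottleneck, cap[u][v]);
    }
    for (int v = t; v != s; v = prev[v]) {
      int u = prev[v];
      cap[u][v] -= bottleneck; // less remaining capacity forwards
      cap[v][u] += bottleneck; // more residual capacity backwards
    }
    return bottleneck;
  }
}
```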
Let's continue finding paths until the entire graph is saturated. Recall that while exploring the flow graph, we can only reach a node if the remaining capacity of the edge to get to that node is greater than zero. For instance, the reachable neighbors of the source node in this case do not include the bottom left node, because the edge from the source to the bottom left node has a remaining capacity of zero. All right, keep exploring until the sink is reached. And now we've reached the sink once more, so find the bottleneck value along this path.
Then use the bottleneck value to update the flow along the augmenting path. Don't forget to update the residual edges. And we're still not done, because there still exists another augmenting path. Now there only exists one edge outwards from the source with a remaining capacity greater than zero, so it's the only edge we can take, and we follow it. There's also only one edge to follow from the second node, because the other edges have a remaining capacity of zero. And now that the breadth first search has reached the sink, we can trace back the edges that were used.
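The capacity matrix sketch above folds the residual update into a single subtraction and addition. If the graph is stored as an adjacency list instead, a common pattern, and again only a sketch on my part, is to pair every edge with an explicit residual edge and update both together:

```java
public class Edge {
  int to;
  long flow, capacity;
  Edge residual; // the paired residual edge, pointing back the other way

  Edge(int to, long capacity) {
    this.to = to;
    this.capacity = capacity;
  }

  long remainingCapacity() {
    return capacity - flow;
  }

  // Push bottleneck units of flow along this edge. The residual edge's
  // flow decreases, which makes it possible to undo this push later.
  void augment(long bottleneck) {
    flow += bottleneck;
    residual.flow -= bottleneck;
  }
}
```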
We can find the bottleneck by finding the minimum remaining capacity along the path and also augment the flow. And now you can see that there are no more augmenting paths left to be found, because all the edges leading outwards from the source have a remaining capacity of zero. However, more generally, we know to stop Edmonds Karp when there are no more augmenting paths from s to t, because we know we cannot increase the flow anymore. If this is the case, the maximum flow we get from running Edmonds Karp is the sum of the bottleneck values. If you recall, in the first iteration we were able to push five units of flow, in the second iteration 10 units, and in the last iteration five units, for a total of 20 units of flow. Another way to find the maximum flow is to sum the flow values going into the sink, which I have circled in red.
In summary, this is what we learned. Using a depth first search on a flow graph can sometimes find long, windy paths from the source to the sink. This is usually undesirable, because the longer the path, the smaller the bottleneck value tends to be, and the longer the runtime. Edmonds Karp tries to resolve this problem by finding the shortest length augmenting paths from the source to the sink using a breadth first search. However, more importantly, the big achievement of Edmonds Karp is that its time complexity of big O of V times E squared is independent of the max flow, so it doesn't depend on the capacity values of the flow graph.
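Putting the pieces together, the whole algorithm is just the Ford Fulkerson loop with the breadth first search plugged in. The sketch below reuses the BfsStep and AugmentStep classes from earlier, and the tiny demo graph in main is made up for illustration; it is not the graph from this video.

```java
public class EdmondsKarp {
  // Edmonds Karp performs O(V * E) augmenting path iterations. With an
  // adjacency list each BFS costs O(E), giving the O(V * E^2) bound;
  // the capacity matrix BFS used here is O(V^2) per search but easier
  // to read. Either way, the bound is independent of edge capacities.
  static long maxFlow(int[][] cap, int s, int t) {
    long flow = 0;
    while (true) {
      int[] prev = BfsStep.bfs(cap, s, t); // shortest augmenting path
      if (prev[t] == -1) break;            // none left, the flow is maximal
      flow += AugmentStep.augment(cap, prev, s, t); // sum of bottlenecks
    }
    return flow;
  }

  public static void main(String[] args) {
    // Hypothetical 4-node flow graph: node 0 is the source, node 3 the sink.
    int[][] cap = {
      {0, 10, 10, 0},
      {0, 0, 1, 10},
      {0, 0, 0, 10},
      {0, 0, 0, 0},
    };
    System.out.println(maxFlow(cap, 0, 3)); // prints 20
  }
}
```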
And that's Edmonds Karp in a nutshell. Thank you for watching. Next video, we'll cover some source code. For now, please like this video if you learned something, and subscribe for more mathematics and computer science videos. Thank you.