Travelling Salesman Problem

Graph Theory Algorithms
20 minutes

Transcript

Hello, and welcome to this tutorial on how to solve the Traveling Salesman Problem with dynamic programming. Today we're going to look at two things. First is how to find the cost of the best tour, and then how to actually find that tour. Alright, so let's get started. What is the Traveling Salesman Problem?

In a nutshell, it's when you're given a list of cities and the distances between each pair of cities, and you want to find the shortest possible route that visits each city exactly once and then returns to the city of origin. In other words, we can say that the problem is: given a complete graph with weighted edges, what is the Hamiltonian cycle of minimum cost? A Hamiltonian cycle is simply a path which visits each node exactly once. In practice, you will probably want to represent whatever graph you have as an adjacency matrix for simplicity; if an edge between two nodes does not exist, simply set that edge's value to positive infinity. So, in the graph I have, you can see that one optimal tour consists of going from A to D to C to B and then finally back to A, with a minimum cost of nine.
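To make the adjacency matrix representation concrete, here is a minimal sketch in Python. The edge weights for the four-node A, B, C, D example are assumptions: they are chosen so the A to D to C to B to A tour costs nine as in the transcript, but the actual weights in the video's diagram may differ.

```python
import math

INF = math.inf  # a missing edge is stored as positive infinity

# Hypothetical weights for the 4-node example (A, B, C, D -> indices 0-3).
m = [
    [0, 3, 5, 2],  # A
    [3, 0, 1, 6],  # B
    [5, 1, 0, 3],  # C
    [2, 6, 3, 0],  # D
]

# Cost of a tour is read off edge by edge from the matrix:
tour_cost = m[0][3] + m[3][2] + m[2][1] + m[1][0]  # A -> D -> C -> B -> A
print(tour_cost)  # 9
```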

Note that it is entirely possible that there are many valid optimal tours, but they will all have the same minimum cost. As it turns out, solving the Traveling Salesman Problem is extremely difficult. In fact, the problem has been proven to be NP-complete, meaning it's very difficult to find an optimal solution for large inputs. However, numerous approximation algorithms exist if you want an algorithm that runs very quickly, even for large inputs. So the brute force way to solve this problem is to actually compute all possible tours. And this means we have to try all permutations of node orderings, which will take big O of n factorial time, which is very slow.

But, as you can see, I've listed all the permutations of nodes and highlighted the ones which yield the optimal solution. The dynamic programming solution we're going to develop today is able to improve on this naive approach by reducing the complexity to big O of n squared times 2 to the n. At first glance, this may not seem like a substantial improvement; however, it now makes graphs with roughly 23 nodes, give or take, feasible for modern home computers. Here's a table of n factorial versus n squared times 2 to the n. At first, you notice that n factorial is smaller for small numbers, but this quickly changes in favor of n squared times 2 to the n, which gives a significant improvement over n factorial. You can also see how large the numbers get for n factorial when we hit n equals 15 versus n squared times 2 to the n. All right, time to talk about how to solve this problem using dynamic programming.
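The crossover described above is easy to check directly; a quick sketch:

```python
from math import factorial

# Compare the brute-force n! growth with the DP's n^2 * 2^n growth.
for n in (5, 10, 15, 20):
    print(n, factorial(n), n * n * 2 ** n)
# For n = 5, n! = 120 is smaller than 5^2 * 2^5 = 800, but by n = 15
# the factorial (about 1.3 * 10^12) dwarfs 15^2 * 2^15 = 7,372,800.
```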

The main idea is going to be to compute the optimal solution for paths of length n, and we will have to reuse information from paths of length n minus one. But before we get started, there's some setup information we need to talk about. The first thing we're going to need to do is pick a starting node s. It doesn't matter which node is picked; just make sure that this node's index is between zero and n, non-inclusive. Suppose we have this graph with four nodes, and we choose our starting node to be node zero. The next thing we need to do is store the optimal value from the starting node to every other node. This will solve the Traveling Salesman Problem for all paths with exactly two nodes. The optimal value for paths with two nodes is given in the input through the adjacency matrix.

And this is all the setup we need to do. Visually, we can see that we store the value from zero to one, zero to two, and finally zero to three. In the last slide, I talked about storing the solution for n equals two. But what is it we really need to store? There are two key things. The first is obvious, and that's the set of visited nodes in the partially completed tour.

The other is the index of the last visited node in the path. For each partially completed tour, we need to save which node was the last node we were on, so that we can continue extending that partially completed tour from that node and not from some other node. This is very important. So together, these two things, the set of visited nodes and the index of the last visited node, form what I call the dynamic programming state. Since there are n possible last nodes and 2 to the power of n node subsets, our storage space is bounded by big O of n times 2 to the n. An issue we're going to face when trying to store the DP state is representing the set of visited nodes.

And the way to do this is to use a single 32-bit integer. The main idea is that if the i'th node has been visited, we flip on the i'th bit to one in the binary representation of the integer. The advantage of this representation is that a 32-bit integer is compact, quick and allows for easy caching in a memo table. For example, on the leftmost graph we have visited the zeroth and first nodes, so the binary representation is 0011 if the least significant bit is on the right. Similarly, the binary representation of the middle graph is 1001, or just the number 9 in decimal, since nodes zero and three have been visited.
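The bitmask bookkeeping described above can be sketched in a few lines of Python:

```python
# Visited-set bookkeeping with a single integer bitmask:
# bit i is 1 exactly when node i has been visited.
state = 0
state |= 1 << 0        # visit node 0
state |= 1 << 1        # visit node 1
print(bin(state))      # 0b11 -> the 0011 example from the transcript

state = (1 << 0) | (1 << 3)   # nodes 0 and 3 visited
print(state)                  # 9 -> the 1001 example
print((state >> 3) & 1)       # 1: node 3 is in the set
print((state >> 2) & 1)       # 0: node 2 is not
```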

Now suppose we're trying to expand on our previous state, one particular instance of a two-node partial tour as shown below. What we want to do from our last node, which in this graph is node three, is expand to visit all other unvisited nodes. These are the gray nodes one and two. This makes our partial tour a little longer, with three nodes. For this particular state, we were able to generate an additional two states. But we would also need to do this for all states with two nodes, not just this one with zero and three. In total, this process would result in six new states for partial tours with three nodes.

This process continues with gradually longer and longer paths until all paths are found. And the last thing we need to do to wrap up the Traveling Salesman Problem is to reconnect the tour to the designated starting node s. To do this, loop over the end state in the memo table for all possible end positions, excluding the start node, and minimize the lookup value plus the cost of going back to s. Note that the end state is the one where the binary representation is composed of all ones, meaning each node has been visited. It's finally time to look at some pseudocode for the Traveling Salesman Problem. Just a heads up to everyone who's still a beginner: the following slides make use of advanced bit manipulation techniques, so make sure you're comfortable with how binary shifts, ORs and XORs work.

Here's the function that solves the Traveling Salesman Problem. It takes two inputs: the first is a two-dimensional adjacency matrix representing the input graph, and the second is the index of the starting node. The first thing we do is get the size of the matrix and store it in a variable called n, which tells us how many nodes there are. Then we initialize the two-dimensional memo table. The table should have size n by 2 to the power of n. I recommend filling the table with null values,

so that programmatic errors throw runtime exceptions. Then we're going to call four functions: setup, solve, findMinCost and findOptimalTour. Let's begin by looking at what happens inside the setup method. The setup method is very easy; it simply does what I illustrated a few slides ago by storing the optimal value from the start node to every other node. You loop through each node, skipping over the start node.

And then you cache the optimal value from s to i, which can be found in the distance matrix. The DP state you store is the end node as i, and the mask with bits s and i set to one, hence the double bit shift. Visually, the green node is the start node and the orange node is node i, which changes with every iteration. You'll notice that the orange node is never on top of the green node, which is why I have a continue statement to skip that case. Now let's look at how the solve method works. The solve method is by far the most complicated, but I've broken it down to be easy to understand.

The first line in the method loops over r equals three up to n, inclusive. Think of r as the number of nodes in a partial tour, so we're increasing this number one at a time. The next line says, for each subset in combinations: the combinations function generates all bit sets of size n with exactly r bits set to one. For example, as seen in the comments, when calling the combinations function with r equals three and n equals four, we get four different bit sets, each distinct and with three ones turned on. These are meant to represent a subset of visited nodes. Moving on, notice that I enforce the node s to be part of the generated subset.

Otherwise, the subset of nodes is not valid, since it could not have started at our designated starting node. Notice that this if statement calls the notIn function defined at the bottom of this slide. All it does is check if the i'th bit in the subset is zero. Then we loop over a variable called next, which represents the index of the next node. The next node must be part of the current subset. This may sound strange, but know that the subset variable generated by the combinations function has a bit which is meant for the next node.

This is why the variable state on the next line represents the subset excluding the next node. This is so we can look up in our memo table what the best partial tour value is when the next node was not yet in the subset. Being able to look back and reuse parts of other partially completed tours is essential to the dynamic programming aspect of this algorithm. The following variable to consider is e, short for end node, because I ran out of room. This variable is quite important, because while the next node is temporarily fixed in the scope of the inner loop, we try all possible end nodes of the current subset and see which end node best optimizes this partial tour. Of course, the end node cannot be the start node or the next node, and it must be part of the current subset that we're considering.

So we skip all those possibilities. Then we compute the new distance and compare it to the minimum distance. If the new distance is better than the minimum distance, then we update the best minimum distance. Afterwards, once we've considered all possible end nodes to connect to the next node, we store the best partial tour in the memo table. And this concludes the solve method. The only unanswered question in this slide is how the combinations method works.

I do not mean to leave this unanswered, so let's see how this gets done. This method is actually far simpler than you might imagine for what it does. What the first combinations method does is fill up the subsets array using the second combinations method, and then return that result. So what does the second combinations method do? I already covered this in more detail in my tutorial on backtracking to generate the power set, if you want more detail, but I'll give you a quick rundown of what this recursive method does. Basically, starting with the empty set, which is zero, you want to set r out of n bits to one, for all possible combinations.

So you keep track of which index position you're currently at, and then try to set the bit at that position to one and keep moving forward, hoping that at the end you have exactly r bits set. But if you didn't, you backtrack, flip off the bit you flipped on, and then move to the next position. This is a classic backtracking problem you might want to research; this is how you solve it, but I don't want to focus on it in this video per se, so I want to get back to the Traveling Salesman Problem. Watch my backtracking video on the power set for more guidance if you're lost.
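The backtracking routine just described can be sketched in Python as follows; the function and variable names here are mine, not the author's:

```python
def combinations(r, n):
    # Generate all n-bit integers with exactly r bits set to one,
    # via backtracking over bit positions.
    subsets = []

    def go(bit_set, at, remaining):
        if remaining == 0:
            subsets.append(bit_set)  # exactly r bits are on: record it
            return
        for i in range(at, n):
            # Flip on the i'th bit, recurse, then implicitly backtrack
            # (the next loop iteration starts again without bit i).
            go(bit_set | (1 << i), i + 1, remaining - 1)

    go(0, 0, r)
    return subsets

print(combinations(3, 4))  # four bit sets, each with three ones set
```

With r = 3 and n = 4 this yields the four subsets 0111, 1011, 1101 and 1110, matching the example from the solve method.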

I'll try to remember to put a link in the description. So right now in our memo table, we have the optimal value for each partial tour with n nodes. So let's see how we can reuse that information to find the minimum tour value. The trick is going to be to construct a bit mask for the end state and use that to do a lookup in our memo table. The end state is the bit mask with n bits set to one, which we can obtain by doing a bit shift and then subtracting one. Then what we do is look at each end node candidate and minimize over the tour cost by looking at what's in our memo table plus the distance from the end node back to the start node s. The last method we need to look at is the findOptimalTour function, because what good is our algorithm if it cannot actually tell you what the optimal tour is? In this method, what we're going to do to find the actual tour is work backwards from the end state and do lookups

in our memo table to find the next optimal node. We will keep track of the last index we were at, and the current state, which begins with all nodes visited. Then we loop over i from n minus one to one, which tracks the index position in the tour. To actually find the next optimal node going backwards, we're going to use a variable called index, which will track the best node. The inner loop loops over j, which represents all possible candidates for the next node. We must ensure that j is not the starting node and that it is part of the state, meaning it has not yet been placed in the tour. If this is the first valid iteration, the variable index will be set to minus one,

so set it to j. Otherwise, compare the optimal values of the best distances between nodes index and j, and update index if node j is better. Once the optimal index is found, store that node as part of the tour and flip off the bit in the state which represents the index node. Finally, set the first and last nodes of the tour to be s, the starting node, because the tour needs to start and end on that node. Then simply return the tour. And that is how you solve the Traveling Salesman Problem with dynamic programming. Of course, I might have accidentally fudged a detail or two unintentionally in the pseudocode.
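Putting all the pieces together, here is a self-contained Python sketch of the whole algorithm as walked through in this transcript: setup, solve, finding the minimum cost and reconstructing the tour. The edge weights are the same hypothetical four-node ones assumed earlier, chosen so the optimal cost comes out to nine; see the author's repo for his own implementation.

```python
import math

def tsp(m, s=0):
    # A sketch of the transcript's pseudocode; returns (min cost, tour).
    n = len(m)
    memo = [[None] * (1 << n) for _ in range(n)]

    # Setup: cache each two-node partial tour starting at s.
    for i in range(n):
        if i != s:
            memo[i][(1 << s) | (1 << i)] = m[s][i]

    def not_in(i, subset):
        return ((1 << i) & subset) == 0

    def combinations(r, n):
        subsets = []
        def go(bit_set, at, remaining):
            if remaining == 0:
                subsets.append(bit_set)
                return
            for i in range(at, n):
                go(bit_set | (1 << i), i + 1, remaining - 1)
        go(0, 0, r)
        return subsets

    # Solve: extend partial tours one node at a time.
    for r in range(3, n + 1):
        for subset in combinations(r, n):
            if not_in(s, subset):
                continue  # every valid subset must contain the start node
            for nxt in range(n):
                if nxt == s or not_in(nxt, subset):
                    continue
                state = subset ^ (1 << nxt)  # subset without the next node
                min_dist = math.inf
                for e in range(n):  # e: end node of the shorter partial tour
                    if e == s or e == nxt or not_in(e, subset):
                        continue
                    min_dist = min(min_dist, memo[e][state] + m[e][nxt])
                memo[nxt][subset] = min_dist

    end_state = (1 << n) - 1  # all n bits set to one: every node visited

    # Find min cost: close the tour by returning to s.
    min_cost = math.inf
    for e in range(n):
        if e != s:
            min_cost = min(min_cost, memo[e][end_state] + m[e][s])

    # Find optimal tour: work backwards from the end state.
    last_index, state = s, end_state
    tour = [None] * (n + 1)
    tour[0] = tour[n] = s  # the tour starts and ends at s
    for i in range(n - 1, 0, -1):
        index = -1
        for j in range(n):
            if j == s or not_in(j, state):
                continue
            if index == -1:
                index = j
            elif memo[j][state] + m[j][last_index] < \
                    memo[index][state] + m[index][last_index]:
                index = j
        tour[i] = index
        state ^= 1 << index  # flip off the bit of the chosen node
        last_index = index
    return min_cost, tour

# Same hypothetical 4-node matrix as earlier; optimal cost should be 9.
m = [[0, 3, 5, 2], [3, 0, 1, 6], [5, 1, 0, 3], [2, 6, 3, 0]]
cost, tour = tsp(m)
print(cost, tour)  # 9 [0, 3, 2, 1, 0], i.e. A -> D -> C -> B -> A
```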

But there's no getting around that except in the true source code, so catch me in the next video for a complete code implementation of the Traveling Salesman Problem. If you're eager to see an implementation, check out my GitHub repo at github.com/williamfiset/algorithms under the graph theory or dynamic programming section. Thank you for watching. I hope you enjoyed this video and learned something. Give it a thumbs up and subscribe for more mathematics and computer science videos.

Thank you
