Presentation Transcript

Branch and Bound Searching Strategies:

1 Branch and Bound Searching Strategies

Feasible Solution vs. Optimal Solution:

2 Feasible Solution vs. Optimal Solution DFS, BFS, hill climbing, and best-first search can be used to find a feasible solution for some search problems. However, they cannot be used to solve optimization problems, where an (the) optimal solution is sought.

The branch-and-bound strategy :

3 The branch-and-bound strategy This strategy can be used to solve optimization problems without an exhaustive search in the average case.

Branch-and-bound strategy:

4 Branch-and-bound strategy Two mechanisms: a mechanism to generate branches when searching the solution space, and a mechanism to generate a bound so that many branches can be terminated.

Branch-and-bound strategy:

5 Branch-and-bound strategy It is efficient in the average case because many branches can be terminated very early. Although it is usually very efficient, a very large tree may be generated in the worst case. Many NP-hard problems can be solved by B&B efficiently in the average case; however, the worst-case time complexity is still exponential.

A Multi-Stage Graph Searching Problem.:

6 A Multi-Stage Graph Searching Problem Find the shortest path from V0 to V3.

E.g.: A Multi-Stage Graph Searching Problem:

7 E.g.: A Multi-Stage Graph Searching Problem

Solved by branch-and-bound (hill-climbing with bounds):

8 Solved by branch-and-bound (hill-climbing with bounds) A feasible solution is found whose cost is equal to 5. An upper bound of the optimal solution is first found here.

Slide 9:

9 Usually, LB < UB. If LB ≥ UB, the expanding node can be terminated. Upper bound (for feasible solutions); lower bound (for expanding nodes).
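The pruning rule above (terminate an expanding node once its cost reaches the current upper bound) can be sketched as follows. The multi-stage graph below is a made-up example in the spirit of the V0-to-V3 figure; the vertex names and edge weights are assumptions, not the slides' actual data.

```python
import math

def branch_and_bound_shortest_path(graph, start, goal):
    """Depth-first branch and bound: keep the cost of the best feasible
    solution found so far as an upper bound, and prune any partial path
    whose cost (a lower bound on completions) already reaches it."""
    best = math.inf  # upper bound from the best feasible solution so far

    def expand(node, cost_so_far):
        nonlocal best
        if cost_so_far >= best:      # LB >= UB: terminate this node
            return
        if node == goal:             # feasible solution: tighten the bound
            best = cost_so_far
            return
        for nxt, w in graph.get(node, []):
            expand(nxt, cost_so_far + w)

    expand(start, 0)
    return best

# Hypothetical multi-stage graph from V0 to V3 (weights are made up).
graph = {
    "V0":  [("V1a", 1), ("V1b", 2)],
    "V1a": [("V2a", 4), ("V2b", 1)],
    "V1b": [("V2a", 1), ("V2b", 3)],
    "V2a": [("V3", 2)],
    "V2b": [("V3", 3)],
}
print(branch_and_bound_shortest_path(graph, "V0", "V3"))  # → 5
```

Once a path of cost 5 is found, every other partial path is cut off as soon as its accumulated cost reaches 5.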

The traveling salesperson optimization problem:

10 The traveling salesperson optimization problem Given a graph, the TSP optimization problem is to find a tour, starting from any vertex, visiting every other vertex, and returning to the starting vertex, with minimal cost. It is NP-hard. We try to avoid the n! exhaustive search by the branch-and-bound technique. (Recall that O(n!) > O(2^n).)

The traveling salesperson optimization problem:

11 The traveling salesperson optimization problem E.g., a cost matrix for a traveling salesperson problem (entry (i, j) is the cost of arc i-j):

i\j    1    2    3    4    5    6    7
 1     ∞    3   93   13   33    9   57
 2     4    ∞   77   42   21   16   34
 3    45   17    ∞   36   16   28   25
 4    39   90   80    ∞   56    7   91
 5    28   46   88   33    ∞   25   57
 6     3   88   18   46   92    ∞    7
 7    44   26   33   27   84   39    ∞

The basic idea :

12 The basic idea There is a way to split the solution space (branch). There is a way to predict a lower bound for a class of solutions. There is also a way to find an upper bound of an optimal solution. If the lower bound of a solution exceeds the upper bound, this solution cannot be optimal, and thus we should terminate the branching associated with this solution.

Splitting:

13 Splitting We split the solutions into two groups: one group including a particular arc, and the other excluding that arc. Each splitting incurs a lower bound, and we shall traverse the search tree with the "lower" lower bound.

The traveling salesperson optimization problem:

14 The traveling salesperson optimization problem Reduced cost matrix (each row's minimum is subtracted from the row; the amount subtracted is shown on the right):

i\j    1    2    3    4    5    6    7
 1     ∞    0   90   10   30    6   54   (-3)
 2     0    ∞   73   38   17   12   30   (-4)
 3    29    1    ∞   20    0   12    9   (-16)
 4    32   83   73    ∞   49    0   84   (-7)
 5     3   21   63    8    ∞    0   32   (-25)
 6     0   85   15   43   89    ∞    4   (-3)
 7    18    0    7    1   58   13    ∞   (-26)

reduced: 84

The traveling salesperson optimization problem:

15 Table 6-5 Another reduced cost matrix (columns 3, 4 and 7 are further reduced by 7, 1 and 4, respectively):

i\j    1    2    3    4    5    6    7
 1     ∞    0   83    9   30    6   50
 2     0    ∞   66   37   17   12   26
 3    29    1    ∞   19    0   12    5
 4    32   83   66    ∞   49    0   80
 5     3   21   56    7    ∞    0   28
 6     0   85    8   42   89    ∞    0
 7    18    0    0    0   58   13    ∞
          (-7) (-1)           (-4)

Lower bound:

16 Lower bound A total cost of 84 + 12 = 96 has been subtracted. Thus, we know the lower bound of this TSP instance is 96.
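The row-and-column reduction described above can be sketched as follows; applied to the cost matrix of slide 11, it yields the lower bound 96 (84 from the rows, 7 + 1 + 4 from the columns).

```python
INF = float("inf")

def reduce_matrix(m):
    """Subtract each row's minimum from its row, then each column's
    minimum from its column (in place). The total amount subtracted is
    a lower bound on the cost of any tour for this matrix."""
    n = len(m)
    total = 0
    for i in range(n):                       # row reduction
        row_min = min(m[i])
        if row_min not in (0, INF):
            total += row_min
            m[i] = [x - row_min if x != INF else INF for x in m[i]]
    for j in range(n):                       # column reduction
        col_min = min(m[i][j] for i in range(n))
        if col_min not in (0, INF):
            total += col_min
            for i in range(n):
                if m[i][j] != INF:
                    m[i][j] -= col_min
    return total

# The cost matrix from slide 11 (∞ on the diagonal).
m = [
    [INF,   3,  93,  13,  33,   9,  57],
    [  4, INF,  77,  42,  21,  16,  34],
    [ 45,  17, INF,  36,  16,  28,  25],
    [ 39,  90,  80, INF,  56,   7,  91],
    [ 28,  46,  88,  33, INF,  25,  57],
    [  3,  88,  18,  46,  92, INF,   7],
    [ 44,  26,  33,  27,  84,  39, INF],
]
print(reduce_matrix(m))  # → 96
```

After the call, `m` holds the fully reduced matrix of Table 6-5, with at least one zero in every row and column.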

The traveling salesperson optimization problem:

17 Total cost reduced: 84 + 7 + 1 + 4 = 96 (lower bound). Decision tree: The Highest Level of a Decision Tree. If we use arc 3-5 to split, the difference in the lower bounds is 17 + 1 = 18.

For the right subtree (Arc 4-6 is excluded):

18 For the right subtree (arc 4-6 is excluded) We only have to set c4-6 to be ∞.

For the left subtree (Arc 4-6 is included):

19 For the left subtree (arc 4-6 is included) A reduced cost matrix if arc 4-6 is included: the 4th row is deleted, the 6th column is deleted, and we must set c6-4 to be ∞.

For the left subtree:

20 For the left subtree The cost matrix for all solutions with arc 4-6 (a reduced cost matrix for that in Table 6-6; row 4 and column 6 are deleted, c6-4 = ∞):

i\j    1    2    3    4    5    7
 1     ∞    0   83    9   30   50
 2     0    ∞   66   37   17   26
 3    29    1    ∞   19    0    5
 5     0   18   53    4    ∞   25   (-3)
 6     0   85    8    ∞   89    0
 7    18    0    0    0   58    ∞

Total cost reduced: 96 + 3 = 99 (new lower bound)

Upper bound:

21 Upper bound We follow the best-first search scheme and can obtain a feasible solution with cost C. C serves as an upper bound of the optimal solution, and many branches may be terminated if their lower bounds are equal to or larger than C.

Slide 22:

22 Fig 6-26 A Branch-and-Bound Solution of a Traveling Salesperson Problem.

Preventing an arc :

23 Preventing an arc In general, if paths i1-i2-…-im and j1-j2-…-jn have already been included and a path from im to j1 is to be added, then the path from jn to i1 must be prevented (by assigning the cost of jn to i1 to be ∞). For example, if 4-6 and 2-1 are included and 1-4 is to be added, we must prevent 6-2 from being used by setting c6-2 = ∞. If 6-2 is used, there will be a loop, which is forbidden.

The 0/1 knapsack problem :

24 The 0/1 knapsack problem Positive integers P1, P2, …, Pn (profits), W1, W2, …, Wn (weights), M (capacity). Choose X1, …, Xn with each Xi = 0 or 1 so that ΣWiXi ≤ M and ΣPiXi is maximized; the slides state this as minimizing the negated profit -ΣPiXi, so all bounds below are negative.

The 0/1 knapsack problem:

25 The 0/1 knapsack problem Fig. 6-27 The Branching Mechanism in the Branch-and-Bound Strategy to Solve 0/1 Knapsack Problem.

How to find the upper bound?:

26 How to find the upper bound? Ans: by quickly finding a feasible solution in a greedy manner: starting from the smallest available i, scanning towards the larger i's until M is exceeded. The upper bound can then be calculated.

The 0/1 knapsack problem:

27 The 0/1 knapsack problem E.g., n = 6, M = 34. A feasible solution: X1 = 1, X2 = 1, X3 = 0, X4 = 0, X5 = 0, X6 = 0. -(P1 + P2) = -16 (upper bound). Any solution higher than -16 cannot be an optimal solution.

 i     1    2    3    4    5    6
 Pi    6   10    4    5    6    4
 Wi   10   19    8   10   12    8

(Pi/Wi ≥ Pi+1/Wi+1)
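The greedy upper bound above can be sketched as follows. Following the slides, the scan stops at the first item that would exceed M, and the items are assumed to be pre-sorted by profit/weight ratio in descending order.

```python
def knapsack_upper_bound(profits, weights, capacity):
    """Greedy feasible solution for the 0/1 knapsack problem.
    Items are assumed sorted by profit/weight, descending. Because the
    slides minimize the negated profit, the negated greedy profit is an
    upper bound on the (negated) optimal value."""
    total_profit = 0
    remaining = capacity
    for p, w in zip(profits, weights):
        if w <= remaining:
            total_profit += p
            remaining -= w
        else:
            break   # as in the slides: stop once M would be exceeded
    return -total_profit

# Data from the slide: n = 6, M = 34.
P = [6, 10, 4, 5, 6, 4]
W = [10, 19, 8, 10, 12, 8]
print(knapsack_upper_bound(P, W, 34))  # → -16, i.e. X1 = X2 = 1
```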

How to find the lower bound?:

28 How to find the lower bound? Ans: by relaxing our restriction from Xi = 0 or 1 to 0 ≤ Xi ≤ 1 (the fractional knapsack problem).

The knapsack problem:

29 The knapsack problem We can use the greedy method to find an optimal solution for the fractional knapsack problem. For example, for the state of X1 = 1 and X2 = 1, we have X1 = 1, X2 = 1, X3 = (34 - 10 - 19)/8 = 5/8, X4 = 0, X5 = 0, X6 = 0. -(P1 + P2 + (5/8)P3) = -18.5 (lower bound). -18 is our lower bound (we only consider integers, since all profits are integers).
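The relaxation above can be sketched as follows: fill whole items greedily, take a fraction of the first item that does not fit, and round the negated value up to an integer.

```python
import math

def knapsack_lower_bound(profits, weights, capacity):
    """Relax Xi in {0, 1} to 0 <= Xi <= 1 (fractional knapsack). Items
    are assumed sorted by profit/weight, descending. The negated greedy
    optimum, rounded up to an integer (valid because all profits are
    integers), is a lower bound on the negated 0/1 optimum."""
    total_profit = 0.0
    remaining = capacity
    for p, w in zip(profits, weights):
        if w <= remaining:
            total_profit += p
            remaining -= w
        else:
            total_profit += p * remaining / w   # fractional part of one item
            break
    return math.ceil(-total_profit)             # e.g. -18.5 rounds to -18

# Data from the slide: n = 6, M = 34, state X1 = X2 = 1.
P = [6, 10, 4, 5, 6, 4]
W = [10, 19, 8, 10, 12, 8]
print(knapsack_lower_bound(P, W, 34))  # → -18
```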

How to expand the tree?:

30 How to expand the tree? By the best-first search scheme, that is, by expanding the node with the best lower bound. If two nodes have the same lower bound, expand the node with the lower upper bound.
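A minimal sketch of this tie-breaking rule using Python's heapq: keying each node by the pair (lower bound, upper bound) makes tuples compare lexicographically, so equal lower bounds fall through to the smaller upper bound. The node names and bound values below are made up for illustration.

```python
import heapq

# Nodes keyed by (lower_bound, upper_bound, name): heappop returns the
# smallest lower bound first, breaking ties on the smaller upper bound.
heap = []
heapq.heappush(heap, (-18, -16, "node A"))
heapq.heappush(heap, (-18, -17, "node B"))  # same LB as A, better UB
heapq.heappush(heap, (-15, -15, "node C"))

print(heapq.heappop(heap))  # → (-18, -17, 'node B') is expanded first
```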

Slide 31:

31 0/1 Knapsack Problem Solved by Branch-and-Bound Strategy

Slide 32:

32 Node 2 is terminated because its lower bound is equal to the upper bound of node 14. Nodes 16, 18 and others are terminated because the local lower bound is equal to the local upper bound. (lower bound ≤ optimal solution ≤ upper bound)

The A* algorithm :

33 The A* algorithm Used to solve optimization problems, using the best-first strategy. If a feasible solution (goal node) is to be expanded, then it is optimal and we can stop. Estimated cost function of a node n: f(n) = g(n) + h(n), where g(n) is the cost from the root to node n and h(n) is the estimated cost from node n to a goal node. h*(n): the "real" cost from node n to a goal node; f*(n): the "real" cost of node n. If h(n) ≤ h*(n), then f(n) = g(n) + h(n) ≤ g(n) + h*(n) = f*(n). The estimated further cost should never exceed the real further cost.
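The scheme above can be sketched as a short A* implementation. The graph and the heuristic table below are made-up examples (the heuristic is admissible, i.e. h(n) ≤ h*(n) for every node); they are not the example from the following slides.

```python
import heapq

def a_star(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n). If h never
    overestimates the true remaining cost (h <= h*), the first goal
    node selected for expansion is optimal, so we stop there."""
    heap = [(h(start), 0, start)]            # entries are (f, g, node)
    best_g = {start: 0}
    while heap:
        f, g, node = heapq.heappop(heap)
        if node == goal:                     # selected goal node: optimal
            return g
        for nxt, w in graph.get(node, []):
            ng = g + w
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(heap, (ng + h(nxt), ng, nxt))
    return None                              # goal unreachable

# Hypothetical graph and admissible heuristic (all names made up).
graph = {"A": [("B", 2), ("C", 1)], "B": [("G", 2)], "C": [("G", 4)]}
h = {"A": 3, "B": 2, "C": 3, "G": 0}.__getitem__
print(a_star(graph, h, "A", "G"))  # → 4, via A-B-G
```

Here A-C-G costs 5, so even though C is reached more cheaply at first, the goal is only returned once the cost-4 route through B has been selected.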

Reasoning:

34 Reasoning Let t be the selected goal node, and let n denote a node already generated but not selected (i.e., a node already in the heap). Because of best-first search, f(t) ≤ f(n). Therefore f(t) ≤ f(n) ≤ f*(n). … (a) Assume t is not the optimal node, and let s be an already generated but not selected node leading to the optimal node. Then f*(s) is the optimal value. By (a), we have f(t) ≤ f*(s). … (b)

Reasoning:

35 Reasoning Since t is a goal node, we have h(t) = 0. Thus, f(t) = g(t) + h(t) = g(t) = f*(t). By Eq. (b), we have f(t) = g(t) = f*(t) ≤ f*(s), which means the cost of t is no more than the optimal value f*(s). This contradicts the assumption that t is not optimal. Therefore, t is the optimal node.

The A* algorithm:

36 The A* algorithm Stop when the selected node is also a goal node. The solution found is optimal if h(n) ≤ h*(n). E.g.

The A* algorithm:

37 The A * algorithm Step 1.

The A* algorithm:

38 The A * algorithm Step 2. Expand A

The A* algorithm:

39 The A * algorithm Step 3. Expand C

The A* algorithm:

40 The A * algorithm Step 4. Expand D

The A* algorithm:

41 The A * algorithm Step 5. Expand B

The A* algorithm:

42 The A * algorithm Step 6. Expand F

The A* Algorithm:

43 The A* Algorithm Can be considered a special type of branch-and-bound algorithm. When the first feasible solution is found, all nodes in the heap (priority queue) are terminated. (* stands for "real"; "A* algorithm" stands for "real good algorithm".)