Dynamic Programming


Dynamic Programming : 

Dynamic Programming is a powerful design technique that can be used to solve problems of a computational nature. It is typically applied to optimization problems. Note: "programming" in this term refers to a tabular method, not to writing computer code.

Counting coins : 

Counting coins
To find the minimum number of US coins to make any amount, the greedy method always works.
At each step, just choose the largest coin that does not overshoot the desired amount: 31¢ = 25¢ + 5¢ + 1¢ (three coins).
The greedy method would not work if we did not have 5¢ coins: for 31 cents, the greedy method gives seven coins (25+1+1+1+1+1+1), but we can do it with four (10+10+10+1).
The greedy method also would not work if we had a 21¢ coin: for 63 cents, the greedy method gives six coins (25+25+10+1+1+1), but we can do it with three (21+21+21).
How can we find the minimum number of coins for any given coin set?
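
To make the failure mode concrete, here is a minimal Java sketch of the greedy strategy (the class and method names and the driver are illustrative, not from the original slides):

    public class GreedyCoins {
        // Greedy: repeatedly take the largest coin that fits.
        // coins must be sorted in descending order.
        static int greedyCount(int[] coins, int amount) {
            int count = 0;
            for (int c : coins) {
                count += amount / c;   // take as many of this coin as fit
                amount %= c;
            }
            return amount == 0 ? count : -1;  // -1 if exact change is impossible
        }

        public static void main(String[] args) {
            int[] usCoins = {25, 10, 5, 1};
            int[] with21  = {25, 21, 10, 5, 1};
            System.out.println(greedyCount(usCoins, 31)); // 3 (25+5+1): greedy is optimal here
            System.out.println(greedyCount(with21, 63));  // 6 (25+25+10+1+1+1): but 3 (21+21+21) is optimal
        }
    }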

Coin set for examples : 

Coin set for examples For the following examples, we will assume coins in the following denominations: 1¢ 5¢ 10¢ 21¢ 25¢ We’ll use 63¢ as our goal

A simple solution : 

A simple solution
We always need a 1¢ coin; otherwise no solution exists for making one cent.
To make K cents:
If there is a K-cent coin, then that one coin is the minimum.
Otherwise, for each value i < K:
  Find the minimum number of coins needed to make i cents.
  Find the minimum number of coins needed to make K - i cents.
Choose the i that minimizes this sum.
This algorithm can be viewed as divide-and-conquer, or as brute force.
This solution is very recursive: it requires exponential work, and it is infeasible to run for 63¢ (a runnable sketch follows below).
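
A short recursive Java sketch of this brute-force approach (the class and method names are my own illustration, not from the slides):

    public class NaiveCoins {
        // Exponential-time recursion: split k into i and k - i in every possible way.
        static int minCoins(int[] coins, int k) {
            // A single coin of exactly k cents is optimal.
            for (int c : coins) if (c == k) return 1;
            int best = Integer.MAX_VALUE;
            for (int i = 1; i < k; i++)
                best = Math.min(best, minCoins(coins, i) + minCoins(coins, k - i));
            return best;
        }

        public static void main(String[] args) {
            int[] coins = {1, 5, 10, 21, 25};
            System.out.println(minCoins(coins, 13)); // fine for small amounts; hopeless for 63
        }
    }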

Another solution : 

Another solution
We can reduce the problem recursively by choosing the first coin, and solving for the amount that is left.
For 63¢, the candidates are:
One 1¢ coin plus the best solution for 62¢.
One 5¢ coin plus the best solution for 58¢.
One 10¢ coin plus the best solution for 53¢.
One 21¢ coin plus the best solution for 42¢.
One 25¢ coin plus the best solution for 38¢.
Choose the best solution from among the 5 given above.
Instead of solving 62 recursive subproblems, we solve 5. This is still a very expensive algorithm.

A dynamic programming solution : 

A dynamic programming solution
Idea: solve first for one cent, then two cents, then three cents, etc., up to the desired amount, saving each answer in an array.
For each new amount N, compute all the possible pairs of previous answers which sum to N.
For example, to find the solution for 13¢:
First, solve for all of 1¢, 2¢, 3¢, ..., 12¢.
Next, choose the best solution among:
  Solution for 1¢ + solution for 12¢
  Solution for 2¢ + solution for 11¢
  Solution for 3¢ + solution for 10¢
  Solution for 4¢ + solution for 9¢
  Solution for 5¢ + solution for 8¢
  Solution for 6¢ + solution for 7¢

Example : 

Example
Suppose the coins are 1¢, 3¢, and 4¢.
There's only one way to make 1¢ (one coin).
To make 2¢, try 1¢ + 1¢ (one coin + one coin = 2 coins).
To make 3¢, just use the 3¢ coin (one coin).
To make 4¢, just use the 4¢ coin (one coin).
To make 5¢, try:
  1¢ + 4¢ (1 coin + 1 coin = 2 coins)
  2¢ + 3¢ (2 coins + 1 coin = 3 coins)
The first solution is better, so the best solution is 2 coins.
To make 6¢, try:
  1¢ + 5¢ (1 coin + 2 coins = 3 coins)
  2¢ + 4¢ (2 coins + 1 coin = 3 coins)
  3¢ + 3¢ (1 coin + 1 coin = 2 coins) — best solution
Etc.

The algorithm : 

The algorithm

    // coinsUsed and lastCoin must have maxChange + 1 entries.
    public static void makeChange(int[] coins, int differentCoins,
                                  int maxChange, int[] coinsUsed, int[] lastCoin) {
        coinsUsed[0] = 0;
        lastCoin[0] = 1;
        for (int cents = 1; cents <= maxChange; cents++) {
            int minCoins = cents;      // worst case: all pennies
            int newCoin = 1;
            for (int j = 0; j < differentCoins; j++) {
                if (coins[j] > cents)  // cannot use coin j
                    continue;
                if (coinsUsed[cents - coins[j]] + 1 < minCoins) {
                    minCoins = coinsUsed[cents - coins[j]] + 1;
                    newCoin = coins[j];
                }
            }
            coinsUsed[cents] = minCoins;
            lastCoin[cents] = newCoin;
        }
    }
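
A possible usage sketch, assuming the driver below lives in the same class as makeChange (the driver itself is my illustration, not from the slides):

    public static void main(String[] args) {
        int[] coins = {1, 5, 10, 21, 25};
        int maxChange = 63;
        int[] coinsUsed = new int[maxChange + 1];
        int[] lastCoin = new int[maxChange + 1];
        makeChange(coins, coins.length, maxChange, coinsUsed, lastCoin);
        System.out.println("63 cents needs " + coinsUsed[63] + " coins"); // 3
        // Walk lastCoin backwards to recover the actual coins: 21, 21, 21.
        for (int cents = maxChange; cents > 0; cents -= lastCoin[cents])
            System.out.println("use a " + lastCoin[cents] + " cent coin");
    }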

How good is the algorithm? : 

How good is the algorithm?
The first algorithm is recursive, with a branching factor of up to 62; possibly the average branching factor is somewhere around half of that (31). It takes exponential time, with a large base.
The second algorithm is much better: it has a branching factor of 5. This is still exponential time, with base 5.
The dynamic programming algorithm is O(N*K), where N is the desired amount and K is the number of different kinds of coins.

Comparison with divide-and-conquer : 

Comparison with divide-and-conquer
Divide-and-conquer algorithms split a problem into separate subproblems, solve the subproblems, and combine the results for a solution to the original problem. Examples: Quicksort, Mergesort, Binary search.
Divide-and-conquer algorithms can be thought of as top-down algorithms.
In contrast, a dynamic programming algorithm proceeds by solving small problems, then combining them to find the solution to larger problems. Dynamic programming can be thought of as bottom-up.

Example 2: Binomial Coefficients : 

Example 2: Binomial Coefficients
(x + y)^2 = x^2 + 2xy + y^2, coefficients are 1, 2, 1
(x + y)^3 = x^3 + 3x^2y + 3xy^2 + y^3, coefficients are 1, 3, 3, 1
(x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4, coefficients are 1, 4, 6, 4, 1
(x + y)^5 = x^5 + 5x^4y + 10x^3y^2 + 10x^2y^3 + 5xy^4 + y^5, coefficients are 1, 5, 10, 10, 5, 1
The n+1 coefficients of (x + y)^n can be computed according to the formula c(n, i) = n! / (i! * (n - i)!) for each i = 0..n.
The repeated computation of all the factorials gets to be expensive. We can use dynamic programming to save partial results as we go.

Solution by dynamic programming : 

Solution by dynamic programming

    n | c(n,0) c(n,1) c(n,2) c(n,3) c(n,4) c(n,5) c(n,6)
    0 |   1
    1 |   1      1
    2 |   1      2      1
    3 |   1      3      3      1
    4 |   1      4      6      4      1
    5 |   1      5     10     10      5      1
    6 |   1      6     15     20     15      6      1

Each row depends only on the preceding row. Only linear space and quadratic time are needed. This construction is known as Pascal's Triangle.

The algorithm : 

The algorithm

    public static int binom(int n, int m) {
        int[] b = new int[n + 1];
        b[0] = 1;
        for (int i = 1; i <= n; i++) {
            b[i] = 1;
            for (int j = i - 1; j > 0; j--)
                b[j] += b[j - 1];   // Pascal's rule: c(i,j) = c(i-1,j) + c(i-1,j-1)
        }
        return b[m];
    }
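
For example, binom(6, 3) returns 20, matching row n = 6 of the table above. Note that the single array b holds only one row of the triangle at a time, updated in place from right to left — which is exactly where the linear-space bound on the previous slide comes from.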

Dynamic Programming : 

Dynamic Programming RULE 1 of Dynamic Programming

Dynamic Programming : 

Dynamic Programming RULE 2 of Dynamic Programming


Dynamic Programming : 

Dynamic Programming RULE 3 of Dynamic Programming

Dynamic Programming : 

Dynamic Programming
Recursive approach: intermediate results are saved in a matrix to be used later.
DP is processor- and RAM-intensive, but computationally feasible.

Dynamic Programming : 

Dynamic Programming
A general optimization method, proposed by Richard Bellman in the 1950s.
Extensively used in sequence alignment and other computational problems.

Dynamic Programming : 

Dynamic Programming
The original problem is broken into smaller subproblems and then solved.
Pieces of the larger problem have a sequential dependency: the 4th piece can be solved using the solution of the 3rd piece, the 3rd piece can be solved using the solution of the 2nd piece, and so on.

Steps in Dynamic Programming : 

Steps in Dynamic Programming
1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute optimal solution values, either top-down with caching or bottom-up in a table.
4. Construct an optimal solution from the computed values.
We'll study these steps with the help of examples.

Longest Common Subsequence : 

Longest Common Subsequence
Problem: Given 2 sequences, X = x1,...,xm and Y = y1,...,yn, find a common subsequence whose length is maximum.
Example pairs: springtime / printing, ncaa tournament / north carolina, basketball / snoeyink.
A subsequence need not be consecutive, but must be in order.

Naïve Algorithm : 

Naïve Algorithm
For every subsequence of X, check whether it's a subsequence of Y.
Time: Θ(n·2^m).
There are 2^m subsequences of X to check, and each subsequence takes Θ(n) time to check: scan Y for the first letter, then for the second, and so on.

Optimal Substructure : 

Optimal Substructure
Notation: the prefix Xi = x1,...,xi is the first i letters of X.
Theorem: Let Z = z1,...,zk be any LCS of X and Y.
1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
2. If xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y.
3. If xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Yn-1.

Optimal Substructure : 

Optimal Substructure
Proof (case 1: xm = yn): Any common subsequence Z' that does not end in xm = yn can be made longer by adding xm = yn to the end. Therefore, an LCS Z must end in xm = yn. Then Zk-1 is a common subsequence of Xm-1 and Yn-1, and there is no longer common subsequence of Xm-1 and Yn-1, or Z would not be an LCS.

Optimal Substructure : 

Optimal Substructure
Proof (case 2: xm ≠ yn and zk ≠ xm): Since Z does not end in xm, Z is a common subsequence of Xm-1 and Y, and there is no longer common subsequence of Xm-1 and Y, or Z would not be an LCS. (Case 3 is symmetric.)

Recursive Solution : 

Recursive Solution
Define c[i, j] = length of an LCS of Xi and Yj. We want c[m, n].
By the theorem above:

    c[i, j] = 0                             if i = 0 or j = 0
    c[i, j] = c[i-1, j-1] + 1               if i, j > 0 and xi = yj
    c[i, j] = max(c[i-1, j], c[i, j-1])     if i, j > 0 and xi ≠ yj

This gives a recursive algorithm and solves the problem. But does it solve it well?

Recursive Solution : 

Recursive Solution
The tree of recursive calls for c[springtime, printing]:
level 1: c[springtim, printing], c[springtime, printin]
level 2: [springti, printing], [springtim, printin], [springtim, printin], [springtime, printi]
level 3: [springt, printing], [springti, printin], [springtim, printi], [springtime, print], ...
Note that [springtim, printin] already appears twice at level 2: the same subproblems are solved over and over.

Recursive Solution : 

Recursive Solution
Keep track of c[a, b] in a table of n·m entries, filled either top-down (with memoization) or bottom-up.

Computing the length of an LCS : 

Computing the length of an LCS

    LCS-LENGTH(X, Y)
      m ← length[X]
      n ← length[Y]
      for i ← 1 to m do
          c[i, 0] ← 0
      for j ← 0 to n do
          c[0, j] ← 0
      for i ← 1 to m do
          for j ← 1 to n do
              if xi = yj then
                  c[i, j] ← c[i-1, j-1] + 1
                  b[i, j] ← "↖"
              else if c[i-1, j] ≥ c[i, j-1] then
                  c[i, j] ← c[i-1, j]
                  b[i, j] ← "↑"
              else
                  c[i, j] ← c[i, j-1]
                  b[i, j] ← "←"
      return c and b

b[i, j] points to the table entry whose subproblem we used in solving the LCS of Xi and Yj. c[m, n] contains the length of an LCS of X and Y. Time: O(mn).

Constructing an LCS : 

Constructing an LCS

    PRINT-LCS(b, X, i, j)
      if i = 0 or j = 0 then return
      if b[i, j] = "↖" then
          PRINT-LCS(b, X, i-1, j-1)
          print xi
      else if b[i, j] = "↑" then
          PRINT-LCS(b, X, i-1, j)
      else
          PRINT-LCS(b, X, i, j-1)

The initial call is PRINT-LCS(b, X, m, n). When b[i, j] = "↖", we have extended the LCS by one character, so the LCS consists of the entries with "↖" in them. Time: O(m+n).
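
A compact Java rendering of the same two procedures, as a hedged sketch (the class and method names are mine, not the slides'). It fills the c table bottom-up and then reconstructs one LCS by re-deriving the direction at each cell, which avoids storing the b table of arrows:

    public class LCS {
        // Returns one longest common subsequence of x and y.
        public static String lcs(String x, String y) {
            int m = x.length(), n = y.length();
            int[][] c = new int[m + 1][n + 1];   // c[i][j] = LCS length of prefixes
            for (int i = 1; i <= m; i++)
                for (int j = 1; j <= n; j++)
                    c[i][j] = (x.charAt(i - 1) == y.charAt(j - 1))
                            ? c[i - 1][j - 1] + 1
                            : Math.max(c[i - 1][j], c[i][j - 1]);
            // Retrace which case produced each entry, collecting matches.
            StringBuilder sb = new StringBuilder();
            int i = m, j = n;
            while (i > 0 && j > 0) {
                if (x.charAt(i - 1) == y.charAt(j - 1)) { sb.append(x.charAt(i - 1)); i--; j--; }
                else if (c[i - 1][j] >= c[i][j - 1]) i--;
                else j--;
            }
            return sb.reverse().toString();
        }

        public static void main(String[] args) {
            System.out.println(lcs("springtime", "printing")); // e.g. "printi" (length 6)
        }
    }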

Optimal Binary Search Trees : 

Optimal Binary Search Trees
Problem: Given a sequence K = k1 < k2 < ··· < kn of n sorted keys, with a search probability pi for each key ki, we want to build a binary search tree (BST) with minimum expected search cost.
Actual cost = number of items examined. For key ki, cost = depthT(ki) + 1, where depthT(ki) = depth of ki in BST T.

Expected Search Cost : 

Expected Search Cost

    E[search cost in T] = Σ(i=1..n) (depthT(ki) + 1) · pi
                        = 1 + Σ(i=1..n) depthT(ki) · pi        (15.16)

since the sum of the probabilities is 1.

Example : 

Example
Consider 5 keys with these search probabilities: p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.
Take the BST with k2 at the root, k1 and k4 as its children, and k3 and k5 as the children of k4.

    i   depthT(ki)   depthT(ki)·pi
    1       1            0.25
    2       0            0
    3       2            0.10
    4       1            0.20
    5       2            0.60
                   sum:  1.15

Therefore, E[search cost] = 1 + 1.15 = 2.15.

Example : 

Example
p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.
Now take the BST with k2 at the root, k1 as its left child, k5 as its right child, k4 as the left child of k5, and k3 as the left child of k4.

    i   depthT(ki)   depthT(ki)·pi
    1       1            0.25
    2       0            0
    3       3            0.15
    4       2            0.40
    5       1            0.30
                   sum:  1.10

Therefore, E[search cost] = 1 + 1.10 = 2.10. This tree turns out to be optimal for this set of keys.

Example : 

Example
Observations:
An optimal BST may not have the smallest height.
An optimal BST may not have the highest-probability key at the root.
Build by exhaustive checking? Construct each n-node BST, assign keys, and compute the expected search cost. But there are Ω(4^n / n^(3/2)) different BSTs with n nodes.

Optimal Substructure : 

Optimal Substructure
Any subtree of a BST contains keys in a contiguous range ki,...,kj for some 1 ≤ i ≤ j ≤ n.
If T is an optimal BST and T contains a subtree T' with keys ki,...,kj, then T' must be an optimal BST for the keys ki,...,kj.

Optimal Substructure : 

Optimal Substructure
One of the keys in ki,...,kj, say kr, where i ≤ r ≤ j, must be the root of an optimal subtree for these keys.
The left subtree of kr contains ki,...,kr-1; the right subtree of kr contains kr+1,...,kj.
To find an optimal BST:
Examine all candidate roots kr, for i ≤ r ≤ j.
Determine all optimal BSTs containing ki,...,kr-1 and containing kr+1,...,kj.

Recursive Solution : 

Recursive Solution
Find an optimal BST for ki,...,kj, where i ≥ 1, j ≤ n, j ≥ i-1. When j = i-1, the tree is empty.
Define e[i, j] = expected search cost of an optimal BST for ki,...,kj.
If j = i-1, then e[i, j] = 0.
If j ≥ i: select a root kr, for some i ≤ r ≤ j, and recursively make optimal BSTs for ki,...,kr-1 as the left subtree and for kr+1,...,kj as the right subtree.

Recursive Solution : 

Recursive Solution
When an optimal subtree becomes a subtree of a node, the depth of every node in it goes up by 1, so by (15.16) its expected search cost increases by w(i, j) = Σ(l=i..j) pl, the total probability weight of its keys.
If kr is the root of an optimal BST for ki,...,kj:

    e[i, j] = pr + (e[i, r-1] + w(i, r-1)) + (e[r+1, j] + w(r+1, j))
            = e[i, r-1] + e[r+1, j] + w(i, j)

(because w(i, j) = w(i, r-1) + pr + w(r+1, j)).
But we don't know kr. Hence:

    e[i, j] = 0                                                  if j = i-1
    e[i, j] = min{ e[i, r-1] + e[r+1, j] + w(i, j) : i ≤ r ≤ j } if i ≤ j

Computing an Optimal Solution : 

Computing an Optimal Solution
For each subproblem (i, j), store:
The expected search cost in a table e[1..n+1, 0..n]; we will use only entries e[i, j] where j ≥ i-1.
root[i, j] = root of the subtree with keys ki,...,kj, for 1 ≤ i ≤ j ≤ n.
w[1..n+1, 0..n] = sums of probabilities: w[i, i-1] = 0 for 1 ≤ i ≤ n+1, and w[i, j] = w[i, j-1] + pj for 1 ≤ i ≤ j ≤ n.

Pseudo-code : 

Pseudo-code

    OPTIMAL-BST(p, q, n)
      for i ← 1 to n+1 do
          e[i, i-1] ← 0
          w[i, i-1] ← 0
      for l ← 1 to n do            // consider all subtrees with l keys
          for i ← 1 to n-l+1 do    // fix the first key
              j ← i + l - 1        // fix the last key
              e[i, j] ← ∞
              w[i, j] ← w[i, j-1] + pj
              for r ← i to j do    // determine the root of the optimal (sub)tree
                  t ← e[i, r-1] + e[r+1, j] + w[i, j]
                  if t < e[i, j] then
                      e[i, j] ← t
                      root[i, j] ← r
      return e and root

Time: O(n^3).
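
For concreteness, here is a hedged Java sketch of the same algorithm (the class and method names are mine; probabilities are 1-indexed as on the slides, and the unused dummy-key parameter q is omitted):

    public class OptimalBST {
        // Returns the expected search cost of an optimal BST
        // for keys 1..n with search probabilities p[1..n] (p[0] unused).
        static double optimalBST(double[] p, int n) {
            double[][] e = new double[n + 2][n + 1]; // e[i][j]; j may be i-1
            double[][] w = new double[n + 2][n + 1]; // w[i][i-1] = 0 by default
            int[][] root = new int[n + 1][n + 1];
            for (int len = 1; len <= n; len++) {         // subtrees with len keys
                for (int i = 1; i + len - 1 <= n; i++) { // first key
                    int j = i + len - 1;                 // last key
                    e[i][j] = Double.POSITIVE_INFINITY;
                    w[i][j] = w[i][j - 1] + p[j];
                    for (int r = i; r <= j; r++) {       // try each root
                        double t = e[i][r - 1] + e[r + 1][j] + w[i][j];
                        if (t < e[i][j]) { e[i][j] = t; root[i][j] = r; }
                    }
                }
            }
            return e[1][n];
        }

        public static void main(String[] args) {
            double[] p = {0, 0.25, 0.2, 0.05, 0.2, 0.3}; // the slides' example
            System.out.println(optimalBST(p, 5)); // ≈ 2.10, matching the optimal tree above
        }
    }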

Elements of Dynamic Programming : 

Elements of Dynamic Programming
Optimal substructure
Overlapping subproblems

Optimal Substructure : 

Optimal Substructure
Show that a solution to a problem consists of making a choice, which leaves one or more subproblems to solve.
Suppose that you are given this last choice that leads to an optimal solution.
Given this choice, determine which subproblems arise and how to characterize the resulting space of subproblems.
Show that the solutions to the subproblems used within the optimal solution must themselves be optimal; usually use a cut-and-paste argument.
Need to ensure that a wide enough range of choices and subproblems is considered.

Optimal Substructure : 

Optimal Substructure
Optimal substructure varies across problem domains:
1. How many subproblems are used in an optimal solution.
2. How many choices there are in determining which subproblem(s) to use.
Informally, running time depends on (# of subproblems overall) × (# of choices).
How many subproblems and choices do the examples considered contain?
Dynamic programming uses optimal substructure bottom-up: first find optimal solutions to subproblems, then choose which to use in an optimal solution to the problem.

Optimal Substructure : 

Optimal Substructure
Does optimal substructure apply to all optimization problems? No.
It applies to determining the shortest path, but NOT the longest simple path, of an unweighted directed graph. Why?
Shortest path has independent subproblems: the solution to one subproblem does not affect the solution to another subproblem of the same problem.
Subproblems are not independent in the longest simple path problem: the solution to one subproblem affects the solutions to other subproblems.

Overlapping Subproblems : 

Overlapping Subproblems
The space of subproblems must be "small": the total number of distinct subproblems is a polynomial in the input size.
A plain recursive algorithm is exponential because it solves the same subproblems repeatedly.
If divide-and-conquer is applicable, then each problem solved will be brand new.
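
To make the contrast concrete, here is a hedged Java sketch of a top-down, memoized coin-change routine (my own illustration, not from the slides). The memo table caps the number of distinct subproblems at N+1, so no subproblem is ever solved twice:

    import java.util.Arrays;

    public class MemoizedChange {
        // memo[k] = min coins to make k cents, or -1 if not yet computed.
        static int minCoins(int[] coins, int k, int[] memo) {
            if (k == 0) return 0;
            if (memo[k] >= 0) return memo[k];   // overlapping subproblem: reuse it
            int best = Integer.MAX_VALUE;
            for (int c : coins)
                if (c <= k)
                    best = Math.min(best, 1 + minCoins(coins, k - c, memo));
            return memo[k] = best;
        }

        public static void main(String[] args) {
            int n = 63;
            int[] memo = new int[n + 1];
            Arrays.fill(memo, -1);
            System.out.println(minCoins(new int[]{1, 5, 10, 21, 25}, n, memo)); // 3
        }
    }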

Dynamic Programming : 

Dynamic Programming
First solve all the subproblems, storing each intermediate solution in a table along with a score.
Uses an m×n matrix of scores, where m and n are the lengths of the sequences being aligned.
Can be used for:
Local alignment (Smith-Waterman algorithm)
Global alignment (Needleman-Wunsch algorithm)

Slide 50: 

Dynamic Programming
[Figure: a DP matrix with Sequence A along one axis and Sequence B along the other; the new best alignment equals the previous best alignment plus the best local step.]

Example : 

Example
[Figure: a grid of paths from START to FINISH passing through an intermediate point A.]
There are 6 paths from START to A and 6 paths from A to FINISH, so 36 paths from START to FINISH in total.
Do we need to check all 36 paths to find the optimal one? No.
Hint: the choice of the best path from A to FINISH is independent of the choice of the path from START to A.
So not more than 12 paths need to be examined: the best path from START to FINISH is the best path from START to A followed by the best path from A to FINISH.
Greater simplification is possible by systematically re-subdividing the problem.

Optimal Alignment of Sequences : 

Optimal Alignment of Sequences
Suppose X and Z are two sequences with lengths m and n.
If gaps are allowed, then the length of each aligned sequence could be up to m + n.
There are 2^(m+n) subsequences with spaces for sequence X, and the same for Z.
That gives 4^(m+n) comparisons using the brute-force method.

Dynamic Programming : 

Dynamic Programming
Sequence alignment has an optimal-substructure property; as a result, DP makes it feasible to consider all possible alignments.
DP algorithms solve optimization problems by dividing the problem into independent subproblems.
Each subproblem is then solved only once, and the answer is stored in a table, avoiding the work of recomputing the solution.

Dynamic Programming : 

Dynamic Programming
With sequence alignment, the subproblems can be thought of as the alignment of the "prefixes" of the two sequences up to a certain point.
A DP matrix is computed: the optimal alignment score for any particular point in the matrix is built upon the optimal alignment that has been computed up to that point.

The principle of optimality, I : 

The principle of optimality, I
Dynamic programming is a technique for finding an optimal solution.
The principle of optimality applies if the optimal solution to a problem always contains optimal solutions to all subproblems.
Example: Consider the problem of making N¢ with the fewest number of coins.
Either there is an N¢ coin, or
the set of coins making up an optimal solution for N¢ can be divided into two nonempty subsets, n1¢ and n2¢.
If either subset, n1¢ or n2¢, could be made with fewer coins, then clearly N¢ could be made with fewer coins, hence the solution was not optimal.

The principle of optimality, II : 

The principle of optimality, II
The principle of optimality holds if every optimal solution to a problem contains optimal solutions to all subproblems.
The principle of optimality does not say that if you have optimal solutions to all subproblems, then you can combine them to get an optimal solution.
Example: In US coinage,
the optimal solution to 7¢ is 5¢ + 1¢ + 1¢, and
the optimal solution to 6¢ is 5¢ + 1¢, but
the optimal solution to 13¢ is not 5¢ + 1¢ + 1¢ + 5¢ + 1¢.
However, there is some way of dividing up 13¢ into subsets with optimal solutions (say, 11¢ + 2¢) that will give an optimal solution for 13¢.
Hence, the principle of optimality holds for this problem.

Longest simple path : 

Longest simple path
Consider a graph with vertices A, B, C, D and edges A-B, A-C, B-C, and C-D.
The longest simple path (a path not containing a cycle) from A to D is A → B → C → D.
However, the subpath A → B is not the longest simple path from A to B (A → C → B is longer).
The principle of optimality is not satisfied for this problem.
Hence, the longest simple path problem cannot be solved by a dynamic programming approach.

The 0-1 knapsack problem : 

The 0-1 knapsack problem
A greedy algorithm does not find an optimal solution; a dynamic programming algorithm works well.
This is similar to, but not identical to, the coins problem:
In the coins problem, we had to make an exact amount of change.
In the 0-1 knapsack problem, we can't exceed the weight limit, but the optimal solution may weigh less than the weight limit.
The dynamic programming solution is similar to that of the coins problem.

Comments : 

Comments
Dynamic programming relies on working "from the bottom up" and saving the results of solving simpler problems.
These solutions to simpler problems are then used to compute the solution to more complex problems.
Dynamic programming solutions can often be quite complex and tricky.
Dynamic programming is used for optimization problems, especially ones that would otherwise take exponential time.
Only problems that satisfy the principle of optimality are suitable for dynamic programming solutions.

Review: Dynamic Programming : 

Review: Dynamic Programming
Running time/space for the LCS algorithm: O(mn).
We can actually reduce the space to O(min(m, n)) by keeping only two rows/columns around, as sketched below.
Since exponential time is unacceptable for all but the smallest problems, dynamic programming is sometimes essential.
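
A hedged sketch of that space reduction for the LCS length (illustrative code, not from the slides): only the previous and current rows are kept, and the shorter string is placed along the row dimension to get O(min(m, n)) space.

    public class LCSLength {
        // LCS length in O(min(m, n)) space using two rolling rows.
        static int lcsLength(String x, String y) {
            if (x.length() < y.length()) { String t = x; x = y; y = t; } // make y the shorter one
            int[] prev = new int[y.length() + 1];
            int[] curr = new int[y.length() + 1];
            for (int i = 1; i <= x.length(); i++) {
                for (int j = 1; j <= y.length(); j++) {
                    curr[j] = (x.charAt(i - 1) == y.charAt(j - 1))
                            ? prev[j - 1] + 1
                            : Math.max(prev[j], curr[j - 1]);
                }
                int[] t = prev; prev = curr; curr = t;  // roll the rows
            }
            return prev[y.length()];  // after the final swap, prev holds the last row
        }

        public static void main(String[] args) {
            System.out.println(lcsLength("springtime", "printing")); // 6
        }
    }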

Principle of Optimality for 0/1 Knapsack problem : 

Principle of Optimality for the 0/1 Knapsack problem
Theorem: The 0/1 knapsack problem satisfies the principle of optimality.
Proof: Assume that itemi is in the most beneficial subset that weighs at most W.
If we remove itemi from the subset, the remaining subset must be the most beneficial subset weighing at most W - wi of the n - 1 remaining items.
If it were not, we could substitute a better subset of the remaining items and thereby improve the optimal solution, which is impossible.

Dynamic Programming Approach : 

Dynamic Programming Approach
Given a knapsack problem with n items and a knapsack weight capacity of W.
We will first compute the maximum benefit, and then determine the subset.
To use dynamic programming we solve smaller problems and use the optimal solutions of these problems to find the solution to larger ones.

Dynamic Programming Approach : 

Dynamic Programming Approach
What are the smaller problems?
Assume a subproblem in which the set of items is restricted to {1,..., i} where i ≤ n, and the weight of the knapsack is w, where 0 ≤ w ≤ W.
Let B[i, w] denote the maximum benefit achieved for this subproblem.
Our goal is to compute the maximum benefit of the original problem, B[n, W].
We solve the original problem by computing B[i, w] for i = 0, 1,..., n and for w = 0, 1,..., W.
We need to specify the solution to a larger problem in terms of a smaller one.

Recursive formula for the “smaller” 0/1 Knapsack Problem, using only item1 to itemi and knapsack weight at most w : 

Recursive formula for the “smaller” 0/1 Knapsack Problem, using only item1 to itemi and knapsack weight at most w
3 cases:
1. There are no items in the knapsack, or the weight of the knapsack is 0: the benefit is 0.
2. The weight of itemi exceeds the weight w of the knapsack: itemi cannot be included, and the maximum benefit is B[i-1, w].
3. Otherwise, the benefit is the maximum achieved by either not including itemi (i.e., B[i-1, w]) or including itemi (i.e., B[i-1, w-wi] + bi).
In symbols:

    B[i, w] = 0                                        if i = 0 or w = 0
    B[i, w] = B[i-1, w]                                if wi > w
    B[i, w] = max(B[i-1, w], B[i-1, w-wi] + bi)        otherwise

Pseudo-code: 0/1 Knapsack, (n+1)×(W+1) Matrix : 

Pseudo-code: 0/1 Knapsack, (n+1)×(W+1) Matrix
Input: {w1, w2,..., wn}, W, {b1, b2,..., bn}
Output: B[n, W]

    for w ← 0 to W do              // row 0
        B[0, w] ← 0
    for k ← 1 to n do              // rows 1 to n
        B[k, 0] ← 0                // element in column 0
        for w ← 1 to W do          // elements in columns 1 to W
            if (wk ≤ w) and (B[k-1, w-wk] + bk > B[k-1, w]) then
                B[k, w] ← B[k-1, w-wk] + bk
            else
                B[k, w] ← B[k-1, w]
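
A hedged Java rendering of this pseudocode (the class and method names are mine; the driver values come from the worked example on the next slides):

    public class Knapsack01 {
        // B[k][w] = max benefit using items 1..k with capacity w.
        static int[][] knapsack01(int[] wt, int[] b, int n, int W) {
            int[][] B = new int[n + 1][W + 1];   // row 0 and column 0 stay 0
            for (int k = 1; k <= n; k++)
                for (int w = 1; w <= W; w++)
                    if (wt[k] <= w && B[k - 1][w - wt[k]] + b[k] > B[k - 1][w])
                        B[k][w] = B[k - 1][w - wt[k]] + b[k];
                    else
                        B[k][w] = B[k - 1][w];
            return B;
        }

        public static void main(String[] args) {
            // 1-indexed to match the pseudocode; slot 0 is unused.
            int[] wt = {0, 5, 10, 20};
            int[] b  = {0, 50, 60, 140};
            int[][] B = knapsack01(wt, b, 3, 30);
            System.out.println(B[3][30]); // 200, as in the worked example below
        }
    }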

Example: W = 30, S = { ( i1 , 5, $50 ), (i2 ,10, $60 ), ( i3, 20, $140 ) } : 

Example: W = 30, S = { (i1, 5, $50), (i2, 10, $60), (i3, 20, $140) }

    Weight:         0   1   2   3   4   5  ...  30
    MaxProfit{}:    0   0   0   0   0   0  ...   0
    MaxProfit{i1}:  0   0   0   0   0  50  ...  50

Example continued: W = 30, S = { ( i1 , 5, $50 ), (i2 ,10, $60 ), ( i3, 20, $140 ) } : 

Example continued: W = 30, S = { (i1, 5, $50), (i2, 10, $60), (i3, 20, $140) }

    Weight:            0...4   5...9   10...14   15...30
    MaxProfit{}:       0       0       0         0
    MaxProfit{i1}:     0       50      50        50
    MaxProfit{i1,i2}:  0       50      60        110

B[2,10] = max{ B[1,10], B[1,10-10] + b2 } = max{50, 0+60} = 60
B[2,15] = max{ B[1,15], B[1,15-10] + b2 } = max{50, 50+60} = 110

Example continued: W = 30, S = { ( i1 , 5, $50 ), (i2 ,10, $60 ), ( i3, 20, $140 ) } : 

Example continued: W = 30, S = { (i1, 5, $50), (i2, 10, $60), (i3, 20, $140) }

B[3,20] = max{ B[2,20], B[2,20-20] + b3 } = max{110, 0+140} = 140
B[3,25] = max{ B[2,25], B[2,25-20] + 140 } = max{110, 50+140} = 190
B[3,30] = max{ B[2,30], B[2,30-20] + 140 } = max{110, 60+140} = 200

    Wt:               0...4   5...9   10...14   15...19   20...24   25...29   30
    MaxP{}:           0       0       0         0         0         0         0
    MaxP{i1}:         0       50      50        50        50        50        50
    MaxP{i1,i2}:      0       50      60        110       110       110       110
    MaxP{i1,i2,i3}:   0       50      60        110       140       190       200

Analysis : 

Analysis
It is straightforward to fill in the array using the expression on the previous slide. So what is the size of the array?
The array is (number of items + 1) × (W + 1), so the algorithm will run in Θ(nW).
It appears to be linear, BUT the weight is not a function of only the number of items. What if W = n!? Then this algorithm is worse than the brute-force method.
One can show algorithms for 0/1 knapsack with worst-case time complexity of O(min(2^n, nW)) (N169).
No one has ever found a 0/1 knapsack algorithm whose worst-case time is better than exponential, AND no one has proven that such an algorithm is not possible.

Pseudo-code Using a (W+1)×2 Matrix : 

Pseudo-code Using a (W+1)×2 Matrix

    Knapsack01(W, w1,...,wn, b1,...,bn)
      B ← 0                          // initialize the 2-row matrix B[0..1, 0..W]
      for k ← 1 to n do
          curr ← mod(k, 2)           // row holding items 1..k
          prev ← mod(k+1, 2)         // row holding items 1..k-1
          for w ← 1 to W do
              if (wk ≤ w) and (B[prev, w-wk] + bk > B[prev, w]) then
                  B[curr, w] ← B[prev, w-wk] + bk
              else
                  B[curr, w] ← B[prev, w]

Note: calculating B[k, w] depends only on B[k-1, w] and B[k-1, w-wk], so two rows suffice. The answer B[n, W] ends up in B[1, W] or B[0, W], depending on whether n is odd or even.
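
The same idea in Java, as a hedged sketch (illustrative, reusing the example values from the earlier slides):

    public class Knapsack01TwoRows {
        // O(W)-space 0/1 knapsack: keep only the current and previous rows.
        static int knapsack01(int[] wt, int[] b, int n, int W) {
            int[][] B = new int[2][W + 1];       // row k % 2 holds items 1..k
            for (int k = 1; k <= n; k++) {
                int curr = k % 2, prev = (k + 1) % 2;
                for (int w = 0; w <= W; w++)
                    if (wt[k] <= w && B[prev][w - wt[k]] + b[k] > B[prev][w])
                        B[curr][w] = B[prev][w - wt[k]] + b[k];
                    else
                        B[curr][w] = B[prev][w];
            }
            return B[n % 2][W];                  // the row written for item n
        }

        public static void main(String[] args) {
            int[] wt = {0, 5, 10, 20};           // 1-indexed; slot 0 unused
            int[] b  = {0, 50, 60, 140};
            System.out.println(knapsack01(wt, b, 3, 30)); // 200
        }
    }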
