DIVIDE AND CONQUER


DIVIDE AND CONQUER(DAC) : 

DAC is a recursive procedure that divides a problem into sub-problems, whose solutions are combined to obtain the solution of the original problem. The sub-problems are divided further until they are small enough to be solved directly.

DIVIDE AND CONQUER(DAC) (Contd..) : 

CONTROL ABSTRACTION: a general procedure whose flow of control is clear, but whose primary operations are specified by other procedures that are either left undefined or differ from problem to problem.

DIVIDE AND CONQUER(DAC) (Contd..) : 

The control abstraction for DAC is as follows:

Procedure DANDC(p, q)
// solves a problem instance with input A(p:q) //
global n, A(1:n); integer m, p, q
if SMALL(p, q)              // Boolean function: is the input set small enough? //
  then return(G(p, q))      // G(p,q) computes the answer directly, with no splitting //
  else m ← DIVIDE(p, q)     // DIVIDE(p,q) specifies where the input is split; p ≤ m < q //
       return(COMBINE(DANDC(p, m), DANDC(m+1, q)))   // COMBINE(x,y) combines the solutions x and y //
endif
end DANDC
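
To make the abstraction concrete, here is a minimal C sketch that instantiates SMALL, G, DIVIDE and COMBINE; summing an array slice is an illustrative choice of problem, not part of the original slides.

#include <stdio.h>

static int A[] = {3, 1, 4, 1, 5, 9, 2, 6};

static int small(int p, int q)   { return p == q; }          /* SMALL: a single element is small */
static int g(int p, int q)       { (void)q; return A[p]; }   /* G: solve a small instance directly */
static int divide(int p, int q)  { return (p + q) / 2; }     /* DIVIDE: split point m, p <= m < q */
static int combine(int x, int y) { return x + y; }           /* COMBINE: merge two sub-solutions */

/* The control abstraction DANDC, transcribed directly. */
static int dandc(int p, int q) {
    if (small(p, q))
        return g(p, q);
    int m = divide(p, q);
    return combine(dandc(p, m), dandc(m + 1, q));
}

int main(void) {
    printf("%d\n", dandc(0, 7));  /* sum of A(0:7): prints 31 */
    return 0;
}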

DIVIDE AND CONQUER(DAC) (Contd..) : 

Analysis of DANDC: the computing time of DANDC is described by the recurrence relation

T(n) = g(n)              if n is small
     = 2T(n/2) + f(n)    otherwise

where g(n) is the time to compute the answer directly and f(n) is the time to divide and combine. For example, when g(n) is constant and f(n) = cn, this recurrence solves to T(n) = O(n log n), as in merge sort.

converting a recursive algorithm into an iterative one : 

Procedure DANDC(p, q)   // iterative version //
// declare a stack of an appropriate size //
local s, t                              // s, t are used to store function values //
top ← 0                                 // the stack is initially empty //
L1: while not SMALL(p, q) do
      m ← DIVIDE(p, q)                  // determine how to split the input //
      store p, q, m, 0, 2 in the stack  // 2 and 3 are return addresses stored with function values: //
      q ← m                             // ret = 2 means the second call is still to be made, //
    repeat                              // ret = 3 means the two solutions can be combined //
    t ← G(p, q)
    while top ≠ 0 do
      remove p, q, m, s, ret from the stack
      if ret = 2

DIVIDE AND CONQUER(DAC) (Contd..) : 

        then store p, q, m, t, 3 in the stack   // process the second recursive call //
             p ← m + 1
             go to L1
        else t ← COMBINE(s, t)
      endif
    repeat
    return(t)
end DANDC
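
The same scheme in C, a sketch that assumes the array-sum instantiation of small/g/divide/combine from the earlier example; an explicit frame stack replaces the recursion, with ret playing the role of the return address.

typedef struct { int p, q, m, s, ret; } Frame;

static int dandc_iter(int p, int q) {
    Frame stk[64];                 /* ample: depth stays O(log n) for balanced splits */
    int top = 0;                   /* the stack is initially empty */
    int t;
L1:
    while (!small(p, q)) {         /* descend, recording pending second calls */
        int m = divide(p, q);
        stk[top++] = (Frame){p, q, m, 0, 2};   /* ret = 2: second call still to be made */
        q = m;
    }
    t = g(p, q);                   /* small instance solved directly */
    while (top != 0) {
        Frame f = stk[--top];
        if (f.ret == 2) {          /* first half solved: start the second call */
            stk[top++] = (Frame){f.p, f.q, f.m, t, 3};   /* ret = 3: combine next */
            p = f.m + 1;
            q = f.q;
            goto L1;
        }
        t = combine(f.s, t);       /* both halves solved: combine them */
    }
    return t;
}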

DIVIDE AND CONQUER(DAC) (Contd..) : 

Example: suppose n = 12; then p = 1, q = 12 and m = ⌊(12+1)/2⌋ = 6. The successive stack entries (p, q, m, s, ret) are:

  p    q    m    s   ret
  1   12    6    0    2
  1    6    3    0    2
  1    3    2    0    2
  1    2    1    0    2

DIVIDE AND CONQUER(DAC) (Contd..) : 

Example (continued): with the (p, q, m, 0, 2) entries on the stack, the descent proceeds:

SMALL(1, 6)? No  → m ← 3, q ← 3
SMALL(1, 3)? No  → m ← 2, q ← 2
SMALL(1, 2)? No  → m ← 1, q ← 1
SMALL(1, 1)? Yes → compute t ← G(1, 1)

Now top ≠ 0, so (p, q, m, s, ret) = (1, 2, 1, 0, 2) is removed from the stack. Since ret = 2, (p, q, m, t, 3) = (1, 2, 1, G(1,1), 3) is stored.

DIVIDE AND CONQUER(DAC) (Contd..) : 

Now the second recursive call starts, so p ← m + 1 = 2. SMALL(2, 2)? Yes, so t ← G(2, 2). Since top ≠ 0, (p, q, m, s, ret) = (1, 2, 1, G(1,1), 3) is removed from the stack. Here ret ≠ 2, so t ← COMBINE(G(1,1), G(2,2)). One cycle is completed: G(1,1) and G(2,2) are combined to give G(1,2). Then G(1,2) and G(3,3) will be combined to give G(1,3), and so on.

[Figure: the tree of COMBINE operations for n = 12. G(1,1) and G(2,2) give G(1,2); G(1,2) and G(3,3) give G(1,3); G(4,5) and G(6,6) give G(4,6); G(1,3) and G(4,6) give G(1,6); finally G(1,6) and G(7,12) give G(1,12).]

BINARY SEARCH : 

Let a_i, 1 ≤ i ≤ n, be a list of n elements sorted in non-decreasing order. The problem is to determine whether a given element x is present in the list. If x is present, j is returned such that a_j = x; otherwise j = 0. If the given problem is I = (n, a_1, ..., a_n, x), the sub-problems to be solved are I1 = (k-1, a_1, ..., a_{k-1}, x), I2 = (1, a_k, x), and I3 = (n-k, a_{k+1}, ..., a_n, x) for some k. If x = a_k, I1 and I3 need not be solved.

BINARY SEARCH (Contd..) : 

If x < a_k, then only I1 remains to be solved and j = 0 for I2 and I3. If x > a_k, then only I3 needs to be solved and j = 0 for I1 and I2. If a_k is the middle element, i.e. k = ⌊(n+1)/2⌋, then the search algorithm is known as binary search.

BINARY SEARCH (Contd..) : 

Procedure BINSRCH(A, n, x, j)
// determine j such that A(j) = x in A(1:n) //
low ← 1; high ← n
while low ≤ high do
  mid ← ⌊(low + high)/2⌋
  case
    : x < A(mid) : high ← mid - 1
    : x > A(mid) : low ← mid + 1
    : else       : j ← mid; return
  endcase
repeat
j ← 0
end BINSRCH
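
A minimal C rendering of BINSRCH, assuming 0-based indexing (the pseudocode's A(1:n) becomes a[0..n-1]) and returning -1 instead of j = 0 for a failed search.

/* Binary search in a sorted array a[0..n-1]; returns the index of x or -1. */
static int binsrch(const int a[], int n, int x) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* same as (low+high)/2, but overflow-safe */
        if (x < a[mid])      high = mid - 1;
        else if (x > a[mid]) low = mid + 1;
        else                 return mid;   /* found */
    }
    return -1;                             /* not found */
}

With the data of the next slide, {-15, -6, 0, 7, 9, 23, 54, 82, 101}, binsrch(a, 9, 101) returns index 8, which is position 9 in the slides' 1-based numbering.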

BINARY SEARCH (Contd..) : 

Example: n = 9, A(1:9) = -15, -6, 0, 7, 9, 23, 54, 82, 101; x is 101, -14 and 82.

Case 1: x = 101        Case 2: x = -14        Case 3: x = 82
low  high  mid         low  high  mid         low  high  mid
 1     9    5           1     9    5           1     9    5
 6     9    7           1     4    2           6     9    7
 8     9    8           1     1    1           8     9    8
 9     9    9           2     1   (exit)       found at 8
found at 9             not found

BINARY SEARCH (Contd..) : 

The binary decision tree is a tree with the value of the mid index at each node. [Figure: binary decision tree for n = 14; the root is mid = 7 for the range 1 to 14.]


BINARY DECISION TREE : 

The binary decision tree is a tree with the value of the mid index at each node. If x is present, the algorithm ends at one of the circular (internal) nodes; otherwise it ends at a square (external) node. The worst-case time for binary search with n elements (n ∈ [2^(k-1), 2^k) for some integer k) is O(log n) for a successful search and Θ(log n) for an unsuccessful search. O(log n) gives an upper limit only; Θ(log n) gives both lower and upper limits.

worst case time for binary search with n elements : 

Proof: in a binary decision tree on n elements (2^(k-1) ≤ n < 2^k), all circular nodes are at levels 1, 2, ..., k, while all square nodes are at levels k and k+1. The number of element comparisons required to terminate at a circular node on level i is i, while the number required to terminate at a square node on level i is only i - 1 (∵ there is no middle element to compare with: values < A(1), values between A(1) and A(2), etc., correspond to square nodes).

The average time for binary search : 

As i can be 1, 2, ..., k and 2^(k-1) ≤ n < 2^k (so k = ⌊log₂ n⌋ + 1), the worst-case time for a successful search is O(log n), and for an unsuccessful search it is Θ(log n). The average time for binary search is also O(log n). Proof: let S(n) be the average number of comparisons for a successful search, and let U(n) be the average number of comparisons for an unsuccessful search.

The average time for binary search(Contd..) : 

The number of comparisons on any path from the root to a node equals the distance between the root and that node. Let I be the sum of the distances of all internal nodes from the root, and let E be the sum of the distances of all external nodes from the root. I is the internal path length, and E is the external path length.

The average time for binary search(Contd..) : 

For any binary tree with n internal nodes, E = I + 2n (can be proved by induction) ---- (1). (For n = 1, I = 0 and E = I + 2·1 = 2. Assume E = I + 2n holds for n internal nodes and consider n + 1: the new internal node replaces an external node at some depth d, removing it and adding two external nodes at depth d + 1, so E increases by 2(d+1) - d = d + 2 while I increases by d. E therefore grows by exactly 2 more than I, preserving the relation.)

The average time for binary search(Contd..) : 

As every binary tree with n internal nodes has n + 1 external nodes,
U(n) = E/(n+1) .............. (2)
S(n) = 1 + I/n   (average internal path length plus one comparison)
     = 1 + (E - 2n)/n                from (1)
     = 1 + (U(n)(n+1) - 2n)/n        from (2)
     = (n + U(n)(n+1) - 2n)/n
     = U(n)(n+1)/n - 1 = U(n)(1 + 1/n) - 1
So S(n) and U(n) are directly related.

The average time for binary search(Contd..) : 

But E is proportional to n log n (using the worst-case time and the definition of E). So
U(n) = E/(n+1) ~ (n log n)/(n+1) ~ log n
∴ S(n) ~ log n
Average time = O(log n).

Complexity of Binary Search : 

In the best case a successful search takes only one comparison; an unsuccessful search takes ⌊log n⌋ + 1 comparisons.

              best        average     worst
Successful    Θ(1)        Θ(log n)    Θ(log n)
Unsuccessful  Θ(log n)    Θ(log n)    Θ(log n)

There is no better algorithm than binary search for searching in the worst case.

Variation of binary search useful in programming : 

Procedure BINSRCH1(A, n, x, j)
integer low, high, mid, j, n
low ← 1; high ← n + 1
while low < high - 1 do
  mid ← ⌊(low + high)/2⌋
  if x < A(mid) then high ← mid
                else low ← mid     // x ≥ A(mid) //
  endif
repeat
if x = A(low) then j ← low
              else j ← 0
endif
end BINSRCH1

There is only one comparison between x and A(mid) per iteration, in contrast to the three-way comparison in BINSRCH. Because the case x = A(mid) falls into the else part, the candidate position is carried along in low, and j is set to low or 0 by a single equality test after the loop; there is no need to test A(mid) separately.
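
The variant in C, again a 0-based sketch; only the x < a[mid] test appears inside the loop, with a single equality check at the end.

/* BINSRCH1 in C: one comparison per iteration, equality tested once at the end. */
static int binsrch1(const int a[], int n, int x) {
    int low = 0, high = n;             /* high is one past the last index */
    while (low < high - 1) {
        int mid = low + (high - low) / 2;
        if (x < a[mid]) high = mid;
        else            low = mid;     /* x >= a[mid]: keep mid as a candidate */
    }
    return (n > 0 && a[low] == x) ? low : -1;
}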

Variation of binary search (Contd..) : 

Example: consider

A(1:5):  10  15  20  25  30
index:    1   2   3   4   5

Let x = 20.
low ← 1; high ← 5 + 1 = 6                                   [1, 6]
low = 1 < high - 1 = 5, so mid ← ⌊(1+6)/2⌋ = 3
x = 20 < A(3) = 20? No, so the else part: low ← 3           [3, 6]
3 < 5, so mid ← ⌊(3+6)/2⌋ = 4
20 < A(4) = 25? Yes, so high ← 4                            [3, 4]
low = 3 < high - 1 = 3? No, so the while loop is exited
x = A(low): 20 = A(3)? Yes, so j ← 3

Finding the Maximum and Minimum : 

Procedure STRAIGHTMAXMIN(A, n, max, min)
integer i, n
max ← min ← A(1)
for i ← 2 to n do
  if A(i) > max then max ← A(i) endif
  if A(i) < min then min ← A(i) endif
repeat
end STRAIGHTMAXMIN

Note that A(i) < min can hold only when A(i) > max is false, so the two if statements can be combined by attaching the second as an else part, as in the sketch below.
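
A 0-based C sketch with the two tests already combined via else if, as the note above suggests.

/* STRAIGHTMAXMIN in C: a[i] can be a new minimum only if it is not a new maximum. */
static void straight_maxmin(const int a[], int n, int *max, int *min) {
    *max = *min = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] > *max)      *max = a[i];
        else if (a[i] < *min) *min = a[i];
    }
}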

Analysis of straightmaxmin : 

Best case: occurs when the elements are in increasing order; the else part is never reached, so there are n - 1 element comparisons.
Worst case: occurs when the elements are in decreasing order, so there are 2(n - 1) = 2n - 2 element comparisons.
Average case: A(i) is greater than max about half the time, so the number of comparisons is less than 2n - 1 - n/2 = 3n/2 - 1.

Divide and Conquer strategy for MAX and MIN : 

The instance I = (n, A(1), ..., A(n)) is divided into
I1 = (⌊n/2⌋, A(1), ..., A(⌊n/2⌋)) and
I2 = (n - ⌊n/2⌋, A(⌊n/2⌋ + 1), ..., A(n)).

Divide and Conquer strategy for MAX and MIN : 

Procedure MAXMIN(i, j, fmax, fmin)
// set fmax, fmin to the largest and smallest values in A(i:j) //
global A(1:n), n; integer i, j
case
  : i = j     : fmax ← fmin ← A(i)
  : i = j - 1 : if A(i) < A(j) then fmax ← A(j); fmin ← A(i)
                               else fmax ← A(i); fmin ← A(j)
                endif
  : else      : mid ← ⌊(i + j)/2⌋      // i < j - 1 //
                call MAXMIN(i, mid, gmax, gmin)
                call MAXMIN(mid + 1, j, hmax, hmin)
                fmax ← max(gmax, hmax)
                fmin ← min(gmin, hmin)
endcase
end MAXMIN
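
MAXMIN in C, a direct 0-based transcription using output parameters for fmax and fmin.

/* MAXMIN in C: find the largest and smallest of a[i..j] (inclusive bounds). */
static void maxmin(const int a[], int i, int j, int *fmax, int *fmin) {
    if (i == j) {                        /* one element */
        *fmax = *fmin = a[i];
    } else if (i == j - 1) {             /* two elements: a single comparison */
        if (a[i] < a[j]) { *fmax = a[j]; *fmin = a[i]; }
        else             { *fmax = a[i]; *fmin = a[j]; }
    } else {                             /* i < j - 1: split, recurse, combine */
        int mid = (i + j) / 2, gmax, gmin, hmax, hmin;
        maxmin(a, i, mid, &gmax, &gmin);
        maxmin(a, mid + 1, j, &hmax, &hmin);
        *fmax = gmax > hmax ? gmax : hmax;
        *fmin = gmin < hmin ? gmin : hmin;
    }
}

For the array on the next slide, maxmin(a, 0, 8, &mx, &mn) yields mx = 60 and mn = -8.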

Divide and Conquer strategy for MAX and MIN : 

Let us construct the tree of recursive calls for the example:

A(1:9):  22  13  -5  -8  15  60  17  31  47
index:    1   2   3   4   5   6   7   8   9

A node is added whenever a new call is made. Each node holds the four items i, j, fmax, fmin. The circled numbers in the figure give the order in which fmax and fmin are assigned a value.

tree of recursive calls for the MAX and MIN : 

[Figure: tree of recursive calls for MAXMIN. Each node shows (i, j, fmax, fmin): the root is (1, 9, 60, -8) with children (1, 5, 22, -8) and (6, 9, 60, 17); (1, 5) splits into (1, 3, 22, -5) and (4, 5, 15, -8); (1, 3) splits into (1, 2, 22, 13) and (3, 3, -5, -5); (6, 9) splits into (6, 7, 60, 17) and (8, 9, 47, 31).]

Analysis of MAXMIN : 

Let T(n) be the number of element comparisons for n elements (index comparisons excluded).

T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2   for n > 2   (two recursive calls + computation of fmax, fmin)
     = 1                         for n = 2   (the single comparison A(i) < A(j))
     = 0                         for n = 1   (only an index comparison)

If n is a power of 2, then n = 2^k for some positive integer k.

Analysis of MAXMIN (Contd..) : 

T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
       :
     = 2^(k-1) T(n/2^(k-1)) + 2^(k-1) + 2^(k-2) + ... + 4 + 2
     = 2^(k-1) T(2) + Σ_{1 ≤ i ≤ k-1} 2^i
     = 2^(k-1) · 1 + 2^k - 2
     = n/2 + n - 2
     = 3n/2 - 2
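
A quick empirical check of the closed form: a small C harness (a hypothetical comparison counter, not from the slides) that charges one comparison to each two-element case and two to each combine, then prints T(n) beside 3n/2 - 2 for powers of 2.

#include <stdio.h>

static long ncomp;  /* element-comparison counter */

static void maxmin_count(int i, int j) {
    if (i == j) return;                       /* n = 1: no element comparison */
    if (i == j - 1) { ncomp += 1; return; }   /* n = 2: one comparison */
    int mid = (i + j) / 2;
    maxmin_count(i, mid);
    maxmin_count(mid + 1, j);
    ncomp += 2;                               /* combining fmax and fmin */
}

int main(void) {
    for (int n = 2; n <= 1024; n *= 2) {
        ncomp = 0;
        maxmin_count(0, n - 1);
        printf("n=%4d  T(n)=%4ld  3n/2-2=%4d\n", n, ncomp, 3 * n / 2 - 2);
    }
    return 0;
}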

Analysis of MAXMIN (Contd..) : 

Result: Σ_{1 ≤ i ≤ k-1} 2^i = 2^k - 2.
Proof by induction: for k = 2, L.H.S. = 2^1 = 2 and R.H.S. = 2^2 - 2 = 2. Suppose it is true for k, i.e. 2 + 4 + ... + 2^(k-1) = 2^k - 2. Then 2 + 4 + ... + 2^(k-1) + 2^k = (2^k - 2) + 2^k = 2^(k+1) - 2. Hence proved.

Analysis of MAXMIN (Contd..) : 

STRAIGHTMAXMIN requires 2n - 2 comparisons; MAXMIN requires 3n/2 - 2. The difference is (2n - 2) - (3n/2 - 2) = n/2, and (n/2) · 1/(2(n - 1)) · 100 ≈ 25% savings in comparisons (no algorithm uses fewer than 3n/2 - 2 comparisons). But MAXMIN requires more storage, as stack space is needed for i, j, fmax, fmin and the return address. Even if recursion is removed, a stack of size log n is still required.

Analysis of maxmin when index comparisons are included : 

The first two cases in MAXMIN (i = j and i = j - 1) can be replaced by one, as below:

case
  : i ≥ j - 1 : if A(i) < A(j) then fmax ← A(j); fmin ← A(i)
                               else fmax ← A(i); fmin ← A(j)
                endif
  : else      : mid ← ⌊(i + j)/2⌋
  ...

(The i = j case is included in i ≥ j - 1, e.g. i = j = 1 satisfies 1 ≥ 1 - 1; when i = j, A(i) < A(j) is false, so fmax ← A(i) and fmin ← A(j) = A(i), as required.)

Analysis of maxmin when index comparisons are included : 

Assuming n = 2^k for some positive integer k:

C(n) = 2C(n/2) + 3   for n > 2   (two recursive calls + the i ≥ j - 1 test + computation of fmax, fmin)
     = 2             for n = 2   (the i ≥ j - 1 test + A(i) < A(j))
     = 1             for n = 1   (the i ≥ j - 1 test only)

Analysis of maxmin when index comparisons are included : 

C(n) = 2C(n/2) + 3
     = 2(2C(n/4) + 3) + 3 = 4C(n/4) + 3·2 + 3
     = 8C(n/8) + 3·4 + 3·2 + 3
       :
     = 2^(k-1) C(n/2^(k-1)) + 3(2^(k-2) + ... + 2^1 + 2^0)
     = 2^(k-1) C(2) + 3(2^(k-1) - 1)
     = 2^(k-1) · 2 + 3 · 2^(k-1) - 3
     = n + 3n/2 - 3 = 5n/2 - 3

STRAIGHTMAXMIN requires 3(n - 1) comparisons if the index comparisons of its for loop are included, and 3(n - 1) > 5n/2 - 3, so MAXMIN is still better, with the overhead of the stack.