Tuesday, 13 September 2016

Time complexity

Standard

Understanding time complexity

The time complexity of an algorithm signifies the total time required by the program to run to completion. It is most commonly expressed using big O notation.
Time complexity is usually estimated by counting the number of elementary operations performed by the algorithm. Since an algorithm's performance may vary with different inputs, we usually use the worst-case time complexity, because that is the maximum time taken for any input of size N.

How to calculate time complexity?

Now, the most common metric for expressing time complexity is big O notation. It removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity. In general you can think of it like this:

1. A single statement such as an addition, subtraction, division or multiplication takes constant time, i.e. O(1). If we have multiple additions, subtractions etc., the complexity is still a constant; only if the number of such statements itself grows with N does the complexity change.
In general, the time complexity of a fixed sequence of simple statements is constant: the running time does not change in relation to N.

2. If we have a single loop, say
for (int i = 0; i < N; i++) {
    some statement;
}
the time complexity of the above loop is linear, i.e. O(N). The running time of the loop is directly proportional to N: when N doubles, so does the running time.

3. A loop inside a loop:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        some statement;
    }
}
In this case the time complexity is quadratic, i.e. O(N*N). The running time of the two loops is proportional to the square of N: when N doubles, the running time increases fourfold.

4. If we have a loop that halves its range each iteration (exiting early with a break statement):
while (low <= high) {
    mid = (low + high) / 2;
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break;
}
This is binary search: it repeatedly splits a sorted set of numbers in half to find a particular value. This algorithm has logarithmic time complexity, i.e. O(log(N)). The running time is proportional to the number of times N (here, high - low) can be divided by 2, because the algorithm halves the working area with each iteration.

5. Taking the previous idea forward:
void quicksort(int list[], int left, int right) {
    if (left < right) {
        int pivot = partition(list, left, right);
        quicksort(list, left, pivot - 1);
        quicksort(list, pivot + 1, right);
    }
}
Here we have the core logic of quick sort (I will explain it in detail later). In quick sort, we divide the list around a pivot every time, giving about log(N) levels of recursion on average, and at each level the partitioning touches O(N) elements in total (where N is the size of the list). Hence the time complexity is N*log(N): the algorithm is a combination of linear and logarithmic work.


Hope it helped some of you. If you need more explanation, then comment below.

OR

If you want me to explain any other topic, please comment below.