Sorting algorithms #2


and we are back 🙂 , now let’s continue with some other sorting algorithms . if you haven’t read part one of this blog post you can find it here : Sorting algorithms

Selection sort ( complexity O(n^2) ) :

Selection sort : best case O(n^2) , average case O(n^2) , worst case O(n^2) . disadvantages : it never benefits from presorted arrays

let’s start again with a stupid slow sorting algorithm : the selection sort . it’s a very basic algorithm , even more basic than bubble sort . its logic is like when you have a bunch of numbers and you want to sort them : you take the smallest one , then the second smallest one , then the 3rd smallest one and so on .

and from here comes the naming ( selection ) . every time you have to loop over the array to find the smallest remaining item ( complexity O(n) ) , and you repeat this n times in order to pick n items , so the overall complexity will be O( n^2 )

why is the selection turtle dumber than the Bubble turtle ?

selection sort never takes any benefit from presorted arrays or even presorted elements : every time , it loops over the whole remaining array to find the minimum element to pick . so if we have a presorted array , which is the best case ,
bubble sort will sort it in n iterations , while selection sort will still have a complexity of n^2 , which makes it the dumbest ever 😀
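the picking-the-smallest idea above can be sketched in a few lines of C++ ( a minimal illustration , not tuned code , and the function name is ours ) :

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// selection sort sketch : every pass scans the whole unsorted rest
// of the array for its minimum , then swaps it into place
void selectionSort(vector<int> &a) {
    int n = a.size();
    for (int i = 0; i < n; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++)   // O(n) scan , even on presorted input
            if (a[j] < a[minIndex])
                minIndex = j;
        swap(a[i], a[minIndex]);          // pick the i-th smallest element
    }
}
```

notice that the inner scan runs in full even when the array is already sorted , which is exactly the weakness described above .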

_____________________________

Quick Sort  : 

Quicksort : best case O(n log n) , average case O(n log n) , worst case O(n^2)

Quick sort  , who called it Quick ?    

 dunno . maybe it was the Quickest then ?  😀 

kidding 😀 , Quick sort is one of the most efficient algorithms . it has n^2 as a worst case , but it rarely reaches that worst case , and in practice it’s often faster than the other nlogn algorithms
because its inner loop can be implemented efficiently on most architectures and on most data structures ( arrays , linked lists ..etc )

actually , Quick sort even competes with merge sort : Quick sort can be implemented on linked lists as well , and it usually needs only a little extra memory ( O(log(n)) for the recursion ) , unlike merge sort which needs O(n) extra memory when using arrays

let’s see how Quick sort works :

Quick  sort Algorithm :

it is one of the divide and conquer algorithms ( like Merge sort )

  1. Divide : pick a pivot ( a selected element , often chosen at random )
  2. move all the numbers less than the pivot before it
  3. move all the numbers greater than the pivot after it
  4. now you have two parts : one with all its elements smaller than the pivot and the other with all its elements larger than the pivot
  5. repeat all those steps recursively on the two parts

steps 2 and 3 are done as follows :

* we use two pointers : one points at the start of the array and the other at the end , with the pivot in between . move the left pointer until you find an element greater than the pivot ( it should go to the other side )

* on the other side , move the right pointer until it hits a number less than the pivot ( it should go to the other side )

* swap the two numbers

* repeat those steps until the two pointers meet : then everything left of the pivot is less than it , and everything right of it is greater
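the two-pointer partitioning above can be sketched like this ( a hedged illustration using a Hoare-style partition ; the function names are ours ) :

```cpp
#include <algorithm>
#include <vector>
using namespace std;

// Hoare-style partition : two pointers walk toward each other ,
// swapping elements that sit on the wrong side of the pivot
int hoarePartition(vector<int> &a, int lo, int hi) {
    int pivot = a[lo + (hi - lo) / 2];     // pivot picked from the middle
    int i = lo - 1, j = hi + 1;
    while (true) {
        do { i++; } while (a[i] < pivot);  // left pointer : find an element >= pivot
        do { j--; } while (a[j] > pivot);  // right pointer : find an element <= pivot
        if (i >= j) return j;              // pointers met : partition is done
        swap(a[i], a[j]);                  // replace the two numbers
    }
}

void quickSort(vector<int> &a, int lo, int hi) {
    if (lo >= hi) return;
    int p = hoarePartition(a, lo, hi);
    quickSort(a, lo, p);        // repeat recursively on the two parts
    quickSort(a, p + 1, hi);
}
```

calling quickSort(v, 0, v.size() - 1) sorts the whole vector in place .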

you can check this video for further demonstration , it elaborates well on the use of the two pointers , and it includes some code snippets :

_____________________________

 

Heap Sort :

Heapsort : best case O(n log n) , average case O(n log n) , worst case O(n log n)

this post supposes that you have some pre-knowledge of what a heap is . if you don’t , you can check this video or this one

Heap sort is an efficient  kind  of  Selection sort  ( yeah the Dumbest one ever ) 😀 

How ? ? 

in selection sort we used to search the whole array for the smallest number , so it was an algorithm of complexity n^2

but in a heap you can find the minimum easily : removing it needs a number of iterations = the height of the tree = log(n)

so it gives an overall complexity of O(nlog(n))

this is how it works 

  • Step 1: Build a heap
  • Step 2: removeMin()

the minimum item in the heap will always be the root of the heap , so we will always remove it and reconstruct the heap again , which is super easy when you are dealing with a Heap class that has a remove function

removing an item from the heap

when you remove the root of the heap , you replace it with the last element in the heap

then you compare it with its two children , and swap it with the smaller of the two

then repeat those steps with the moved item until it is smaller than both of its children

( if you don’t get this , it’s recommended to revise heap removal )

the worst case here is when the item sinks all the way to the bottom of the tree : then you will do log(n) iterations

for removing n items you will have an overall complexity of O(nlog(n)) , which is the complexity of heap sort

compared to selection sort ( O(n^2) ) it’s a huge progress , although both are based on the same logic
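the build-a-heap / removeMin() loop above can be sketched with the standard library’s priority_queue standing in for the Heap class ( a min-heap here ; the function name is ours ) :

```cpp
#include <functional>
#include <queue>
#include <vector>
using namespace std;

// heap sort sketch : step 1 builds a min-heap ,
// step 2 calls removeMin() n times at O(log n) each
vector<int> heapSort(const vector<int> &input) {
    priority_queue<int, vector<int>, greater<int>> heap(input.begin(), input.end());
    vector<int> sorted;
    while (!heap.empty()) {
        sorted.push_back(heap.top());  // the root is always the minimum
        heap.pop();                    // remove it ; the heap repairs itself in log(n)
    }
    return sorted;
}
```

each pop is exactly the root-removal-and-sink-down procedure described above , done for us by the container .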

__________________________________________________________________

till now we are Done 🙂

yet let’s draw some conclusions 😀
from the different sorting algorithms above and in the last post we found that

in the average case :

our three heroes will be : Merge sort , Quick sort and Heap sort

and our losers will be : Bubble sort , Selection sort and Insertion sort

in case of a preordered array :

the true heroes will be : Bubble sort and Insertion sort , for having O(n) complexity

and the loser will be : the dumb turtle 😀 the Selection sort

finally , most of us , before learning these algorithms , were using the Bubble sort algorithm to sort arrays

so it once again will be the Hero as the Most Famous Sorting Algorithm 

______________________________________________________________

DO IT yourself 😀

those people who do folk algorithmic dances , watch their youtube channel

those Guys are awesome 🙂

it’s the best practice now to watch their dances and try to map them to what you’ve learnt about the different sorting algorithms

also follow their fanpage

________________________________

and once again Credit Goes to :

  • MIT OpenCourseWare : the introduction to algorithms course , with Charles and Eric 🙂 i recommend watching the full playlist , or taking a look at the full course with the slides and exercises on the MIT OpenCourseWare website here . thank you MIT 🙂
  • thanks to wikipedia for the amazing animated GIF images 🙂
  • Google for the images of rabbits and bunnies 😀

Sorting algorithms


why different sorting algorithms ?

why do people need different sorting algorithms ?

there are a lot of metrics to compare sorting algorithms on , like :

  • complexity ( usually it reflects the time taken in sorting ) in the best , average & worst cases
  • memory usage < some sorting algorithms need to make temporary locations to store data , which needs more memory >
  • adaptability < some sorting algorithms benefit from presorted elements even if they have high complexity > it’s not reasonable to resort millions of entries when only one element is out of order
let’s take a look at how complexity can be a big deal :
if we have 3 sorting algorithms of complexities n , nlogn and n^2 , here is the number of operations for different input sizes :

n          nlogn        n^2
10         10           100
100        200          10000
1000       3000         10^6
10000      40000        10^8
100000     500000       10^10
10^6       6 x 10^6     10^12
10^7       7 x 10^7     10^14
10^8       8 x 10^8     10^16

if we used the 3 sorting algorithms above to sort 10^8 elements :

if the first algorithm of complexity n took 27 hours ,

the second one of complexity nlogn would take about 8 times that : roughly 216 hours ,

But the third algorithm could take about 308,000 YEARS !!!!!!!!!

Complexity Chart

before going through the different sorting algorithms , i wanted to share a funny video that demonstrates
3 different sorting algorithms 🙂 , it will ease their explanation afterwards .

Bubble sort ( complexity O(n^2) ) :

Bubble sort : best case O(n) , average case O(n^2) , worst case O(n^2) . advantages : tiny code size

it’s a simple sorting algorithm to write , yet it’s a very slow algorithm : it could take days to sort a few million elements

even Obama knows this 😀

it depends on 3 simple steps :

repeat those steps until the array is sorted :

  1. loop on the array
  2. compare each two consecutive elements , and swap them if they are not in order
  3. go to step 1

watch this video , it explains a lot ( you can skip the other kinds of sorting )

you can check its code :

void bubbleSort (int S[], int n) {
 bool isSorted = false ;

 while(!isSorted) { // repeat until it's sorted
  isSorted = true;
  for(int i = 0; i < n-1; i++) { // step one : loop on the elements
   if(S[i] > S[i+1]) // compare each two consecutive elements
   {
    int temp = S[i]; // swap them
    S[i] = S[i+1];
    S[i+1] = temp ;
    isSorted = false; // if all elements are sorted the flag won't change
   }
  }
  n--; // the largest element has already bubbled to the end
 }
}

the best case here is when the array is already sorted , or only consecutive elements are unsorted , so it will be sorted with one pass of the while loop only , giving a complexity of O(n)

the worst case here occurs when the array is reversed : you will need to repeat the while loop n times , so the complexity will be n^2

Insertion sort ( O(n^2) ) :

Insertion sort : best case O(n) , average case O(n^2) , worst case O(n^2)

so it’s a basic algorithm , like sorting a pile of cards : from left to right you pick a card and insert it into its right place , then the next card , and so on till the pile is sorted

take a look on this video

yet another stupid slow sorting algorithm 😀 , the only difference from before is that Obama forgot to mention it , kidding 😀

actually insertion sort is faster than bubble sort when dealing with presorted elements in the array , how ??

imagine you are handling an element that is already sorted into its place :

all you need is to compare it with the largest element in the left ( sorted ) part to recognize that it’s bigger , so it’s already in the right place
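the card-pile idea above reads naturally in code . a rough sketch , with our own names :

```cpp
#include <vector>
using namespace std;

// insertion sort sketch : pick the next card and shift it left
// until it sits in its right place among the already-sorted cards
void insertionSort(vector<int> &a) {
    for (int i = 1; i < (int)a.size(); i++) {
        int card = a[i];
        int j = i - 1;
        // an element already in place fails this test immediately ,
        // which is why presorted input costs only O(n) overall
        while (j >= 0 && a[j] > card) {
            a[j + 1] = a[j];   // shift the bigger card one slot right
            j--;
        }
        a[j + 1] = card;       // insert the card into its place
    }
}
```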

Merge sort  ( O(nlog(n))  ):

Merge sort : best case O(n log n) , average case O(n log n) , worst case O(n log n) . advantages : very fast compared to the n^2 algorithms

it’s considered a fast sorting algorithm with low complexity . it depends on a basic rule : sorting and merging small arrays is much faster than sorting one large array .

it uses 3 basic rules :

  • arrays of one element only are already sorted
  • divide any array into two halves , and so on , till you have n arrays each of one element
  • merge each two consecutive arrays in order // so if we need to merge the two arrays in the figure , we compare the first elements of each array , then drop the smaller one down into the new array

then compare the next two front elements , and so on , till the whole array is merged
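the three rules translate almost line-for-line into a sketch ( an illustrative version that copies the halves , not an optimized one ; the function names are ours ) :

```cpp
#include <vector>
using namespace std;

// rule 3 : merge two already-sorted arrays by comparing the front
// elements and dropping the smaller one into the new array
vector<int> mergeSorted(const vector<int> &l, const vector<int> &r) {
    vector<int> out;
    size_t i = 0, j = 0;
    while (i < l.size() && j < r.size())
        out.push_back(l[i] <= r[j] ? l[i++] : r[j++]);
    while (i < l.size()) out.push_back(l[i++]);   // leftovers are already in order
    while (j < r.size()) out.push_back(r[j++]);
    return out;
}

vector<int> mergeSort(const vector<int> &a) {
    if (a.size() <= 1) return a;                   // rule 1 : one element is sorted
    size_t mid = a.size() / 2;                     // rule 2 : split into two halves
    vector<int> left(a.begin(), a.begin() + mid);
    vector<int> right(a.begin() + mid, a.end());
    return mergeSorted(mergeSort(left), mergeSort(right));
}
```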

take a look on this video :

so why does it have a complexity of nlog(n) ??? :

each level consists of n elements , and merging depends on comparing elements pair by pair , so merging each level has a complexity of O(n) , and since we have log(n) levels , the merging is repeated log(n) times
overall we will have a complexity of O(nlog(n)) .

…to be continued with more Sorting algorithms  ( quick sort , selection sort , heap sort , BST sort  )  =)

click here for part two 

thanks goes to :

NameSpaces “using namespace std “


actually , most of us , when beginning to write C++ code for example , write

using namespace std ;

but most of us don’t recognize what that means and what a namespace is , so this topic will basically talk about namespaces

what’s the NAMESPACE :

it’s an entity that can group inside it a number of classes , objects and functions . its name is divided into two sections , ” name ” and ” space ” , so it’s actually a space designed for the names of variables , to prevent the collision of names . suppose you have a function called connect () and you want to make a newer version of it inside the same file , and you don’t want to go through a lot of naming like Connect_new , Connect_newer …etc , so we assign a namespace for each one

to access elements inside the namespace we need to use the operator ” :: ”

for example

namespace mynamespace
{
int x , y ;
}

int main () {
mynamespace::x = 1 ; // accessing elements inside the namespace
mynamespace::y = 2 ;
return 0 ;
}

multiple name spaces in the same file :
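for example , two namespaces in one file could look like this ( both namespace names and the connect() return values are made up purely for illustration ) :

```cpp
#include <string>

// two namespaces in the same file : each holds its own connect()
// without the names colliding
namespace oldversion {
    std::string connect() { return "old connect"; }
}

namespace newversion {
    std::string connect() { return "new connect"; }
}

// the :: operator picks exactly which one you mean :
//   oldversion::connect();   newversion::connect();
```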


more things that cause naming collisions :

if you are including two header files , and those header files have a function with the same name inside them , which function will get executed when called ? actually nothing will get executed , and an error appears :

 error C2084: function 'connect()' already has a body

the using keyword :

using is a keyword that introduces a certain namespace into a specific region , so when you write

using namespace mynamespace ;

you can call the connect function without using the :: operator

mynamespace::connect () ;
using namespace mynamespace ;
connect() ;    // it should work now  =)

Namespace STD :

all the standard libraries of C++ lie inside the namespace std , so we usually write using namespace std to access its members at once , instead of writing

std::cout<<"hello world " ;
std::cin>> x ;
using namespace std ;
cout <<"now no need to use std:: " ;

that’s it , this is actually everything about the explanation of ” using namespace std ; ” . for any further questions don’t hesitate to ask here or DM me , i always check my DMs @hadyelsahar

Linked Lists in C++


First of all :

working with linked lists requires a good knowledge of pointers , so if you still find pointers a hard topic you can practice them first
Linked lists are a type of data structure :

In computer science, a linked list (or more clearly, “singly-linked list”) is a data structure that consists of a sequence of nodes each of which contains a reference (i.e., a link) to the next node in the sequence.

source :wikipedia

so the linked list is a type of data structure that can stand alone or be used to create other data structures . it consists of nodes , and each node has a reference to the next node , ie : ” linked to it ” , until the final node , which doesn’t have anything to point to

you can create a linked list using classes or structures , but we are using a structure here , and it may be a good thing just to remember how to deal with structures

here’s a little example of how to initialize a linked list item and fill in its data

#include <iostream>
#include <string>
using namespace std ;

struct listItem {
string name ; // name of the item
listItem * next ; // pointer to the next node
} ;

int main (){
listItem myListItem ; // creating a new instance of listItem
myListItem.name = " Greetings Egyptian Geeks " ; // filling its data
myListItem.next = NULL ; // nothing to point to yet
cout << myListItem.name ;
listItem * head = &myListItem ; // head will always point to the first node
return 0 ;
}

download from here

simple enough !! , it’s a normal structure , so let’s move to the NEW part !

Adding New Item to the beginning of the LinkedList

listItem myListItem2 ; // creating a new instance of listItem
myListItem2.name = " add me to the beginning " ; // filling its data

myListItem2.next = head ; // making it point to the next item - that was first -

head = & myListItem2 ; // making the head pointer point to the new first item

Adding new element to the End

mylastListItem.Next = &newLastItem ;

Adding a new element in the Middle

we have two list items , listitem1 and listitem2 , and we want to put middleListItem in the middle . we create a new temp pointer to save the address of listitem2 in it - as we can’t access it except through listitem1.next

listItem * temp = listitem1.next ;
listitem1.next = & middleListItem ;
middleListItem.next = temp ;

Delete elements after a specific number of nodes (n):

listItem * temp = head ;
for( int i = 0 ; i < n - 1 ; i++ )
{
temp = temp->next ; // hint : temp->next is a shorthand for (*temp).next
}
temp->next = NULL ; // cutting the list after the n-th node , which becomes the final one

of course this topic doesn’t cover all the operations done on linked lists , it’s all about the concept , but the other operations don’t contain any new concepts : they’re a mix of conditions and pointers

for any suggestions or additions , feel free to post below or DM me on twitter @hadyelsahar , i do always check my DMs

Vectors ,the Dynamic Arrays “D & A episode #2 ”


what’s a vector ?

simply ( not precisely ) we can say that the vector is a dynamic array of variables , structs or objects in which you insert data at the end ; and this shows a great difference from arrays , which are created with a fixed size . this is simply why we would use a vector

declaring vectors :

  • vector <int> myvector ;   >> declaring an empty vector of integers
  • vector <float > myvector (5)  >> declaring a vector of  floats having 5 spaces
  • vector <int> myvector (10,1) >> declaring a vector of  integers having 10 spaces all initialized with value 1

Accessing elements in a vector :

we will focus on two ways to access elements inside the vector

the normal method ” indexing ” []  :

each element in the vector has an index , beginning with index 0 for the first element , 1 for the second and …etc till the final element , and we use square brackets to write the index inside

myvector[1] ; // to select the second element
myvector[n-1] ; // to select the final element among n elements

the function   at ()   :

the vector class has a member method called at() , it takes the index of an element and returns the element at that index

myvector.at(1) ;  // to index the second element in the vector 

Are both  the same ? :

actually no , because with the first method no index range checking happens in the standard library’s vector , so an access out of the array’s index bounds is not caught by the compiler , for example

vector<int> myvector(5,1) ; cout<< myvector[6] ;

this won’t be caught by the compiler as an error : the behavior is undefined , maybe rubbish is printed

range checking can be done by using the at method ;

ie : myvector.at( i ) is the same as myvector[ i ] , except that an exception is thrown if i is out-of-bounds
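a quick sketch of the difference : at() throws a catchable out_of_range exception instead of silently reading rubbish ( the helper function here is made up for illustration ) :

```cpp
#include <stdexcept>
#include <vector>
using namespace std;

// returns the element at index i , or a fallback value when i is
// out of bounds - possible only because at() throws on bad indexes
int getOrDefault(const vector<int> &v, size_t i, int fallback) {
    try {
        return v.at(i);              // range-checked access
    } catch (const out_of_range &) { // v[i] would never get here :
        return fallback;             // it is undefined behavior instead
    }
}
```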

How to fill  a vector :

dealing with vectors is the same as dealing with arrays so to fill a certain element in a vector you say
myvector[5] = 13 ;

Inserting and Deleting Elements inside the vector :

  • push_back() method is used to insert a new element at the end of the vector // it takes the value of the element as an argument , ex : myvector.push_back(15) ;

  • pop_back() method is used to remove the last element in the vector


Getting the size of  a vector  & changing it :

  • size() method is used to return the size of the vector = the number of elements in the vector
  • resize() takes two arguments : the new size and the value of the newly added elements . if you don’t pass the second argument the new elements will have a value of zero by default
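putting the methods above together in one short sketch ( the function name is ours , just to exercise the calls ) :

```cpp
#include <vector>
using namespace std;

// returns the final size after a few grow/shrink operations ,
// exercising push_back / pop_back / resize / size
size_t vectorDemo() {
    vector<int> myvector(3, 1);   // three elements , all 1
    myvector.push_back(15);       // insert at the end -> size is now 4
    myvector.pop_back();          // remove the last element again -> 3
    myvector.resize(5, 7);        // grow to 5 ; the two new slots get 7
    myvector.resize(2);           // shrink ; the extra elements are dropped
    return myvector.size();       // 2
}
```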

Hello world program in C++ & VS2010 “D&A episode #1”


before going into code we need first to settle some important information

C++ was originally called ” C with classes ” , so it’s really a superset of C , however C++ has a lot more features than C , including some basic things like inputs and outputs

do you mean that scanf() , printf() and getche() work in C++ ? yeah they do =) , actually all C functions do work in C++

Great , so that means we don’t need to read about functions , arrays and all the C stuff … yeah , however we will elaborate some important things , and of course the things related to data structures

so let’s  code :

in the beginning , let’s describe how to configure visual studio for a hello world program , with a sample code to make a simple hello world program

the source code project  from here : Download

  • open a new project : ctrl + shift + N
  • click new win32 console application
  • click on console application and empty project
  • then on the right , right-click in the solution explorer
  • if you can’t find the solution explorer , show it from View ( next to File in the menu ) or click ctrl + alt + L
  • and write the following in the main file

  • #include <iostream>
    using namespace std ; 
    int main () { 
     cout << "hello world " ; 
     return 0 ;
    }
  • click ctrl + f5 to run

NOTES :

  • #include <iostream> // this includes the library for console input and output .. it has a lot of similarities with the ” stdio.h ” ( standard input and output ) library in C
  • using .. this keyword in C++ is like saying ” assume that we are using the following ” , so we write using namespace std in order to write cout alone instead of std::cout ( we will explain what a namespace is later on )
  • cout is like printf , it takes a string and prints it to the console