Quick-sort is another divide-and-conquer algorithm, but with the hard work (the comparisons) done in the divide stage.
Assume the problem is a sequence S of n unordered objects and we want to return S sorted. Using the divide-and-conquer design pattern:
Divide: If S has two or more elements, select a specific element called the pivot (frequently the pivot is the last element). Remove the elements from S and put them into three sequences:
L, storing elements less than the pivot,
E, storing elements equal to the pivot,
G, storing elements greater than the pivot.
Recur: Recursively sort sequences L and G.
Conquer: Put the elements back into S by first inserting L, then E, and finally G.
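The three steps above can be sketched in Python (a minimal, non-in-place version; the names L, E, and G follow the text):

```python
def quick_sort(S):
    """Three-way quick-sort following the divide/recur/conquer steps."""
    if len(S) < 2:
        return S                       # base case: already sorted
    pivot = S[-1]                      # frequently the last element
    L = [x for x in S if x < pivot]    # less than the pivot
    E = [x for x in S if x == pivot]   # equal to the pivot
    G = [x for x in S if x > pivot]    # greater than the pivot
    return quick_sort(L) + E + quick_sort(G)  # conquer: L, then E, then G
```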
Using the quick-sort tree form:
Note that the height of the quick-sort tree is expected to be O(lg n), but can be as bad as O(n) with bad pivots. Using the last element as pivot, an already sorted sequence produces a tree of height O(n). The height of the quick-sort tree depends on the choice of pivots.
Note: The pivot can be the first, last, or middle key. Which to use? Some variations of quick-sort use the median of the first, last, and middle keys.
Again, storage requirements make in-place techniques attractive. It is hard to make merge-sort in-place, but in-place quick-sort is easy. Note we have to keep track of which part of the array is being sorted.
Algorithm: inPlaceQuickSort(S, a, b)
// Input: Sequence S and two integers a and b representing ranks, the range to sort.
// Output: Sequence S with elements between ranks a and b sorted.
if a ≥ b then return // empty or one-element range,
// base case for recursion
pivot = S.elemAtRank(b) // selects pivot
left = a // scans right locating the end of L
right = b - 1 // scans left locating the end of G
while left ≤ right do
while left ≤ right and S.elemAtRank(left) ≤ pivot do
left ++ // element belongs in L; keep scanning
while left ≤ right and S.elemAtRank(right) ≥ pivot do
right -- // element belongs in G; keep scanning
if left < right then
// swap the out-of-place pair and continue scanning
S.swapElements(S.atRank(left), S.atRank(right))
// end outer while loop
// move pivot into its final position
S.swapElements(S.atRank(left), S.atRank(b))
// Recursive calls
inPlaceQuickSort(S, a, left - 1) // sort L part
inPlaceQuickSort(S, left + 1, b) // sort G part
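The pseudocode translates directly to Python on a plain list (a sketch using 0-based indices in place of ranks):

```python
def in_place_quick_sort(S, a, b):
    """Sort S[a..b] in place, pivoting on the last element S[b]."""
    if a >= b:                        # empty or one-element range: base case
        return
    pivot = S[b]
    left, right = a, b - 1
    while left <= right:
        while left <= right and S[left] <= pivot:    # scan right past L elements
            left += 1
        while left <= right and S[right] >= pivot:   # scan left past G elements
            right -= 1
        if left < right:                             # out-of-place pair: swap
            S[left], S[right] = S[right], S[left]
    S[left], S[b] = S[b], S[left]     # move pivot into its final position
    in_place_quick_sort(S, a, left - 1)
    in_place_quick_sort(S, left + 1, b)
```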
Expected running time is O(n lg n); the worst case is O(n²).
The above algorithm is not really in-place because it uses more than constant additional space. What is the additional space? The stack due to recursive calls adds storage proportional to the recursion depth. To make quick-sort truly in-place we need to make it iterative.
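One way to remove the recursion (a sketch, not from the text) is to manage an explicit stack of pending ranges; pushing the larger half first so the smaller half is processed next keeps the stack depth at O(lg n):

```python
def iterative_quick_sort(S):
    """Iterative quick-sort: an explicit list of ranges replaces the call stack."""
    stack = [(0, len(S) - 1)]
    while stack:
        a, b = stack.pop()
        if a >= b:
            continue                  # empty or one-element range
        pivot = S[b]
        left, right = a, b - 1
        while left <= right:          # same partition step as before
            while left <= right and S[left] <= pivot:
                left += 1
            while left <= right and S[right] >= pivot:
                right -= 1
            if left < right:
                S[left], S[right] = S[right], S[left]
        S[left], S[b] = S[b], S[left]
        lo_range = (a, left - 1)
        hi_range = (left + 1, b)
        # push the larger range first so the smaller one is popped next;
        # this bounds the pending-range stack at O(lg n) entries
        if lo_range[1] - lo_range[0] > hi_range[1] - hi_range[0]:
            stack.append(lo_range)
            stack.append(hi_range)
        else:
            stack.append(hi_range)
            stack.append(lo_range)
```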
The choice of pivot can make a big difference in running time (as seen from the worst case). To help avoid bad pivots, we can sample several values (3 or 5 points) and take their median.
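Median-of-three pivot selection might be sketched as follows (the helper name is my own); the chosen element would be swapped with S[b] before partitioning, so the rest of the algorithm is unchanged:

```python
def median_of_three(S, a, b):
    """Return the rank (index) of the median of S[a], S[middle], S[b]."""
    m = (a + b) // 2
    ranks = sorted([a, m, b], key=lambda i: S[i])  # order ranks by value
    return ranks[1]                                # rank holding the median value
```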
It does not make sense to recursively call quick-sort or merge-sort
down to the size of a single element: the overhead of a recursive
call is too expensive. Also, divide and conquer does not take
advantage of the fact that the sequence may already be nearly sorted. Small
sequences are likely to be nearly sorted, because the number of unsorted instances
compared to sorted ones becomes factorially smaller. Insertion
sort has cost O(n + k), where k is the number of
shifts needed for the insertions. The best case cost is O(n),
for an array which is already sorted. Also, insertion sort
can be performed truly in-place.
Algorithm: inPlaceInsertSort(S)
for insertIndex = 2 to S.size() do
insertElem = S.atRank(insertIndex)
sortIndex = insertIndex - 1
while sortIndex > 0 and insertElem < S.atRank(sortIndex) do
// find insert position and shift sorted elements right
S.atRank(sortIndex + 1) = S.atRank(sortIndex)
sortIndex --
S.atRank(sortIndex + 1) = insertElem // insert element in position
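The same algorithm in Python (0-indexed, so the outer loop starts at index 1 rather than rank 2):

```python
def insertion_sort(S):
    """In-place insertion sort: O(n + k), k = number of shifts performed."""
    for insert_index in range(1, len(S)):
        insert_elem = S[insert_index]
        sort_index = insert_index - 1
        # find the insert position, shifting sorted elements right
        while sort_index >= 0 and insert_elem < S[sort_index]:
            S[sort_index + 1] = S[sort_index]
            sort_index -= 1
        S[sort_index + 1] = insert_elem   # insert element in position
```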
Clearly the worst case is O(n²) and the best case is O(n),
for an already sorted array. The expected O(n) bound for nearly sorted
arrays is hard to prove, and we will save the proof for the Algorithms
course.
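Putting the two ideas together (a sketch, with a hypothetical cutoff of 10 elements): stop the quick-sort recursion on small ranges, leaving the array nearly sorted, then finish with a single insertion-sort pass:

```python
CUTOFF = 10   # hypothetical threshold; in practice tuned empirically

def partial_quick_sort(S, a, b):
    """Quick-sort that leaves ranges of CUTOFF or fewer elements unsorted."""
    if b - a + 1 <= CUTOFF:
        return                        # leave small ranges for insertion sort
    pivot = S[b]
    left, right = a, b - 1
    while left <= right:
        while left <= right and S[left] <= pivot:
            left += 1
        while left <= right and S[right] >= pivot:
            right -= 1
        if left < right:
            S[left], S[right] = S[right], S[left]
    S[left], S[b] = S[b], S[left]
    partial_quick_sort(S, a, left - 1)
    partial_quick_sort(S, left + 1, b)

def hybrid_sort(S):
    partial_quick_sort(S, 0, len(S) - 1)
    # the array is now nearly sorted, so one insertion-sort
    # pass finishes the job cheaply
    for i in range(1, len(S)):
        elem, j = S[i], i - 1
        while j >= 0 and elem < S[j]:
            S[j + 1] = S[j]
            j -= 1
        S[j + 1] = elem
```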