# Sorting Algorithms Summary — When to Use Which

*Some Basic Sorting Algorithms’ Implementation Summary and Code*

## Bubble Sort

- Comparison-based
- Compare the first and second elements; if they are out of order, swap them. Then compare the second and third, and so on…

At each pass, repeat the same comparisons over the first (list.size − pass number) elements, starting at index 0 each time. *Optimized Bubble Sort*: keep a “swapped” flag; if its value does not change during a full pass, the array is already ordered and there is no need to continue.

- Can be used **to check if a list is ordered** or not.

*Complexity:*

Time

Best: O(n)

Worst / Avg: O(n²)

Space

O(1)
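The optimized variant with the “swapped” flag can be sketched as follows (Python; the function name is illustrative):

```python
def bubble_sort(arr):
    """Sort arr in place; exits early once a pass makes no swaps."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end,
        # so the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps this pass -> the list is ordered
            break
    return arr
```

On an already-sorted input, the first pass makes no swaps and the loop exits immediately, giving the O(n) best case.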

## Insertion Sort

- Comparison-based
- Shift elements to the left of the current index one position to the right until the correct spot for the current element is found.

- Inefficient for large data sets.

- Faster than others for **small data sets**.

*Complexity:*

Time

Best: O(n)

Worst / Avg: O(n²)

Space

O(1)
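A minimal sketch of the shift-and-insert idea (Python; the function name is illustrative):

```python
def insertion_sort(arr):
    """Sort arr in place by inserting each element into the sorted prefix."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements on the left one slot to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # drop the current element into its spot
    return arr
```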

## Selection Sort

- Comparison-based
- Scan all the remaining elements and move the smallest one found to the front (the swap occurs once, at the end of each pass).

- Inefficient for large data sets; prefer it for **small data sets**.

- Unlike optimized bubble sort, it **cannot detect early that a list is already ordered**: every pass scans all remaining elements regardless.

- Can be used when **writes are costly**, since at most one swap occurs per pass (at the end of the iteration).

- Prefer to use **when the array is NOT partially sorted** and checking all the elements is compulsory.

*Complexity:*

Time

Best / Worst / Avg: O(n²)

Space

O(1)
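The single-swap-per-pass behavior looks like this in a minimal sketch (Python; the function name is illustrative):

```python
def selection_sort(arr):
    """Sort arr in place; at most one swap per pass."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        # Find the smallest element in the unsorted suffix
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            # The only write of this pass happens here, at the end
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```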

## Merge Sort

- Divide and Conquer
- Recursively divide array into two halves until single element sub-arrays are reached (base case).
- Merge adjacent sub-arrays back together in sorted order.

- Highly parallelizable and **time-efficient**, with a guaranteed time complexity of O(n log n).

- **Not** as **space-efficient** as other sorting algorithms (it needs O(n) auxiliary space).

- Preferred for linked lists.

*Complexity:*

Time

Best / Worst / Avg: O(n log n)

Space

O(n)
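The divide-and-merge steps above can be sketched as follows (Python; the function name is illustrative, and this version returns a new list rather than sorting in place):

```python
def merge_sort(arr):
    """Recursively split, then merge sorted halves into a new list."""
    if len(arr) <= 1:          # base case: single element (or empty)
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])       # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```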

## QuickSort

- Divide and Conquer
- Partition the array around a pivot (commonly the rightmost element, though a random choice is also used).
- Place smaller elements to the left of the pivot and larger ones to the right, then recurse on both sides.

- Favored by modern computer architectures: its in-place, sequential access pattern is cache-friendly.

- Preferred **when time and space complexity matters**.

- Performs better than merge sort in practice.

- Preferred for arrays.

*Complexity:*

Time

Best / Avg: O(n log n)

Worst: O(n²)

Space

O(log n)
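A minimal sketch using the rightmost element as pivot (Lomuto-style partition; Python, function name illustrative):

```python
def quick_sort(arr, lo=0, hi=None):
    """Sort arr in place between indices lo and hi (inclusive)."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        pivot = arr[hi]            # rightmost element as pivot
        i = lo
        # Move elements <= pivot to the left side
        for j in range(lo, hi):
            if arr[j] <= pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]  # pivot lands at index i
        quick_sort(arr, lo, i - 1)         # recurse on the left part
        quick_sort(arr, i + 1, hi)         # recurse on the right part
    return arr
```

The worst case (O(n²)) occurs when the pivot is always the smallest or largest element, e.g. on an already-sorted array with this pivot choice; randomizing the pivot makes that unlikely.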

## Counting Sort

- Counts of the elements are stored in a helper count array (of size max + 1, where max is the largest value present).
- The count array is updated to hold the cumulative sums of its entries, which give each value's final index.

- Prefer to use it when the range of the keys (values) is not much larger than the number of elements.

*Complexity:*

Time

Best / Avg / Worst: O(n + k)

Space

O(max)
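A minimal sketch for non-negative integers (Python; the function name is illustrative):

```python
def counting_sort(arr):
    """Stable counting sort; assumes non-negative integer keys."""
    if not arr:
        return arr
    max_val = max(arr)
    count = [0] * (max_val + 1)      # helper count array, size max + 1
    for x in arr:
        count[x] += 1
    for i in range(1, len(count)):   # cumulative sums: final positions
        count[i] += count[i - 1]
    out = [0] * len(arr)
    for x in reversed(arr):          # reversed iteration keeps it stable
        count[x] -= 1
        out[count[x]] = x
    return out
```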

A good use for counting sort's auxiliary count array is finding the median of an array, as in the “Fraudulent Activity Notification” problem on HackerRank.


## Radix Sort

- Non-comparative
- Sorts by grouping elements digit by digit, typically from the least significant digit to the most significant.
- Counting Sort is used as an intermediate sort in Radix Sort.

- Space-inefficient.

- Prefer to use it for data sets whose numbers span large ranges (where a single counting sort would need too much memory).

*Complexity:*

Time

Best / Avg / Worst: O(d · (n + k)), where d is the number of digits and k the base

Space

O(n + k)
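A minimal LSD (least-significant-digit-first) sketch for non-negative integers, using a stable counting sort per digit (Python; the function name is illustrative):

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers, base 10."""
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1                              # current digit's place value
    while max_val // exp > 0:
        # Stable counting sort keyed on the current digit
        count = [0] * 10
        for x in arr:
            count[(x // exp) % 10] += 1
        for i in range(1, 10):
            count[i] += count[i - 1]
        out = [0] * len(arr)
        for x in reversed(arr):          # reversed keeps stability
            d = (x // exp) % 10
            count[d] -= 1
            out[count[d]] = x
        arr = out
        exp *= 10
    return arr
```

The per-digit sort must be stable; otherwise the ordering established by earlier (less significant) digits is destroyed.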

## Bucket Sort

- Scatter-gather approach
- Decide on a number of buckets (it can be array.length or a fixed value such as 10); this choice determines how elements are mapped into buckets.
- Scatter the elements into the buckets, sort each bucket individually, then gather them back in order.

- Prefer to use when elements are uniformly distributed within a range.

- Preferred for floating-point numbers uniformly distributed over a known range.

*Complexity:*

Time

Best: O(n + k)

Avg: O(n)

Worst: O(n²)

Space

O(n + k)
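A minimal scatter–sort–gather sketch, assuming floats uniformly distributed in [0, 1) (Python; the function name and bucket count are illustrative):

```python
def bucket_sort(arr, num_buckets=10):
    """Assumes elements are floats in [0, 1)."""
    buckets = [[] for _ in range(num_buckets)]
    for x in arr:                          # scatter
        buckets[int(x * num_buckets)].append(x)
    for b in buckets:                      # sort each bucket individually
        b.sort()                           # insertion sort is also common here
    result = []
    for b in buckets:                      # gather in bucket order
        result.extend(b)
    return result
```

If the input is not uniformly distributed, most elements land in a few buckets and the per-bucket sort dominates, giving the O(n²) worst case.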

## Heap Sort

- Uses the relationship between array indexes and tree nodes:
  *if “parent node” = i, then “left child” = 2i + 1 and “right child” = 2i + 2*
- A heap is a complete binary tree, which always fills from the left.

- A popular and efficient algorithm.

- Present in Linux Kernel.

- Efficient if we just want to find the smallest or largest number in a data set (Priority Queues).

*Complexity:*

Time

Best / Avg / Worst: O(n log n)

Space

O(1)
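The index arithmetic above drives the whole algorithm; a minimal in-place max-heap sketch (Python; function names are illustrative):

```python
def heap_sort(arr):
    """In-place heap sort using the array-as-tree index mapping."""
    n = len(arr)

    def sift_down(root, end):
        # Push arr[root] down until the max-heap property holds in [0, end)
        while True:
            child = 2 * root + 1               # left child = 2i + 1
            if child >= end:
                break
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1                     # right child = 2i + 2
            if arr[root] >= arr[child]:
                break
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):        # build the max-heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):            # repeatedly extract the max
        arr[0], arr[end] = arr[end], arr[0]    # move max to its final slot
        sift_down(0, end)                      # restore the heap
    return arr
```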

## Shell Sort

- Generalized version of Insertion Sort.
- To reduce the interval between compared elements, first sort elements that are far apart from each other.
- The interval check is performed using one of several well-known gap sequences.
- The default sequence goes like
*N/2, N/4, …, 1*

- Efficient for medium-sized data sets.

- Present in Linux Kernel.

*Complexity:*

Time

Best / Avg: O(n log n)

Worst: O(n²)

Space

O(1)
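A minimal sketch using the default N/2, N/4, …, 1 gap sequence (Python; the function name is illustrative):

```python
def shell_sort(arr):
    """Gapped insertion sort with the N/2, N/4, ..., 1 gap sequence."""
    n = len(arr)
    gap = n // 2
    while gap > 0:
        # Insertion sort over elements that are `gap` apart
        for i in range(gap, n):
            key = arr[i]
            j = i
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2          # shrink the gap; the last pass (gap = 1)
                           # is an ordinary insertion sort
    return arr
```

When gap reaches 1 the array is already nearly sorted by the earlier passes, so the final plain insertion sort runs close to its O(n) best case.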

Happy Coding!

References:

https://www.acodersjourney.com/selection-sort/

https://www.geeksforgeeks.org/

https://www.programiz.com/dsa