Sorting and Searching Computer Science Questions

1. Given a sorted array arr[] of n elements, write a function to search a given element x in arr[].

A full answer to this question can run fairly long, but the gist is that you should use binary search rather than linear search. Linear search runs in O(n) time, while binary search exploits the sorted order to run in O(log n).

Khan Academy has a great article on implementing this in JavaScript: Implementing binary search of an array.
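For reference, here is a minimal iterative sketch in Python (the function name and example values are just for illustration; the Khan Academy article above walks through a JavaScript version):

```python
def binary_search(arr, x):
    """Return the index of x in sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # midpoint of the remaining interval
        if arr[mid] == x:
            return mid
        elif arr[mid] < x:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```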

2. What are the different types of sorting algorithms, and when do you use each?

From Stack Overflow:

Let's start with a definition. A stable sort is one that's guaranteed not to reorder elements with identical keys.

Recommendations:

  • Quick sort: When you don't need a stable sort, and average-case performance matters more than worst-case performance. A quick sort is O(N log N) on average, O(N^2) in the worst case. A good implementation uses O(log N) auxiliary storage in the form of stack space for recursion.
  • Merge sort: When you need a stable O(N log N) sort, this is pretty much your only option. The only downsides are that it uses O(N) auxiliary space and has a slightly larger constant than a quick sort. There are some in-place merge sorts, but they are all either not stable or worse than O(N log N). Even the O(N log N) in-place sorts have so much larger a constant than the plain old merge sort that they're more theoretical curiosities than useful algorithms.
  • Heap sort: When you don't need a stable sort and you care more about worst-case performance than average-case performance. It's guaranteed to be O(N log N) and uses O(1) auxiliary space, meaning that you won't unexpectedly run out of heap or stack space on very large inputs.
  • Introsort: This is a quick sort that switches to a heap sort after a certain recursion depth to get around quick sort's O(N^2) worst case. It's almost always better than a plain old quick sort, since you get the average case of a quick sort, with guaranteed O(N log N) performance. Probably the only reason to use a heap sort instead of this is severely memory-constrained systems where O(log N) stack space is practically significant.
  • Insertion sort: When N is guaranteed to be small, including as the base case of a quick sort or merge sort. While this is O(N^2), it has a very small constant and is a stable sort.
  • Bubble sort, selection sort: When you're doing something quick and dirty and for some reason you can't just use the standard library's sorting algorithm. The only advantage these have over insertion sort is that they are slightly easier to implement.
  • Non-comparison sorts: Under some fairly limited conditions it's possible to break the O(N log N) barrier and sort in O(N). Here are some cases where that's worth a try:
      • Counting sort: When you are sorting integers with a limited range (a sketch follows this list).
      • Radix sort: When log(N) is significantly larger than K, where K is the number of radix digits.
      • Bucket sort: When you can guarantee that your input is approximately uniformly distributed.
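As a concrete illustration of the non-comparison case, here is a minimal counting sort sketch in Python (the function name and the k parameter are illustrative, not from the Stack Overflow answer). It runs in O(N + K), and iterating over the input in its original order is what keeps the sort stable:

```python
def counting_sort(arr, k):
    """Stable counting sort for integers in the range [0, k)."""
    counts = [0] * k
    for value in arr:
        counts[value] += 1
    # Prefix sums turn counts into the starting output position for each key.
    total = 0
    for value in range(k):
        counts[value], total = total, total + counts[value]
    out = [0] * len(arr)
    for value in arr:              # input order preserved -> stable
        out[counts[value]] = value
        counts[value] += 1
    return out

print(counting_sort([4, 1, 3, 4, 0, 2], k=5))  # [0, 1, 2, 3, 4, 4]
```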

3. What is the time complexity of binary search?

The time complexity of binary search is O(log n): each comparison halves the remaining search interval, so at most about log2(n) comparisons are needed. For example, an array of one million elements is exhausted after roughly 20 steps.

4. When does the worst case of QuickSort occur?

From Geeks for Geeks:

In QuickSort, we select a pivot element and then partition the given array around it, placing the pivot at its correct position in the sorted array.

The worst case of QuickSort occurs when one part after partitioning contains all the remaining elements and the other part is empty. For example, if the input array is already sorted and the last or first element is chosen as the pivot, the worst case occurs.
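A short sketch in Python (not from the GeeksforGeeks article) makes this concrete: with a last-element pivot and a Lomuto-style partition, an already-sorted input produces a maximally unbalanced split at every step:

```python
def quicksort(arr, lo=0, hi=None):
    """In-place quicksort with a last-element pivot (Lomuto partition)."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quicksort(arr, lo, p - 1)
        quicksort(arr, p + 1, hi)

def partition(arr, lo, hi):
    pivot = arr[hi]                # last element as pivot
    i = lo
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    return i

# On sorted input the pivot is always the maximum, so every partition
# puts n-1 elements on one side and none on the other: O(n^2) time,
# and with this naive recursion, O(n) stack depth on large inputs.
data = list(range(10))
quicksort(data)
print(data)
```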

5. What are Divide and Conquer algorithms? Describe how they work. Can you give any common examples of the types of problems for this approach that might be used?

From TopTal:

"Divide and Conquer algorithms are a paradigm for solving problems that involve several basic steps. First, we divide the problem into smaller pieces and work to solve each of them independently. Once we’ve solved all of the pieces, we take all of the resulting smaller solutions and combine them into a single integrated comprehensive solution.

This process can be performed recursively; that is, each “sub problem” can itself be subdivided into even smaller parts if necessary. This recursive division of the problem is performed until each individual problem is small enough to become relatively trivial to solve.

Some common examples of problems that lend themselves well to this approach are binary search, sorting algorithms (e.g., Merge Sort, QuickSort), optimization of computationally complex mathematical operations (Exponentiation, FFT, Strassen’s algorithm), and others."
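Merge Sort is perhaps the cleanest illustration of the paradigm. A minimal Python sketch of the divide/conquer/combine steps (helper names are illustrative):

```python
def merge_sort(arr):
    """Divide: split in half. Conquer: sort each half. Combine: merge."""
    if len(arr) <= 1:              # trivially small subproblem
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:    # <= keeps the merge stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```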