What Are Some Common Programming Algorithms?


To better understand the concept of algorithms in computer programming, let’s imagine that we have a collection of various shapes: circles, ovals, squares, rectangles, triangles, and so on. Your goal is to sort these shapes into several different sets. To organize them with a computer program, you could set up a loop that iterates over all of the shapes, determines each one’s shape, and assigns it to the matching group. Once all of the iterations are complete, you have a number of groups, each containing similar shapes. The complete list of steps required to solve this problem is what is known as an algorithm. In this tutorial, we’ll learn a bit about algorithms in Python.
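To make that concrete, here is a minimal sketch of what such a grouping loop might look like in Python (the shape names and the use of collections.defaultdict are illustrative choices, not part of the original example):

```python
from collections import defaultdict

# A hypothetical mix of shapes to organize
shapes = ["circle", "square", "triangle", "circle", "rectangle", "square"]

groups = defaultdict(list)
for shape in shapes:
    # Each iteration determines the shape and assigns it to its group
    groups[shape].append(shape)

print(dict(groups))
# {'circle': ['circle', 'circle'], 'square': ['square', 'square'],
#  'triangle': ['triangle'], 'rectangle': ['rectangle']}
```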


Algorithm Characteristics

Algorithms have various traits we can use to describe them. For instance, algorithms have both time complexity and space complexity.

Time Complexity

Time complexity describes how an algorithm’s running time grows relative to the size of the input it is given to work on.

Space Complexity

Space complexity describes how much memory and storage space an algorithm needs to complete the task it is assigned to do.
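To illustrate the difference (the function names below are our own), both of these functions take time proportional to the length of the input, but the second one also needs extra memory proportional to that length:

```python
def total(numbers):
    # O(n) time: visits each element once; O(1) extra space
    running = 0
    for n in numbers:
        running += n
    return running

def prefix_sums(numbers):
    # Still O(n) time, but O(n) extra space: stores one partial sum per element
    sums = []
    running = 0
    for n in numbers:
        running += n
        sums.append(running)
    return sums

print(total([1, 2, 3, 4]))        # 10
print(prefix_sums([1, 2, 3, 4]))  # [1, 3, 6, 10]
```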

Serial, Parallel, Exact and Approximate

Algorithms can be serial or parallel in nature, and they can produce exact or approximate results. A serial algorithm processes data sequentially, one step at a time. A parallel algorithm, on the other hand, can break the data into smaller pieces and work on them simultaneously. An exact algorithm produces a known, predictable value every time it runs, while an approximate algorithm tries to find an answer that might or might not be exactly right. Similarly, an algorithm that executes each step with an exact decision is known as a deterministic algorithm, while one that produces a solution using successive guesses, which become more accurate over time, is known as non-deterministic.
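As a small illustration of the exact versus approximate distinction (the examples below are ours, not from the original), an exact algorithm returns the same predictable answer every run, while an approximate, non-deterministic one such as a Monte Carlo estimate gets closer to the true value as it does more work:

```python
import random

def exact_mean(numbers):
    # Exact and deterministic: the same input always gives the same result
    return sum(numbers) / len(numbers)

def approximate_pi(samples=100_000):
    # Approximate and non-deterministic: random sampling produces an estimate
    # that tends to improve as the number of samples grows
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(exact_mean([2, 4, 6]))  # always 4.0
print(approximate_pi())       # roughly 3.14, varies from run to run
```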


Euclid’s Algorithm

Finding the greatest common divisor of two numbers is a common task. We can write a Python program to complete this task using Euclid’s Algorithm. The greatest common divisor of two numbers is the largest integer that divides both numbers without leaving a remainder. Suppose we have num1 and num2. The algorithm works by dividing num1 by num2 and looking at the remainder; for this, we can use the modulo operator. If the remainder is zero, we stop because we have found the greatest common divisor. Otherwise, we set num1 to num2, set num2 to the remainder, and repeat from step one until the remainder is zero. Here it is in Python.
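The sketch below is one straightforward way to express it; the function name gcd and the sample inputs are illustrative choices.

```python
def gcd(num1, num2):
    # Repeat until the remainder is zero; num1 then holds the
    # greatest common divisor of the original pair
    while num2 != 0:
        num1, num2 = num2, num1 % num2
    return num1

print(gcd(12, 9))
print(gcd(25, 15))
```

Running the two example calls prints the results below.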

3
5

Big-O Algorithm Performance

Big-O notation is used to describe algorithm performance: how the running time grows as the size of the input grows. The letter O is used because the growth rate of an algorithm’s time complexity is also referred to as its order of operation. Data structures can often support multiple types of operations, such as inserting or searching for values, and each may have its own order of operation.

Some Common Big-O Terms

Notation   | Description    | Example
O(1)       | Constant time  | Looking up a single element in an array
O(log n)   | Logarithmic    | Finding an item in a sorted array with a binary search
O(n)       | Linear time    | Searching an unsorted array for a specific value
O(n log n) | Log-linear     | Complex sorting algorithms like heap sort and merge sort
O(n²)      | Quadratic      | Simple sorting like bubble sort, selection sort, and insertion sort

The table above lists some Big-O terms in ascending order of time complexity. It starts with constant time, O(1), which means the operation does not depend on the number of elements in the data set; checking whether a number is even or odd, or looking up a specific index in an array, are examples. Next is O(log n), known as logarithmic time; finding a value in a sorted array using a binary search is an example. Then comes linear time, O(n), such as searching for an item in an unsorted array. O(n log n), or log-linear time, sits between linear and quadratic and is typical of efficient comparison sorts like merge sort and heap sort. Last in our table is O(n²), called quadratic time complexity, which means that as the number of items in the data set increases, the time it takes to process them grows with the square of that number, so it is not very efficient.
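As a rough illustration of how these orders show up in everyday Python (the specific operations below are our own examples):

```python
from bisect import bisect_left

data = list(range(1_000))

# O(1) constant time: indexing does not depend on the size of the list
first = data[0]

# O(log n) logarithmic: binary search on a sorted list
position = bisect_left(data, 750)

# O(n) linear: scanning a list for a value
found = 750 in data

# O(n log n) log-linear: Python's built-in sort on arbitrary data
ordered = sorted(data, reverse=True)

# O(n^2) quadratic: comparing every element of one range with every element of another
pairs = [(a, b) for a in range(100) for b in range(100) if a + b == 50]
```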


List Of Top Programming Algorithms

Here is a list of the most common programming algorithms you may come across.

  • Insertion Sort Algorithm Insertion sort is a basic sorting algorithm that constructs the final sorted array or list one item at a time.
  • Selection Sort Algorithm An in-place algorithm where the list is divided into two parts, the sorted part at the left end and the unsorted part at the right.
  • Bubble Sort Algorithm Iteratively steps through the list and compares adjacent elements, swapping them if they are in the wrong order.
  • Merge Sort Algorithm A divide and conquer approach that was invented by John von Neumann in 1945.
  • Quicksort Algorithm A comparison sort that can sort items of any type for which a “less-than” relation is defined.
  • Binary Search Algorithm Compares a target value to the middle element of a sorted array, discarding half of the remaining elements at each step (see the sketch after this list).
  • Breadth-First Search Algorithm Used for searching tree or graph data structures. It starts at the tree root and explores all of the nodes at the current depth before moving on to the nodes at the next depth level.
  • Depth First Search Algorithm Starts at the root node and explores as far as it can along each branch before backtracking.
  • Shortest Path In a Maze Algorithm Proceeds along the current path until a junction is reached, at which point a random decision about the next direction to follow is made.
  • Flood Fill Algorithm Used to determine a bounded area connected to a given node in a multi-dimensional array.
  • Floyd’s Cycle Detection Algorithm A pointer algorithm that uses only two pointers, which move through the sequence at different speeds.
  • Kadane’s Algorithm A dynamic programming approach to solving the “largest sum of a contiguous subarray” problem.
  • Longest Increasing Subsequence Algorithm Finds a subsequence of a particular sequence where the subsequence’s elements are in sorted order, lowest to highest, and where the subsequence is as long as possible.
  • Inorder, Preorder, Postorder Tree Traversal Algorithm Forms of tree traversal, referring to the process of visiting each node in a tree data structure exactly once.
  • Heap Sort Algorithm Heapsort can be thought of as an improved comparison-based selection sort.
  • Union-Find Algorithm A disjoint-set data structure that tracks a set of elements grouped into a number of disjoint subsets.
  • Kruskal’s Algorithm A minimum-spanning-tree algorithm that finds an edge of the lowest possible weight connecting any two trees in the forest.
  • Dijkstra’s Algorithm Used for finding the shortest paths between nodes in a graph, typically one with non-negative edge weights.
  • Floyd Warshall Algorithm Used for finding the shortest paths between all pairs of nodes in a weighted graph with positive or negative edge weights (but no negative cycles).
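To make one of these concrete, here is a minimal sketch of the binary search algorithm mentioned above (the function name and test values are illustrative):

```python
def binary_search(items, target):
    # items must already be sorted in ascending order
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid       # found: return the index of the target
        elif items[mid] < target:
            low = mid + 1    # discard the left half
        else:
            high = mid - 1   # discard the right half
    return -1                # target is not present

print(binary_search([2, 5, 7, 11, 19, 23], 11))  # 3
print(binary_search([2, 5, 7, 11, 19, 23], 4))   # -1
```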

What Are Some Common Programming Algorithms Summary

In this tutorial, we looked at an overview of various algorithms in computer science. Entire books are dedicated to this topic, so while we are unable to cover each algorithm in depth here, we do provide helpful links to each of the most commonly seen algorithms in computer science. Another great resource for algorithms can be found at Khan Academy, where they cover Binary Search, Asymptotic notation, Selection sort, Insertion sort, Recursive algorithms, Towers of Hanoi, Merge sort, Quick sort, Graph representation, and Breadth-first search.