Even simple algorithms, such as the early problems of Project Euler, allow for some interesting approaches. The problem asks for the largest prime factor of a number N. My initial approach was to generate the prime factorization of N and take the largest value. The idea: if you iterate upwards and divide factors out of N, N is slowly divided down until the iterator equals N, which equals the largest factor. The nice part of this approach is that the iteration range shrinks continuously as factors are pulled out, so it only iterates as many times as necessary to hit the largest prime.
The pseudo-code follows:
i = 2
n = N
while i <= n                   # Worst case: iterate all the way up to n
  while n % i == 0             # Take out every copy of the same factor
    n = n / i                  # Reduce the iterating range
    largest_prime_factor = i   # The last factor stripped is the largest so far
  i += 1
return largest_prime_factor
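Translated into runnable Python (the function name and the running `largest` variable are my choices, not the original), the idea looks like this:

```python
def largest_prime_factor(n):
    """Divide each factor out of n as it is found, shrinking the range."""
    i = 2
    largest = 1
    while i <= n:            # worst case: climb all the way up to n
        while n % i == 0:    # strip out every copy of this factor
            n //= i
            largest = i      # the last factor stripped is the largest so far
        i += 1
    return largest

print(largest_prime_factor(13195))  # the Project Euler example
```

Note that `i` only ever divides `n` when it is prime: any composite `i` would already have had its prime factors divided out at smaller values of `i`.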
My code runs very well for numbers that have a large number of factors but gets worse as N has larger prime factors. The worst case for my algorithm is if N is prime.
Here the advantages of collaboration showed up, as I shared my solution with Sitong Peng. His solution approached the problem from an entirely different angle: he searched for pairs of numbers where x * y = N and checked the larger factor of each pair to see if it was prime. Because he also iterated upward from 2, the first prime cofactor found is the answer. Sitong's code translated into pseudo-code, along with a step-through of the process, follows:
# This block is credit to Sitong Peng
def is_prime?(n)
  if n == 1
    return false
  if n < 4
    return true                  # 2 and 3 are prime
  for i in range(2, sqrt(n) + 1)
    if n % i == 0
      return false
  return true

def largest_prime_factor(n)
  i = 2
  if is_prime?(n)
    return n
  while i < n
    if n % i == 0
      if is_prime?(n/i)
        return n/i
    i += 1

# Step-through of the process
Take for example 30. The pairs are:
 1 30  # check 30 if prime (no)
 2 15  # check 15 if prime (no)
 3 10  # check 10 if prime (no)
 5  6  # check 6 if prime (no)
 6  5  # check 5 if prime (yes) <- Answer
10  3
15  2
30  1
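The same pair-walking idea in runnable Python (my translation of the pseudo-code above; `isqrt` stands in for `sqrt` to keep the range integral):

```python
from math import isqrt

def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n == 1:
        return False
    if n < 4:
        return True              # 2 and 3 are prime
    for i in range(2, isqrt(n) + 1):
        if n % i == 0:
            return False
    return True

def largest_prime_factor(n):
    """Walk the factor pairs (i, n // i); the first prime cofactor wins."""
    if is_prime(n):
        return n
    i = 2
    while i < n:
        if n % i == 0 and is_prime(n // i):
            return n // i
        i += 1
```

Because `i` climbs from 2, the cofactor `n // i` descends from the top, so the first prime it hits is necessarily the largest prime factor.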
An interesting property of this algorithm is that my worst-case scenario happens to be Sitong's best case: the first number he checks is N itself, while mine would have to iterate from 2 all the way up to N. Sitong's worst-case scenario, on the other hand, is a number such as 2^n, or any other number whose largest prime factor is small: his algorithm has to iterate until i reaches the larger member of the pair before it finds the first prime. Another step-through follows:
# Worst case for Sitong's solution
Take for example 16:
 1 16  # check 16 if prime (no)
 2  8  # check 8 if prime (no)
 4  4  # check 4 if prime (no)
 8  2  # check 2 if prime (yes)
16  1
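To make the best-case/worst-case contrast concrete, here is a quick sketch (counter instrumentation and function names are mine) that counts how many values of i each approach examines:

```python
def division_steps(n):
    """Iterations of the divide-out approach (while i <= n)."""
    i, steps = 2, 0
    while i <= n:
        steps += 1
        while n % i == 0:
            n //= i
        i += 1
    return steps

def pair_steps(n):
    """Iterations of the factor-pair approach, where an initial
    is_prime(n) check handles bare primes in a single step."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))
    if is_prime(n):
        return 1
    i, steps = 2, 0
    while i < n:
        steps += 1
        if n % i == 0 and is_prime(n // i):
            return steps
        i += 1

print(division_steps(16), pair_steps(16))  # powers of two favor dividing out
print(division_steps(97), pair_steps(97))  # a bare prime favors the pair scan
```

For 16, dividing out finishes after examining a single candidate, while the pair scan has to walk up to i = 8; for the prime 97, the situation inverts.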
With two algorithms that are almost inverses in terms of best and worst cases, it only makes sense to combine the two into a super algorithm! With a simple two-line change in the code, a new algorithm was born. Combining my idea of shrinking the search space by removing factors with Sitong's prime checking for very large prime factors yielded an algorithm that deals nicely with both of our worst cases. This post looks much longer than our solution, but the problem-solving process that led to the combination was a lot of fun to share. The pseudo-code follows:
# Recycled is_prime function from Sitong
def is_prime?(n)
  if n == 1
    return false
  if n < 4
    return true
  for i in range(2, sqrt(n) + 1)
    if n % i == 0
      return false
  return true

# New solution
def largest_prime_factor(n)
  if is_prime?(n)
    return n
  i = 2
  largest = 1
  while i < n
    while n % i == 0    # 2 lines of changes: this while loop grabs all small primes
      n = n / i         # and reducing n shrinks the iteration range
      largest = i
    if is_prime?(n)     # n has already been divided down, so test n itself
      return n
    i += 1
  return largest        # reached only when n was divided all the way down to 1
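A runnable Python version of the combined algorithm (again my translation; the `largest` bookkeeping covers the case where n is divided all the way down to 1, as with powers of two):

```python
from math import isqrt

def is_prime(n):
    """Trial division up to sqrt(n)."""
    if n == 1:
        return False
    if n < 4:
        return True
    return all(n % i for i in range(2, isqrt(n) + 1))

def largest_prime_factor(n):
    """Strip small factors while scanning for a prime remainder."""
    if is_prime(n):
        return n
    i, largest = 2, 1
    while i < n:
        while n % i == 0:   # divide out every copy of i, shrinking the range
            n //= i
            largest = i
        if is_prime(n):     # the reduced n may already be the answer
            return n
        i += 1
    return largest          # n was divided down to 1; the last factor wins
```

Primes return immediately via the up-front check, powers of two collapse in the first pass of the inner loop, and mixed numbers like 13195 stop as soon as the stripped-down remainder tests prime.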
To see our epiphany from Sitong's perspective, check out his post!