*Parker Phinney founded Interview Cake, an interactive practice tool for preparing for coding interviews. He’s focused on shining a light on the black box of the algorithm design process, showing people how to reason their way to the best solutions. If you’re interested in further developing your algorithmic thinking skills, try some of our practice programming interview questions in Java, JavaScript, or Python.*

There’s a lot of mythos around Google’s software engineering interview questions. They’re fraught with tricky math puzzles, solvable only by geniuses who can Rain Man perfect solutions out of thin air. Unless you’re a child prodigy who built computers at age 10 and studied computer science at Stanford at age 16, you don’t stand a chance.

**Wrong! The Google interview is extremely learnable, and you don’t even need a computer science degree to learn how to win.**

To start, all you need is some general programming skills—the kind you can get on Treehouse. In this post, I’ll explain how to get from “new programmer” to “ready for the Google interview.”

## Understand the game

There’s nothing crazy or mysterious about Google interview questions. They’re just programming exercises. You can check out some practice Google interview questions for yourself. They’re not terribly different from the types of questions you get at Facebook, Amazon and most of the big-name tech companies. In fact, they aren’t terribly different from the types of questions you’ll get at *most* tech companies. Again, they’re *just programming exercises*.

Now, that doesn’t mean they’re *easy*. They’re not. They’re hard. They’re hard for everyone. But if you understand the game, you can learn how to beat them.

What makes Google interview questions hard is this: **Google interviewers aren’t just looking for an answer, they’re looking for the best answer.**

Not only that—you need to be able to show *why* a certain answer is best (a lucky guess isn’t enough). This means you’ll need a vocabulary for talking about *how efficiently a function solves a problem*, and a method for deriving more efficient solutions.

**What we’re talking about here is algorithm design, and it’s the main thing most engineers without a CS degree are lacking**.

It’s the main thing Google tests for in their interviews. But it doesn’t take 4 years to learn. More like 4 *weeks*. Or less.

Today, I want to give you a *taste* for that algorithm design process. My goal is to prove to you that this stuff is:

- Totally learnable
- Really fun
- A great way to elevate your engineering craft, whether or not you want to interview at Google

## A sample problem

The best coding interview problems have 3 answers: a bad answer, a good answer, and a great answer. All 3 answers are *correct*—what makes each answer better than the last is *how much time it takes to run*.

In each Google interview, a candidate spends as much as an hour working aloud—with some help from her interviewer—as she tries to derive the bad answer, tweak and improve to get to the good answer, and (if she’s good!) iterate further to get to the great answer.

The fastest way to get a feel for this process is to look at an example, so let’s work through a problem together. As we go, I’ll share the *thinking used to derive each answer*, so you can see that great coding interview performance isn’t just about being lucky enough to stumble into the right blind insight. There’s a learnable process to solving these problems.

Okay, let’s get into it. Here’s the problem:

**Write a function that takes an array of numbers and returns the greatest difference you can get by subtracting any two of those numbers.**

For example, if our input is `[5,8,6,1]`, the greatest difference we can get is 8 - 1, which is 7. So we should return `7`.

Let’s solve this together. How should we get started?

## The first solution

Here’s my first thought: we’re trying to find the *greatest difference*. So how about we find *all* of the possible differences, and return the greatest from among them?

To get every possible difference, we just need to subtract every number from every other number. We can do this by *nesting* two loops, with each one iterating over our input array. Here’s how that could look in Python:

```
def get_greatest_difference(nums):
    greatest_difference = 0
    for num1 in nums:
        for num2 in nums:
            difference = num1 - num2
            if difference > greatest_difference:
                greatest_difference = difference
    return greatest_difference
```

Alright, this’ll work! But *how well* does it work? Specifically, how long does this function take to run?

**In algorithm design, we use a specific language for talking about run time, called big O notation**. It’s a bit tricky to understand at first, but it comes together quickly with a bit of practice. It’s best learned through example, so let’s jump right in:

We want to know how many *steps* our algorithm takes as it solves the problem. In this case the question is, “how many times do we have to compute a new `difference` and see if it’s our new `greatest_difference`?”

Well, the outer loop runs once for each item in our input array, and the inner loop runs once for each item in our array.

Let’s create some shorthand for “the number of items in our input array.” Let’s call that number “n.”

So our outer loop runs n times, and our *inner* loop runs *n times for each iteration of the outer loop*. So our inner loop runs a total of n*n, or n² times.

So we would say that our total run time is “order of n²” or O(n²)!
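To see that n² concretely, here’s a small sketch (the `count_steps` helper is mine, not part of the solution) that runs the same nested loops and counts how many times the inner loop body executes:

```python
def count_steps(nums):
    # Same nested loops as get_greatest_difference,
    # but counting how many differences we'd compute
    steps = 0
    for num1 in nums:
        for num2 in nums:
            steps += 1
    return steps

print(count_steps([5, 8, 6, 1]))     # 4 items  -> prints 16  (4 * 4)
print(count_steps(list(range(10))))  # 10 items -> prints 100 (10 * 10)
```

Double the input size and the step count quadruples; that quadratic growth is exactly what O(n²) describes.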

## The better solution

Can we do better? Let’s see. Notice that our O(n²) run time comes from the fact we’re nesting two loops that both go through all n numbers in our array. Perhaps we can solve the problem without nesting two loops?

The reason we nested two loops was to look at every possible difference. Can we get the greatest difference *without looking at every difference*?

This is the part where in a coding interview, we’d try solving a few sample inputs by hand and looking for patterns. In this case, there’s an important pattern to notice: **the greatest difference is always simply the largest number minus the smallest number**!

Can we avoid nesting two loops if we’re just looking for the largest and smallest numbers? Totally! Let’s just sort the input array; then our largest will be at the end and our smallest will be at the beginning:

```
def get_greatest_difference(nums):
    sorted_nums = sorted(nums)
    return sorted_nums[-1] - sorted_nums[0]
```

Now what’s our run time?

It looks like we’ve saved ourselves from needing any loops at all. But be careful: we still have to count work that’s done by *the other functions we call*! In this case, our sorting function (`sorted()`) does quite a bit of work! How much work? This is where big O notation can get a bit nuanced. Sometimes we need to figure out *how a utility function works* in order to judge its big O time cost. This gets easier with practice.

It turns out that, in general, sorting costs O(n * log₂(n)) time. We can just treat that as a piece of trivia for now. If you haven’t seen a logarithm since high school, don’t worry! The important thing for now is that log₂(n) is much less than n. The best way to get an intuition for this is to look at some examples:

- if n is 100, log₂(n) is ~6.6
- if n is 1,000, log₂(n) is ~9.9
- if n is 10,000, log₂(n) is ~13.29

So log₂(n) is quite small, and it grows slowly. (For more on what log₂() means and where it comes up in algorithms, take a look at binary search.)
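You can verify those values for yourself with Python’s `math` module:

```python
import math

# log2(n) grows very slowly compared to n
for n in [100, 1000, 10000]:
    print(n, round(math.log2(n), 2))
# prints:
# 100 6.64
# 1000 9.97
# 10000 13.29
```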

The good news is our last step (subtracting the 0th item from the last item) is so quick we don’t even count it in our big O calculation. So our total time cost for this approach is O(n * log₂(n)).

How does that compare with the O(n²) of our first solution? Well, O(n²) is O(n * n). If we compare that with O(n * log₂(n)), we can see that we’re still starting with n, but now we’re multiplying it by log₂(n) instead of multiplying it by n. Since log₂(n) is much smaller than n, O(n * log₂(n)) is much smaller than O(n * n). Our new algorithm takes much less time than our first!
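To make the gap concrete, we can compare the two operation counts for n = 10,000:

```python
import math

# For n = 10,000: n * n versus n * log2(n)
n = 10000
print(n * n)                    # prints 100000000
print(round(n * math.log2(n)))  # prints 132877
```

That’s roughly 100 million steps versus roughly 130 thousand, a difference of almost three orders of magnitude.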

## The best solution

Can we do *even better*? Hmm. Notice that the entirety of our big O time cost comes from sorting our input array. Can we get the max and the min without sorting?

What if we just looped through our array once, keeping track of the max and min “so far” as we went:

```
def get_greatest_difference(nums):
    max_num = nums[0]
    min_num = nums[0]
    for num in nums:
        if num > max_num:
            max_num = num
        elif num < min_num:
            min_num = num
    return max_num - min_num
```

That’ll work! Now we have just one loop going through all n items in our array, giving us a total run time of O(n).

This loop kind of does two things—it checks if the current number is the new max *and* it checks if the current number is the new min. So we might be tempted to say this is O(2n). But in big O notation, we “throw out the constants” which includes the 2 in this case. So we’d still call this O(n).
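As a side note (a sketch of an alternative, not something from the solution above): Python’s built-in `min()` and `max()` each make one O(n) pass over the list internally, so this shorter version has the same O(n) run time, with two passes instead of one and the constant thrown out again:

```python
def get_greatest_difference_builtin(nums):
    # min() and max() each scan the list once: O(n) + O(n) = O(2n) = O(n)
    return max(nums) - min(nums)

print(get_greatest_difference_builtin([5, 8, 6, 1]))  # prints 7
```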

**So we went from an O(n²) time solution, to an O(n * log₂(n)) one, to an O(n) one.** That’s a huge savings!

To get a sense of how much time we saved, we could plug in some values of n. Suppose our input array was huge (as is often the case at large-scale tech companies like Google): say n is 10,000,000. Our O(n)-time solution does about 10 million operations, while the O(n²) solution does about 100 *trillion*, which is 10 million times as many. So if the O(n) solution took 4 seconds to run, the O(n²) solution would take well over a year!

## Not *just* the Google interview

Without any knowledge of how to think about run time, we might look at these 3 solutions to this problem and say, “Each one is a beautiful unique snowflake that correctly solves the problem.” But our final solution is *much* faster than the others, especially for very large inputs. This is the power of algorithm design!

**Whether or not you’re interviewing for a job at Google, algorithmic thinking like this will dramatically enhance your software engineering skills**. With just a bit of practice, you can get to a point where *whenever* you write code, there’s a background process in your head that’s thinking about the run time implications of what you’re doing and brainstorming optimizations. This is the beginning of the path to going from a good engineer to a great engineer!

Great article, and it makes sense from my tests. I also tested using the built-in min and max Python functions, and they are faster than using a for loop to get the min and max. Just wanted to share that with you guys.

Using a list of 10,000,000 integers:

```
get_greatest_difference2 4.558023929595947 to run: 1999  # using sorted()
get_greatest_difference3 0.5709567070007324 to run: 1999  # using a for loop for min and max
get_greatest_difference4 0.4448235034942627 to run: 1999  # using built-in min() and max()
```


Hey! Great article. Good solutions.

Just wanted to suggest something even better for the problem.

If we created 2 heaps, a min-heap and a max-heap simultaneously, we could possibly retrieve this difference in O(log(n)) time! Just FYI

Thanks!

No, you can’t. If you want to find the maximum, then you need to look at each element at least once! There is no way to skip an element, since that element might be the maximum, so O(n) is the absolute minimum.

Correct me if I’m wrong.

The solution proposed by Bharath will work, at best, as well as the best algorithm proposed in the article.

Creating the heaps will take O(n). Also, if you have a min-heap and a max-heap, then retrieving the min and the max of the array are just O(1) operations.

As HannosB pointed out, the best conceivable runtime is O(n).

Very well explained, Parker, and motivational! You uplift the mood of so many people. Thanks, genius 😉

It’s not valid to do computations on times, as you do in comparing 4 to 4*4. For example, take that job that took 4 hours to run. We can say it took 240 minutes to run. So the O(n^2) solution takes 240*240 = 57600 minutes = 960 hours to run. We saved 956 hours, not just 12 hours! But wait: What if we say that the same job took 1/6 day to run. Then the O(n^2) job takes 1/6 * 1/6 = 1/36 day = 40 minutes to run, which means that the bad algorithm is 3 hours 20 minutes faster!

The problem is that if you multiply two times you get something in units of time^2, which isn’t meaningful for time comparisons. You have to square the number of operations and only then multiply by time-per-operation to get truly comparable times.

There’s some misunderstanding on your side. No one is saying that n is time. O(n²) time complexity means that t ~ n² (plus lower-order terms in n), where n is the size of our array (in this case, at least). Since n is a positive integer, n² > n * log(n) for every n, so the time t of the O(n²) algorithm will always be worse.

thanx for the great article 🙂

yes.. absolutely awesome article..


Great post!!! Worth reading. Google seeks to hire people who have learned to think outside the box. Counting on Treehouse to make me one.

I’m good at coding and know several programming languages, but I’m lacking on the algorithmic-thinking side. Reading this really increased my confidence, as immediately after reading the problem my first solution was the one used in step 3.

Wonderful article. This will absolutely help a large group of people.

What about using built-in functions such as max() - min()?

You’d save the time of typing out the algorithm, too.

I think what matters is which kind of computing environment you are working with.

A simpler solution would be to sort the array (or iterate over it once if you don’t want to use existing sort APIs), say in ascending order, and return the last index’s value minus the first index’s value.

I think the fastest general-purpose sorting algorithm known to date is still O(n log n). So a solution that involves sorting is still the “better” solution (and not the best).

Yeah, I mean, there are some sorts that can be as good as O(n) if the list is partially sorted or whatever, but their average run time is O(n²). There’s a proof that shows that for a comparison sort the best average-case run time is O(n log n). Maybe with quantum computers we’ll have better run times. So yeah, the best solution is just looping through once to find the min and max.

Any comparison-based sorting algorithm will take O(n log n) in the worst case, so sorting is not the best approach here. Also, when you read a question you can get clues from it: since this one asks only for the largest difference, we know only the min and max are needed. If I twist it and ask for the largest differences in descending order, then sorting is the best way.

Great article!

There should be an Algorithms course in Treehouse! 🙂

Yes definitely ! Great article.

Yep, especially when it’s really hard to get an engineering job without it.

Awesome article! Just happened to be reading up about the Big O Notation on your site yesterday as well 🙂

Really nice article, Parker! Designing the simple things is always the hardest part.

Thank you Parker!

I enjoyed reading this article, and it was very helpful for me. I’ll definitely check out Interview Cake right now.