Question about time complexity

<edit: Somebody removed OP's post. Sad.>

One of the people here, Duthomhas, has written about this in the past, so I'll link his post here:
https://cplusplus.com/forum/general/226968/#msg1034344
Big-O notation describes the "runtime" of an algorithm as a function of the input size n, disregarding constant factors.

Here "runtime" usually means the number of CPU operations that are required to complete the computation.

What "input size" (n) is, that totally depends on which kind of algorithm we are looking at.

(for example, if we are talking about sorting algorithms, then n would be the length of the list to be sorted)


Now, if an algorithm has complexity O(n), it means that the runtime grows linearly (proportionally) with the input size n. If we have complexity O(n²), then the runtime grows quadratically with the input size n, which means that the runtime grows much faster than n. And, if we have O(log(n)), then the runtime grows logarithmically with the input size n, which means that the runtime grows much slower than n. There is also O(n log(n)), and others.
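To make that concrete, here is a rough sketch (not tied to any particular algorithm) of loops whose iteration counts grow linearly, quadratically and logarithmically with n:

#include <cstddef>

// Rough sketch: loops whose iteration counts grow linearly,
// quadratically, and logarithmically with n.

std::size_t linear(std::size_t n)        // O(n)
{
    std::size_t ops = 0;
    for (std::size_t i = 0; i < n; ++i)
        ++ops;
    return ops;
}

std::size_t quadratic(std::size_t n)     // O(n²)
{
    std::size_t ops = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            ++ops;
    return ops;
}

std::size_t logarithmic(std::size_t n)   // O(log(n))
{
    std::size_t ops = 0;
    for (std::size_t i = n; i > 1; i /= 2)   // halve the remaining range each step
        ++ops;
    return ops;
}

For n = 1000 these return 1000, 1000000 and 9 respectively - the same shapes as the curves in the chart below.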

See here:
https://danielmiessler.com/images/big-o-chart-tutorial-bazar-aymptotic-notations-1.png


For example, some "trivial" sorting algorithms, like bubble sort, have time complexity O(n²), which means that these algorithms work okay for "small" lists – but with "large" lists the runtime will explode, up to the point where you will have to wait "forever". Meanwhile, the "smart" sorting algorithms, like quicksort* or heap sort, get the same job done in O(n log(n)), which means that those algorithms can deal fine with "large" lists.

(*) quicksort has "worst case" runtime of O(n²), but that rarely happens in practice; average case runtime is O(n log(n))
how can a program have O(log n), and what is log n?

O(n) means the algorithm needs to loop once for every "n", usually the number of elements in an array. So if you had an array of 100 elements, the algorithm would loop 100 times.

O(log(n)) means if you had 100 elements, you'd loop log(100) times. You may ask... log of which base? The answer is that it doesn't really matter in terms of time complexity, since the bases differ only by a constant factor. That doesn't mean it never matters to anyone; it can make a difference in real-life applications.
And just to give a practical example: The way I was first introduced to log(n) runtime complexity was a number-guessing program, where you guess a number between 1 and 100, and it tells you if the actual number is higher or lower than what you guessed.

Spoiler alert: The best strategy is to guess the halfway point between the currently known bounds of the number. So if you start at a range of [1, 100], you guess 50, and it responds "incorrect, number is higher", then right after the first step, you've already cut your search space in half to [51, 100].

This repeated halving is what produces the log2(n) behavior. 100 --> 50 --> 25 --> 12 --> 6 --> 3 --> 1 goes through 7 numbers (depending on how you round), and log2(100) equals about 6.64, which rounds up to 7.
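Here is a minimal sketch of that strategy in code (the "secret" number is hard-coded, standing in for the other player):

#include <iostream>

int main()
{
    const int secret = 73;              // the number to be guessed (hard-coded for this demo)
    int low = 1, high = 100;
    int guesses = 0;

    while (low <= high)
    {
        ++guesses;
        const int guess = low + (high - low) / 2;   // halfway point of the current range
        if (guess == secret)
            break;
        if (guess < secret)
            low = guess + 1;            // "number is higher": discard the lower half
        else
            high = guess - 1;           // "number is lower": discard the upper half
    }

    std::cout << "found " << secret << " in " << guesses << " guesses\n";   // never more than 7 for [1, 100]
}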
zapshe wrote:
... log of which base? And the answer is, it doesn't really matter in terms of time complexity, since they'll be similar. This doesn't mean that it doesn't matter for everyone, it can make a difference in real life applications.


Big O notation ignores constant factors.

O(5n) is the same as O(n).


With logarithms you can always go from one base to another by multiplying by a constant.

logx(n) = k × logy(n)

For example, if x=2 and y=10 then k=1/log10(2)


That means O(log2(n)) is the same as O(log10(n)), because it's just a matter of multiplying by a constant, which is ignored by Big O. For that reason we often leave out the base when writing Big O notation ... O(log(n)) ... because it doesn't matter.
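A quick numerical check (just a sketch using <cmath>) shows the ratio between the two bases really is the same constant for any n:

#include <cmath>
#include <iostream>

int main()
{
    for (double n : {100.0, 10000.0, 1000000.0})
    {
        // log2(n) / log10(n) is always 1 / log10(2), roughly 3.32, no matter what n is
        std::cout << "n = " << n
                  << "   log2(n) / log10(n) = " << std::log2(n) / std::log10(n) << '\n';
    }
}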
For that reason we often leave out the base when writing Big O notation ... O(log(n)) ... because it doesn't matter.

Doesn't matter.. to who?

O(1000n) becomes O(n). This doesn't "matter" when looking at it asymptotically - since you're comparing it to other algorithms that might be O(n^2)!


However, if you know n < 20, and you have to decide between O(1000n) and O(n^2), O(n^2) is the best choice. This is a long way to say that it does matter in practicality, and I'd encourage not needlessly simplifying if you're going to use the time complexity for *actual* coding.
It doesn't matter for the time complexity and the analysis you do with it. There is no way to say that O(log10(n)) is better than O(log2(n)). It means the exact same thing.

However, if you know n < 20, and you have to decide between O(1000n) and O(n^2), O(n^2) is the best choice.

All you can say is that for large enough n the O(1000n) algorithm will be better. For small n you don't know.

You might of course want to find out, by benchmarking or counting the number of operations (rough estimate) but then we're no longer using Big O.
> All you can say is that for large enough n the O(1000n) algorithm is better than the other two.
> For small n you don't know.

Yes. Even if we run the O(NlogN) algorithm on the slowest machine we can find, and the O(N*N) algorithm on the fastest machine, there would still be some N above which the O(NlogN) algorithm outperforms the O(N*N) one.

But that N might be very large: "Rule 3. Fancy algorithms are slow when n is small, and n is usually small. Fancy algorithms have big constants. Until you know that n is frequently going to be big, don't get fancy. Even if n does get big, use Rule 2 [Don't tune for speed until you've measured] first." - Rob Pike

Stroustrup's example: https://cplusplus.com/forum/beginner/206320/#msg976016
You might of course want to find out, by benchmarking or counting the number of operations (rough estimate) but then we're no longer using Big O.

There's no reason to limit yourself to Big-O's limitations as a programmer. Big-O is really meant for people who're studying algorithms. In practical usage, you can make it much more versatile by not over simplifying.

The same method of obtaining Big-O can greatly help you when coding and comparing algorithms. Of course, testing how long they literally take is also important.

There is no way to say that O(log10(n)) is better than O(log2(n)). It means the exact same thing.

Of course there is a "way":

log2(10000000) = 23
log10(10000000) = 7

That's a potential 3x slower program. And that's with a very big, but not unreasonable array size. And if every loop takes a little while to complete, you'll definitely notice a 3x slower program.

Imagine coding an AI and you neglect an O(log10(n)) solution since it's basically the same as log2, and then your AI reacts in 600ms instead of 200ms.

When talking about the study of time complexity, the log of any number is so small that it's really a waste to differentiate, but the difference exists, especially with very large numbers.

...which is the whole basis for one Big O being better than another: there exists a range of n, from some value to infinity, where one algorithm will always do better than the other.


We can both agree that by the "rules" of time complexity, differences of log base aren't counted. But ignoring the differences in a real world scenario isn't smart.
zapshe wrote:
There's no reason to limit yourself to Big-O's limitations as a programmer. Big-O is really meant for people who're studying algorithms.

Fine, just don't call it Big O.


Peter87 wrote:
There is no way to say that O(log10(n)) is better than O(log2(n)). It means the exact same thing.
zapshe wrote:
Of course there is a "way":

log2(10000000) = 23
log10(10000000) = 7

That's a potential 3x slower program...

But now you're extracting more information from the Big O notation than is actually there.

It's quite likely that an algorithm that discards 9/10 of all elements in each step has a higher hidden constant compared to an algorithm that only discards half the elements in each step.

E.g.
5 × log10(10000000) = 35


zapshe wrote:
Imagine coding an AI and you neglect a O(log10(n)) solution since it's basically the same as log2, and then your AI reacts in 600ms instead of 200ms.

That would be foolish. Two algorithms with the same time complexity could have very different performance.


zapshe wrote:
We can both agree that by the "rules" of time complexity, differences of log base aren't counted. But ignoring the differences in a real world scenario isn't smart.

I'm only saying we should ignore the constants when dealing with time complexities/Big O.

In other situations it might be relevant to look at those, though we often don't count things in enough detail for it to be very meaningful. Often it's just easier to write a benchmark.
Fine, just don't call it Big O if that's not what you're using.

Big-O with benefits? 😳

It's quite likely that an algorithm that discards 9/10 of all elements in each step has a higher hidden constant

It's definitely possible, I don't know about likely.

Obviously this is only one consideration of many while designing/choosing an algorithm. You'll of course benchmark afterwards, and that'll be the most important deciding factor.

This is general algorithm advice, and not strictly about figuring out Big O. If your goal is practicality, following time complexity to the letter simply won't give you the full picture. Looking at the time complexities of sorting algorithms doesn't tell you which one to use. There are hundreds of scenarios where Bubble sort will outperform quicksort.
zapshe wrote:
Big-O with benefits? 😳

That would be confusing. The "limitations" of Big O are not arbitrary. There are good reasons for them. Big O is good at certain things and useless for other things.

It's a matter of using the right tool for the job.
If you don't need this tool, then don't use it.
If you create another tool I think you should call it something else.


zapshe wrote:
following time complexity to the letter simply won't give you the full picture.

The aim was never to give the "whole picture". It would be quite difficult since it's often used in theoretical discussions that don't even deal with real programming languages running on real hardware.

I mean, what are you even counting? Lines? Statements? Basic operations like addition, multiplication, assignment? To count each and every such operation would get tedious, and it would not be very meaningful because in reality they might not take the same amount of time to execute anyway. It would depend on the actual implementation, the language, the compiler, the hardware, etc.
That would be confusing. The "limitations" of Big O are not arbitrary. There are good reasons for them.

Again, who's the target audience for those rules? Professional algorithm analysts or the practical programmer?

It makes sense not to fret over base 10 or base 2 as an analyst, not so much when you're expected to shave off every possible ms of runtime.

As already noted, the Big-O of an algorithm can only tell you that from some x to infinity, it will outperform any other algorithm with a worse time complexity.

This leaves the obvious question of.. what is x?!

If you have O(n) vs O(log(n)), how can you tell at which n value O(log(n)) starts to outperform O(n)? You have no idea. For the practical programmer, showing O(10n) vs O(100*log(n)) will give a better idea - you can deduce what x is.

You'd understand that sorting algorithms tend to do extra work in the short run to save time in the long run AND you'd be able to deduce at what n value you'll start seeing the benefits of the extra short term work.
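For instance, sticking with the hypothetical 10n vs 100*log2(n) operation counts from above, a few lines of code (rough sketch) are enough to estimate that crossover point:

#include <cmath>
#include <iostream>

int main()
{
    // Hypothetical operation counts from the example above: 10*n vs 100*log2(n).
    for (int n = 2; n <= 1000; ++n)
    {
        if (10.0 * n > 100.0 * std::log2(n))
        {
            std::cout << "100*log2(n) wins from about n = " << n << " onwards\n";
            break;
        }
    }
}

With those constants the crossover lands around n = 59 - and that "x" is exactly the information that gets thrown away once both are written as O(n) and O(log(n)).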


I mean, what are you even counting? Lines? Statements? Basic operations like addition, multiplication, assignment?

...loops?

If you have straight-line code (no loops), then it takes O(1) time to run. Obviously one program may take an hour while another takes 1 minute and still be O(1). But this is a limitation of seeing differences between different programs/code. No need to view Big-O in a way that gives you a limited understanding within your own program as well!

Big O doesn't care what your program does; it only cares how long, asymptotically, it takes your program to do it. An O(log(n)) algorithm CAN take longer than an O(n) one, even before any constant factors are stripped out, which is why you can't use Big-O as a replacement for timing your program.

But why deliberately remove information to meet the standards of Big-O notation when you're programming for yourself and not to study algorithms?


I'm not sure we're even disagreeing. Perhaps you simply respect the notion of Big-O more as a scientific measurement rather than a practical one.


EDIT: Did my Big-O with benefits joke land? :(
> If you have O(n) vs O(log(n)), how can you tell at which n value O(log(n)) starts to outperform O(n)?

Measure both for one reasonable, small value of n. Once we have done that, we can compute the value of n for which the logarithmic algorithm would start to outperform the linear algorithm.
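A rough sketch of that idea (linear_algo and logarithmic_algo are just placeholders for the two real algorithms; a timing at this scale is noisy, so the result is only an estimate):

#include <chrono>
#include <cmath>
#include <iostream>

// Placeholder workloads standing in for the two real algorithms.
volatile double sink = 0;
void linear_algo(int n)      { for (int i = 0; i < n; ++i) sink = sink + i; }
void logarithmic_algo(int n) { for (int i = n; i > 1; i /= 2) sink = sink + i; }

template <typename F>
double time_ms(F f, int n, int reps = 1000)
{
    const auto t0 = std::chrono::steady_clock::now();
    for (int r = 0; r < reps; ++r)
        f(n);                                        // repeat to get a measurable duration
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / reps;
}

int main()
{
    const int small_n = 1000;                        // one reasonable, small value of n

    // Assume the measured costs are roughly c1*n and c2*log2(n).
    const double c1 = time_ms(linear_algo, small_n) / small_n;
    const double c2 = time_ms(logarithmic_algo, small_n) / std::log2(small_n);

    // Estimate the n at which c1*n starts to exceed c2*log2(n).
    for (long long n = 2; n < 1000000000; n *= 2)
    {
        if (c1 * n > c2 * std::log2(double(n)))
        {
            std::cout << "the logarithmic algorithm should start winning somewhere below n = "
                      << n << '\n';
            break;
        }
    }
}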
who's the target audience for those rules? Professional algorithm analysts or the practical programmer?

I am a self-taught, still learning programming hobbyist and all this high-falutin', hoity-toity obsession with theory without actual, practical application is close to useless for what I do.

First I make sure what I want to do actually works. Actual code that does the task. As good as I can make it within a reasonable amount of time.

Later I might, MIGHT, consider looking at ways of improving the "efficiency" of the program logic to make it speedier. And/or tightening up the code to better express what I am doing.

I understand the importance of the theory of time complexity when it comes to software design and usage. But to not actually test the theory multiple times with the same bit of code is where Ivory Tower Eggheadedness and I part ways.

I am a programmer without any formal classroom training whatsoever. I am an amateur, and damned proud of being one. Computer Science as a whole is not for me. For me programming is a fun hobby. IMO learning CS would drain most, if not all, that fun and enjoyment out of things. And be a huge waste of money and time.

zapshe wrote:
Did my Big-O with benefits joke land? :(

It landed for me, you got a very small, very reserved slight heh™ chuckle out of me. :Þ
zapshe wrote:
Again, who's the target audience for those rules? Professional algorithm analysts or the practical programmer?

Both.

zapshe wrote:
It makes sense not to fret over base 10 or base 2 as an analyst, not so much when you're expected to shave off every possible ms of runtime.

Time complexities can be helpful for understanding certain high-level aspects of the performance, but for low-level optimizations that don't improve the time complexity it becomes more a matter of understanding and experience, having a feeling for what is more efficient, and benchmarking and testing that it actually is an improvement (which is harder than it sounds).

zapshe wrote:
As already noted, the Big-O of an algorithm can only tell you that from some x to infinity, it will outperform any other algorithm with a worse time complexity.

Yes, exactly. This is still useful information.

Usually it is better to go with the algorithm that has the better time complexity. If you know that another algorithm tends to perform better for your particular situation (from experience or because you have measured) then by all means use that one instead, but if you write code that should be able to handle an unknown number of elements it's usually safer to pick the one that has better time complexity.

zapshe wrote:
If you have O(n) vs O(log(n)), how can you tell at which n value O(log(n)) starts to outperform O(n)? You have no idea.

If I have no other information I would guess the O(log(n)) algorithm is faster. If I know n is always small I might want to test to make sure, but because n is small it might not have a big impact on the overall performance so I might not bother to test it after all.

zapshe wrote:
For the practical programmer, showing O(10n) vs O(100*log(n)) will give a better idea - you can deduce what x is.

To me this does not add any information. If I really wanted to check if the O(n) algorithm is faster for my particular use case I would test it.

zapshe wrote:
You'd understand that sorting algorithms tend to do extra work in the short run to save time in the long run AND you'd be able to deduce at what n value you'll start seeing the benefits of the extra short term work.

Is the number of elements small? Are they nearly sorted? Do you care more about the worst case or the average case?

There are a lot of things to consider when implementing a general-purpose sorting algorithm that will be used in a lot of different situations. I've heard that std::sort is often implemented as a hybrid between different sorting algorithms.

Under certain circumstances there might be better algorithms. For sorted or nearly sorted elements insertion sort (or even bubble sort) can often outperform more sophisticated algorithms.
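For example, a plain insertion sort (sketch below) does close to O(n) work on a nearly sorted vector, because the inner loop hardly ever has to shift anything:

#include <cstddef>
#include <vector>

// Plain insertion sort: O(n²) in the worst case, but close to O(n) on
// (nearly) sorted input, because the inner loop shifts very few elements.
void insertion_sort(std::vector<int>& v)
{
    for (std::size_t i = 1; i < v.size(); ++i)
    {
        const int key = v[i];
        std::size_t j = i;
        while (j > 0 && v[j - 1] > key)     // shift larger elements one slot to the right
        {
            v[j] = v[j - 1];
            --j;
        }
        v[j] = key;
    }
}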

Peter87 wrote:
I mean, what are you even counting?
zapshe wrote:
...loops?

You mean loop iterations?

Does that mean you would describe an algorithm that loops over all elements once and calls f() and g() on each element as O(n)
for (auto& e : vec)
{
    e.f();
    e.g();
}

and an algorithm that uses two loops, one that calls f() and another one that calls g(), as O(2n)?
for (auto& e : vec)
{
    e.f();
}

for (auto& e : vec)
{
    e.g();
}

My concern is whether 2 is actually a meaningful value. Sure the second algorithm uses two loops and will therefore probably have to pay for the loop overhead twice but is it doing twice as much work? Maybe, but probably not.

Maybe f() and g() are very expensive functions that makes the loop overhead vanishingly small in comparison.

Maybe using two separate loops is better because it allows the compiler (or the person that implements it) to use SIMD instructions to process multiple elements at the same time.

If I added a lot of additional code (that runs in constant time) to each iteration, that wouldn't change the value of the constants, even though each iteration could then take 2 or 3 times as long to run.

So to me that number 2 seems very arbitrary and not useful.

zapshe wrote:
An O(log(n)) algorithm CAN take longer than an O(n) one, [...], which is why you can't use Big-O as a replacement for timing your program.

Yes, for sufficiently small n that could indeed be true. I have never claimed otherwise.

zapshe wrote:
But why deliberately remove information to meet the standards of Big-O notation ...

Because it's not meaningful information? The "information" (about the constants) is gone as soon as you write it inside O( here ).

What you're trying to do is redefine what Big O means. It's like saying 10/15 has more information than 2/3. I guess in some context it does but in that case / doesn't really mean "divided by". It instead means "out of" (ten out of fifteen). While you of course could do something similar here with Big O it would be confusing for no good reason.

zapshe wrote:
... when you're programming for yourself and not to study algorithms?

To me there is no real difference between the two. Programs can be thought of as algorithms. Big O should still work the same way.

zapshe wrote:
I'm not sure we're even disagreeing. Perhaps you simply respect the notion of Big-O more as a scientific measurement rather than a practical one.

I think it is practical. It's only a problem if you're trying to interpret it to mean things that it doesn't mean.

I admit to having taken "algorithms" courses in the past, and while I'm no expert on the subject and have forgotten a lot of things, I still try to follow the academic/computer-science definition of what Big O means, because that is the only thing that makes sense to me. To my recollection, this is the first time I've seen an alternative definition of Big O being proposed, although I have seen plenty of misconceptions in the past.

zapshe wrote:
Did my Big-O with benefits joke land? :(

As a non-native English speaker I'm afraid I've only heard the expression "with benefits" but don't fully comprehend its comedic value. I did understand it was not totally serious though, especially since you used that smiley. :-)
When N=1000,
N² is about 100 times N.log(N) (where the logarithm is to base 2, typical of a tree structure - not that that is crucial).

When N=1000000,
N² is about 50000 times N.log(N).

Even with pretty big constants in the order relation, I would go with N.log(N) over N² unless I knew that N was always going to be quite small.
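Those ratios are easy to reproduce (quick sketch):

#include <cmath>
#include <iostream>

int main()
{
    for (double n : {1000.0, 1000000.0})
    {
        // N*N / (N*log2(N)) comes out near 100 for N=1000 and near 50000 for N=1000000
        std::cout << "N = " << n << "   N*N / (N*log2(N)) = "
                  << (n * n) / (n * std::log2(n)) << '\n';
    }
}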


The ratios aren't so large when comparing N.log(N) with N, however. This one is not so clear cut. log2(10⁹) is only about 30.


As always, Big-O notation is just a tool for relative comparisons between algorithms working on the same data. As I mentioned in the other thread, the function in parentheses represents only the dominant term as n ⟶ ∞.

When it comes time to select an algorithm, you must be aware of your data’s characteristics. For a (very short) list:

  • How often will it trigger a particular algorithm’s best case behavior?
  • How often will it trigger that same algorithm’s worst case behavior?
  • Is the worst-case performance acceptable?
  • Does the data typically fall on the left or right side of two algorithms’ intersection point(s)?
  • Does the data typically fall on the left or right side of a single algorithm’s inflection point(s)?
  • Does an algorithm’s measured performance outperform another’s for all the conditions above?

For general-purpose sorting a lot of these questions cannot be answered well. That is why the C++ Standard Library gives you options, doing its best with no guarantees on data input.


The std::sort() algorithm (Introsort) is a hybrid that typically works well for unknown data. Hence its choice for the off-the-shelf sort algorithm.

* Introsort was actually developed for the C++ Standard Library. It is a combination of
  • Quicksort
  • Insertion sort
  • Heapsort
that knows when to switch choice of algorithm to avoid worst performance.

Introsort’s worst-case scenario is actually [n·log₂(n) + n·logₖ(n)], where k is not actually known (because it depends on the data).

In terms of Big-O notation, that’s 2·n·log(n) → O(n·log(n)).

If you know that your data will almost always** trigger a QuickSort worst-case, then skip the Introsort and just use a Heapsort. Your sorting will be consistently faster.

** You are profiling this stuff, right?
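For the heapsort case, the standard library already has the pieces; it is just a couple of calls (sketch):

#include <algorithm>
#include <vector>

// Heapsort assembled from standard algorithms: O(n·log(n)) in every case,
// with none of quicksort's O(n²) worst-case behavior.
void heap_sort(std::vector<int>& v)
{
    std::make_heap(v.begin(), v.end());   // O(n)
    std::sort_heap(v.begin(), v.end());   // O(n·log(n)), leaves v sorted in ascending order
}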

If you know that your data will always fit entirely in the L1 cache then skip everything else and use an Insertion sort. Nothing beats it for pure, unadulterated speed for small n.

Introsort requires random-access data. What if you don’t have that? Merge sort, my friend.

The list goes on. If all you are doing is programming a function to sort a couple hundred thousand things (integers, strings, etc) in memory, just grab std::sort.


Getting particular about whether or not something matters without actual, specific data to worry about is meaningless. Don’t throw tools away for an ideological pretext.


Did my Big-O with benefits joke land? :(
As a non-native English speaker


In English there is a phrase that goes “friends with benefits” which basically reads that said friends are having casual sex.

It’s a joke along the same lines as the standard “C++: where friends have access to your private members” joke.

It’s drivel, but triggers our associations with the lurid to make funny with something that has nothing to do with sex.
@Duthomhas, thank you for explaining the joke.
My concern is whether 2 is actually a meaningful value. Sure the second algorithm uses two loops and will therefore probably have to pay for the loop overhead twice but is it doing twice as much work?

You gave an example where simplifying wouldn't "really" matter, but there are plenty of examples where simplifying would conceal important information.

The purpose of Big-O is to tell you ASYMPTOTIC time complexity. It doesn't care if every iteration takes an hour for one algorithm and 1 minute for the other. This is the very reason bubble sort can outperform quicksort in cases of a small array.

Given that, if you have one algorithm with O(1000n) while another is O(n^2), it's obvious that if you're dealing with n < 1000, O(n^2) will be a better choice asymptotically. There's simply no benefit to be had simplifying it to O(n); they are NOT the same.


So to me that number 2 seems very arbitrary and not useful

Well, clearly in your artificial example it isn't - but that's a long way from saying it never matters.

In my algorithms course, we found the time complexity of algorithms - but all they did was output "Hello World" to the screen. Why? Because it doesn't matter. Big-O just cannot be used to differentiate between good/bad algorithms - it's only a measure of ASYMPTOTIC time.

If you had to choose between two UNKNOWN algorithms with a known n and time complexity, simplifying the time complexity serves only to hinder you from the best decision.


This reminds me of the Monty Hall problem - where seemingly irrelevant information is actually very statistically relevant.


Getting particular about whether or not something matters without actual, specific data to worry about is meaningless

This is exactly why a researcher will simplify O(100n) to O(n), and that can make sense. But once you're working with real data, it may not be so trivial anymore.

It’s drivel, but triggers our associations with the lurid to make funny with something that has nothing to do with sex.

My finest work yet.