Hi,

By academia I mean in university and research settings (for example, algorithms research and other studies that require analysis of asymptotic behavior).

Usually, it's not an issue to mix them up, because the less formal usage is looser (and therefore it's safer to have a small margin).

So I'll preface this by saying that the words *is* and *equals* are also often misused. (This is a point admitted in books like the MIT Press Introduction to Algorithms, which is a great book by the way.) That doesn't mean the practice will or should stop soon, and outside of uni it's better not to be pedantic.

What do I mean by misused? An example: in a strict sense, to say a function is O(n) means that it belongs to the set denoted O(n) (all functions bounded above, up to a constant factor, by n). So saying my_function = O(n) or my_function is O(n) can't literally be the case… it's more accurate to say my_function **∈** O(n). Also, interestingly, because O is an upper bound that need not be tight, if a function is O(n), it's also O(n^2), O(n!), O(n^n), and so on.
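To make the set-membership idea concrete, here's a minimal sketch (my own illustration, not from the book) that numerically spot-checks the definition f ∈ O(g), i.e. f(n) <= c·g(n) for some constant c and all n past some n0. The function names, the sample constant c=4, and the cutoff n0=10 are all assumptions chosen for the example:

```python
def is_bounded_by(f, g, c=4, n0=10, n_max=10_000):
    """Spot-check f(n) <= c * g(n) on a range of n (evidence, not a proof)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

f = lambda n: 3 * n + 5  # a concrete linear function

print(is_bounded_by(f, lambda n: n))      # f is in O(n)
print(is_bounded_by(f, lambda n: n * n))  # ...and therefore also in O(n^2)
print(is_bounded_by(f, lambda n: 1))      # but not in O(1)
```

Note this checks a finite range, so it can only gather evidence; the real definition quantifies over all n beyond n0.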

Why is that useful knowledge? Sometimes you'll know the recurrence your algorithm satisfies but not what its bounds are. So one strategy you can use is to deduce wide bounds as you make more calculations… to get a rough estimate.

An example of this:

`T(n) = 2T(n/2) + n`

We can see that it's clearly going to be at least linear (because of the + n term at the end of the expression), so we know it's Ω(n).

`Ω(n) <= 2T(n/2) + n <= ???`

And if you try plugging a few values of n into T (say, 1-9), you may or may not see a pattern, but you can clearly see it stays under n^2, so that's a safe upper bound.

`Ω(n) <= 2T(n/2) + n <= O(n^2)`
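Plugging in values is easy to do in code. Here's a minimal sketch that tabulates the recurrence (the base case T(1) = 1 is an assumption I've chosen; the book's examples often leave it as a constant) and checks that the values sit between the two bounds:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate the recurrence T(n) = 2*T(n//2) + n, with assumed base T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in [2, 4, 8, 16, 32, 64]:
    print(n, T(n), n * n)  # T(n) sits between n and n^2 at every sampled point
```

Sampling powers of two keeps the integer division exact, which makes the pattern easiest to see.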

This is already a much better starting point for guessing than before! It rules out a lot of what the function could be… and even if we only learn this much, it's quite a bit.

It turns out that the function is Θ(n lg n), which can be proven by induction (and this represents the "tight" worst-case bound). But the point was to show that the additional notation might seem over-the-top at first yet is pretty useful for analyzing your own work and other algorithms whose properties you might not know (not to mention it helps for understanding written material).
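You can also watch the Θ(n lg n) behavior emerge numerically. In this sketch (again assuming the base case T(1) = 1, which is my choice for the example), the ratio T(n) / (n lg n) should level off toward a constant if the tight bound really is n lg n:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """The same recurrence T(n) = 2*T(n//2) + n with assumed base T(1) = 1."""
    return 1 if n <= 1 else 2 * T(n // 2) + n

for k in range(2, 12):
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))  # ratio settles toward a constant
```

A ratio that drifts toward a fixed constant (rather than 0 or infinity) is the numerical signature of a Θ bound; it's evidence, not a substitute for the induction proof.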

A small list from the same book (again with the strong disclaimer that the equals sign is potentially misleading here):

f(n) = Θ(g(n)) is similar to a = b

f(n) = O(g(n)) is similar to a <= b

f(n) = Ω(g(n)) is similar to a >= b

f(n) = o(g(n)) is similar to a < b (this is the little-oh)

f(n) = ω(g(n)) is similar to a > b (this is little-omega)
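The strict relations in that list have a handy informal characterization: f = o(g) roughly means f(n)/g(n) tends to 0 as n grows. Here's a small sketch (my own illustration; the function choices are assumptions) that spot-checks this by watching the ratio shrink:

```python
import math

f = lambda n: n                  # linear growth
g = lambda n: n * math.log2(n)   # n lg n growth

# If f = o(g), the ratio f(n)/g(n) should keep shrinking toward 0.
for n in [10, 100, 10_000, 1_000_000]:
    print(n, f(n) / g(n))  # steadily decreasing, consistent with f "<" g
```

The same trick distinguishes o from O: for f = Θ(g) the ratio would level off at a positive constant instead of vanishing.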

So a lot of algorithms will have very similar run-times, and then the question becomes: are they the same? Which one should you pick? A big factor in these choices is context. Certain shapes of input will make the edge-case or average-case behavior differ.

Hope this helps! Book authors and educators have to walk a very fine line between density and pragmatism with this material, and different audiences will have different needs.