O(·) is not a function, so how can a function be equal to it?
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$, where $T(n)$ is the running time of an algorithm on an input of size $n$.
I understand its semantics. But $T(n)$ and $O(f(n))$ are two different things.
$T(n)$ is an exact number, but $O(f(n))$ is not a function that spits out a number, so technically we can't say that $T(n)$ equals $O(f(n))$. If someone asks you what the value of $O(f(n))$ is, what would your answer be? There is no answer.
complexity-theory asymptotics notation
Simply, the = symbol in $T(n)=O(f(n))$ does not mean "equals". The question rests on this assumption. Did you find a source that says so?
– ShreevatsaR
6 hours ago
This is just one of the many ways mathematical notation is abused :(
– stendarr
4 hours ago
7 Answers
Strictly speaking, $O(f(n))$ is a set of functions. So the value of $O(f(n))$ is simply the set of all functions that grow asymptotically no faster than $f(n)$. The notation $T(n) = O(f(n))$ is just a simplified way of writing $T(n) \in O(f(n))$.
Note that this also clarifies some issues of the $O$ notation as it is normally used.
For example, $n^2 \in O(n^3)$ but $n^3 \notin O(n^2)$. So, under this convention, you can write $O(n^2) = O(n^3)$ (every function in $O(n^2)$ is also in $O(n^3)$), but you cannot write $O(n^3) = O(n^2)$, even though one normally expects $=$ to denote an equivalence relation (and in particular to be symmetric).
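To make "the set of all functions that grow asymptotically no faster than $f(n)$" concrete, here is the usual textbook formalization (a standard definition, added for reference rather than quoted from this answer):
$$O(f(n)) = \bigl\{\, g : \mathbb{N} \to \mathbb{R} \;\bigm|\; \exists\, c > 0,\ \exists\, n_0 \in \mathbb{N}\ \text{such that}\ |g(n)| \leq c\, f(n)\ \text{for all } n \geq n_0 \,\bigr\}.$$
Under this reading, $T(n) = O(f(n))$ is shorthand for the membership statement $T \in O(f)$.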
$O$ is a function
$$\begin{align}
O : (\mathbb{N}\to \mathbb{R}) &\to \mathbf{P}(\mathbb{N}\to \mathbb{R})
\\ f &\mapsto O(f)
\end{align}$$
i.e., it accepts a function $f$ and yields the set of functions that share the asymptotic bound of (at most) $f$. Strictly speaking, the correct notation is thus
$$
(n \mapsto T(n)) \in O(n \mapsto f(n))
$$
or, for short,
$$
T \in O(f)
$$
but it's customary in maths, science, and CS to just use a variable somewhere in the expression to indicate that you're considering functions of the argument $n$ on both sides. So $T(n) \in O(f(n))$ is quite fine as well. $T(n) = O(f(n))$ is pretty much wrong, though, as you suspected. It is very commonly used nevertheless, so definitely keep in mind what people mean when they write it.
I would advise against ever writing $T(n) = O(f(n))$, but opinions differ.
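As a rough executable analogue of this higher-order view (my own sketch; the names big_O, c, and n0 are illustrative, and a passing check on a finite range is evidence, not a proof, of membership):

    # Python sketch: O(f) as a higher-order construction. big_O(f) returns a
    # tester that checks g(n) <= c * f(n) for an explicit witness constant c
    # and threshold n0, over a finite range of n only.
    from typing import Callable

    def big_O(f: Callable[[int], float]) -> Callable[..., bool]:
        def contains(g: Callable[[int], float], c: float, n0: int, n_max: int = 10_000) -> bool:
            return all(g(n) <= c * f(n) for n in range(n0, n_max))
        return contains

    T = lambda n: 3 * n * n + 5 * n   # a hypothetical running-time function
    f = lambda n: n * n
    print(big_O(f)(T, c=4, n0=5))     # True: 3n^2 + 5n <= 4n^2 whenever n >= 5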
$T(n)=O(f(n))$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
I like your answer, +1. I'd suggest, though, that you clarify in the last sentence that although it's wrong, everyone uses it to mean "in" rather than "equal to", unfortunately.
– Pedro A
yesterday
Formally speaking, $O(f(n))$ is the set of functions $g$ such that $g(n) \leq k\,f(n)$ for some constant $k$ and all large enough $n$. Thus, the most pedantically accurate way of writing it would be $T(n) \in O(f(n))$. However, using $=$ instead of $\in$ is completely standard, and $T(n)=O(f(n))$ just means $T(n) \in O(f(n))$. This is essentially never ambiguous because we almost never manipulate the set $O(f(n))$ itself.
In a sense, using equality makes $O(f(n))$ mean "some function $g$ such that $g(n) \leq k\,f(n)$ for some constant $k$ and all large enough $n$", and this means that you can write things like $f(n) = 3n + O(\log n)$. Note that this is much more precise than, e.g., $f(n)=\Theta(n)$ or $f(n)=O(n+\log n)$.
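Spelled out against the definition above (my unpacking, not part of the original answer), $f(n) = 3n + O(\log n)$ asserts
$$\exists\, k > 0,\ \exists\, n_0\ \text{such that}\ |f(n) - 3n| \leq k \log n \quad \text{for all } n \geq n_0,$$
i.e., $f$ is $3n$ plus an error term that grows at most logarithmically.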
You could also write $f(n) - 3n \in O(\log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = \ldots = 3n + O(\log n)$.
– leftaroundabout
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) \in e^{-x}(e^{2x}+O(x)) \subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator, whereas I prefer to lift $+$ and $\cdot$ to also operate on sets.
– leftaroundabout
yesterday
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $\mathord{\in}O(h)$. So, if $f-g \in O(h)$, one can write $f = g+\mathord{\in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $\mathord{\in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $\dot O(h)$, $\hat O(h)$...
– Michel Fioc
18 hours ago
In The Algorithm Design Manual [1], you can find a paragraph about this issue:
The Big Oh notation [including $O$, $\Omega$ and $\Theta$] provides for a rough notion of equality when comparing functions. It is somewhat jarring to see an expression like $n^2 = O(n^3)$, but its meaning can always be resolved by going back to the definitions in terms of upper and lower bounds. It is perhaps most instructive to read the " = " here as meaning "one of the functions that are". Clearly, $n^2$ is one of the functions that are $O(n^3)$.
Strictly speaking (as noted by David Richerby's comment), $\Theta$ gives you a rough notion of equality, $O$ a rough notion of less-than-or-equal-to, and $\Omega$ a rough notion of greater-than-or-equal-to.
Nonetheless, I agree with Vincenzo's answer: you can simply interpret $O(f(n))$ as a set of functions and the $=$ symbol as the set membership symbol $\in$.
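As a compact symbolic summary of that reading (my paraphrase, not a quotation from [1]):
$$f(n) = O(g(n)) \;\sim\; f \lesssim g, \qquad f(n) = \Omega(g(n)) \;\sim\; f \gtrsim g, \qquad f(n) = \Theta(g(n)) \;\sim\; f \approx g,$$
all understood up to constant factors and only for sufficiently large $n$.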
[1] Skiena, S. S. The Algorithm Design Manual (Second Edition). Springer (2008)
Usually, statements like
$$f = O(g)$$
can be interpreted as
$$ \text{there exists } h \in O(g) \text{ such that } f = h. $$
This becomes more useful in contexts like the one David Richerby mentions, where we write $f(n) = n^3 + O(n^2)$ to mean "there exists $g(n) \in O(n^2)$ such that $f(n) = n^3 + g(n)$."
I find this existential-quantifier interpretation so useful that I am tempted to write things like
$$ f(n) \leq O(n^3) $$
which some will find an even more egregious style violation, but it is just a space-saving way of writing "there exists $C$ such that $f(n) \leq C n^3$."
Prologue: The big $O$ notation is a classic example of the power and ambiguity of certain notations as part of the language loved by the human mind. No matter how much confusion it has caused, it remains the notation of choice for conveying the ideas that we can easily identify and efficiently agree on.
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$ , where $T(n)$ is running time of an algorithm on input of size $n$.
Sorry, but you do not have an issue if you understand the meaning of big $O$ notation.
I understand semantics of it. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, But $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$, if one asks you what's the value of $O(f(n))$, what would be your answer? There is no answer.
What is important is the semantics: how people can easily agree on (one of) its precise interpretations, which will describe the asymptotic behavior of the time or space complexity we are interested in. The default precise interpretation/definition of $T(n)=O(f(n))$ is, as adapted from Wikipedia:
$T$ is a real- or complex-valued function and $f$ is a real-valued function, both defined on some unbounded subset of the positive real numbers, such that $f(n)$ is strictly positive for all large enough values of $n$. Then, for all sufficiently large values of $n$, the absolute value of $T(n)$ is at most a positive constant multiple of $f(n)$. That is, there exist a positive real number $M$ and a real number $n_0$ such that
$$|T(n)| \leq M f(n) \quad \text{for all } n \geq n_{0}.$$
Please note that this interpretation is considered the definition. All other interpretations and understandings, which may help you greatly in various ways, are secondary and corollary. Everyone (well, at least every answerer here) agrees to this interpretation/definition/semantics. As long as you can apply this interpretation, you are probably good most of the time. Relax and be comfortable. You do not want to think too much, just as you do not think too much about some of the irregularities of English or French or most natural languages. Just use the notation by that definition.
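To make the definition concrete, here is a small worked instance (my own example, not part of the quoted definition): take $T(n) = 3n^2 + 5n$ and $f(n) = n^2$. Then
$$|T(n)| = 3n^2 + 5n \leq 4n^2 = 4 f(n) \quad \text{for all } n \geq 5,$$
so $T(n) = O(n^2)$ holds with witnesses $M = 4$ and $n_0 = 5$.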
$T(n)$ is an exact number, But $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$, if one asks you what's the value of $O(f(n))$, what would be your answer? There is no answer.
Indeed, there could be no answer, since the question is ill-posed. $T(n)$ does not denote an exact number. It is meant to stand for a function whose name is $T$ and whose formal parameter is $n$ (which is, in a sense, bound to the $n$ in $f(n)$). It is just as correct, and even more so, to write $T=O(f)$. If $T$ is the function that maps $n$ to $n^2$ and $f$ is the function that maps $n$ to $n^3$, it is also conventional to write $T(n)=O(n^3)$ or $n^2=O(n^3)$. You are right that the equals sign does not mean equality in its ordinary sense. (Another example of abuse of the equality sign is the use of = to mean assignment in most programming languages, instead of the more cumbersome := used in some languages.)
If we were only concerned with that one equality (I am starting to abuse language as well: it is not an equality, and yet it is written with an equality sign and could be construed as some kind of equality), $T(n)=O(f(n))$, this answer would be done.
However, the question actually goes further. What is meant by, for example, $f(n)=3n+O(\log n)$? This "equality" is not covered by the definition above. We need to introduce another convention, the placeholder convention. Here is the full statement of the placeholder convention as given on Wikipedia.
In more complicated usage, $O(\cdots)$ can appear in different places in an equation, even several times on each side. For example, the following are true for $n\to\infty$.
$(n+1)^{2}=n^{2}+O(n)$
$(n+O(n^{1/2}))(n+O(\log n))^{2}=n^{3}+O(n^{5/2})$
$n^{O(1)}=O(e^{n})$
The meaning of such statements is as follows: for any functions which satisfy each $O(\cdots)$ on the left side, there are some functions satisfying each $O(\cdots)$ on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$."
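For instance, the first equation can be witnessed explicitly (my worked example, not part of the quoted text): $(n+1)^2 = n^2 + 2n + 1$, and $2n + 1 \leq 3n$ for all $n \geq 1$, so the $O(n)$ term on the right can be taken to be $g(n) = 2n + 1$.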
You may want to check here for another example of placeholder convention in action.
You might have noticed by now that I have not used the set-theoretic explanation of big $O$ notation. All I have done is to show that even without a set-theoretic explanation such as "$O(f(n))$ is a set of functions", we can still understand big $O$ notation fully and perfectly. If you find that set-theoretic explanation useful, please go ahead and use it anyway.
You can check the "asymptotic notation" section of CLRS for a more detailed analysis of, and usage patterns for, the family of notations for asymptotic behavior, such as big $\Theta$, $\Omega$, little $o$, little $\omega$, multivariable usage and more. The Wikipedia entry is also a pretty good reference.
Lastly, there is some inherent ambiguity/controversy in big $O$ notation with multiple variables; you might want to think twice when using it.
Many other posters have explained that the Big-O can be thought of as denoting a set of functions, and that the notation $n^2 = O(n^3)$ indicates that $n^2$ (as a function of $n$) is in the set denoted by $O(n^3)$ (again considering $n$ as the parameter). In English text, you may prefer to write "$n^2$ is in $O(n^3)$" to avoid confusion.
Although the notation can be confusing, it may help to think of the $O$ and the $=$ as parts of the same notation, that is, to treat $= O$ as if it were one symbol. It is little different from what we do when we write >=
in a programming language: two symbols, adjacent, become one in our eyes.
Another tricky aspect of the notation is that the variable acting as the parameter is not explicitly identified, or bound, the way it would be in a function declaration or a lambda notation. This can be especially confusing when there are two variables involved, as in $O(mn)$ or even more so in an expression like $O(n^c)$ since it may be implied that $c$ is a constant. Then again, some algorithms have complexity in a Big-O set that technically varies according to two variables, while in practice one of them is fixed. Still more, there may be multiple reasonable ways of measuring the complexity of one algorithm. For example, if the input is a number, your algorithm might be $O(n)$ in the value $n$ of the number but $O(2^b)$ in the bit size of the number. (Although in complexity theory per se, the bit size is usually the right parameter.)
All this is to say that the Big-O is an informal notation, hounded by imprecision, and you often have to use other context to understand what an author is saying.
As always, it is best to avoid confusion in your own writing; I suggest avoiding $= O$ and using $\in O$ or the English "…is in…" instead.
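To illustrate the value-versus-bit-size point above, here is a toy sketch (my own example; the function and numbers are purely illustrative):

    # A loop that runs n times is O(n) in the *value* n of its input,
    # but O(2^b) in the input's bit size b, since an integer with b bits
    # can be as large as 2^b - 1.
    def count_down(n: int) -> int:
        steps = 0
        while n > 0:
            n -= 1
            steps += 1
        return steps

    n = 1000
    print(count_down(n), n.bit_length())   # 1000 steps for a 10-bit input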
"$n^2=O(n^3)$" would usually be read as "$n$-squared is big-O of $n$-cubed" or "$n$-squared is order $n$-cubed."
– David Richerby
16 hours ago
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
58
down vote
accepted
Strictly speaking, $O(f(n))$ is a set of functions. So the value of $O(f(n))$ is simply the set of all functions that grow asymptotically not faster than $f(n)$. The notation $T(n) = O(f(n))$ is just a simplified way to write that $T(n) in O(f(n))$.
Note that this also clarifies some issues of the $O$ notation as it is normally used.
For example, $n^2 in O(n^3)$ but $n^3 notin O(n^2)$. So you can write $O(n^2) = O(n^3)$, but you cannot write $O(n^3) = O(n^2)$, even though one normally expects $=$ to represent an equivalence relation (in particular, symmetric).
Comments are not for extended discussion; this conversation has been moved to chat.
– Gilles♦
8 hours ago
add a comment |
up vote
58
down vote
accepted
Strictly speaking, $O(f(n))$ is a set of functions. So the value of $O(f(n))$ is simply the set of all functions that grow asymptotically not faster than $f(n)$. The notation $T(n) = O(f(n))$ is just a simplified way to write that $T(n) in O(f(n))$.
Note that this also clarifies some issues of the $O$ notation as it is normally used.
For example, $n^2 in O(n^3)$ but $n^3 notin O(n^2)$. So you can write $O(n^2) = O(n^3)$, but you cannot write $O(n^3) = O(n^2)$, even though one normally expects $=$ to represent an equivalence relation (in particular, symmetric).
Comments are not for extended discussion; this conversation has been moved to chat.
– Gilles♦
8 hours ago
add a comment |
up vote
58
down vote
accepted
up vote
58
down vote
accepted
Strictly speaking, $O(f(n))$ is a set of functions. So the value of $O(f(n))$ is simply the set of all functions that grow asymptotically not faster than $f(n)$. The notation $T(n) = O(f(n))$ is just a simplified way to write that $T(n) in O(f(n))$.
Note that this also clarifies some issues of the $O$ notation as it is normally used.
For example, $n^2 in O(n^3)$ but $n^3 notin O(n^2)$. So you can write $O(n^2) = O(n^3)$, but you cannot write $O(n^3) = O(n^2)$, even though one normally expects $=$ to represent an equivalence relation (in particular, symmetric).
Strictly speaking, $O(f(n))$ is a set of functions. So the value of $O(f(n))$ is simply the set of all functions that grow asymptotically not faster than $f(n)$. The notation $T(n) = O(f(n))$ is just a simplified way to write that $T(n) in O(f(n))$.
Note that this also clarifies some issues of the $O$ notation as it is normally used.
For example, $n^2 in O(n^3)$ but $n^3 notin O(n^2)$. So you can write $O(n^2) = O(n^3)$, but you cannot write $O(n^3) = O(n^2)$, even though one normally expects $=$ to represent an equivalence relation (in particular, symmetric).
edited yesterday
answered yesterday
Vincenzo
76939
76939
Comments are not for extended discussion; this conversation has been moved to chat.
– Gilles♦
8 hours ago
add a comment |
Comments are not for extended discussion; this conversation has been moved to chat.
– Gilles♦
8 hours ago
Comments are not for extended discussion; this conversation has been moved to chat.
– Gilles♦
8 hours ago
Comments are not for extended discussion; this conversation has been moved to chat.
– Gilles♦
8 hours ago
add a comment |
up vote
22
down vote
$O$ is a function
$$begin{align}
O : (mathbb{N}to mathbb{R}) &to mathbf{P}(mathbb{N}to mathbb{R})
\ f &mapsto O(f)
end{align}$$
i.e. it accepts a function $f$ and yields a set of functions that share the asymptotic bound of (at most) $f$. And strictly speaking the correct notation is thus
$$
(n mapsto T(n)) in O(nmapsto f(n))
$$
or short
$$
T in O(f)
$$
but it's customary in maths, science and CS to just use a variable somewhere in the expression to denote that you're considering functions of the argument $n$ on both sides. So $T(n) in O(f(n))$ is quite fine as well. $T(n) = O(f(n))$ is pretty much wrong though, as you suspected. It is very commonly used though, so definitely keep in mind what people mean when they write this.
I would advise against ever writing $T(n) = O(f(n))$, but opinions differ.
8
$T(n)=O(f(n)$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
15
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
I like your answer., +1. I'd like to suggest though that you clarify in the last sentence that although it's wrong everyone uses it to mean "in" instead of "equal to", unfortunately.
– Pedro A
yesterday
Comments are not for extended discussion; this conversation has been moved to chat.
– D.W.♦
23 hours ago
add a comment |
up vote
22
down vote
$O$ is a function
$$begin{align}
O : (mathbb{N}to mathbb{R}) &to mathbf{P}(mathbb{N}to mathbb{R})
\ f &mapsto O(f)
end{align}$$
i.e. it accepts a function $f$ and yields a set of functions that share the asymptotic bound of (at most) $f$. And strictly speaking the correct notation is thus
$$
(n mapsto T(n)) in O(nmapsto f(n))
$$
or short
$$
T in O(f)
$$
but it's customary in maths, science and CS to just use a variable somewhere in the expression to denote that you're considering functions of the argument $n$ on both sides. So $T(n) in O(f(n))$ is quite fine as well. $T(n) = O(f(n))$ is pretty much wrong though, as you suspected. It is very commonly used though, so definitely keep in mind what people mean when they write this.
I would advise against ever writing $T(n) = O(f(n))$, but opinions differ.
8
$T(n)=O(f(n)$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
15
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
I like your answer., +1. I'd like to suggest though that you clarify in the last sentence that although it's wrong everyone uses it to mean "in" instead of "equal to", unfortunately.
– Pedro A
yesterday
Comments are not for extended discussion; this conversation has been moved to chat.
– D.W.♦
23 hours ago
add a comment |
up vote
22
down vote
up vote
22
down vote
$O$ is a function
$$begin{align}
O : (mathbb{N}to mathbb{R}) &to mathbf{P}(mathbb{N}to mathbb{R})
\ f &mapsto O(f)
end{align}$$
i.e. it accepts a function $f$ and yields a set of functions that share the asymptotic bound of (at most) $f$. And strictly speaking the correct notation is thus
$$
(n mapsto T(n)) in O(nmapsto f(n))
$$
or short
$$
T in O(f)
$$
but it's customary in maths, science and CS to just use a variable somewhere in the expression to denote that you're considering functions of the argument $n$ on both sides. So $T(n) in O(f(n))$ is quite fine as well. $T(n) = O(f(n))$ is pretty much wrong though, as you suspected. It is very commonly used though, so definitely keep in mind what people mean when they write this.
I would advise against ever writing $T(n) = O(f(n))$, but opinions differ.
$O$ is a function
$$begin{align}
O : (mathbb{N}to mathbb{R}) &to mathbf{P}(mathbb{N}to mathbb{R})
\ f &mapsto O(f)
end{align}$$
i.e. it accepts a function $f$ and yields a set of functions that share the asymptotic bound of (at most) $f$. And strictly speaking the correct notation is thus
$$
(n mapsto T(n)) in O(nmapsto f(n))
$$
or short
$$
T in O(f)
$$
but it's customary in maths, science and CS to just use a variable somewhere in the expression to denote that you're considering functions of the argument $n$ on both sides. So $T(n) in O(f(n))$ is quite fine as well. $T(n) = O(f(n))$ is pretty much wrong though, as you suspected. It is very commonly used though, so definitely keep in mind what people mean when they write this.
I would advise against ever writing $T(n) = O(f(n))$, but opinions differ.
edited yesterday
answered yesterday
leftaroundabout
1,09559
1,09559
8
$T(n)=O(f(n)$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
15
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
I like your answer., +1. I'd like to suggest though that you clarify in the last sentence that although it's wrong everyone uses it to mean "in" instead of "equal to", unfortunately.
– Pedro A
yesterday
Comments are not for extended discussion; this conversation has been moved to chat.
– D.W.♦
23 hours ago
add a comment |
8
$T(n)=O(f(n)$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
15
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
I like your answer., +1. I'd like to suggest though that you clarify in the last sentence that although it's wrong everyone uses it to mean "in" instead of "equal to", unfortunately.
– Pedro A
yesterday
Comments are not for extended discussion; this conversation has been moved to chat.
– D.W.♦
23 hours ago
8
8
$T(n)=O(f(n)$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
$T(n)=O(f(n)$ is a completely standard use of notation so claiming that it's wrong is unhelpful. (As, IMO, is claiming that $O$ is a function; that's technically true, but it's not really a helpful way to think about it.)
– David Richerby
yesterday
15
15
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
@DavidRicherby some things are completely standard but shouldn't be. $T(n) = O(f(n))$ is one example. Sure it's still good to know what people mean by this (as the OP does already), but how is it not helpful to confirm that this notation is technically speaking bogus? Why would you use it? Even if the $=$ version isn't ambiguous, neither is the $in$ one, and the more people switch to that notation the better. It's always better to stick to what actually makes sense mathematically, unless it's much more awkward to write. $in$ is perfectly readable and easy to write.
– leftaroundabout
yesterday
I like your answer., +1. I'd like to suggest though that you clarify in the last sentence that although it's wrong everyone uses it to mean "in" instead of "equal to", unfortunately.
– Pedro A
yesterday
I like your answer., +1. I'd like to suggest though that you clarify in the last sentence that although it's wrong everyone uses it to mean "in" instead of "equal to", unfortunately.
– Pedro A
yesterday
Comments are not for extended discussion; this conversation has been moved to chat.
– D.W.♦
23 hours ago
Comments are not for extended discussion; this conversation has been moved to chat.
– D.W.♦
23 hours ago
add a comment |
up vote
7
down vote
Formally speaking, $O(f(n))$ is a the set of functions $g$ such that $g(n)leq k,f(n)$ for some constant $k$ and all large enough $n$. Thus, the most pedantically accurate way of writing it would be $T(n)in O(f(n))$. However, using $=$ instead of $in$ is completely standard, and $T(n)=O(f(n))$ just means $T(n)in O(f(n))$. This is essentially never ambiguous because we almost never manipulate the set $O(f(n))$.
In a sense, using equality makes $O(f(n))$ mean "some function $g$ such that $g(n)leq f,g(n)$ for all large enough $n$", and this means that you can write things like $f(n) = 3n + O(log n)$. Note that this is much more precise than, e.g., $f(n)=Theta(n)$ or $f(n)=O(n+log n)$.
You could also write $f(n) - 3n in O(log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = ldots = 3n + O(log n)$.
– leftaroundabout
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) in e^x(e^{2x}+O(x)) subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator whereas I prefer to lift $+$ and $cdot$ to also operate on sets.
– leftaroundabout
yesterday
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $mathord{in}O(h)$. So, if $f-g in O(h)$, one can write $f = g+mathord{in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $mathord{in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $dot O(h)$, $hat O(h)$...
– Michel Fioc
18 hours ago
add a comment |
up vote
7
down vote
Formally speaking, $O(f(n))$ is a the set of functions $g$ such that $g(n)leq k,f(n)$ for some constant $k$ and all large enough $n$. Thus, the most pedantically accurate way of writing it would be $T(n)in O(f(n))$. However, using $=$ instead of $in$ is completely standard, and $T(n)=O(f(n))$ just means $T(n)in O(f(n))$. This is essentially never ambiguous because we almost never manipulate the set $O(f(n))$.
In a sense, using equality makes $O(f(n))$ mean "some function $g$ such that $g(n)leq f,g(n)$ for all large enough $n$", and this means that you can write things like $f(n) = 3n + O(log n)$. Note that this is much more precise than, e.g., $f(n)=Theta(n)$ or $f(n)=O(n+log n)$.
You could also write $f(n) - 3n in O(log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = ldots = 3n + O(log n)$.
– leftaroundabout
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) in e^x(e^{2x}+O(x)) subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator whereas I prefer to lift $+$ and $cdot$ to also operate on sets.
– leftaroundabout
yesterday
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $mathord{in}O(h)$. So, if $f-g in O(h)$, one can write $f = g+mathord{in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $mathord{in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $dot O(h)$, $hat O(h)$...
– Michel Fioc
18 hours ago
add a comment |
up vote
7
down vote
up vote
7
down vote
Formally speaking, $O(f(n))$ is a the set of functions $g$ such that $g(n)leq k,f(n)$ for some constant $k$ and all large enough $n$. Thus, the most pedantically accurate way of writing it would be $T(n)in O(f(n))$. However, using $=$ instead of $in$ is completely standard, and $T(n)=O(f(n))$ just means $T(n)in O(f(n))$. This is essentially never ambiguous because we almost never manipulate the set $O(f(n))$.
In a sense, using equality makes $O(f(n))$ mean "some function $g$ such that $g(n)leq f,g(n)$ for all large enough $n$", and this means that you can write things like $f(n) = 3n + O(log n)$. Note that this is much more precise than, e.g., $f(n)=Theta(n)$ or $f(n)=O(n+log n)$.
Formally speaking, $O(f(n))$ is a the set of functions $g$ such that $g(n)leq k,f(n)$ for some constant $k$ and all large enough $n$. Thus, the most pedantically accurate way of writing it would be $T(n)in O(f(n))$. However, using $=$ instead of $in$ is completely standard, and $T(n)=O(f(n))$ just means $T(n)in O(f(n))$. This is essentially never ambiguous because we almost never manipulate the set $O(f(n))$.
In a sense, using equality makes $O(f(n))$ mean "some function $g$ such that $g(n)leq f,g(n)$ for all large enough $n$", and this means that you can write things like $f(n) = 3n + O(log n)$. Note that this is much more precise than, e.g., $f(n)=Theta(n)$ or $f(n)=O(n+log n)$.
answered yesterday
David Richerby
65.4k1597186
65.4k1597186
You could also write $f(n) - 3n in O(log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = ldots = 3n + O(log n)$.
– leftaroundabout
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) in e^x(e^{2x}+O(x)) subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator whereas I prefer to lift $+$ and $cdot$ to also operate on sets.
– leftaroundabout
yesterday
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $mathord{in}O(h)$. So, if $f-g in O(h)$, one can write $f = g+mathord{in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $mathord{in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $dot O(h)$, $hat O(h)$...
– Michel Fioc
18 hours ago
add a comment |
You could also write $f(n) - 3n in O(log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = ldots = 3n + O(log n)$.
– leftaroundabout
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) in e^x(e^{2x}+O(x)) subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator whereas I prefer to lift $+$ and $cdot$ to also operate on sets.
– leftaroundabout
yesterday
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $mathord{in}O(h)$. So, if $f-g in O(h)$, one can write $f = g+mathord{in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $mathord{in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $dot O(h)$, $hat O(h)$...
– Michel Fioc
18 hours ago
You could also write $f(n) - 3n in O(log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = ldots = 3n + O(log n)$.
– leftaroundabout
yesterday
You could also write $f(n) - 3n in O(log n)$. Though I admit that it can be handy to conclude a multiple-step computation with $f(n) = ldots = 3n + O(log n)$.
– leftaroundabout
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
The rearrangement only works in standalone statements. It's much more common in the middle of calculations, where that kind of thing doesn't work, and where multiple functions get absorbed together into the Landau notation. (Stuff like $f(x) = e^{-x}(e^{2x}+O(x)) = e^x+o(1)$).
– David Richerby
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) in e^x(e^{2x}+O(x)) subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator whereas I prefer to lift $+$ and $cdot$ to also operate on sets.
– leftaroundabout
yesterday
I find computations like that jarring; those equals signs aren't bidirectional anymore. I'm not sure there's more of a problem with writing $f(x) in e^x(e^{2x}+O(x)) subset e^x + o(1)$. I suppose that's also abuse of notation; basically you're overloading the $=$ operator whereas I prefer to lift $+$ and $cdot$ to also operate on sets.
– leftaroundabout
yesterday
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $mathord{in}O(h)$. So, if $f-g in O(h)$, one can write $f = g+mathord{in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $mathord{in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $dot O(h)$, $hat O(h)$...
– Michel Fioc
18 hours ago
A possibility would be to use slightly different notations for the set of asymptotic functions, $O(h)$, and for an undetermined element of this set, say $mathord{in}O(h)$. So, if $f-g in O(h)$, one can write $f = g+mathord{in}O(h)$ instead of the ambiguous $f = g+O(h)$. You can then write without problem $mathord{in}O(h) = f-g$. Other possible notations for an unspecified element of $O(h)$ might be $dot O(h)$, $hat O(h)$...
– Michel Fioc
18 hours ago
add a comment |
up vote
7
down vote
In The Algorithm Design Manual [1], you can find a paragraph about this issue:
The Big Oh notation [including $O$, $Omega$ and $Theta$] provides for a rough notion of equality when comparing
functions. It is somewhat jarring to see an expression like $n^2 = O(n^3)$, but its
meaning can always be resolved by going back to the definitions in terms of upper
and lower bounds. It is perhaps most instructive to read the " = " here as meaning
"one of the functions that are". Clearly, $n^2$ is one of functions that are $O(n^3)$.
Strictly speaking (as noted by David Richerby's comment), $Theta$ gives you a rough notion of equality, $O$ a rough notion of less-than-or-equal-to, and $Omega$ and rough notion of greater-than-or-equal-to.
Nonetheless, I agree with Vincenzo's answer: you can simply interpret $O(f(n))$ as a set of functions and the = symbol as a set membership symbol $in$.
[1] Skiena, S. S. The Algorithm Design Manual (Second Edition). Springer (2008)
add a comment |
up vote
7
down vote
In The Algorithm Design Manual [1], you can find a paragraph about this issue:
The Big Oh notation [including $O$, $Omega$ and $Theta$] provides for a rough notion of equality when comparing
functions. It is somewhat jarring to see an expression like $n^2 = O(n^3)$, but its
meaning can always be resolved by going back to the definitions in terms of upper
and lower bounds. It is perhaps most instructive to read the " = " here as meaning
"one of the functions that are". Clearly, $n^2$ is one of functions that are $O(n^3)$.
Strictly speaking (as noted by David Richerby's comment), $Theta$ gives you a rough notion of equality, $O$ a rough notion of less-than-or-equal-to, and $Omega$ and rough notion of greater-than-or-equal-to.
Nonetheless, I agree with Vincenzo's answer: you can simply interpret $O(f(n))$ as a set of functions and the = symbol as a set membership symbol $in$.
[1] Skiena, S. S. The Algorithm Design Manual (Second Edition). Springer (2008)
add a comment |
up vote
7
down vote
up vote
7
down vote
In The Algorithm Design Manual [1], you can find a paragraph about this issue:
The Big Oh notation [including $O$, $Omega$ and $Theta$] provides for a rough notion of equality when comparing
functions. It is somewhat jarring to see an expression like $n^2 = O(n^3)$, but its
meaning can always be resolved by going back to the definitions in terms of upper
and lower bounds. It is perhaps most instructive to read the " = " here as meaning
"one of the functions that are". Clearly, $n^2$ is one of functions that are $O(n^3)$.
Strictly speaking (as noted by David Richerby's comment), $Theta$ gives you a rough notion of equality, $O$ a rough notion of less-than-or-equal-to, and $Omega$ and rough notion of greater-than-or-equal-to.
Nonetheless, I agree with Vincenzo's answer: you can simply interpret $O(f(n))$ as a set of functions and the = symbol as a set membership symbol $in$.
[1] Skiena, S. S. The Algorithm Design Manual (Second Edition). Springer (2008)
In The Algorithm Design Manual [1], you can find a paragraph about this issue:
The Big Oh notation [including $O$, $Omega$ and $Theta$] provides for a rough notion of equality when comparing
functions. It is somewhat jarring to see an expression like $n^2 = O(n^3)$, but its
meaning can always be resolved by going back to the definitions in terms of upper
and lower bounds. It is perhaps most instructive to read the " = " here as meaning
"one of the functions that are". Clearly, $n^2$ is one of functions that are $O(n^3)$.
Strictly speaking (as noted by David Richerby's comment), $Theta$ gives you a rough notion of equality, $O$ a rough notion of less-than-or-equal-to, and $Omega$ and rough notion of greater-than-or-equal-to.
Nonetheless, I agree with Vincenzo's answer: you can simply interpret $O(f(n))$ as a set of functions and the = symbol as a set membership symbol $in$.
[1] Skiena, S. S. The Algorithm Design Manual (Second Edition). Springer (2008)
edited yesterday
answered yesterday
Mario Cervera
2,40411120
2,40411120
add a comment |
add a comment |
up vote
4
down vote
Usually, statements like
$$f = O(g)$$
can be interpreted as
$$ text{there exists } h in O(g) text{ such that }f = h,. $$
This becomes more useful in contexts like David Richerby mentions, where we write $f(n) = n^3 + O(n^2)$ to mean "there exists $g(n) in O(n^2)$ such that $f(n) = n^2 + g(n)$."
I find this existential quantifier interpretation so useful that I am tempted to write things like
$$ f(n) leq O(n^3) $$
which some will find an even more egregious style violation, but it is just a space-saving way of writing "there exists $C$ such that $f(n) leq C n^3$."
add a comment |
up vote
4
down vote
Usually, statements like
$$f = O(g)$$
can be interpreted as
$$ text{there exists } h in O(g) text{ such that }f = h,. $$
This becomes more useful in contexts like David Richerby mentions, where we write $f(n) = n^3 + O(n^2)$ to mean "there exists $g(n) in O(n^2)$ such that $f(n) = n^2 + g(n)$."
I find this existential quantifier interpretation so useful that I am tempted to write things like
$$ f(n) leq O(n^3) $$
which some will find an even more egregious style violation, but it is just a space-saving way of writing "there exists $C$ such that $f(n) leq C n^3$."
add a comment |
up vote
4
down vote
up vote
4
down vote
Usually, statements like
$$f = O(g)$$
can be interpreted as
$$ text{there exists } h in O(g) text{ such that }f = h,. $$
This becomes more useful in contexts like David Richerby mentions, where we write $f(n) = n^3 + O(n^2)$ to mean "there exists $g(n) in O(n^2)$ such that $f(n) = n^2 + g(n)$."
I find this existential quantifier interpretation so useful that I am tempted to write things like
$$ f(n) leq O(n^3) $$
which some will find an even more egregious style violation, but it is just a space-saving way of writing "there exists $C$ such that $f(n) leq C n^3$."
Usually, statements like
$$f = O(g)$$
can be interpreted as
$$ text{there exists } h in O(g) text{ such that }f = h,. $$
This becomes more useful in contexts like David Richerby mentions, where we write $f(n) = n^3 + O(n^2)$ to mean "there exists $g(n) in O(n^2)$ such that $f(n) = n^2 + g(n)$."
I find this existential quantifier interpretation so useful that I am tempted to write things like
$$ f(n) leq O(n^3) $$
which some will find an even more egregious style violation, but it is just a space-saving way of writing "there exists $C$ such that $f(n) leq C n^3$."
edited yesterday
David Richerby
65.4k1597186
65.4k1597186
answered yesterday
usul
3,3881421
3,3881421
add a comment |
add a comment |
up vote
4
down vote
Prologue: The big $O$ notation is a classic example of the power and ambiguity of some notations as part of language loved by human mind. No matter how much confusion it have caused, it remains the choice of notation to convey the ideas that we can easily identify and agree to efficiently.
I totally understand what big $O$ notation means. My issue is when we say $T(n)=O(f(n))$ , where $T(n)$ is running time of an algorithm on input of size $n$.
Sorry, but you do not have an issue if you understand the meaning of big $O$ notation.
I understand semantics of it. But $T(n)$ and $O(f(n))$ are two different things. $T(n)$ is an exact number, But $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$, if one asks you what's the value of $O(f(n))$, what would be your answer? There is no answer.
What is important is the semantics. What is important is (how) people can agree easily on (one of) its precise interpretations that will describe asymptotic behavior or time or space complexity we are interested in. The default precise interpretation/definition of $T(n)=O(f(n))$ is, as translated from Wikipedia,
$T$ is a real or complex valued function and $f$ is a real valued function, both defined on some unbounded subset of the real positive numbers, such that $f(n)$ is strictly positive for all large enough values of $n$. For for all sufficiently large values of $n$, the absolute value of $T(n)$ is at most a positive constant multiple of $f(n)$. That is, there exists a positive real number $M$ and a real number $n_0$ such that
${text{ for all }ngeq n_{0}, |T(n)|leq ;Mf(n){text{ for all }}ngeq n_{0}.}$
Please note this interpretation is considered the definition. All other interpretations and understandings, which may help you greatly in various ways, are secondary and corollary. Everyone (well, at least every answerer here) agrees to this interpretation/definition/semantics. As long as you can apply this interpretation, you are probably good most of time. Relax and be comfortable. You do not want to think too much, just as you do not think too much about some of the irregularity of English or French or most of natural languages. Just use the notation by that definition.
$T(n)$ is an exact number, But $O(f(n))$ is not a function that spits out a number, so technically we can't say $T(n)$ equals $O(f(n))$, if one asks you what's the value of $O(f(n))$, what would be your answer? There is no answer.
Indeed, there could be no answer, since the question is ill-posed. $T(n)$ does not mean an exact number. It is meant to stand for a function whose name is $T$ and whose formal parameter is $n$ (which is sort of bounded to the $n$ in $f(n)$). It is just as correct and even more so if we write $T=O(f)$. If $T$ is the function that maps $n$ to $n^2$ and $f$ is the function that maps $n$ to $n^3$, it is also conventional to write $f(n)=O(n^3)$ or $n^2=O(n^3)$. You are right that the equal sign does not mean equality in its ordinary sense. (Another example of abuse of the equality sign is the usage of equal sign to mean assignment in most programming languages, instead of more cumbersome :=
as in some languages.)
If we are only concerned about that one equality (I am starting to abuse language as well. It is not an equality; however, it is an equality since there is an equality sign in the notation or it could be construed as some kind of equality), $T(n)=O(f(n))$, this answer is done.
However, the question actually goes on. What does it mean by, for example, $f(n)=3n+O(log n)$? This equality is not covered by the definition above. We would like to introduce another convention, the placeholder convention. Here is the full statement of placeholder convention as stated in Wikipedia.
In more complicated usage, $O(cdots)$ can appear in different places in an equation, even several times on each side. For example, the following are true for $nto infty$.
$(n+1)^{2}=n^{2}+O(n)$
$(n+O(n^{1/2}))(n+O(log n))^{2}=n^{3}+O(n^{5/2})$
$n^{O(1)}=O(e^{n})$
The meaning of such statements is as follows: for any functions which satisfy each $O(cdots)$ on the left side, there are some functions satisfying each $O(cdots)$ on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function $f(n) = O(1)$, there is some function $g(n) = O(e^n)$ such that $n^{f(n)} = g(n)$."
You may want to check here for another example of placeholder convention in action.
You might have noticed by now that I have not used the set-theoretic explanation of the big $O$-notation. All I have done is just to show even without that set-theoretic explanation such as "$O(f(n))$ is a set of functions", we can still understand big $O$-notation fully and perfectly. If you find that set-theoretic explanation useful, please go ahead anyway.
You can check the section in "asymptotic notation" of CLRS for a more detailed analysis and usage pattern for the family of notations for asymptotic behavior, such as big $Theta$, $Omega$, small $o$, small $omega$, multivariable usage and more. The Wikipedia entry is also a pretty good reference.
Lastly, there is some inherent ambiguity/controversy with big $O$ notation with multiple variables,1 and 2. You might want to think twice when you are using those.
add a comment |
up vote
4
down vote
answered yesterday
Apass.Jack
up vote
0
down vote
Many other posters have explained that the Big-O can be thought of as denoting a set of functions, and that the notation $n^2 = O(n^3)$ indicates that $n^2$ (as a function of $n$) is in the set denoted by $O(n^3)$ (again considering $n$ as the parameter). In English text, you may prefer to write "$n^2$ is in $O(n^3)$" to avoid confusion.
Although the notation can be confusing, it may help to think of the $O$ and the $=$ as parts of the same notation, that is, to treat $= O$ as if it were one symbol. It is little different from what we do when we write >= in a programming language: two symbols, adjacent, become one in our eyes.
Another tricky aspect of the notation is that the variable acting as the parameter is not explicitly identified, or bound, the way it would be in a function declaration or a lambda notation. This can be especially confusing when there are two variables involved, as in $O(mn)$ or even more so in an expression like $O(n^c)$ since it may be implied that $c$ is a constant. Then again, some algorithms have complexity in a Big-O set that technically varies according to two variables, while in practice one of them is fixed. Still more, there may be multiple reasonable ways of measuring the complexity of one algorithm. For example, if the input is a number, your algorithm might be $O(n)$ in the value $n$ of the number but $O(2^b)$ in the bit size of the number. (Although in complexity theory per se, the bit size is usually the right parameter.)
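To make the value-versus-bit-size point concrete, here is a small hypothetical sketch (my own illustration, not part of the original answer): trial-division primality testing takes roughly $\sqrt{n}$ steps measured in the value $n$, but an input with $b$ bits can be as large as $2^b - 1$, so the very same loop is exponential in $b$.
def is_prime_trial_division(n: int) -> bool:
    # Naive primality test: try every candidate divisor d with d*d <= n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:  # about sqrt(n) iterations, i.e. O(sqrt(n)) in the value n
        if n % d == 0:
            return False
        d += 1
    return True  # but sqrt(n) is roughly 2^(b/2) in terms of the bit size b
print(is_prime_trial_division(97))  # True
print(is_prime_trial_division(91))  # False, since 91 = 7 * 13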
All this is to say that the Big-O is an informal notation, hounded by imprecision, and you often have to use other context to understand what an author is saying.
As always, it is best to avoid confusion in your own writing, so I suggest avoiding the $= O$ form and writing $\in O$ instead, or the English "…is in…".
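As a small worked instance of that membership reading (using the usual constant-and-threshold definition of $O$): $n^2 \in O(n^3)$ because, taking $M = 1$ and $n_0 = 1$, we have $n^2 \leq M\, n^3$ for every $n \geq n_0$; by contrast, $n^3 \notin O(n^2)$, since $n^3 \leq M\, n^2$ would require $n \leq M$, which fails for all $n > M$ no matter which constant $M$ is chosen.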
answered 18 hours ago
ezrakilty
"$n^2=O(n^3)$" would usually be read as "$n$-squared is big-O of $n$-cubed" or "$n$-squared is order $n$-cubed."
– David Richerby
16 hours ago