1994-06-18 - Re: O(f(x))

Header Data

From: Jonathan Rochkind <jrochkin@cs.oberlin.edu>
To: cei@access.digex.net
Message Hash: c09325e6f4c85fa1f3e36c7d1609c80186f3ee50599b4563bfe19f6584480acb
Message ID: <199406181801.OAA28517@cs.oberlin.edu>
Reply To: N/A
UTC Datetime: 1994-06-18 18:04:50 UTC
Raw Date: Sat, 18 Jun 94 11:04:50 PDT

Raw message

From: Jonathan Rochkind <jrochkin@cs.oberlin.edu>
Date: Sat, 18 Jun 94 11:04:50 PDT
To: cei@access.digex.net
Subject: Re:  O(f(x))
Message-ID: <199406181801.OAA28517@cs.oberlin.edu>
MIME-Version: 1.0
Content-Type: text/plain


> The way *I* learned it was like this: g(x) = o(f(x)) means that
> g(x)/f(x) -> 0 (as x goes to some specified limit); g(x) = O(f(x))
> means that |g(x)/f(x)| is bounded (as x goes to some limit).
> 
> In other words: a function that is o(f(x)) is of lower order than
> f(x), while a function that is O(f(x)) is of no higher order than
> f(x).
 
Well, close anyway. Okay, here's straight out of my discrete math textbook:
"A useful way to check whether f [is a member of] O(g) is to look
at the limit:
    lim (as n -> infinity)  f(n)/g(n)
In other words, we look at the _asymptotic_ behavior of f and g. If this
limit exists (in practice it usually does) and is a finite number (possibly
0), we can conclude that f [is a member of] O(g). If this limit is
infinity, then f [is not a member of] O(g). For example,
7n**3 + 100n - 3 [is a member of] O(n**3), because the limit of the
ratio of these functions, as n -> infinity, is the finite number 7. In fact,
if the limit is a _nonzero_ number, as in this case, then O(f) = O(g)."
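
Here's a little Python sketch of my own (not from the textbook) that
shows that limit test in action: just print the ratio f(n)/g(n) for
growing n and watch it settle toward 7.

    # My own illustration of the textbook's limit test: as n grows,
    # f(n)/g(n) should approach a finite number (here, 7), which is
    # the evidence that f is a member of O(g).

    def f(n):
        return 7 * n**3 + 100 * n - 3

    def g(n):
        return n**3

    for n in (10, 100, 1000, 10000):
        print(n, f(n) / g(n))   # ratio heads toward 7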
 
Okay, end of the textbook quote. What all this stuff is used for is just
comparing the running time of different algorithms. If you've got an
algorithm whose running time varies with the size of the input n, according
to the function 7n**3 + 100n - 3, then this is _basically_ the same as if
it varied according to n**3. Now, according to the definition of "big-oh
notation", which is what this is called, we could also say that function
was an element of O(n**4), or O(n**20), or even O(3**n). So what
big-oh notation really means is that function f is basically the same
as, or better than, function g. But in practice we pick the "quickest"
simple function g. So we call the functions (5n**4 + 4), (32n**4 + 43n),
and (n**4 + n**3 + n**2) elements of O(n**4), which means that algorithms
whose running times are described by those functions are all
about the same speed, and are all about the same speed as n**4, too.
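
If you want to see that concretely, here's another quick sketch of my
own: the ratio of each of those functions to n**4 settles down to a
constant (5, 32, and 1 respectively), which is exactly what membership
in O(n**4) requires.

    # My own sketch: each function stays within a constant factor
    # of n**4, so each is an element of O(n**4).

    funcs = {
        "5n**4 + 4":          lambda n: 5 * n**4 + 4,
        "32n**4 + 43n":       lambda n: 32 * n**4 + 43 * n,
        "n**4 + n**3 + n**2": lambda n: n**4 + n**3 + n**2,
    }

    for name, fn in funcs.items():
        ratios = [fn(n) / n**4 for n in (10, 100, 1000)]
        print(name, ratios)   # heads toward 5, 32, and 1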
 
Furthermore, any function which is O(n**k) for some constant k is called
_polynomial_. A polynomial algorithm can still be slow if the exponent is
big. Better is one which is an element of O(n log(n)), or even O(n), which
is called _linear_.
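
One more sketch of my own to drive that home: count how fast the step
counts blow up for a cubic algorithm next to an n log n and a linear one.

    # My own comparison of growth rates: the cubic column explodes
    # while the n log n and linear columns stay manageable.

    import math

    print("n", "n**3", "n*log2(n)", "n")
    for n in (10, 100, 1000, 10000):
        print(n, n**3, round(n * math.log2(n)), n)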
 
There ends the lesson. :)




