

15

What is meant by $\log^{O(1)} n$?

I am aware of big O notation, but this notation does not make sense to me. I also cannot find anything about it, because no search engine interprets it correctly.

For a bit of context, the sentence in which I found it reads "[...] we call a function [efficient] if it uses $O(\log n)$ space and at most $\log^{O(1)} n$ per item."


1
I agree that one should not write things like this, unless one is very clear about what it means (and tells the reader what that is) and uses it consistently with the same rules.
Raphael

1
Yes, instead one should write it $(\log(n))^{O(1)}$.

1
@RickyDemer That's not what Raphael means. $\log^{blah} n$ means exactly $(\log n)^{blah}$.
David Richerby

4
@Raphael This is standard notation in the field. Anyone in the know certainly knows what it means.
Yuval Filmus

1
@YuvalFilmus I think the various disagreeing answers are conclusive evidence that your claim is wrong, and that one should indeed refrain from using that notation.
Raphael

Answers:


16

You need to ignore for a moment the strong feeling that the "$O$" is in the wrong place and plough on with the definition regardless. $f(n) = \log^{O(1)} n$ means that there exist constants $k$ and $n_0$ such that, for all $n \geq n_0$, $f(n) \leq \log^{k \cdot 1} n = \log^k n$.

Note that $\log^k n$ means $(\log n)^k$. Functions of the form $\log^{O(1)} n$ are often called polylogarithmic and you might hear people say, "$f$ is polylog $n$."

You'll notice that it's easy to prove that $2n = O(n)$, since $2n \leq kn$ for all $n \geq 0$, where $k = 2$. You might be wondering if $2\log n = \log^{O(1)} n$. The answer is yes since, for large enough $n$, $\log n \geq 2$, so $2\log n \leq \log^2 n$ for large enough $n$.
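
To spell out that last step against the definition above (a small worked check of my own, assuming base-2 logarithms so that $n_0 = 4$ already gives $\log n \geq 2$):

$$2\log n \;\leq\; (\log n)(\log n) \;=\; \log^2 n \qquad \text{for all } n \geq 4,$$

so the definition is met with $k = 2$ and $n_0 = 4$.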

On a related note, you'll often see polynomials written as $n^{O(1)}$: same idea.


This is not supported by the common placeholder convention.
Raphael

I retract my comment: you write in all the important places, which is sufficient.
Raphael

@Raphael OK. I hadn't had time to check it yet but my feeling was you might be ordering quantifiers differently from the way I am. I'm not actually sure we're defining the same class of functions.
David Richerby

I think you are defining my (2), and Tom defines $\bigcup_{c \in \mathbb{R}_{>0}} \{\log^c n\}$.
Raphael

9

This is an abuse of notation that can be made sense of by the generally accepted placeholder convention: whenever you find a Landau term $O(f)$, replace it (in your mind, or on the paper) by an arbitrary function $g \in O(f)$.

So if you find

$f(n) = \log^{O(1)} n$

you are to read

$f(n) = \log^{g(n)} n$ for some $g \in O(1)$. &nbsp;&nbsp;(1)

Note the difference from saying "$\log$ to the power of some constant": $g = n \mapsto 1/n$ is a distinct possibility.

Warning: The author may be employing even more abuse of notation and want you to read

$f(n) \in O(\log^{g(n)} n)$ for some $g \in O(1)$. &nbsp;&nbsp;(2)

Note the difference between (1) and (2); while it works out to define the same set of positive-valued functions here, this does not always work. Do not move $O$ around in expressions without care!
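
As an illustration of reading (1), for a positive-valued $f$ one can solve for the placeholder explicitly (a sketch of my own, assuming base-2 logarithms and $n$ large enough that $\log\log n > 0$):

$$f(n) = \log^{g(n)} n \quad\Longleftrightarrow\quad g(n) = \frac{\log f(n)}{\log\log n},$$

so $f \in \log^{O(1)} n$ under (1) exactly when this particular $g$ is bounded. For instance, any $f(n) = c\,\log^k n$ with constants $c, k > 0$ gives $g(n) = k + \frac{\log c}{\log\log n}$, which is bounded, so such functions satisfy (1) as well as (2).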


3
I think what makes it tick is that $x \mapsto \log^x(n)$ is monotonic and sufficiently surjective for each fixed $n$. Monotonic makes the position of the $O$ equivalent and gives you (2) ⇒ (1); going the other way requires $g$ to exist, which could fail if $f(n)$ is outside the range of the function. If you want to point out that moving $O$ around is dangerous and doesn't cover "wild" functions, fine, but in this specific case it's ok for the kind of functions that represent costs.
Gilles 'SO- stop being evil'

@Gilles I weakened the statement to a general warning.
Raphael

1
This answer has been heavily edited, and now I am confused: do you now claim that (1) and (2) are effectively the same?
Oebele

@Oebele As far as I can tell, they are not in general, but here.
Raphael

But, something like $3\log^2 n$ does not match (1) but does match (2), right? Or am I just being silly now?
Oebele

6

It means that the function grows at most as $\log$ to the power of some constant, i.e. $\log^2(n)$ or $\log^5(n)$ or $\log^{99999}(n)$...


This can be used when the function growth is known to be bounded by some constant power of the $\log$, but the particular constant is unknown or left unspecified.
Yves Daoust

This is not supported by the common placeholder convention.
Raphael

2

"At most logO(1)nlogO(1)n" means that there is a constant cc such that what is being measured is O(logcn)O(logcn).

In a more general context, $f(n) \in \log^{O(1)} n$ is equivalent to the statement that there exist (possibly negative) constants $a$ and $b$ such that $f(n) \in O(\log^a n)$ and $f(n) \in \Omega(\log^b n)$.
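
To make the "possibly negative" part concrete, a small example of my own:

$$\frac{1}{\log n} = \log^{-1} n \in \log^{O(1)} n,$$

since the constant exponent $-1$ is certainly in $O(1)$; in the characterization above one can take $a = b = -1$.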

It is easy to overlook the Ω(logbn)Ω(logbn) lower bound. In a setting where that would matter (which would be very uncommon if you're exclusively interested in studying asymptotic growth), you shouldn't have complete confidence that the author actually meant the lower bound, and would have to rely on the context to make sure.


The literal meaning of the notation $\log^{O(1)} n$ is doing arithmetic on the family of functions, resulting in the family of all functions $\log^{g(n)} n$, where $g(n) \in O(1)$. This works in pretty much the same way as how multiplying $O(g(n))$ by $h(n)$ results in $O(g(n)\,h(n))$, except that you get a result that isn't expressed so simply.


Since the details of the lower bound are probably in unfamiliar territory, it's worth looking at some counterexamples. Recall that any $g(n) \in O(1)$ is bounded in magnitude; that is, there is a constant $c$ such that, for all sufficiently large $n$, $|g(n)| < c$.

When looking at asymptotic growth, usually only the upper bound $g(n) < c$ matters, since, e.g., you already know the function is positive. However, in full generality you have to pay attention to the lower bound $g(n) > -c$.

This means, contrary to more typical uses of big-oh notation, functions that decrease too rapidly can fail to be in $\log^{O(1)} n$; for example,

$$\frac{1}{n} = \log^{-(\log n)/(\log\log n)} n \notin \log^{O(1)} n$$

because

$$-\frac{\log n}{\log\log n} \notin O(1).$$

The exponent here grows in magnitude too rapidly to be bounded by $O(1)$.
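
A quick check of that exponent (my own working, assuming $n$ is large enough that $\log\log n > 0$): write $\frac{1}{n} = \log^{x(n)} n$, take logarithms of both sides, and solve for $x(n)$:

$$-\log n = x(n)\,\log\log n \quad\Longrightarrow\quad x(n) = -\frac{\log n}{\log\log n},$$

and since $\log\log n$ grows much more slowly than $\log n$, the ratio $\frac{\log n}{\log\log n}$ is unbounded, so no constant bounds $|x(n)|$.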

A counterexample of a somewhat different sort is that $-1 \notin \log^{O(1)} n$.


Can't I just take $b = 0$ and make your claimed lower bound go away?
David Richerby

1
@DavidRicherby No, $b = 0$ still says that $f$ is bounded below. Hurkyl: why isn't $f(n) = 1/n$ in $\log^{O(1)} n$?
Gilles 'SO- stop being evil'

@Gilles: More content added!

@Gilles OK, sure, it's bounded below by 1. Which is no bound at all for "most" applications of Landau notation in CS.
David Richerby

1) Your "move around OO" rule does not always work, and I don't think "at most" usually has that meaning; it's just redundant. 2) Never does OO imply a lower bound. That's when you use ΘΘ. 3) If and how negative functions are dealt with by a given definition of OO (even without abuse of notation) is not universally clear. Most definitions (in analysis of algorithms) exclude them. You seem to assume a definition that bounds the absolute value, which is fine.
Raphael
Licensed under cc by-sa 3.0 with attribution required.