How to find 5 repeated values in $O(n)$ time?


15

Suppose you have an array of size $n \geq 6$ containing integers from $1$ to $n-5$, inclusive, with exactly five repeated. I need to propose an algorithm that can find the repeated numbers in $O(n)$ time. I cannot, for the life of me, think of anything. I think sorting, at best, would be $O(n\log n)$? Then traversing the array would be $O(n)$, resulting in $O(n^2\log n)$. However, I'm not really sure whether sorting is necessary, since I've seen some tricky things done with linked lists, queues, stacks, etc.


16
$O(n\log n) + O(n)$ is not $O(n^2\log n)$. It is $O(n\log n)$. It would be $O(n^2\log n)$ if you performed the sort $n$ times.
Fund Monica's Lawsuit


1
@leftaroundabout This algorithm is $O(kn)$, where $n$ is the size of the array and $k$ is the size of the input set. Since $k = n - \text{constant}$, this algorithm runs in $O(n^2)$.
Roman Gräf

4
@RomanGräf It seems the actual situation is this: the algorithm works in $O(\log k \cdot n)$, where $k$ is the size of the domain. So for a problem like the OP's, it comes down to the same thing whether you use such an algorithm on the $n$-sized domain or a traditional $O(n\log n)$ algorithm on a domain of unbounded size. Makes sense, too.
leftaroundabout

5
For $n = 6$, the only allowed number is $1$, according to your description. But then it would have to be repeated six, not five, times.
Alex Reinking

Answers:


22

You can create an additional array $B$ of size $n$. Initially set all elements of the array to $0$. Then loop through the input array $A$ and increase $B[A[i]]$ by $1$ for each $i$. After that you simply check the array $B$: loop over $A$ and if $B[A[i]] > 1$ then $A[i]$ is repeated. You solve it in $O(n)$ time at the cost of memory, which is $O(n)$, and because your integers are between $1$ and $n-5$.
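
For concreteness, here is a minimal Python sketch of this counting approach; the function name and the choice of a plain list for $B$ are mine, not part of the answer:

def find_repeated(A):
    """Return the five repeated values, assuming A holds n integers from 1..n-5."""
    n = len(A)
    B = [0] * (n + 1)          # B[v] counts occurrences of value v; index 0 unused
    for v in A:
        B[v] += 1
    # any value counted more than once is one of the five duplicates
    return [v for v in range(1, n + 1) if B[v] > 1]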


26

The solution in fade2black's answer is the standard one, but it uses $O(n)$ space. You can improve this to $O(1)$ space as follows:

  1. Let the array be $A[1], \ldots, A[n]$. For $d = 1, \ldots, 5$, compute $\sigma_d = \sum_{i=1}^n A[i]^d$.
  2. Compute $\tau_d = \sigma_d - \sum_{i=1}^{n-5} i^d$ (you can use the well-known formulas to compute the latter sum in $O(1)$). Note that $\tau_d = m_1^d + \cdots + m_5^d$, where $m_1, \ldots, m_5$ are the repeated numbers.
  3. Compute the polynomial $P(t) = (t - m_1)\cdots(t - m_5)$. The coefficients of this polynomial are symmetric functions of $m_1, \ldots, m_5$ which can be computed from $\tau_1, \ldots, \tau_5$ in $O(1)$.
  4. Find all roots of the polynomial $P(t)$ by trying all $n-5$ possibilities.

This algorithm assumes the RAM machine model, in which basic arithmetic operations on $O(\log n)$-bit words take $O(1)$ time.
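
A hedged Python sketch of this procedure is below. It is my own illustration, not part of the answer: it recovers the coefficients of $P(t)$ from $\tau_1, \ldots, \tau_5$ via Newton's identities and then tests every candidate root; for simplicity it sums $1^d + \cdots + (n-5)^d$ directly instead of using the $O(1)$ closed-form formulas, and it relies on Python's arbitrary-precision integers rather than the word-RAM model assumed above.

def find_repeated_constant_space(A):
    n, d = len(A), 5
    # power sums of the array: sigma_k = A[0]^k + ... + A[n-1]^k
    sigma = [sum(a ** k for a in A) for k in range(1, d + 1)]
    # tau_k = sigma_k - (1^k + ... + (n-5)^k) = m_1^k + ... + m_5^k
    tau = [sigma[k - 1] - sum(i ** k for i in range(1, n - d + 1))
           for k in range(1, d + 1)]
    # Newton's identities: k * e_k = sum_{j=1..k} (-1)^(j-1) * e_{k-j} * tau_j
    e = [1] + [0] * d
    for k in range(1, d + 1):
        e[k] = sum((-1) ** (j - 1) * e[k - j] * tau[j - 1]
                   for j in range(1, k + 1)) // k
    # P(t) = t^5 - e_1 t^4 + e_2 t^3 - e_3 t^2 + e_4 t - e_5; try all n-5 candidates
    def P(t):
        return sum((-1) ** k * e[k] * t ** (d - k) for k in range(d + 1))
    return [m for m in range(1, n - d + 1) if P(m) == 0]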


Another way to formulate this solution is along the following lines:

  1. Calculate $x_1 = \sum_{i=1}^n A[i]$, and deduce $y_1 = m_1 + \cdots + m_5$ using the formula $y_1 = x_1 - \sum_{i=1}^{n-5} i$.
  2. Calculate $x_2 = \sum_{1 \leq i < j \leq n} A[i]A[j]$ in $O(n)$ using the formula
     $x_2 = (A[1])A[2] + (A[1]+A[2])A[3] + (A[1]+A[2]+A[3])A[4] + \cdots + (A[1]+\cdots+A[n-1])A[n].$
  3. Deduce $y_2 = \sum_{1 \leq i < j \leq 5} m_i m_j$ using the formula
     $y_2 = x_2 - \sum_{1 \leq i < j \leq n-5} ij - \left(\sum_{i=1}^{n-5} i\right) y_1.$
  4. Calculate $x_3, x_4, x_5$ and deduce $y_3, y_4, y_5$ along similar lines.
  5. The values of $y_1, \ldots, y_5$ are (up to sign) the coefficients of the polynomial $P(t)$ from the preceding solution.

This solution shows that if we replace 5 by $d$, then we get (I believe) an $O(d^2 n)$ algorithm using $O(d^2)$ space, which performs $O(dn)$ arithmetic operations on integers of bit-length $O(d\log n)$, keeping at most $O(d)$ of these at any given time. (This requires careful analysis of the multiplications we perform, most of which involve one operand of length only $O(\log n)$.)
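
As a side illustration of this formulation (mine, not the answer's), the quantities $x_1, \ldots, x_d$ are just the elementary symmetric sums of the array, and they can all be accumulated in one pass with a simple recurrence instead of the explicit prefix-sum formula above; this is only a sketch and the names are hypothetical.

def elementary_symmetric_sums(A, d=5):
    # x[k] holds the sum of products of k entries of A taken from distinct positions
    x = [1] + [0] * d
    for a in A:
        # update higher orders first so each entry contributes at most once per product
        for k in range(d, 0, -1):
            x[k] += x[k - 1] * a
    return x[1:]

With these $x_k$ in hand, the $y_k$ follow by subtracting the corresponding symmetric sums of $1, \ldots, n-5$, exactly as in steps 1 and 3.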


Could you explain what $\sigma_d$ and $\tau_d$, $P(t)$, $m_i$ and so on are? And why $d \in \{1,2,3,4,5\}$?
styrofoam fly

3
The insight behind the solution is the summing trick, which appears in many exercises (for example, how do you find the missing element from an array of length $n-1$ containing all but one of the numbers $1, \ldots, n$?). The summing trick can be used to compute $f(m_1) + \cdots + f(m_5)$ for an arbitrary function $f$, and the question is which $f$ to choose in order to be able to deduce $m_1, \ldots, m_5$. My answer uses familiar tricks from the elementary theory of symmetric functions.
Yuval Filmus
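
For a concrete picture of the summing trick in its simplest form, a minimal sketch (mine, not part of the comment):

def missing_element(A):
    # A contains all but one of the numbers 1..n; the gap is the difference of the sums
    n = len(A) + 1
    return n * (n + 1) // 2 - sum(A)

The answer above generalizes this by replacing the single sum with the power sums $\tau_1, \ldots, \tau_5$.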

1
@hoffmale Actually, O(d2).
Yuval Filmus

1
@hoffmale Each of them takes d machine words.
Yuval Filmus

1
@BurnsBA The problem with this approach is that $(n-5)\#$ is much larger than $\frac{(n-4)(n-5)}{2}$. Operations on large numbers are slower.
Yuval Filmus

8

There's also a linear time and constant space algorithm based on partitioning, which may be more flexible if you're trying to apply this to variants of the problem that the mathematical approach doesn't work well on. This requires mutating the underlying array and has worse constant factors than the mathematical approach. More specifically, I believe the costs in terms of the total number of values $n$ and the number of duplicates $d$ are $O(n\log d)$ and $O(d)$ respectively, though proving it rigorously will take more time than I have at the moment.


Algorithm

Start with a list of pairs, where the first pair is the range over the whole array, or [(1,n)] if 1-indexed.

Repeat the following steps until the list is empty:

  1. Take and remove any pair $(i,j)$ from the list.
  2. Find the minimum and maximum, $\min$ and $\max$, of the denoted subarray.
  3. If $\min = \max$, the subarray consists only of equal elements. Yield its elements except one and skip steps 4 to 6.
  4. If $\max - \min = j - i$, the subarray contains no duplicates. Skip steps 5 and 6.
  5. Partition the subarray around $\frac{\min + \max}{2}$, such that elements up to some index $k$ are smaller than the separator and elements above that index are not.
  6. Add $(i,k)$ and $(k+1,j)$ to the list.

Cursory analysis of time complexity.

Steps 1 to 6 take $O(j-i)$ time, since finding the minimum and maximum and partitioning can be done in linear time.

Every pair $(i,j)$ in the list is either the first pair, $(1,n)$, or a child of some pair for which the corresponding subarray contains a duplicate element. There are at most $d\log_2 n + 1$ such parents, since each traversal halves the range in which a duplicate can be, so there are at most $2d\log_2 n + 1$ pairs in total when including pairs over subarrays with no duplicates. At any one time, the size of the list is no more than $2d$.

Consider the work to find any one duplicate. This consists of a sequence of pairs over an exponentially decreasing range, so the total work is the sum of the geometric sequence, or $O(n)$. This produces an obvious corollary that the total work for $d$ duplicates must be $O(nd)$, which is linear in $n$.

To find a tighter bound, consider the worst-case scenario of maximally spread out duplicates. Intuitively, the search takes two phases: one where the full array is being traversed each time, in progressively smaller parts, and one where the parts are smaller than $\frac{n}{d}$ so only parts of the array are traversed. The first phase can only be $\log d$ deep, so it has cost $O(n\log d)$, and the second phase has cost $O(n)$ because the total area being searched is again exponentially decreasing.
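
A rough Python sketch of this partitioning algorithm follows. It is my own illustration of the description above, not Veedrac's code: it uses 0-based half-open ranges instead of 1-based inclusive pairs, and for clarity it partitions with temporary lists, so it does not achieve the $O(1)$ extra space of the in-place version described above.

def find_duplicates_by_partition(arr):
    duplicates = []
    work = [(0, len(arr))]                     # half-open ranges [i, j)
    while work:
        i, j = work.pop()
        lo, hi = min(arr[i:j]), max(arr[i:j])
        if lo == hi:
            # all elements equal: everything beyond the first copy is a duplicate
            duplicates.extend(arr[i + 1:j])
            continue
        if hi - lo == j - i - 1:
            # every value in [lo, hi] occurs at least once (problem invariant),
            # so a range with exactly hi - lo + 1 slots holds no duplicates
            continue
        mid = (lo + hi) // 2
        left = [x for x in arr[i:j] if x <= mid]    # partition around (min + max) / 2
        right = [x for x in arr[i:j] if x > mid]
        arr[i:j] = left + right
        k = i + len(left)
        work.append((i, k))
        work.append((k, j))
    return duplicates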


Thank you for the explanation. Now I understand. A very pretty algorithm!
D.W.

5

Leaving this as an answer because it needs more space than a comment gives.

You make a mistake in the OP when you suggest a method. Sorting a list and then traversing it takes $O(n\log n)$ time, not $O(n^2\log n)$ time. When you do two things (that take $O(f)$ and $O(g)$ respectively) sequentially, the resulting time complexity is $O(f+g) = O(\max\{f, g\})$ (under most circumstances).

In order to multiply the time complexities, you need to be using a for loop. If you have a loop of length $f$ and for each value in the loop you do a function that takes $O(g)$, then you'll get $O(fg)$ time.

So, in your case you sort in $O(n\log n)$ and then traverse in $O(n)$, resulting in $O(n\log n + n) = O(n\log n)$. If for each comparison of the sorting algorithm you had to do a computation that takes $O(n)$, then it would take $O(n^2\log n)$, but that's not the case here.
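
To make the sequential composition concrete, here is a small Python sketch (mine, not the OP's): one sort, then one pass.

def duplicates_by_sorting(A):
    A = sorted(A)                                       # O(n log n)
    return [a for a, b in zip(A, A[1:]) if a == b]      # O(n) scan for adjacent equal values

The two stages run one after the other, so the total is $O(n\log n) + O(n) = O(n\log n)$.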


In case you're curious about my claim that $O(f+g) = O(\max\{f, g\})$, it's important to note that it's not always true. But if $f \in O(g)$ or $g \in O(f)$ (which holds for a whole host of common functions), it will hold. The most common time it doesn't hold is when additional parameters get involved and you get expressions like $O(2^c n + n\log n)$.


3

There's an obvious in-place variant of the boolean array technique, using the order of the elements as the store (where arr[x] == x for "found" elements). Unlike the partition variant, which can be justified as being more general, I'm unsure when you'd actually need something like this, but it is simple.

for idx from n-4 to n
    while arr[arr[idx]] != arr[idx]
        swap(arr[arr[idx]], arr[idx])

This just repeatedly puts arr[idx] at the location arr[idx] until you find that location already taken, at which point it must be a duplicate. Note that the total number of swaps is bounded by n since each swap makes its exit condition correct.
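
A runnable Python version of this idea is sketched below; it is my adaptation rather than the exact pseudocode above: it uses 0-based indexing, so value v's "home" slot is arr[v - 1], and it collects the duplicates into a list.

def find_duplicates_in_place(arr):
    n = len(arr)
    duplicates = []
    for idx in range(n - 5, n):                 # the last five slots
        # keep sending arr[idx] to its home slot until that slot already
        # holds its own value; at that point arr[idx] must be a duplicate
        while arr[arr[idx] - 1] != arr[idx]:
            home = arr[idx] - 1
            arr[home], arr[idx] = arr[idx], arr[home]
        duplicates.append(arr[idx])
    return duplicates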


You're going to have to give some sort of argument that the inner while loop runs in constant time on average. Otherwise, this isn't a linear-time algorithm.
David Richerby

@DavidRicherby It doesn't run in constant time on average, but the outer loop only runs 5 times, so that's fine. Note that the total number of swaps is bounded by $n$ since each swap makes its exit condition correct, so even if the number of duplicate values increases, the total time is still linear (i.e. it takes $n$ steps rather than $nd$).
Veedrac

Oops, I somehow didn't notice that the outer loop runs a constant number of times! (Edited to include your note about the number of swaps and also so I could reverse my downvote.)
David Richerby

1

Subtract the values you have from the sum $\sum_{i=1}^{n-5} i = \frac{(n-5)(n-4)}{2}$.

So, after $\Theta(n)$ time (assuming arithmetic is $O(1)$, which it isn't really, but let's pretend) you have the sum $\sigma_1$ of 5 integers between $1$ and $n-5$:

$$x_1 + x_2 + x_3 + x_4 + x_5 = \sigma_1$$

Supposedly, this is no good, right? You can't possibly figure out how to break this up into 5 distinct numbers.

Ah, but this is where it gets to be fun! Now do the same thing as before, but subtract the squares of the values from $\sum_{i=1}^{n-5} i^2$. Now you have:

$$x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 = \sigma_2$$

See where I'm going with this? Do the same for powers 3, 4 and 5 and you have yourself 5 independent equations in 5 variables. I'm pretty sure you can solve for x.

Caveats: Arithmetic is not really $O(1)$. Also, you need a bit of space to represent your sums; but not as much as you would imagine - you can do most everything modularly, as long as you have, oh, $\log(5n^6)$ bits; that should do it.


Doesn't @YuvalFilmus propose the same solution?
fade2black

@fade2black: Oh, yes, it does, sorry, I just saw the first line of his solution.
einpoklum

0

The easiest way to solve the problem is to create an array in which we count the appearances of each number in the original array, and then traverse all numbers from $1$ to $n-5$ and check whether a number appears more than once. The complexity of this solution in both memory and time is linear, or $O(n)$.


1
This is the same as @fade2black's answer (although a bit easier on the eyes).
LangeHaare

0

Map the array elements to 1 << A[i] and then XOR everything together. Your duplicates will be the numbers whose corresponding bit is off.


There are five duplicates, so the xor trick will not break in some cases.
Evil

1
The running time of this is $O(n^2)$. Each bitvector is $n$ bits long, so each bitvector operation takes $O(n)$ time, and you do one bitvector operation per element of the original array, for a total of $O(n^2)$ time.
D.W.

@D.W. But given that the machines we normally use are fixed at either 32 or 64-bits, and these don't change at run-time (i.e. they're constant), why shouldn't they be treated as such and assume that the bit operations are in O(1) instead of O(n)?
code_dredd

1
@ray, I think you answered your own question. Given that the machines we normally use are fixed at 64 bits, the running time to do an operation on an $n$-bit vector is $O(n)$, not $O(1)$. It takes something like $n/64$ instructions to do some operation on all $n$ bits of an $n$-bit vector, and $n/64$ is $O(n)$, not $O(1)$.
D.W.

@D.W. What I got out of the previous comments was that a bit vector referred to a single element in an $n$-sized array, with the bit vector being 64 bits, which would be the constant I'm referring to. Obviously, processing an array of size $n$ will take $O(kn)$ time, if we assume there are $k$ bits per element and $n$ is the number of elements in the array. But $k = 64$, so an operation on an array element with a constant bit count should be $O(1)$ instead of $O(k)$, and the array $O(n)$ instead of $O(kn)$. Are you keeping the $k$ for the sake of completeness/correctness, or am I missing something else?
code_dredd

-2
from collections import defaultdict

DATA = [1, 2, 2, 2, 2, 2]

def find_repeated(data):
    collated = defaultdict(list)
    for item in data:
        collated[item].append(item)
        if len(collated[item]) == 5:
            return item

# n time

4
Welcome to the site. We're a computer science site, so we're looking for algorithms and explanations, not code dumps that require understanding of a particular language and its libraries. In particular, your claim that this code runs in linear time assumes that collated[item].append(item) runs in constant time. Is that really true?
David Richerby

3
Also, you are looking for a value which is repeated five times. In contrast, the OP is looking for five values, which are each repeated twice.
Yuval Filmus