Here are a variety of ways to do this, along with some benchmarks. The best methods are the versions that use optimized code from other libraries. The bottleneck.move_mean method is probably the best all around. The scipy.convolve approach is also very fast, extensible, and syntactically and conceptually simple, but it does not scale well for very large window sizes. The numpy.cumsum method is good if you need a pure numpy approach.

Note: some of these (e.g. bottleneck.move_mean) are not centered, and will shift your data.
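If you want bottleneck's speed but a centered result, one workaround is to shift the trailing output back by half a window and recompute the last few points from shrinking right-hand windows, which mimics pandas' center=True, min_periods=1 edge handling. The sketch below is illustrative and not part of the benchmark; the helper name rollavg_bottleneck_centered is introduced here only for this example.

import numpy as np
import bottleneck as bn

def rollavg_bottleneck_centered(a, n):
    'bottleneck.move_mean, centered (illustrative sketch)'
    # bn.move_mean is trailing: output i averages a[i-n+1:i+1] (clipped at the
    # start thanks to min_count=1). For odd n, the centered value at i equals
    # the trailing value at i + n//2, so shift the output back by half a window.
    assert n % 2 == 1
    shift = n // 2
    if shift == 0:
        return a.astype(float)
    trailing = bn.move_mean(a, window=n, min_count=1)
    out = np.empty_like(trailing)
    out[:-shift] = trailing[shift:]
    # Recompute the last n//2 points from shrinking right-hand windows,
    # mirroring pandas rolling(n, center=True, min_periods=1).
    for i in range(len(a) - shift, len(a)):
        out[i] = a[i - shift:].mean()
    return out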
import numpy as np
import scipy as sci
import scipy.signal as sig
import pandas as pd
import bottleneck as bn
import time as time

def rollavg_direct(a,n):
    'Direct "for" loop'
    assert n%2==1
    b = a*0.0
    for i in range(len(a)):
        b[i] = a[max(i-n//2,0):min(i+n//2+1,len(a))].mean()
    return b

def rollavg_comprehension(a,n):
    'List comprehension'
    assert n%2==1
    r,N = int(n/2),len(a)
    return np.array([a[max(i-r,0):min(i+r+1,N)].mean() for i in range(N)])

def rollavg_convolve(a,n):
    'scipy.convolve'
    assert n%2==1
    return sci.convolve(a,np.ones(n,dtype='float')/n, 'same')[n//2:-n//2+1]

def rollavg_convolve_edges(a,n):
    'scipy.convolve, edge handling'
    assert n%2==1
    return sci.convolve(a,np.ones(n,dtype='float'), 'same')/sci.convolve(np.ones(len(a)),np.ones(n), 'same')

def rollavg_cumsum(a,n):
    'numpy.cumsum'
    assert n%2==1
    cumsum_vec = np.cumsum(np.insert(a, 0, 0))
    return (cumsum_vec[n:] - cumsum_vec[:-n]) / n

def rollavg_cumsum_edges(a,n):
    'numpy.cumsum, edge handling'
    assert n%2==1
    N = len(a)
    cumsum_vec = np.cumsum(np.insert(np.pad(a,(n-1,n-1),'constant'), 0, 0))
    d = np.hstack((np.arange(n//2+1,n),np.ones(N-n)*n,np.arange(n,n//2,-1)))
    return (cumsum_vec[n+n//2:-n//2+1] - cumsum_vec[n//2:-n-n//2]) / d

def rollavg_roll(a,n):
    'Numpy array rolling'
    assert n%2==1
    N = len(a)
    rolling_idx = np.mod((N-1)*np.arange(n)[:,None] + np.arange(N), N)
    return a[rolling_idx].mean(axis=0)[n-1:]

def rollavg_roll_edges(a,n):
    # see /programming/42101082/fast-numpy-roll
    'Numpy array rolling, edge handling'
    assert n%2==1
    a = np.pad(a,(0,n-1-n//2), 'constant')*np.ones(n)[:,None]
    m = a.shape[1]
    idx = np.mod((m-1)*np.arange(n)[:,None] + np.arange(m), m) # Rolling index
    out = a[np.arange(-n//2,n//2)[:,None], idx]
    d = np.hstack((np.arange(1,n),np.ones(m-2*n+1+n//2)*n,np.arange(n,n//2,-1)))
    return (out.sum(axis=0)/d)[n//2:]

def rollavg_pandas(a,n):
    'Pandas rolling average'
    return pd.DataFrame(a).rolling(n, center=True, min_periods=1).mean().to_numpy()

def rollavg_bottleneck(a,n):
    'bottleneck.move_mean'
    return bn.move_mean(a, window=n, min_count=1)

N = 10**6
a = np.random.rand(N)
functions = [rollavg_direct, rollavg_comprehension, rollavg_convolve,
             rollavg_convolve_edges, rollavg_cumsum, rollavg_cumsum_edges,
             rollavg_pandas, rollavg_bottleneck, rollavg_roll, rollavg_roll_edges]

print('Small window (n=3)')
%load_ext memory_profiler
for f in functions:
    print('\n'+f.__doc__+' : ')
    %timeit b=f(a,3)

print('\nLarge window (n=1001)')
for f in functions[0:-2]:
    print('\n'+f.__doc__+' : ')
    %timeit b=f(a,1001)

print('\nMemory\n')
print('Small window (n=3)')
N = 10**7
a = np.random.rand(N)
%load_ext memory_profiler
for f in functions[2:]:
    print('\n'+f.__doc__+' : ')
    %memit b=f(a,3)

print('\nLarge window (n=1001)')
for f in functions[2:-2]:
    print('\n'+f.__doc__+' : ')
    %memit b=f(a,1001)
Timing, small window (n=3)
Direct "for" loop :
4.14 s ± 23.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
List comprehension :
3.96 s ± 27.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
scipy.convolve :
1.07 ms ± 26.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
scipy.convolve, edge handling :
4.68 ms ± 9.69 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
numpy.cumsum :
5.31 ms ± 5.11 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
numpy.cumsum, edge handling :
8.52 ms ± 11.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Pandas rolling average :
9.85 ms ± 9.63 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
bottleneck.move_mean :
1.3 ms ± 12.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Numpy array rolling :
31.3 ms ± 91.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Numpy array rolling, edge handling :
61.1 ms ± 55.9 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Timing, large window (n=1001)
Direct "for" loop :
4.67 s ± 34 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
List comprehension :
4.46 s ± 14.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
scipy.convolve :
103 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
scipy.convolve, edge handling :
272 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
numpy.cumsum :
5.19 ms ± 12.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
numpy.cumsum, edge handling :
8.7 ms ± 11.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Pandas rolling average :
9.67 ms ± 199 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
bottleneck.move_mean :
1.31 ms ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Memory, small window (n=3)
scipy.convolve :
peak memory: 362.66 MiB, increment: 73.61 MiB
scipy.convolve, edge handling :
peak memory: 510.24 MiB, increment: 221.19 MiB
numpy.cumsum :
peak memory: 441.81 MiB, increment: 152.76 MiB
numpy.cumsum, edge handling :
peak memory: 518.14 MiB, increment: 228.84 MiB
Pandas rolling average :
peak memory: 449.34 MiB, increment: 160.02 MiB
bottleneck.move_mean :
peak memory: 374.17 MiB, increment: 75.54 MiB
Numpy array rolling :
peak memory: 661.29 MiB, increment: 362.65 MiB
Numpy array rolling, edge handling :
peak memory: 1111.25 MiB, increment: 812.61 MiB
Memory, large window (n=1001)
scipy.convolve :
peak memory: 370.62 MiB, increment: 71.83 MiB
scipy.convolve, edge handling :
peak memory: 521.98 MiB, increment: 223.18 MiB
numpy.cumsum :
peak memory: 451.32 MiB, increment: 152.52 MiB
numpy.cumsum, edge handling :
peak memory: 527.51 MiB, increment: 228.71 MiB
Pandas rolling average :
peak memory: 451.25 MiB, increment: 152.50 MiB
bottleneck.move_mean :
peak memory: 374.64 MiB, increment: 75.85 MiB
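As a quick sanity check (not part of the benchmark above), the centered, edge-handled variants can be compared against the direct loop; the snippet below simply reuses the functions defined earlier and is illustrative rather than part of the original answer.

a = np.random.rand(10**4)
ref = rollavg_direct(a, 7)
# The edge-handled, centered variants should agree with the direct loop
# to within floating-point error; pandas returns a column vector, hence ravel.
for f in [rollavg_convolve_edges, rollavg_cumsum_edges, rollavg_pandas]:
    print(f.__doc__, np.allclose(np.ravel(f(a, 7)), ref))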