This suggests that all training examples have a fixed sequence length, namely timesteps.
That is not entirely true, since that dimension can be None, i.e. variable length. Within a single batch you must have the same number of timesteps (this is typically where you see 0-padding and masking), but between batches there is no such restriction. During inference you can have any length.
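To make the contrast concrete, here is a minimal sketch (the layer width 32 and the length 20 are arbitrary choices) showing that the two specifications differ only in the timesteps entry of input_shape:

from keras.models import Sequential
from keras.layers import LSTM

# Fixed: every example must have exactly 20 timesteps and 5 features.
fixed = Sequential([LSTM(32, input_shape=(20, 5))])

# Variable: the timesteps dimension is None, so each batch may use a
# different sequence length (within a batch it must still be uniform).
variable = Sequential([LSTM(32, input_shape=(None, 5))])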
Here is sample code that creates random batches of training data:
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed
from keras.utils import to_categorical
import numpy as np

model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(None, 5)))
model.add(LSTM(8, return_sequences=True))
model.add(TimeDistributed(Dense(2, activation='sigmoid')))

model.summary(line_length=90)

model.compile(loss='categorical_crossentropy',
              optimizer='adam')

def train_generator():
    while True:
        sequence_length = np.random.randint(10, 100)
        x_train = np.random.random((1000, sequence_length, 5))
        # y_train will depend on past 5 timesteps of x;
        # copy() so the in-place += below does not mutate x_train
        y_train = x_train[:, :, 0].copy()
        for i in range(1, 5):
            y_train[:, i:] += x_train[:, :-i, i]
        y_train = to_categorical(y_train > 2.5)
        yield x_train, y_train

model.fit_generator(train_generator(), steps_per_epoch=30, epochs=10, verbose=1)
And this is what it prints. Note the (None, None, x) output shapes, indicating a variable batch size and a variable number of timesteps.
__________________________________________________________________________________________
Layer (type) Output Shape Param #
==========================================================================================
lstm_1 (LSTM) (None, None, 32) 4864
__________________________________________________________________________________________
lstm_2 (LSTM) (None, None, 8) 1312
__________________________________________________________________________________________
time_distributed_1 (TimeDistributed) (None, None, 2) 18
==========================================================================================
Total params: 6,194
Trainable params: 6,194
Non-trainable params: 0
__________________________________________________________________________________________
Epoch 1/10
30/30 [==============================] - 6s 201ms/step - loss: 0.6913
Epoch 2/10
30/30 [==============================] - 4s 137ms/step - loss: 0.6738
...
Epoch 9/10
30/30 [==============================] - 4s 136ms/step - loss: 0.1643
Epoch 10/10
30/30 [==============================] - 4s 142ms/step - loss: 0.1441
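To illustrate the inference point, here is a minimal sketch (the lengths 7 and 63 and the batch size 4 are arbitrary) that runs the trained model above on batches of two different lengths:

# The same trained model accepts batches of any sequence length.
for length in (7, 63):                    # arbitrary lengths
    x = np.random.random((4, length, 5))  # batch of 4 sequences, 5 features
    preds = model.predict(x)
    print(preds.shape)                    # (4, length, 2)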
For batches of unequal-length sequences, you can 0-pad them to a common length and use a Masking layer so that the model ignores the padded timesteps.
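As a sketch of that idea (pad_sequences and Masking are standard Keras APIs; the sizes here are arbitrary):

from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense, TimeDistributed
from keras.preprocessing.sequence import pad_sequences
import numpy as np

# Two sequences of different lengths, 0-padded to the longer one.
seqs = [np.random.random((10, 5)), np.random.random((15, 5))]
x = pad_sequences(seqs, dtype='float32', padding='post')  # shape (2, 15, 5)

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(None, 5)))  # all-zero timesteps are skipped
model.add(LSTM(8, return_sequences=True))
model.add(TimeDistributed(Dense(2, activation='sigmoid')))

print(model.predict(x).shape)  # (2, 15, 2)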