2017-01-31

My input data has shape [n, 3, 64, 64], and I get: ValueError: GpuCorrMM images and kernel must have the same stack size

I got this after running the code on Stampede.

Using Theano backend. 
Using gpu device 0: Tesla K20m (CNMeM is disabled, cuDNN not available) 
ValueError: GpuCorrMM images and kernel must have the same stack size 

Apply node that caused the error: GpuCorrMM{half, (1, 1)}(GpuContiguous.0, GpuContiguous.0) 
Toposort index: 115 
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, 4D)] 
Inputs shapes: [(32, 8, 16, 1024), (256, 512, 5, 5)] 
Inputs strides: [(131072, 16384, 1024, 1), (12800, 25, 5, 1)] 
Inputs values: ['not shown', 'not shown'] 
Outputs clients: [[GpuElemwise{Add}[(0, 0)](GpuCorrMM{half, (1, 1)}.0, GpuReshape{4}.0)]] 

What is going wrong in the code, and how can I fix this problem? Thanks.

My code:

# Imports needed by the code below (Keras 1.x API)
import numpy as np
from keras.models import Sequential, Model
from keras.layers import (Input, Dense, Reshape, Flatten, Dropout,
                          Activation, BatchNormalization, UpSampling2D,
                          Convolution2D)
from keras.layers.advanced_activations import LeakyReLU
from keras.optimizers import Adam

g_input = Input(shape=(100,)) 

generator = Sequential() 
generator.add(Dense(1024 * 4 * 4, input_shape=(100,))) 
generator.add(BatchNormalization(mode=2)) 
generator.add(Activation('relu')) 
generator.add(Reshape([1024, 4, 4])) 

generator.add(UpSampling2D(size=(2, 2), dim_ordering='th')) 
generator.add(Convolution2D(512, 5, 5, border_mode='same', dim_ordering='th')) 
generator.add(BatchNormalization(mode=2)) 
generator.add(Activation('relu')) 

generator.add(UpSampling2D(size=(2, 2), dim_ordering='th')) 
generator.add(Convolution2D(256, 5, 5, border_mode='same', dim_ordering='th')) 
generator.add(BatchNormalization(mode=2)) 
generator.add(Activation('relu')) 

generator.add(UpSampling2D(size=(2, 2), dim_ordering='th')) 
generator.add(Convolution2D(128, 5, 5, border_mode='same', dim_ordering='th')) 
generator.add(BatchNormalization(mode=2)) 
generator.add(Activation('relu')) 

generator.add(UpSampling2D(size=(2, 2), dim_ordering='th')) 
generator.add(Convolution2D(64, 5, 5, border_mode='same', dim_ordering='th')) 
generator.add(BatchNormalization(mode=2)) 
generator.add(Activation('relu')) 

generator.add(Convolution2D(3, 5, 5, border_mode='same', dim_ordering='th')) 
generator.add(Activation('sigmoid')) 

generator.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5)) 
generator.summary() 

# discriminative model 

discriminator = Sequential() 

discriminator.add(Convolution2D(64, 5, 5, subsample=(2, 2), border_mode='same', dim_ordering='th', input_shape=X_train.shape[1:])) 
discriminator.add(LeakyReLU(0.2)) 


discriminator.add(Convolution2D(128, 5, 5, subsample=(2, 2), border_mode='same', dim_ordering='th')) 
discriminator.add(LeakyReLU(0.2)) 


discriminator.add(Convolution2D(256, 5, 5, subsample=(2, 2), border_mode='same', dim_ordering='th')) 
discriminator.add(LeakyReLU(0.2)) 

discriminator.add(Convolution2D(512, 5, 5, subsample=(2, 2), border_mode='same', dim_ordering='th')) 
discriminator.add(LeakyReLU(0.2)) 

discriminator.add(Flatten()) 

discriminator.add(Dense(1024)) 
discriminator.add(LeakyReLU(0.2)) 
discriminator.add(Dropout(0.5)) 

discriminator.add(Dense(2, activation='softmax')) 

discriminator.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5)) 
discriminator.summary() 

# GAN Model 
gan_input = Input(shape=(100,)) 
gan_output = discriminator(generator(gan_input)) 
gan_model = Model(gan_input, gan_output) 

gan_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.0002, beta_1=0.5)) 
gan_model.summary() 
print("Pre-training generator...") 
noise_gen = np.random.uniform(0, 1, size=(14000, 100)) # 14000 points uniform in (0, 1) 
generated_images = generator.predict(noise_gen) 

print('generated_images shape ----', generated_images.shape) 

X = np.concatenate((X_train[:14000, :, :, :], generated_images)) 
y = np.zeros([28000, 2]) 
y[:14000, 1] = 1 
y[14000:, 0] = 1 

discriminator.fit(X, y, nb_epoch=1, batch_size=128) 
y_hat = discriminator.predict(X) 

# set up loss storage vector 
losses = {"d": [], "g": []} 


def train_for_n(nb_epoch=28000, batch_size=128): 
    for e in range(nb_epoch): 

        # Make generative images 
        train_idx = np.random.randint(0, X_train.shape[0], size=batch_size) # 0 <= train_idx < X_train.shape[0] 
        mini_batch = X_train[train_idx] 
        noise_gen = np.random.normal(0, 1, size=(batch_size, 100)) 
        generated_images = generator.predict(noise_gen) 

        # Train discriminator on real + generated images 
        X = np.concatenate((mini_batch, generated_images)) 
        y = np.zeros([2 * batch_size, 2]) 
        y[:batch_size, 1] = 1 
        y[batch_size:, 0] = 1 

        discriminator.trainable = True 
        for layer in discriminator.layers: 
            layer.trainable = True 
        d_loss = discriminator.train_on_batch(X, y) 
        losses["d"].append(d_loss) 

        # Train generator through the frozen discriminator 
        noise_tr = np.random.uniform(0, 1, size=(batch_size, 100)) 
        y2 = np.zeros([batch_size, 2]) 
        y2[:, 1] = 1 

        discriminator.trainable = False 
        for layer in discriminator.layers: 
            layer.trainable = False 
        g_loss = gan_model.train_on_batch(noise_tr, y2) 
        losses["g"].append(g_loss) 

        # Periodically checkpoint weights and save sample images 
        if e % 10 == 9: 
            generator.save_weights('G0_weights.h5') 
            discriminator.save_weights('D0_weights.h5') 
            noise = np.random.uniform(0, 1, size=(100, 100)) 
            generated_images = generator.predict(noise) 
            np.save('/Users/zhangguanghua/Desktop/Stampede/generated_images_0.npy', generated_images) 

        print("Iteration: {0}/{1}, G-Loss: {2:.4f}".format(e, nb_epoch, float(g_loss))) 


train_for_n(nb_epoch=2000, batch_size=128) 

Also, does anyone know what these input shapes mean: [(32, 8, 16, 1024), (256, 512, 5, 5)]? How can I fix this problem?

Thanks
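For reference, in Theano's GpuCorrMM the image tensor is laid out as (batch, channels, rows, cols) and the kernel tensor as (filters, input_channels, kernel_rows, kernel_cols); the "stack size" is the channel axis, which must match between the two, and here 8 ≠ 512. A rough numpy sketch of that precondition (illustrative only, not Theano's actual code):

```python
import numpy as np

def check_stack_size(images, kernels):
    """Mimic the GpuCorrMM precondition: the image channel count
    (axis 1) must equal the kernels' input-channel count (axis 1)."""
    n_batch, img_channels, rows, cols = images.shape
    n_filters, kernel_channels, kh, kw = kernels.shape
    if img_channels != kernel_channels:
        raise ValueError(
            "GpuCorrMM images and kernel must have the same stack size: "
            "got %d vs %d" % (img_channels, kernel_channels))
    return True

# The shapes from the traceback fail the check:
images = np.zeros((32, 8, 16, 1024), dtype=np.float32)
kernels = np.zeros((256, 512, 5, 5), dtype=np.float32)
try:
    check_stack_size(images, kernels)
except ValueError as e:
    print(e)
```

With the model built as intended, the tensor reaching that convolution would have 512 channels, matching the kernel; seeing 8 there suggests the tensor is arriving in the wrong dim ordering.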


Could you add the output of gan_model.summary() to the post? –


Thanks for the quick reply. What do you mean by adding the output of gan_model.summary()? I don't understand. Thanks – zghyfbmw


This function prints a summary of your model. It should print its topology. –

Answer


When running this code on the CPU, gan_model.summary() gives the following output: 0.0 1.0 X_train shape --- (29404, 3, 64, 64) 29404 train samples


Layer (type)      Output Shape   Param #  Connected to      
==================================================================================================== 
dense_1 (Dense)     (None, 16384)   1654784  dense_input_1[0][0]    
____________________________________________________________________________________________________ 
batchnormalization_1 (BatchNorma (None, 16384)   65536  dense_1[0][0]      
____________________________________________________________________________________________________ 
activation_1 (Activation)  (None, 16384)   0   batchnormalization_1[0][0]  
____________________________________________________________________________________________________ 
reshape_1 (Reshape)    (None, 1024, 4, 4) 0   activation_1[0][0]    
____________________________________________________________________________________________________ 
upsampling2d_1 (UpSampling2D) (None, 1024, 8, 8) 0   reshape_1[0][0]     
____________________________________________________________________________________________________ 
convolution2d_1 (Convolution2D) (None, 512, 8, 8)  13107712 upsampling2d_1[0][0]    
____________________________________________________________________________________________________ 
batchnormalization_2 (BatchNorma (None, 512, 8, 8)  32   convolution2d_1[0][0]    
____________________________________________________________________________________________________ 
activation_2 (Activation)  (None, 512, 8, 8)  0   batchnormalization_2[0][0]  
____________________________________________________________________________________________________ 
upsampling2d_2 (UpSampling2D) (None, 512, 16, 16) 0   activation_2[0][0]    
____________________________________________________________________________________________________ 
convolution2d_2 (Convolution2D) (None, 256, 16, 16) 3277056  upsampling2d_2[0][0]    
____________________________________________________________________________________________________ 
batchnormalization_3 (BatchNorma (None, 256, 16, 16) 64   convolution2d_2[0][0]    
____________________________________________________________________________________________________ 
activation_3 (Activation)  (None, 256, 16, 16) 0   batchnormalization_3[0][0]  
____________________________________________________________________________________________________ 
upsampling2d_3 (UpSampling2D) (None, 256, 32, 32) 0   activation_3[0][0]    
____________________________________________________________________________________________________ 
convolution2d_3 (Convolution2D) (None, 128, 32, 32) 819328  upsampling2d_3[0][0]    
____________________________________________________________________________________________________ 
batchnormalization_4 (BatchNorma (None, 128, 32, 32) 128   convolution2d_3[0][0]    
____________________________________________________________________________________________________ 
activation_4 (Activation)  (None, 128, 32, 32) 0   batchnormalization_4[0][0]  
____________________________________________________________________________________________________ 
upsampling2d_4 (UpSampling2D) (None, 128, 64, 64) 0   activation_4[0][0]    
____________________________________________________________________________________________________ 
convolution2d_4 (Convolution2D) (None, 64, 64, 64) 204864  upsampling2d_4[0][0]    
____________________________________________________________________________________________________ 
batchnormalization_5 (BatchNorma (None, 64, 64, 64) 256   convolution2d_4[0][0]    
____________________________________________________________________________________________________ 
activation_5 (Activation)  (None, 64, 64, 64) 0   batchnormalization_5[0][0]  
____________________________________________________________________________________________________ 
convolution2d_5 (Convolution2D) (None, 3, 64, 64)  4803  activation_5[0][0]    
____________________________________________________________________________________________________ 
activation_6 (Activation)  (None, 3, 64, 64)  0   convolution2d_5[0][0]    
==================================================================================================== 
Total params: 19,134,563 
Trainable params: 19,101,555 
Non-trainable params: 33,008 
____________________________________________________________________________________________________ 
____________________________________________________________________________________________________ 
Layer (type)      Output Shape   Param #  Connected to      
==================================================================================================== 
convolution2d_6 (Convolution2D) (None, 64, 32, 32) 4864  convolution2d_input_1[0][0]  
____________________________________________________________________________________________________ 
leakyrelu_1 (LeakyReLU)   (None, 64, 32, 32) 0   convolution2d_6[0][0]    
____________________________________________________________________________________________________ 
convolution2d_7 (Convolution2D) (None, 128, 16, 16) 204928  leakyrelu_1[0][0]     
____________________________________________________________________________________________________ 
leakyrelu_2 (LeakyReLU)   (None, 128, 16, 16) 0   convolution2d_7[0][0]    
____________________________________________________________________________________________________ 
convolution2d_8 (Convolution2D) (None, 256, 8, 8)  819456  leakyrelu_2[0][0]     
____________________________________________________________________________________________________ 
leakyrelu_3 (LeakyReLU)   (None, 256, 8, 8)  0   convolution2d_8[0][0]    
____________________________________________________________________________________________________ 
convolution2d_9 (Convolution2D) (None, 512, 4, 4)  3277312  leakyrelu_3[0][0]     
____________________________________________________________________________________________________ 
leakyrelu_4 (LeakyReLU)   (None, 512, 4, 4)  0   convolution2d_9[0][0]    
____________________________________________________________________________________________________ 
flatten_1 (Flatten)    (None, 8192)   0   leakyrelu_4[0][0]     
____________________________________________________________________________________________________ 
dense_2 (Dense)     (None, 1024)   8389632  flatten_1[0][0]     
____________________________________________________________________________________________________ 
leakyrelu_5 (LeakyReLU)   (None, 1024)   0   dense_2[0][0]      
____________________________________________________________________________________________________ 
dropout_1 (Dropout)    (None, 1024)   0   leakyrelu_5[0][0]     
____________________________________________________________________________________________________ 
dense_3 (Dense)     (None, 2)    2050  dropout_1[0][0]     
==================================================================================================== 
Total params: 12,698,242 
Trainable params: 12,698,242 
Non-trainable params: 0 
____________________________________________________________________________________________________ 
____________________________________________________________________________________________________ 
Layer (type)      Output Shape   Param #  Connected to      
==================================================================================================== 
input_2 (InputLayer)    (None, 100)   0            
____________________________________________________________________________________________________ 
sequential_1 (Sequential)  (None, 3, 64, 64)  19134563 input_2[0][0]      
____________________________________________________________________________________________________ 
sequential_2 (Sequential)  (None, 2)    12698242 sequential_1[1][0]    
==================================================================================================== 
Total params: 31,832,805 
Trainable params: 31,799,797 
Non-trainable params: 33,008 
____________________________________________________________________________________________________ 
Pre-training generator... 

Can anyone help solve this problem? – zghyfbmw


Problem solved. Just update Keras and Theano on Stampede. The old Keras version causes this problem. – zghyfbmw
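If upgrading alone doesn't do it, one more thing worth checking: the model passes dim_ordering='th' to every layer, so the global backend setting in ~/.keras/keras.json should agree, or shapes get scrambled exactly as in the traceback. A guess at the relevant Keras 1.x config fragment:

```json
{
    "image_dim_ordering": "th",
    "backend": "theano",
    "floatx": "float32",
    "epsilon": 1e-07
}
```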
