Buffer underruns and ResourceExhaustedError with TensorFlow

I'm in high school, and I'm trying to do a project involving neural networks. I'm using Ubuntu and trying to do reinforcement learning with TensorFlow, but I keep getting lots of underrun warnings while training the neural network. They take the form ALSA lib pcm.c:7963:(snd_pcm_recover) underrun occurred. This message is printed to the screen more and more frequently as training progresses. Eventually I get a ResourceExhaustedError and the program terminates. Here is the full error message:

W tensorflow/core/framework/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape[320000,512] 
Traceback (most recent call last): 
    File "./train.py", line 121, in <module> 
    loss, _ = model.train(minibatch, gamma, sess) # Train the model based on the batch, the discount factor, and the tensorflow session. 
    File "/home/perrin/neural/dqn.py", line 174, in train 
    return sess.run([self.loss, self.optimize], feed_dict=self.feed_dict) # Runs the training. This is where the underrun errors happen 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 766, in run 
    run_metadata_ptr) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 964, in _run 
    feed_dict_string, options, run_metadata) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1014, in _do_run 
    target_list, options, run_metadata) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1034, in _do_call 
    raise type(e)(node_def, op, message) 
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[320000,512] 
    [[Node: gradients/fully_connected/MatMul_grad/MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](dropout/mul, gradients/fully_connected/BiasAdd_grad/tuple/control_dependency)]] 

Caused by op u'gradients/fully_connected/MatMul_grad/MatMul_1', defined at: 
    File "./train.py", line 72, in <module> 
    model = AC_Net([None, 201, 201, 3], 5, trainer) # This creates the neural network using the imported AC_Net class. 
    File "/home/perrin/neural/dqn.py", line 128, in __init__ 
    self.optimize = trainer.minimize(self.loss) # This tells the trainer to adjust the weights in such a way as to minimize the loss. This is what actually 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 269, in minimize 
    grad_loss=grad_loss) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 335, in compute_gradients 
    colocate_gradients_with_ops=colocate_gradients_with_ops) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 482, in gradients 
    in_grads = grad_fn(op, *out_grads) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_grad.py", line 731, in _MatMulGrad 
    math_ops.matmul(op.inputs[0], grad, transpose_a=True)) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1729, in matmul 
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 1442, in _mat_mul 
    transpose_b=transpose_b, name=name) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op 
    op_def=op_def) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2240, in create_op 
    original_op=self._default_original_op, op_def=op_def) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1128, in __init__ 
    self._traceback = _extract_stack() 

...which was originally created as op u'fully_connected/MatMul', defined at: 
    File "./train.py", line 72, in <module> 
    model = AC_Net([None, 201, 201, 3], 5, trainer) # This creates the neural network using the imported AC_Net class. 
    File "/home/perrin/neural/dqn.py", line 63, in __init__ 
    net = slim.fully_connected(net, 512, activation_fn=tf.nn.elu, scope='fully_connected') # Feeds the input through a fully connected layer 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 177, in func_with_args 
    return func(*args, **current_args) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/layers.py", line 1350, in fully_connected 
    outputs = standard_ops.matmul(inputs, weights) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1729, in matmul 
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 1442, in _mat_mul 
    transpose_b=transpose_b, name=name) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op 
    op_def=op_def) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2240, in create_op 
    original_op=self._default_original_op, op_def=op_def) 
    File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1128, in __init__ 
    self._traceback = _extract_stack() 

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[320000,512] 
    [[Node: gradients/fully_connected/MatMul_grad/MatMul_1 = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/cpu:0"](dropout/mul, gradients/fully_connected/BiasAdd_grad/tuple/control_dependency)]] 

I've looked into these problems, but I haven't gotten a clear idea of how I could fix them. I'm fairly new to programming, so I don't know much about how buffers and reading/writing data work, and I'm quite puzzled by these errors. Does anyone know which parts of my code might be causing this and how to fix it? Thanks for taking the time to consider this question!

Here is my code defining the neural network (based on this tutorial):

#! /usr/bin/python 

import numpy as np 
import tensorflow as tf 
slim = tf.contrib.slim 

# The neural network 
class AC_Net: 
    # This defines the actual neural network. 
    # output_size: the number of outputs of the policy 
    # trainer: the tensorflow training optimizer used by the network 
    def __init__(self, input_shape, output_size, trainer): 

     with tf.name_scope('input'): 
      self.input = tf.placeholder(shape=list(input_shape), dtype=tf.float32, name='input') 
      net = tf.image.per_image_standardization(self.input[0]) 
      net = tf.expand_dims(net, [0]) 

     with tf.name_scope('convolution'): 
      net = slim.conv2d(net, 32, [8, 8], activation_fn=tf.nn.elu, scope='conv') 
      net = slim.max_pool2d(net, [2, 2], scope='pool') 

     net = slim.flatten(net) 
     net = tf.nn.dropout(net, .5) 
     net = slim.fully_connected(net, 512, activation_fn=tf.nn.elu, scope='fully_connected') 
     net = tf.nn.dropout(net, .5) 

     with tf.name_scope('LSTM'): 
      cell = tf.nn.rnn_cell.BasicLSTMCell(256, state_is_tuple=True, activation=tf.nn.elu) 

      with tf.name_scope('state_in'): 
       state_in = cell.zero_state(tf.shape(net)[0], tf.float32) 

      net = tf.expand_dims(net, [0]) 
      step_size = tf.shape(self.input)[:1] 
      output, state = tf.nn.dynamic_rnn(cell, net, initial_state=state_in, sequence_length=step_size, time_major=False, scope='LSTM') 

     out = tf.reshape(output, [-1, 256]) 
     out = tf.nn.dropout(out, .5) 
     self.policy = slim.fully_connected(out, output_size, activation_fn=tf.nn.softmax, scope='policy') 

     self.value = slim.fully_connected(out, 1, activation_fn=None, scope='value') 

     # Defines the loss functions 
     with tf.name_scope('loss_function'): 
      self.target_values = tf.placeholder(dtype=tf.float32, name='target_values') # The target value is the discounted reward. 
      self.actions = tf.placeholder(dtype=tf.int32, name='actions') # This is the network's policy. 
      # The advantage is the difference between what the network thought the value of an action was, and what it actually was. 
      # It is computed as R - V(s), where R is the discounted reward and V(s) is the value of being in state s. 
      self.advantages = tf.placeholder(dtype=tf.float32, name='advantages') 

      with tf.name_scope('entropy'): 
       entropy = -tf.reduce_sum(tf.log(self.policy + 1e-10) * self.policy) 
      with tf.name_scope('responsible_actions'): 
       actions_onehot = tf.one_hot(self.actions, output_size, dtype=tf.float32)  
       responsible_actions = tf.reduce_sum(self.policy * actions_onehot, [1]) # This returns only the actions that were selected. 

      with tf.name_scope('loss'): 

       with tf.name_scope('value_loss'): 
        self.value_loss = tf.reduce_sum(tf.square(self.target_values - tf.reshape(self.value, [-1]))) 

       with tf.name_scope('policy_loss'): 
        self.policy_loss = -tf.reduce_sum(tf.log(responsible_actions + 1e-10) * self.advantages) 

       with tf.name_scope('total_loss'): 
        self.loss = self.value_loss + self.policy_loss - entropy * .01 

       tf.summary.scalar('loss', self.loss) 

     with tf.name_scope('gradient_clipping'): 
      tvars = tf.trainable_variables() 
      grads = tf.gradients(self.loss, tvars)   
      grads, _ = tf.clip_by_global_norm(grads, 20.) 
     self.optimize = trainer.apply_gradients(zip(grads, tvars)) 

    def predict(self, inputs, sess): 
     return sess.run([self.policy, self.value], feed_dict={self.input:inputs}) 

    def train(self, train_batch, gamma, sess): 

     inputs = train_batch[:, 0] 
     actions = train_batch[:, 1] 
     rewards = train_batch[:, 2] 
     values = train_batch[:, 4] 

     discounted_rewards = rewards[::-1] 
     for i, j in enumerate(discounted_rewards): 
      if i > 0: 
       discounted_rewards[i] += discounted_rewards[i - 1] * gamma 
     discounted_rewards = np.array(discounted_rewards, np.float32)[::-1] 
     advantages = discounted_rewards - values 
     self.feed_dict = { 
       self.input:np.vstack(inputs), 
       self.target_values:discounted_rewards, 
       self.actions:actions, 
       self.advantages:advantages 
       } 
     return sess.run([self.loss, self.optimize], feed_dict=self.feed_dict) 

Here is my code for training the neural network:

#! /usr/bin/python 

import game_env, move_right, move_right_with_obs, random, inspect, os 
import tensorflow as tf 
import numpy as np 
from dqn import AC_Net 

def process_outputs(x): 
    a = [int(x > 2), int(x%2 == 0 and x > 0)*2-int(x > 0)] 
    return a 

environment = game_env # The environment to use 
env_name = str(inspect.getmodule(environment).__name__) # The name of the environment 

ep_length = 2000 
num_episodes = 20 

total_steps = ep_length * num_episodes # The total number of steps 
model_path = '/home/perrin/neural/nn/' + env_name 

learning_rate = 1e-4 # The learning rate 
trainer = tf.train.AdamOptimizer(learning_rate=learning_rate) # The gradient descent optimizer used 
first_epsilon = 0.6 # The initial chance of random action 
final_epsilon = 0.01 # The final chance of random action 
gamma = 0.9 
anneal_steps = 35000 # The number of steps it takes to go from initial to random 

count = 0 # Keeps track of the number of steps we've run 
experience_buffer = [] # Stores the agent's experiences in a list 
buffer_size = 10000 # How large the experience buffer can be 
train_step = 256 # How often to train the model 
batches_per_train = 10 
save_step = 500 # How often to save the trained model 
batch_size = 256 # How many experiences to train on at once 
env_size = 500 # How many pixels tall and wide the environment should be. 
load_model = True # Whether or not to load a pretrained model 
train = True # Whether or not to train the model 
test = False # Whether or not to test the model 

tf.reset_default_graph() 

sess = tf.InteractiveSession() 

model = AC_Net([None, 201, 201, 3], 5, trainer) 
env = environment.Env(env_size) 
action = [0, 0] 
state, _ = env.step(True, action) 

saver = tf.train.Saver() # This saves the model 
epsilon = first_epsilon 
tf.global_variables_initializer().run() 

if load_model: 
    ckpt = tf.train.get_checkpoint_state(model_path) 
    saver.restore(sess, ckpt.model_checkpoint_path) 
    print 'Model loaded' 

prev_out = None 

while count <= total_steps and train: 

    if random.random() < epsilon or count == 0: 
     if prev_out is not None: 
      out = prev_out 
     if random.randint(0, 100) == 100 or prev_out is None: 
      out = np.random.rand(5) 
      out = np.array([val/np.sum(out) for val in out]) 
      _, value = model.predict(state, sess) 
      prev_out = out 

    else: 
     out, value = model.predict(state, sess) 
     out = out[0] 
    act = np.random.choice(out, p=out) 
    act = np.argmax(out == act) 
    act1 = process_outputs(act) 
    action[act1[0]] = act1[1] 
    _, reward = env.step(True, action) 
    new_state = env.get_state() 

    experience_buffer.append((state, act, reward, new_state, value[0, 0])) 

    state = new_state 

    if len(experience_buffer) > buffer_size: 
     experience_buffer.pop(0) 

    if count % train_step == 0 and count > 0: 
     print "Training model" 
     for i in range(batches_per_train): 
     # Get a random sample of experiences and train the model based on it. 
      x = random.randint(0, len(experience_buffer)-batch_size) 
      minibatch = np.array(experience_buffer[x:x+batch_size]) 
      loss, _ = model.train(minibatch, gamma, sess) 
      print "Loss for batch", str(i+1) + ":", loss 


    if count % save_step == 0 and count > 0: 
     saver.save(sess, model_path+'/model-'+str(count)+'.ckpt') 
     print "Model saved" 

    if count % ep_length == 0 and count > 0: 
     print "Starting new episode" 
     env = environment.Env(env_size) 

    if epsilon > final_epsilon: 
     epsilon -= (first_epsilon - final_epsilon)/anneal_steps 

    count += 1 

while count <= total_steps and test: 
    out, _ = model.predict(state, sess) 
    out = out[0] 
    act = np.random.choice(out, p=out) 
    act = np.argmax(out == act) 
    act1 = process_outputs(act) 
    action[act1[0]] = act1[1] 
    state, reward = env.step(True, action) 
    new_state = env.get_state() 
    count += 1 

# Write log files to create tensorboard visualizations 
merged = tf.summary.merge_all() 
writer = tf.summary.FileWriter('/home/perrin/neural/summaries', sess.graph) 
if train: 
    summary = sess.run(merged, feed_dict=model.feed_dict) 
    writer.add_summary(summary) 
writer.flush() 

You are running out of memory, can you try using a smaller batch size? –


@YaroslavBulatov Thanks for the suggestion. I tried it with a batch size of 10, but I still get all the errors. – CyborgOctopus


What about a batch size of 1? If that runs out of memory, you'll need to make your network smaller or use a machine with more memory. –

Answer


You are running out of memory. It may simply be that your network needs more memory to run than you have, so the first step in tracking down excessive memory usage is to figure out what is using so much memory.

Here is one approach that uses timeline and StatsSummarizer: https://gist.github.com/yaroslavvb/08afccbe087171881ceafc0c98abca05

That will print out several tables, one of them listing tensors sorted by memory usage, so you can check that you don't have something unusually large there.

You can also view the memory timeline with the Chrome visualizer, as described here.
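
For reference, here is a minimal sketch of collecting such a trace with the TF 1.x RunOptions/RunMetadata API and the timeline module; sess, model.loss, model.optimize and model.feed_dict are assumed to be the ones already defined in your training script, not new API:

import tensorflow as tf
from tensorflow.python.client import timeline

# Trace a single training step, recording per-op timing and memory stats.
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
loss, _ = sess.run([model.loss, model.optimize], feed_dict=model.feed_dict,
                   options=run_options, run_metadata=run_metadata)

# Convert the collected step stats into a Chrome trace that includes memory
# usage, then open timeline.json in chrome://tracing.
trace = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(trace.generate_chrome_trace_format(show_memory=True))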

A more advanced technique is to plot a timeline of memory allocations and deallocations, as was done in this issue.

In theory, memory usage shouldn't grow between steps as long as you aren't creating new stateful ops (variables), but I have found that global memory allocation can grow if the sizes of your tensors change between steps.
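
If you want to check whether that is happening here, one simple diagnostic (my suggestion, not something your code already does) is to print the shape of every array you feed right before the sess.run call in train():

import numpy as np

# Hypothetical check: log the shape of each fed array so you can see whether
# tensor sizes change from one training step to the next.
for placeholder, value in self.feed_dict.items():
    print(placeholder.name + ' <- ' + str(np.asarray(value).shape))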

A workaround is to periodically save your parameters to a checkpoint and restart your script.
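
Your training script already has most of the pieces for that (a tf.train.Saver and a periodic saver.save every save_step steps); on restart you would just restore the latest checkpoint instead of reinitializing, roughly like this sketch, reusing sess, saver and model_path from your script:

# Restore the newest checkpoint if one exists, otherwise start fresh.
ckpt = tf.train.get_checkpoint_state(model_path)
if ckpt and ckpt.model_checkpoint_path:
    saver.restore(sess, ckpt.model_checkpoint_path)
else:
    sess.run(tf.global_variables_initializer())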
