Fixing PyTorch RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed

The PyTorch error "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed" occurs when your training loop doesn't repackage or detach the hidden state between batches, so a later backward pass tries to go through a graph whose buffers were already freed by an earlier one.

In this article, I explain why this error occurs and walk through several possible fixes so you can get rid of it for good.

Exploring the RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed

This error occurs when the training loop keeps reusing a hidden state that still carries the computation graph of a previous batch. When loss.backward() is called, PyTorch frees the intermediate buffers of that graph; if the next batch's backward pass reaches back into the same graph through the old hidden state, the buffers are gone and the error is raised. A minimal reproduction is shown after the error message below.

Please double-check that you are not mixing it up with a different error; the message should look like the one below.

#
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed.
#
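To see how the error arises, here is a minimal sketch (the model, shapes, and names are made up for illustration) of the situation described above: the hidden state of a small RNN is carried from one batch to the next without being detached, so the second call to loss.backward() tries to reach back into the graph of the first batch, whose buffers were already freed. On recent PyTorch versions the wording of the message is slightly different, but the cause is the same.

#
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=4, hidden_size=8, batch_first=True)
criterion = nn.MSELoss()
hidden = torch.zeros(1, 2, 8)           # (num_layers, batch, hidden_size)

for step in range(2):
    x = torch.randn(2, 5, 4)            # (batch, seq_len, input_size)
    target = torch.randn(2, 5, 8)
    output, hidden = rnn(x, hidden)     # hidden still points into the previous batch's graph
    loss = criterion(output, target)
    loss.backward()                     # second iteration raises the RuntimeError
#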

Below, I present the fixes that are most likely to make the error go away.

The best method to get rid of this problem : 

Use loss.backward(retain_graph=True) instead of loss.backward(), or detach the hidden state with hidden.detach_()

We already established that the problem comes from the training loop not repackaging or detaching the hidden state between batches. There are two ways to solve it.

The first option is to call loss.backward(retain_graph=True) instead of loss.backward(). The only inconvenience is that training will use more memory and take longer, because the computation graph is kept around instead of being freed, but the error goes away.

For example, if your initial code is:

#
hidden = repackage_hidden(hidden)
model.zero_grad()
output, hidden = model(data, hidden)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
#

you should replace it with the version below; the change is easy, since you only have to pass retain_graph=True to loss.backward():

#
hidden = repackage_hidden(hidden)
model.zero_grad()
output, hidden = model(data, hidden)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward(retain_graph=True)
#
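If you want to see what retain_graph=True does in isolation, the tiny sketch below backpropagates twice through the same graph: the first backward keeps the graph alive, so the second one no longer raises the error and the gradients simply accumulate.

#
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()

loss.backward(retain_graph=True)   # keep the graph so it can be used again
loss.backward()                    # works; x.grad now holds the sum of both passes
#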

In my experience, this alone solves the issue for the vast majority of people.

The second option is to detach or repackage the hidden state between batches. To do that, you can use either of the two lines below (they are alternatives: the first detaches in place, the second reassigns a detached copy); a sketch of a reusable repackage_hidden helper follows the snippet.

#
hidden.detach_()          # detach the hidden state in place
hidden = hidden.detach()  # or reassign a detached copy
#
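The snippets of the first option also call a repackage_hidden function, which was never shown. Below is a sketch of such a helper, similar to the one used in PyTorch's word_language_model example: it detaches a single tensor, and for models like LSTMs whose hidden state is a tuple it detaches every element recursively.

#
import torch

def repackage_hidden(h):
    """Detach hidden states from their history so gradients stop at the batch boundary."""
    if isinstance(h, torch.Tensor):
        return h.detach()
    # e.g. an LSTM returns (h_n, c_n); detach each element
    return tuple(repackage_hidden(v) for v in h)
#

You would then call hidden = repackage_hidden(hidden) at the start of every batch, exactly as in the snippets of the first option.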

I hope one of these two options helped you get rid of this error for good.

Summing-up : 

This is it, we have reached the end of this article. I hope I helped you solve the PyTorch error "RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed"; a small change like the ones above will usually fix it. Do not give up, keep coding and learning, cheers.

If you want to learn more about Python, please check out the Python Documentation : https://docs.python.org/3/