EDIT: Also, if you do not have enough data for a separate validation set, you can use cross-validation both to tune the parameters and to estimate the test error. The validation set is used for model selection, for example, selecting the polynomial model with the lowest validation error for a given parameter set. During training you monitor the validation error and stop at the moment it starts to rise; the weight configuration at that point gives a model that predicts the training data well without overfitting. The validation data affect the weight configuration only indirectly, by selecting which configuration to keep: this set of data is never used in the training process itself.
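A minimal sketch of the cross-validation idea, in pure Python with no libraries. The two candidate models (a constant predictor and a one-variable linear fit) and the toy data are made up for illustration; the point is only that each candidate is scored on held-out folds and the one with the lowest average error is selected.

```python
def fit_constant(xs, ys):
    # baseline model: always predict the mean of the training targets
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    # one-variable least-squares line through the training points
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = cov / var
    return lambda x: my + slope * (x - mx)

def cv_error(fit, xs, ys, k=5):
    """Average squared error over k held-out folds."""
    n = len(xs)
    total, count = 0.0, 0
    for i in range(k):
        hold = set(range(i, n, k))   # every k-th point is held out
        tr_x = [x for j, x in enumerate(xs) if j not in hold]
        tr_y = [y for j, y in enumerate(ys) if j not in hold]
        model = fit(tr_x, tr_y)      # fit on the remaining folds
        for j in hold:               # score on the held-out fold
            total += (model(xs[j]) - ys[j]) ** 2
            count += 1
    return total / count

xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]     # noiseless line, for the demo
errs = {name: cv_error(f, xs, ys)
        for name, f in [('constant', fit_constant), ('linear', fit_linear)]}
best = min(errs, key=errs.get)       # the linear model wins on this data
```

With real data you would add noise and more candidates (e.g. polynomial degrees), but the selection loop stays the same.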
If you are using a gradient-based algorithm to train the network, then the error surface and the gradient at any point depend entirely on the training set; the training data are therefore used directly to adjust the weights.
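To make that concrete, here is a tiny sketch (not the network from this answer) of gradient descent on a single weight: every term in the gradient, and hence every weight update, comes from the training pairs alone.

```python
train_set = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # toy data, y = 2x

w = 0.0        # single weight, model: y_hat = w * x
lr = 0.05      # learning rate
for epoch in range(200):
    # gradient of the mean squared error over the TRAINING data only
    grad = sum(2 * (w * x - y) * x for x, y in train_set) / len(train_set)
    w -= lr * grad   # weight update driven entirely by that gradient
# w converges toward 2.0; validation data never entered the update
```

A validation set, by contrast, would only be consulted after each epoch to decide when to stop, never inside the update step.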
    def train(self, train_data, validation_data, N=0.3, M=0.1):
        # N: learning rate, M: momentum factor
        accuracy = []
        while True:
            # one training epoch: weights are adjusted from the training data only
            error = 0.0
            for inputs, targets in train_data:
                self.update(inputs)
                error = error + self.backPropagate(targets, N, M)
            # measure the validation error after each epoch (no weight updates here)
            total = 0.0
            for inputs, targets in validation_data:
                outputs = self.update(inputs)
                # sum of absolute differences between targets and outputs
                total += sum(abs(t - o) for t, o in zip(targets, outputs))
            accuracy.append(total)
            print('training error   %-14f' % error)
            print('validation error %-14f' % total)
            # one possible early-stopping rule: stop once the recent average
            # validation error has risen above the best value seen so far
            if len(accuracy) >= 5 and sum(accuracy[-5:]) / 5 > min(accuracy):
                break
Test set: A set of examples used only to assess the performance [generalization] of a fully specified classifier.
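The three-way split can be sketched as follows. The 60/20/20 proportions are an assumption for illustration; nothing above prescribes exact ratios.

```python
import random

data = list(range(100))      # stand-in for a list of (input, target) pairs
random.seed(0)
random.shuffle(data)         # shuffle before splitting to avoid ordering bias

n = len(data)
train_set = data[: int(0.6 * n)]                    # used to adjust the weights
validation_set = data[int(0.6 * n): int(0.8 * n)]   # early stopping / model selection
test_set = data[int(0.8 * n):]                      # touched once, for the final estimate
```

Once the test set has influenced any modelling decision, it no longer gives an unbiased estimate of generalization, so it should be evaluated exactly once.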