Home > GVSToolbox > runSVMLeave1Out.m

runSVMLeave1Out

PURPOSE

Calculate leave-one-subject-out accuracy for a linear SVM

SYNOPSIS

function [ percCorrect, w, predictedVsTrue ] = runSVMLeave1Out( featureVect, classLabels, numSamplesPerSubj, expLabels )

DESCRIPTION

 Calculate leave-one-subject-out accuracy for a linear SVM.
 
 syntax: [ percCorrect, w, predictedVsTrue ] = runSVMLeave1Out( featureVect, classLabels, numSamplesPerSubj, expLabels)
 
 Inputs:
   featureVect: (dim x numSamples) matrix of data
   classLabels: (1 x numSamples) vector of class labels valued 0, 1, or 2.
       Class 1 = learned, class 0 = did not learn, class 2 = unclear;
       samples labeled 2 are removed from both the training and test sets.
   numSamplesPerSubj: number of samples per subject, used to split the
      data into leave-one-subject-out folds
   expLabels: optional struct array, usually returned by the
      'getLeave1OutLabels' function, specifying how to partition the data
      for leave-one-subject-out cross validation
 
 Outputs:
   percCorrect: fraction (0 to 1) of test samples correctly classified
       across all leave-one-subject-out folds
   w: normal vector of the separating hyperplane from an SVM trained on
       all well-labeled data (classes 0 and 1)
   predictedVsTrue: cell array containing the [predicted; true] labels of
       the test data for each fold of the cross validation
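
As a usage sketch (the data sizes, subject count, and labels here are made up for illustration; runSVMLeave1Out, getLeave1OutLabels, and the libsvm MATLAB interface svmtrain/svmpredict are assumed to be on the path):

```matlab
% Hypothetical example: 5 subjects, 4 samples each, 10-dimensional features
numSubj = 5; numSamplesPerSubj = 4; dim = 10;
numSamples = numSubj * numSamplesPerSubj;

featureVect = randn(dim, numSamples);        % dim x numSamples data matrix
classLabels = randi([0 1], 1, numSamples);   % 1 = learned, 0 = did not learn
classLabels(3) = 2;                          % mark one unclear sample; it is removed

% expLabels omitted, so the leave-one-subject-out folds are built internally
[percCorrect, w, predictedVsTrue] = runSVMLeave1Out(featureVect, classLabels, ...
    numSamplesPerSubj);

fprintf('fraction correctly classified: %.2f\n', percCorrect);
```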

CROSS-REFERENCE INFORMATION

This function calls:
This function is called by:
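
Although the partitioning is normally produced by getLeave1OutLabels, the loop in the source below only requires that expLabels be a struct array with train and test index fields. A hand-built sketch for two subjects with three samples each (indices are illustrative):

```matlab
% Two leave-one-subject-out folds over samples 1:6
expLabels(1).train = 4:6;  expLabels(1).test = 1:3;   % hold out subject 1
expLabels(2).train = 1:3;  expLabels(2).test = 4:6;   % hold out subject 2
```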

SOURCE CODE

0001 % Calculate leave-one-subject-out accuracy for a linear SVM.
0002 %
0003 % syntax: [ percCorrect, w, predictedVsTrue ] = runSVMLeave1Out( featureVect, classLabels, numSamplesPerSubj, expLabels)
0004 %
0005 % Inputs:
0006 %   featureVect: (dim x numSamples) matrix of data
0007 %   classLabels: (1 x numSamples) vector of class labels valued 0, 1, or 2.
0008 %       Class 1 = learned, class 0 = did not learn, class 2 = unclear;
0009 %       samples labeled 2 are removed from both the training and test sets.
0010 %   numSamplesPerSubj: number of samples per subject, used to split the
0011 %      data into leave-one-subject-out folds
0012 %   expLabels: optional struct array, usually returned by the
0013 %      'getLeave1OutLabels' function, specifying how to partition the
0014 %      data for leave-one-subject-out cross validation
0015 %
0016 % Outputs:
0017 %   percCorrect: fraction (0 to 1) of test samples correctly classified
0018 %       across all leave-one-subject-out folds
0019 %   w: normal vector of the separating hyperplane from an SVM trained on
0020 %       all well-labeled data (classes 0 and 1)
0021 %   predictedVsTrue: cell array containing the [predicted; true] labels
0022 %       of the test data for each fold of the cross validation
0023 
0024 function [ percCorrect, w, predictedVsTrue ] = runSVMLeave1Out( featureVect, classLabels, ...
0025                                 numSamplesPerSubj, expLabels )
0026 
0027 % leave one subject out cross validation
0028 [ dim, numSamples] = size( featureVect);
0029 
0030 % center and scale variables to unit variance
0031 featureVect = featureVect - repmat( mean(featureVect,2), [1,numSamples] );
0032 featStdev = std( featureVect, 0, 2);
0033 featureVect( featStdev ~= 0,:) = featureVect( featStdev ~= 0,:)./repmat(featStdev(featStdev ~= 0), [1,numSamples]);
0034 % featureVect = featureVect./repmat( std( featureVect, 0, 2)+.001, [1,numSamples]);
0035 
0036 
0037 if nargin < 4 || isempty(expLabels)
0038     expLabels = getLeave1OutLabels( numSamples, numSamplesPerSubj);
0039 %     expLabels = balanceClasses( expLabels, classLabels );  % even out number samples in each class
0040 end
0041 
0042 numTrials = length(expLabels);
0043 numCorrect = 0;
0044 totalNumTest=0;
0045 options = '-t 0 -h 0 -e .01';  % -t 0: linear kernel, -h 0: disable shrinking heuristic, -e .01: stopping tolerance
0046 
0047 predictedVsTrue = cell( numTrials,1 );
0048 
0049 for i1 = 1:numTrials 
0050     
0051     trainLabels = classLabels(:,expLabels(i1).train);
0052     trainFeatures = featureVect(:,expLabels(i1).train);
0053     trainFeatures( :, trainLabels==2) = [];
0054     trainLabels( :, trainLabels==2) = [];
0055     
0056     testLabels = classLabels(:,expLabels(i1).test);
0057     testFeatures = featureVect(:,expLabels(i1).test);
0058     testFeatures( :, testLabels==2) = [];
0059     testLabels( :, testLabels==2) = [];
0060     
0061     if ~isempty( testLabels) 
0062         model = svmtrain(trainLabels', trainFeatures', options );
0063         [predicted, accuracy, probEst] = svmpredict(testLabels',testFeatures', model);
0064         numCorrect = numCorrect + accuracy(1)*.01*length(testLabels);            
0065         totalNumTest = totalNumTest+length(testLabels);  
0066         
0067         predictedVsTrue{i1} = [ predicted'; testLabels];
0068         
0069     end
0070 
0071 end
0072 percCorrect = numCorrect/totalNumTest;  % fraction of test samples correctly classified
0073 
0074 % normal vector of optimal hyperplane (ALL DATA)
0075 featureVect( :, classLabels==2) = [];
0076 classLabels( :, classLabels==2) = [];
0077 model = svmtrain(classLabels', featureVect', options );
0078 w = sum( repmat( model.sv_coef', [dim,1]).*full(model.SVs'),2);
0079
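
For a linear kernel, the hyperplane normal recovered on line 0078 is w = sum_i alpha_i * y_i * x_i; in the libsvm MATLAB interface, model.sv_coef holds the products alpha_i * y_i and model.SVs holds the support vectors as rows, so the repmat/sum expression reduces to a single matrix product. A sketch of the equivalent form (assuming the standard libsvm model fields):

```matlab
% Equivalent, more direct computation of the hyperplane normal:
% model.SVs is (numSV x dim) and model.sv_coef is (numSV x 1)
w = full(model.SVs') * model.sv_coef;   % dim x 1, same as the repmat/sum form

% libsvm stores the offset as rho; the decision value for a column vector x is
f = w' * x - model.rho;                 % classify by sign(f)
```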

Generated on Tue 01-Jul-2014 12:35:04 by m2html © 2005