tasks:
- exp1: repeat the experiments in the paper:
- illumination extrapolation: trainSet: frontal pose P00, 7 illuminations within 12 degrees: 7 x 1pose x 10person = 70; testSet1: 38 illuminations (between 12 and 77 degrees) x 1pose (P00) x 10person = 380
- pose and illumination extrapolation: trainSet: as above; testSet2: 380 + 45 illuminations (< 77 degrees) x 8poses x 10person = 3980
- exp2: pose extrapolation. Pick one pose for training, the other poses for test. trainSet: as above; testSet: 7illumination x 8pose x 10person = 560
- exp3: illumination extrapolation. Pick the 7 frontal illuminations for training, the others for test. trainSet: as above; testSet: 38illumination x 1pose (P00) x 10person = 380
- exp4: trainSet: 8 frontal illuminations x 9pose x 10person; testSet: 38illumination x 9pose x 10person
- exp5: trainSet: randomly pick 20 illuminations x 9pose x 10person; testSet: 26illumination x 9pose x 10person
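As a quick arithmetic check on the plan above, the quoted set sizes for exp1 can be reproduced directly (a Python sketch; the counts are taken straight from the plan):

```python
# Sanity-check the exp1 set sizes quoted above: 10 persons, 9 poses
# (1 frontal + 8 others), 45 illuminations below 77 degrees,
# 7 of which lie within 12 degrees.
persons = 10

train = 7 * 1 * persons           # 7 illuminations, frontal pose only
test1 = 38 * 1 * persons          # illuminations between 12 and 77 degrees, frontal pose
test2 = test1 + 45 * 8 * persons  # add the 8 non-frontal poses, illuminations < 77 degrees

print(train, test1, test2)  # 70 380 3980
```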
what I have done:
- extracted the tar.gz files into 90 folders
- loaded the images from the 90 folders and saved them to yaleB.mat
- aligned the face images: for the frontal pose, an affine transform is used since there are 3 pairs of control points; for the other poses, images are shifted so that the face is centered in the image
- tform = cp2tform(input_points, base_points, 'affine'); % cp2tform(input_points, base_points, transformtype)
- B = imtransform(A, tform, interp, 'XData', [182,457], 'YData', [108,404], 'XYScale', 1); % interp = 'bicubic'
- visually compared the aligned face images
- found bizarre images for person 2, pose 00; further inspection is needed
- because the affine transform is applied only to the frontal pose, the frontal and non-frontal images of the same person look inconsistent (distorted) for some persons. However, the training images for exp1 come only from the frontal pose. A better way to register the face images may be to simply translate them so the face centers coincide; train and test images would then be more consistent.
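The affine step above can be sketched as a rough numpy analog of the cp2tform/imtransform calls: with 3 control-point pairs the 6 affine parameters are determined exactly by a linear solve. The point coordinates below are hypothetical placeholders, not the real landmarks:

```python
import numpy as np

# Estimate the 2-D affine transform mapping input_points to base_points.
# The coordinates here are made-up stand-ins for the eye/mouth landmarks.
input_points = np.array([[250.0, 180.0], [390.0, 180.0], [320.0, 300.0]])
base_points  = np.array([[240.0, 170.0], [400.0, 170.0], [320.0, 310.0]])

# Design matrix for [x, y, 1] @ A = [x', y']; A is the 3x2 affine matrix.
X = np.hstack([input_points, np.ones((3, 1))])      # 3x3
A, *_ = np.linalg.lstsq(X, base_points, rcond=None)  # exact for 3 non-collinear pairs

# Warping a single point through the transform:
warped = np.array([320.0, 240.0, 1.0]) @ A
```

Applying the transform to a whole image (the imtransform step, with bicubic interpolation and the XData/YData crop) would additionally need an image-warping routine, which is omitted here.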
- histogram equalization: each face image is divided into two halves (left and right), and histeq is applied to each half separately.
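The half-image equalization can be sketched in numpy (a minimal stand-in for MATLAB's histeq, assuming 8-bit grayscale input):

```python
import numpy as np

def histeq(img):
    """Plain histogram equalization for an 8-bit grayscale array (numpy sketch)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Normalize the CDF to the full 0..255 range and use it as a lookup table.
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

def histeq_halves(img):
    """Equalize the left and right halves independently, as described above."""
    mid = img.shape[1] // 2
    out = img.copy()
    out[:, :mid] = histeq(img[:, :mid])
    out[:, mid:] = histeq(img[:, mid:])
    return out
```

Equalizing the two halves separately compensates for one-sided illumination, where a single global mapping would leave the dark half compressed.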
- partitioned the database into a training set and 2 test sets
- trainSet: frontal pose P00, 7 illuminations within 12 degrees: 7 x 1pose x 10person = 70
- testSet1: 38 illuminations (between 12 and 77 degrees) x 1pose (P00) x 10person = 380
- testSet2: 380 + 45 illuminations (< 77 degrees) x 8poses x 10person = 3980
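The partition above can be sketched as a filter over per-image metadata (a Python sketch; the angle values and the record layout are assumptions, not the real Yale B file naming):

```python
import itertools

# Synthetic stand-in records: 10 persons x 9 poses x a few hypothetical
# illumination angles; pose 0 plays the role of the frontal pose P00.
angles = [0, 5, 10, 20, 30, 45, 60, 70]
records = [
    {"person": p, "pose": pose, "angle": a}
    for p, pose, a in itertools.product(range(10), range(9), angles)
]

# trainSet: frontal pose, illumination within 12 degrees.
train = [r for r in records if r["pose"] == 0 and r["angle"] <= 12]
# testSet1: frontal pose, illumination between 12 and 77 degrees.
test1 = [r for r in records if r["pose"] == 0 and 12 < r["angle"] < 77]
# testSet2: everything under 77 degrees that is not in the training set.
test2 = [r for r in records if r["angle"] < 77 and r not in train]
```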
- trained the system on the trainSet to obtain the eigenfaces, fisherfaces, reconstructed images, etc.
- tested the trained system on the 2 test sets.
- 4 subsets for illumination: up to 12, 25, 50, and 77 degrees
- 3 subsets for pose: frontal (pose 1), 12 degrees (poses 2~6), 24 degrees (poses 7~9)
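The eigenface training/testing step can be sketched as PCA via SVD plus nearest-neighbor matching in the projected space (random data stands in for the aligned, equalized face vectors; the image dimensions and component count are assumptions):

```python
import numpy as np

# Random stand-in for the 70 flattened training faces (7 illuminations
# per person, 10 persons); 48x42 is an assumed image size.
rng = np.random.default_rng(0)
n_train, dim, n_components = 70, 48 * 42, 20
X = rng.standard_normal((n_train, dim))
labels = np.repeat(np.arange(10), 7)  # person id for each training image

mean_face = X.mean(axis=0)
Xc = X - mean_face
# SVD of the centered data: rows of Vt are the principal axes (eigenfaces).
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:n_components]

def project(img_vec):
    """Project a flattened face image onto the eigenface subspace."""
    return (img_vec - mean_face) @ eigenfaces.T

train_proj = Xc @ eigenfaces.T

def classify(img_vec):
    """Nearest-neighbor classification in the projected space."""
    d = np.linalg.norm(train_proj - project(img_vec), axis=1)
    return labels[np.argmin(d)]
```

The fisherface variant would follow the same projection pipeline but with LDA-derived axes instead of the leading principal components.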
- compare the results to the published results
work to be done:
- get detailed classification accuracies for each subset.
- get results for rmcld first and compare with the published result.
- perform exp 4
- compare PCA, RMLD, RCLD, RBCLD