Huang Dong’s Blog, email: huangdongxy@hotmail.com

February 10, 2009

BCI Competitions

Filed under: pattern recognition, Tech — donghuang @ 5:33 pm

The BCI Competition website

BCI Competition IV:

  • classification of continuous EEG without trial structure (data set 1).
  • classification of EEG signals affected by eye movement artifacts (data set 2).
  • classification of the direction of wrist movements from MEG (data set 3).
  • discrimination requiring fine-grained spatial resolution in ECoG (data set 4).

Data sets:

 data set:             1          2a         2b         3          4

 number of classes:    2          4          2          4          5

 signal type:          EEG        EEG/EOG    EEG/EOG    MEG        ECoG

 sampling rate (Hz):   1000       250        250        400        1000

 frequency band (Hz):  0.05-200   0.5-100    0.5-100    0.5-100    0.15-200

 number of channels:   64         22/3       3/3        10         48

 number of subjects:   7          9          9          2          3

 Data set 3: hand movement direction in MEG

The data set contains directionally modulated low-frequency MEG activity that was recorded while subjects performed wrist movements in four different directions.

[10 MEG channels (filtered to 0.5-100Hz), 400Hz sampling rate, 4 classes, 2 subjects]

Detailed description:

The trials were cut to contain data from 0.4 s before to 0.6 s after movement onset, and the signals were band-pass filtered (0.5 to 100 Hz) and resampled at 400 Hz. In Waldert et al. (J Neurosci 28(4), 2008) it was shown that especially the low-frequency activity (<8 Hz) contains information about movement direction. The data are composed of signals from ten MEG channels located above the motor areas.
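
Given the stated parameters (400 Hz sampling, trials from 0.4 s before to 0.6 s after movement onset), each epoched trial spans exactly 400 samples. A minimal sketch of that epoching arithmetic (the function and variable names are mine, not from the competition toolbox):

```python
# Epoching arithmetic for the MEG data set (illustrative sketch).
FS = 400     # sampling rate in Hz (given in the description)
T_PRE = 0.4  # seconds of data kept before movement onset
T_POST = 0.6 # seconds of data kept after movement onset

def trial_slice(onset_sample, fs=FS, t_pre=T_PRE, t_post=T_POST):
    """Return (start, stop) sample indices of one epoched trial."""
    start = onset_sample - int(round(t_pre * fs))
    stop = onset_sample + int(round(t_post * fs))
    return start, stop

start, stop = trial_slice(onset_sample=2000)
print(start, stop, stop - start)  # 1840 2240 400 -> 400 samples per trial
```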

February 8, 2009

breast cancer cell classification using SVM (EE5904 project)

Filed under: Neural networks, pattern recognition, Tech — donghuang @ 6:00 pm

Title: Wisconsin Diagnostic Breast Cancer (WDBC)


Data Set Characteristics: Multivariate

Number of Instances: 569

Area: Life

Attribute Characteristics: Real

Number of Attributes: 32

Date Donated: 1995-11-01

Associated Tasks: Classification

Missing Values? No

Number of Web Hits: 30874

Highest Percentage Achieved: 98%

Attribute Information:

1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32) real-valued input features

Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass.  They describe characteristics of the cell nuclei present in the image.

Ten real-valued features are computed for each cell nucleus:

a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area – 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension (“coastline approximation” – 1)
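
As a quick sanity check on the compactness definition in f), the isoperimetric inequality (perimeter² ≥ 4π · area, with equality for a circle) implies that a circle attains the minimum value 4π − 1 ≈ 11.566, so larger values indicate less circular, more irregular nuclei. A small sketch (the function name is mine):

```python
import math

def compactness(perimeter, area):
    """Compactness as defined above: perimeter^2 / area - 1.0."""
    return perimeter ** 2 / area - 1.0

# A circle of radius r attains the minimum, 4*pi - 1 ~= 11.566,
# regardless of r; irregular contours score strictly higher.
r = 3.0
print(compactness(2 * math.pi * r, math.pi * r ** 2))
```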

The mean, standard error, and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features.  For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.
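
The field numbering above follows a regular rule: fields 1-2 are the ID and diagnosis, then come 10 mean features, 10 standard errors, and 10 "worst" values. A small sketch of that mapping (the function name and lists are mine, not part of the dataset distribution):

```python
# WDBC 1-based field numbers: 1 = ID, 2 = diagnosis, then three blocks
# of ten features each (mean, standard error, worst).
FEATURES = ["radius", "texture", "perimeter", "area", "smoothness",
            "compactness", "concavity", "concave points", "symmetry",
            "fractal dimension"]
STATS = ["mean", "SE", "worst"]

def field_index(feature, stat):
    """Return the 1-based field number of a (feature, statistic) pair."""
    return 3 + STATS.index(stat) * 10 + FEATURES.index(feature)

print(field_index("radius", "mean"))   # 3
print(field_index("radius", "SE"))     # 13
print(field_index("radius", "worst"))  # 23
```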

Data set information:

  • predicting field 2, diagnosis: B = benign, M = malignant
  • sets are linearly separable using all 30 input features
  • Class distribution: 357 benign, 212 malignant
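
Linear separability is a strong property: it guarantees that even a plain perceptron converges to a zero-error separating hyperplane, while an SVM additionally maximizes the margin. A minimal pure-Python perceptron on toy 2-D data (standing in for the 30-D WDBC features; the data and names here are illustrative, not from the dataset):

```python
# Toy linearly separable data: ((x1, x2), label) with label in {+1, -1}.
data = [((1.0, 2.0), +1), ((2.0, 3.0), +1), ((2.0, 1.0), +1),
        ((-1.0, -1.0), -1), ((-2.0, 0.0), -1), ((0.0, -2.0), -1)]

w, b = [0.0, 0.0], 0.0
for _ in range(100):                    # perceptron training epochs
    errors = 0
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # misclassified point
            w[0] += y * x1; w[1] += y * x2; b += y  # perceptron update
            errors += 1
    if errors == 0:                     # converged: every point correct
        break

print("separating hyperplane:", w, b)
```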

January 19, 2009

experiments based on yaleB

Filed under: pattern recognition, Tech — donghuang @ 12:40 am

tasks:

  1. exp1: repeat the experiments in the paper:
    1. illumination extrapolation: trainSet: frontal pose P00, 7 illuminations within 12 degrees: 7 x 1 pose x 10 persons = 70; testSet1: 38 illuminations (between 12 and 77 degrees) x 1 pose (P00) x 10 persons = 380
    2. pose and illumination extrapolation: trainSet: as above; testSet2: 380 + 45 illuminations (< 77 degrees) x 8 poses x 10 persons = 3980
  2. exp2: pose extrapolation. Pick one pose for training and the other poses for testing. trainSet: as above; testSet: 7 illuminations x 8 poses x 10 persons = 560
  3. exp3: illumination extrapolation. Pick the 7 frontal illuminations for training and the others for testing. trainSet: as above; testSet: 38 illuminations x 1 pose (P00) x 10 persons = 380
  4. exp4: trainSet: 8 frontal illuminations x 9 poses x 10 persons; testSet: 38 illuminations x 9 poses x 10 persons
  5. exp5: trainSet: randomly pick 20 illuminations x 9 poses x 10 persons; testSet: 26 illuminations x 9 poses x 10 persons
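
The set sizes quoted in exp1 and exp2 follow directly from the counts of illuminations, poses, and persons; a quick arithmetic check (variable names are mine):

```python
# Sanity-check the train/test set sizes for exp1 and exp2 (pure arithmetic).
persons = 10

train_exp1 = 7 * 1 * persons                # 7 illuminations, frontal pose P00
test1_exp1 = 38 * 1 * persons               # 38 illuminations, pose P00
test2_exp1 = test1_exp1 + 45 * 8 * persons  # plus 45 illuminations x 8 poses
test_exp2 = 7 * 8 * persons                 # 7 illuminations x 8 non-frontal poses

print(train_exp1, test1_exp1, test2_exp1, test_exp2)  # 70 380 3980 560
```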

what I have done:

  • extracted the tar.gz files into 90 folders
  • load the images in the 90 folders and save them to yaleB.mat
  • align the face images: for the frontal pose, an affine transform is used since there are 3 pairs of control points; for the other poses, images are shifted so that the face is centered in the image
    • tform = cp2tform(input_points, base_points, 'affine'); % cp2tform(input_points, base_points, transformtype)
    • B = imtransform(A, tform, interp, 'XData', [182,457], 'YData', [108,404], 'XYScale', 1); % interp = 'bicubic'
    • visually compare the aligned face images
      • found bizarre images for person 2, pose 00; further inspection needed.
      • because the affine transformation is applied only to the frontal pose, the frontal pose and the other poses of the same person look distorted relative to each other for some persons. However, for exp1 the training images come only from the frontal pose. A better way to register the face images may be to simply translate them so that the face centers align; training and test images would then be more consistent.
  • histogram equalization: each face image is divided into 2 halves (left and right), and histeq is applied to each half.
  • partition the database into a training set and 2 test sets
    • trainSet: frontal pose P00, 7 illuminations within 12 degrees: 7 x 1 pose x 10 persons = 70
    • testSet1: 38 illuminations (between 12 and 77 degrees) x 1 pose (P00) x 10 persons = 380
    • testSet2: 380 + 45 illuminations (< 77 degrees) x 8 poses x 10 persons = 3980
  • train the system on trainSet and obtain the eigenfaces, fisherfaces, reconstructed images, etc.
  • test the trained system on the 2 test sets.
    • 4 subsets in illumination: 12, 25, 50, 77 degrees
    • 3 subsets in pose: frontal (pose 1), 12 degrees (poses 2-6), 24 degrees (poses 7-8)
  • compare the results to the published results
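
Since the 'affine' option of cp2tform needs exactly 3 control-point pairs, the fitted transform is the unique affine map sending the 3 input points onto the 3 base points. A pure-Python sketch of that estimation via Cramer's rule (not the MATLAB implementation; names are mine):

```python
def fit_affine(input_pts, base_pts):
    """Fit x' = a*x + b*y + tx, y' = c*x + d*y + ty from 3 point pairs.

    Returns (a, b, tx, c, d, ty)."""
    (x1, y1), (x2, y2), (x3, y3) = input_pts
    # Determinant of [[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]].
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(t1, t2, t3):
        # Cramer's rule for p*x + q*y + r = t at the three points.
        p = (t1 * (y2 - y3) - y1 * (t2 - t3) + (t2 * y3 - t3 * y2)) / det
        q = (x1 * (t2 - t3) - t1 * (x2 - x3) + (x2 * t3 - x3 * t2)) / det
        r = (x1 * (y2 * t3 - y3 * t2) - y1 * (x2 * t3 - x3 * t2)
             + t1 * (x2 * y3 - x3 * y2)) / det
        return p, q, r

    a, b, tx = solve(*[bx for bx, _ in base_pts])
    c, d, ty = solve(*[by for _, by in base_pts])
    return a, b, tx, c, d, ty

# Example: a pure translation by (+5, -2) is recovered exactly.
coeffs = fit_affine([(0, 0), (1, 0), (0, 1)], [(5, -2), (6, -2), (5, -1)])
print(coeffs)  # (1.0, 0.0, 5.0, 0.0, 1.0, -2.0)
```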

 Work to be done:

  • get detailed classification accuracies for each subset.
    • get results for rmcld first and compare with the published result.
  • perform exp 4
    • compare PCA, RMLD, RCLD, RBCLD

January 12, 2009

Protected: Efficient BackProp

Filed under: Neural networks, Tech — donghuang @ 11:54 pm

This content is password protected. To view it please enter your password below:

January 7, 2009

Protected: The Yale Face Database B

Filed under: pattern recognition — donghuang @ 5:46 pm

This content is password protected. To view it please enter your password below:

December 15, 2008

Protected: Local feature analysis: a general statistical theory for object representation

Filed under: pattern recognition — donghuang @ 2:03 pm

This content is password protected. To view it please enter your password below:

December 11, 2008

Protected: Face Recognition: A Literature Survey

Filed under: pattern recognition — donghuang @ 10:00 pm

This content is password protected. To view it please enter your password below:

December 9, 2008

Protected: Kernel Machine-Based One-Parameter Regularized Fisher Discriminant Method for Face Recognition

This content is password protected. To view it please enter your password below:

Protected: An efficient renovation on kernel Fisher discriminant analysis and face recognition experiments

Filed under: Image Processing & Computer Vision, pattern recognition — donghuang @ 6:50 am

This content is password protected. To view it please enter your password below:

Protected: A new kernel Fisher discriminant algorithm with application to face recognition

Filed under: Image Processing & Computer Vision, pattern recognition — donghuang @ 6:34 am

This content is password protected. To view it please enter your password below:
