This assignment requires you to explore how dimensionality reduction of data can enhance classification performance, and to write a report recording your results, findings and analysis.
• You need to create a computer program for dimensionality reduction, classification, and the computation of classification accuracy or error rate against the data dimensionality.
• You may use any programming language, such as MATLAB or Python.
• You may choose any public high-dimensional dataset (images or others) downloaded from the Internet for your study, or even use synthetic data.
• Proper pre-processing, such as alignment and normalization, may be necessary to make the data suitable for classification.
• Proper partition of the dataset into training and testing sets is necessary. All parameters for dimensionality reduction and classification must be determined from the training data only; the testing data may be used only to compute the classification accuracy or error rate.
• You may apply PCA, LDA, both, or any other method to reduce the data to various dimensionalities for classification. The minimum Mahalanobis distance classifier or a linear classifier is recommended for this study.
• Write a report in IEEE conference short-paper style of 3 to 5 pages, including everything such as figures, tables and references but excluding the program. List your program as an appendix at the end of the report. The report should record your experimental process and settings, the results you obtain from the experiments, analysis and comparison of your results, and the conclusions drawn from the experiments.
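A minimal sketch of the required pipeline is given below for orientation. It uses synthetic Gaussian data as a stand-in for your chosen dataset; all variable names, the class separations, and the regularization constants are illustrative assumptions, not part of the assignment specification. It fits normalization and PCA on the training data only, then evaluates a minimum Mahalanobis distance classifier (class centroids with a pooled covariance) at several reduced dimensionalities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 classes in 60 dimensions (illustrative only).
n_per_class, dim, n_classes = 100, 60, 3
means = rng.normal(0, 2, size=(n_classes, dim))
X = np.vstack([rng.normal(means[c], 1.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Partition into training and testing sets.
idx = rng.permutation(len(y))
split = int(0.7 * len(y))
Xtr, ytr = X[idx[:split]], y[idx[:split]]
Xte, yte = X[idx[split:]], y[idx[split:]]

# Normalization parameters estimated from the training data only.
mu, sigma = Xtr.mean(0), Xtr.std(0) + 1e-9
Xtr, Xte = (Xtr - mu) / sigma, (Xte - mu) / sigma

# PCA fitted on the training data: eigenvectors of the sample covariance,
# sorted by descending eigenvalue.
evals, evecs = np.linalg.eigh(np.cov(Xtr, rowvar=False))
evecs = evecs[:, np.argsort(evals)[::-1]]

def mahalanobis_classify(Ztr, ytr, Zte):
    """Minimum Mahalanobis distance classifier with pooled covariance."""
    classes = np.unique(ytr)
    centroids = np.array([Ztr[ytr == c].mean(0) for c in classes])
    pooled = sum(np.atleast_2d(np.cov(Ztr[ytr == c], rowvar=False))
                 for c in classes) / len(classes)
    inv = np.linalg.inv(pooled + 1e-6 * np.eye(pooled.shape[0]))  # regularized
    dists = np.array([[(z - m) @ inv @ (z - m) for m in centroids]
                      for z in Zte])
    return classes[np.argmin(dists, axis=1)]

# Classification accuracy versus retained dimensionality.
for k in (1, 2, 5, 10, 20, 40):
    W = evecs[:, :k]                       # projection fitted on training data
    pred = mahalanobis_classify(Xtr @ W, ytr, Xte @ W)
    acc = (pred == yte).mean()
    print(f"dim={k:2d}  accuracy={acc:.3f}")
```

Note that every fitted quantity (mean, standard deviation, projection matrix, centroids, pooled covariance) comes from the training split, and the testing split enters only in the final accuracy computation, as the assignment requires.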
Submit the PDF file of the report on NTULearn by Monday of week 14, 20 November 2023. Please use your name as it appears on your matric card, followed by your matric number, as your file name for submission, e.g. HOEJIUNTIAN-U2203856C.
References
[1] X. Jiang, "Linear Subspace Learning-Based Dimensionality Reduction," IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 16-26, March 2011.
[2] X. Jiang, "Asymmetric Principal Component and Discriminant Analyses for Pattern Classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, pp. 931-937, May 2009.
[3] X. Jiang, B. Mandal and A. Kot, "Eigenfeature Regularization and Extraction in Face Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 383-394, March 2008.