Abstract
Image classification algorithms are a tool that can be applied across a variety of research fields, some of which require extensive amounts of data for a model to produce adequate results. A workaround for this problem is to implement a multimodal data fusion algorithm: a model that combines data from different acquisition frameworks to compensate for missing data. In this thesis, we discuss the construction of a CNN model for image classification using transfer learning from three types of architectures, comparing their results in order to select the best model; we also implement a Spatial Pyramid Pooling (SPP) layer so the model can accept images of varying dimensions. The model is first tested on three unimodal datasets to analyze its performance and to tune its hyperparameters according to the results. We then use the optimized architecture and hyperparameters to train a model on a multimodal dataset. The aim of this thesis is to produce a multimodal image classification model that researchers and other practitioners can use to analyze images for their own purposes, avoiding the need to implement a model for each specific study.
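As an illustration of the architecture outlined above, the following is a minimal sketch of how a Spatial Pyramid Pooling layer can sit on top of a pretrained convolutional backbone so that inputs of varying height and width still yield a fixed-length feature vector for classification. The framework (PyTorch), the backbone (ResNet-50, standing in for one of the candidate transfer-learning architectures), the pyramid levels (1, 2, 4), and the class names are all assumptions for illustration, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SpatialPyramidPooling(nn.Module):
    """Pools a feature map at several grid sizes and concatenates the
    results, yielding a fixed-length vector regardless of input H x W."""
    def __init__(self, levels=(1, 2, 4)):  # assumed pyramid levels
        super().__init__()
        self.pools = nn.ModuleList(
            nn.AdaptiveMaxPool2d((n, n)) for n in levels
        )

    def forward(self, x):  # x: (batch, channels, H, W)
        return torch.cat(
            [pool(x).flatten(start_dim=1) for pool in self.pools], dim=1
        )

class SPPClassifier(nn.Module):
    """Hypothetical model: a pretrained trunk (ResNet-50 here, as one
    possible transfer-learning backbone) followed by SPP and a linear head."""
    def __init__(self, num_classes, levels=(1, 2, 4)):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Keep the convolutional trunk; drop the fixed-size avgpool and fc
        # so inputs of any spatial size can flow through.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.spp = SpatialPyramidPooling(levels)
        # ResNet-50's trunk outputs 2048 channels; SPP multiplies that by
        # the total number of pooling bins across all levels.
        feat_dim = 2048 * sum(n * n for n in levels)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        return self.classifier(self.spp(self.features(x)))

# Usage: two batches with different spatial sizes map to the same
# fixed-length representation, so one classifier head serves both.
model = SPPClassifier(num_classes=10)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 10])
print(model(torch.randn(1, 3, 320, 256)).shape)  # torch.Size([1, 10])
```

Because the pooling grids are adaptive, the classifier's input dimension depends only on the channel count and the chosen levels, which is what frees the model from a fixed input resolution.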