Classification of brain MRI images using automatic segmentation and texture analysis
DOI: https://doi.org/10.15276/aait.04.2020.4

Keywords: classification, “One versus All”, multiclass task, automatic segmentation, texture analysis, tumor, magnetic resonance imaging

Abstract
A brain tumor is a relatively severe type of human disease, and its timely diagnosis and identification of the tumor type are pressing tasks in modern medicine. In recent years, segmentation methods applied to 3D brain images (such as computed tomography and magnetic resonance imaging) have been used to determine the tumor type. Nevertheless, segmentation is usually performed manually, which requires a lot of time and depends on the experience of the doctor. This paper examines the possibility of creating a method for automatic image segmentation. As a training sample, a medical database of brain MRI scans with three tumor types (meningioma, glioma, and pituitary tumor) was used. Taking the different slices into account, the database contained 708 examples of meningioma, 1426 examples of glioma, and 930 examples of pituitary tumor. The database authors marked the region of interest on each image, and these annotations served as the ground truth (supervised learning) for the automatic segmentation model. Before building the model, currently popular automatic segmentation models were analyzed, and the U-Net deep convolutional neural network architecture was chosen as the most suitable one. The resulting model correctly segments 74 percent of the 600 images in the testing sample.
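For illustration, a minimal U-Net-style encoder-decoder for binary tumor masks can be sketched in Keras as follows; the input resolution, layer sizes, and training settings are assumptions made for this sketch, not the exact configuration used in the paper.

import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 1)):
    inputs = layers.Input(shape=input_shape)
    # Contracting path: convolutions followed by max pooling.
    c1 = conv_block(inputs, 16)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck.
    b = conv_block(p2, 64)
    # Expanding path: upsampling with skip connections from the encoder.
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)
    # Per-pixel tumor probability, trained against the expert-marked mask.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_slices, train_masks, validation_data=(test_slices, test_masks), epochs=20)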
After the automatic segmentation model was obtained, Random Forest models were created for brain tumor classification: three “One versus All” tasks and one multiclass task. Before the models were created, the total sample was divided into training (70 %), testing (20 %), and examination (10 %) subsets. The accuracy of the models on the examination subset ranges from 84 to 94 percent.
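A sketch of this classification setup with scikit-learn is shown below; the placeholder feature matrix, labels, and forest size are assumptions for the sketch, while the 70/20/10 split and the three “One versus All” plus one multiclass models follow the description above.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data standing in for the real texture features of the 3064 slices.
rng = np.random.default_rng(0)
X = rng.normal(size=(3064, 20))      # one row of texture features per slice
y = rng.integers(0, 3, size=3064)    # 0 meningioma, 1 glioma, 2 pituitary tumor

# 70 % training, 20 % testing, 10 % examination.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, train_size=0.7, stratify=y, random_state=0)
X_test, X_exam, y_test, y_exam = train_test_split(X_rest, y_rest, train_size=2/3, stratify=y_rest, random_state=0)

# Three "One versus All" models: each separates one tumor type from the other two.
for tumor_class in (0, 1, 2):
    ova = RandomForestClassifier(n_estimators=200, random_state=0)
    ova.fit(X_train, (y_train == tumor_class).astype(int))
    acc = accuracy_score((y_exam == tumor_class).astype(int), ova.predict(X_exam))
    print(f"class {tumor_class} versus all: examination accuracy {acc:.2f}")

# One multiclass model over all three tumor types at once.
multi = RandomForestClassifier(n_estimators=200, random_state=0)
multi.fit(X_train, y_train)
print("multiclass examination accuracy:", accuracy_score(y_exam, multi.predict(X_exam)))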
The classification models were built on texture features obtained with a texture analysis method developed earlier by the co-authors at the Department of Biomedical Cybernetics for the task of liver ultrasound image classification. These features were compared with the well-known Haralick texture features. The comparison showed that the most accurate classification model is obtained by combining all the features into a single stack.
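The Haralick features mentioned above are commonly derived from gray-level co-occurrence matrices; a sketch of computing them with scikit-image and stacking them with additional descriptors is given below. The custom_features function is a hypothetical placeholder, since the department's own texture features are not described here.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(region, levels=64):
    # Quantize the tumor region to a small number of gray levels and build
    # co-occurrence matrices for several distances and angles.
    scale = max(float(region.max()), 1.0)
    quantized = np.floor(region.astype(float) / scale * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def custom_features(region):
    # Hypothetical placeholder for the features developed for liver ultrasound images.
    return np.array([region.mean(), region.std()])

def combined_feature_stack(region):
    # Combining all features into one stack, which gave the most accurate models.
    return np.hstack([haralick_features(region), custom_features(region)])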