Fetal brain segmentation and gestational age prediction have long been active research topics in medical image processing. Both tasks remain challenging, largely because fetal movement during the scan makes it difficult to acquire a clear fetal brain image. With recent advances in deep learning, many models have been proposed that perform each task individually with good accuracy. In this paper, we present the Multi-Tasking Single-Encoder U-Net (MTSE U-Net), a deep learning architecture that performs three tasks on fetal brain images. The first task is segmentation of the fetal brain into seven components: intracranial space and extra-axial cerebrospinal fluid spaces, gray matter, white matter, ventricles, cerebellum, deep gray matter, and brainstem and spinal cord. The second task is classification of the fetal brain as pathological or neurotypical. The third task is prediction of the gestational age of the fetus from its brain. All three tasks are performed by a single model. The fetal brain images can be obtained by extracting the brain from fetal magnetic resonance images using any of the previous works on fetal brain segmentation, so our model acts as an extension of existing segmentation pipelines. For this model, the Jaccard similarity and Dice score on the segmentation task are 77% and 82%, respectively; accuracy on the brain-type classification task is 89%; and the mean absolute error on the gestational age task is 0.83 weeks. The model's salient-region identification is also evaluated. These results show that a single model can perform multiple related tasks simultaneously with good accuracy, eliminating the need for a separate model per task.
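The shared-encoder, three-head idea behind MTSE U-Net can be illustrated with a toy NumPy forward pass. This is a minimal sketch, not the paper's actual architecture: a single dense layer stands in for the U-Net encoder, and all layer sizes, weight shapes, and names here are illustrative assumptions. The point is only that one shared representation feeds a per-pixel segmentation head, a binary classification head, and a scalar regression head.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 8            # tiny "image" for illustration
N_CLASSES = 7        # one label per fetal brain tissue component
D_FEAT = 16          # shared feature size (illustrative)

# Shared encoder: one dense layer + ReLU stands in for the U-Net encoder.
W_enc = rng.normal(scale=0.1, size=(H * W, D_FEAT))

# Three task heads hang off the same feature vector.
W_seg  = rng.normal(scale=0.1, size=(D_FEAT, H * W * N_CLASSES))  # per-pixel logits
W_type = rng.normal(scale=0.1, size=(D_FEAT, 2))                  # pathological vs neurotypical
W_age  = rng.normal(scale=0.1, size=D_FEAT)                       # gestational age (weeks)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def forward(image):
    feat = np.maximum(image.reshape(-1) @ W_enc, 0.0)             # shared representation
    seg = softmax((feat @ W_seg).reshape(H, W, N_CLASSES))        # head 1: segmentation map
    brain_type = softmax(feat @ W_type)                           # head 2: 2-class probabilities
    age = float(feat @ W_age)                                     # head 3: scalar regression
    return seg, brain_type, age

seg, brain_type, age = forward(rng.random((H, W)))
print(seg.shape, brain_type.shape)   # (8, 8, 7) (2,)
```

In a real multi-task setup the three head losses (e.g., Dice, cross-entropy, and absolute error) would be summed with task weights and backpropagated through the shared encoder jointly.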
Automatic identity verification is a critical and actively researched area. One of the most effective and reliable approaches to identity verification is the use of unique human biological characteristics, i.e., biometrics. Among biometric modalities, the palm print is recognized as one of the most accurate and reliable. However, this domain also faces several critical challenges: image rotation, image displacement, changes in image scale, noise introduced by acquisition devices or user error, and region-of-interest (ROI) detection. To address these, this study introduces a new identity verification method based on the median robust extended local binary pattern (MRELBP). In this system, after the images are normalized and the ROI is extracted from the input image, features are extracted with the MRELBP algorithm. These features are then reduced in a dimensionality reduction step, and finally the feature vectors are classified with a k-nearest neighbor classifier. The images used in this study were selected from the IITD and CASIA data sets, and the identity verification rates on these two data sets without challenges were 97.2% and 96.6%, respectively. In addition, the computed verification rates remained broadly stable under challenges such as salt-and-pepper noise up to 0.16, rotation up to 5°, displacement up to 6 pixels, and scale change up to 94%.
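The feature-extraction and matching stages of such a pipeline can be sketched in NumPy. This is a simplified illustration, not the study's implementation: a plain 8-neighbour local binary pattern histogram stands in for MRELBP (which additionally median-filters its sampling regions across multiple radii for noise robustness), the dimensionality reduction step is omitted, and matching is a 1-nearest-neighbour lookup. All identifiers and sizes are assumptions for the sketch.

```python
import numpy as np

def lbp_histogram(img):
    """Simplified 8-neighbour local binary pattern over a grayscale image,
    summarized as a normalized 256-bin code histogram."""
    c = img[1:-1, 1:-1]                       # centers (borders skipped)
    code = np.zeros_like(c, dtype=np.uint8)
    # 8 neighbours, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def nearest_neighbour(query, gallery_feats, gallery_ids):
    """1-NN match on L1 (city-block) distance between feature histograms."""
    d = np.abs(gallery_feats - query).sum(axis=1)
    return gallery_ids[int(np.argmin(d))]

# Toy gallery of three enrolled "palm prints" (random stand-in images).
rng = np.random.default_rng(1)
enrolled = {pid: rng.random((32, 32)) for pid in ("A", "B", "C")}
feats = np.stack([lbp_histogram(im) for im in enrolled.values()])
ids = list(enrolled)

# A slightly noisy re-capture of subject B should still match B.
probe = enrolled["B"] + rng.normal(scale=0.001, size=(32, 32))
print(nearest_neighbour(lbp_histogram(probe), feats, ids))
```

Because each LBP bit encodes only the sign of a neighbour-center difference, small intensity perturbations leave most codes unchanged, which is the intuition behind the noise robustness the study reports for its median-filtered variant.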