Early diagnosis of brain tumors is crucial for treatment planning and for increasing the survival rates of affected patients. Brain tumors occur in a range of forms, sizes, and features, each with different treatment choices. One of the essential roles of neurologists and radiologists is diagnosing brain tumors in their early stages; however, manual diagnosis is difficult, time-consuming, and prone to error. An automated brain tumor detection system is therefore needed to identify tumors in their initial stages. This research presents an efficient deep learning-based system for the classification of brain tumors from brain MRI using deep convolutional networks and the salp swarm algorithm. All experiments are performed on the publicly available Kaggle brain tumor dataset. To enhance the classification rate, preprocessing and data augmentation techniques such as image skewing are applied. In addition, pretrained AlexNet and VGG19 networks are leveraged for feature extraction, and their features are merged into a single feature vector for brain tumor classification. Because some of the extracted features are insignificant for effective classification, we employ an efficient feature selection technique, the salp swarm algorithm, to find the most discriminative features and attain the best tumor classification rate. Finally, several SVM kernels are combined for classification, achieving 99.1% accuracy with 4111 optimal features selected from 8192.
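The abstract does not give implementation details, but a binary salp swarm feature selector of the kind it describes can be sketched roughly as follows. The function name, the sigmoid transfer threshold, the update rules, and the toy fitness below are illustrative assumptions, not the authors' code.

```python
import math
import random

def salp_swarm_select(n_features, fitness, n_salps=10, n_iter=30, seed=0):
    # Binary salp swarm feature selection (illustrative sketch).
    # fitness: callable mapping a 0/1 feature mask -> score to maximise.
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_features)] for _ in range(n_salps)]

    def mask(pos):
        # A sigmoid transfer function turns continuous positions into 0/1 masks.
        return [1 if 1.0 / (1.0 + math.exp(-10 * (p - 0.5))) > 0.5 else 0
                for p in pos]

    best_pos, best_fit = pop[0][:], fitness(mask(pop[0]))
    for t in range(n_iter):
        # Evaluate all salps and track the best mask so far (the "food source").
        for pos in pop:
            f = fitness(mask(pos))
            if f > best_fit:
                best_fit, best_pos = f, pos[:]
        c1 = 2 * math.exp(-((4 * t / n_iter) ** 2))  # exploration -> exploitation
        # The leader salp moves around the food source.
        for j in range(n_features):
            step = c1 * rng.random()
            val = best_pos[j] + step if rng.random() < 0.5 else best_pos[j] - step
            pop[0][j] = min(1.0, max(0.0, val))
        # Follower salps drift toward the salp directly ahead of them.
        for i in range(1, n_salps):
            pop[i] = [(a + b) / 2.0 for a, b in zip(pop[i], pop[i - 1])]
    return mask(best_pos), best_fit
```

As a toy check, a fitness that rewards two "informative" features while penalising mask size steers the search toward a small discriminative subset; in the paper's setting the fitness would instead be the classification rate of the SVM on the masked 8192-dimensional feature vector.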
Identification and recognition of number plates is very difficult in low-resolution images due to poor boundaries and contrast. Our goal is to correctly identify the digits in a low-quality number plate image, but correct detection was exceedingly difficult in some cases because of the low resolution. Another goal of this paper was to upscale images from very low resolution to high resolution in order to recover useful information and improve the accuracy of number plate detection and recognition. We used the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), modifying the native dense blocks of the generative adversarial network with a Residual-in-Residual Dense Block model, in addition to convolutional neural networks for thresholding and a Rectified Linear Unit (ReLU) activation layer. The plate image is then segmented, and an optical character recognition (OCR) model detects and recognizes the characters in the number plate. The OCR model reaches an average accuracy of 84% on high-resolution images, whereas accuracy is only 4%-7% on low-resolution images; the model's accuracy increases with the resolution of the plate images. ESRGAN provides better enhancement of low-resolution images than SRGAN and Pro-SRGAN, which the OCR model validates: digit/alphabet detection accuracy on the number plate increases significantly when the original low-resolution image is converted to a high-resolution image using ESRGAN.
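The thresholding and character-segmentation step that precedes OCR can be illustrated with a minimal column-projection sketch. The threshold value, the toy image, and the (start, end) band representation are assumptions for illustration, not the paper's actual pipeline.

```python
def segment_characters(img, thresh=128):
    # Binarise a grayscale plate image (dark ink -> 1) and split it into
    # character column bands wherever the vertical ink projection is non-zero.
    h, w = len(img), len(img[0])
    binary = [[1 if px < thresh else 0 for px in row] for row in img]
    col_ink = [sum(binary[y][x] for y in range(h)) for x in range(w)]
    bands, start = [], None
    for x, ink in enumerate(col_ink + [0]):  # sentinel column flushes the last band
        if ink > 0 and start is None:
            start = x
        elif ink == 0 and start is not None:
            bands.append((start, x))
            start = None
    return bands
```

Each returned band is a column range containing one candidate character, which would then be cropped and passed to the OCR model; on a low-resolution plate, adjacent characters often merge into one band, which is exactly the failure mode the ESRGAN upscaling is meant to relieve.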
The Internet of Things (IoT), a strong integration of radio frequency identification (RFID), wireless devices, and sensors, has provided a challenging yet powerful opportunity to shape existing systems into intelligent ones, and many new applications have been created in the last few years. As many as a million objects are anticipated to be linked together into a network that can infer meaningful conclusions from raw data. This means any IoT system is heterogeneous in the types of devices it uses and in how they communicate with each other. In most cases, an IoT network can be described as a layered network, with multiple tiers stacked on top of each other. IoT network performance improvement typically focuses on a single layer; as a result, effectiveness in one layer may rise while that of another falls. Ultimately, the performance problem must be addressed by considering improvements across all layers of an IoT network, or at the very least across contiguous hierarchical levels. Using a parallel and clustered architecture in the device layer, this paper examines how to improve the performance of an IoT network's controller layer. A particular clustered architecture at the device level is shown to increase the performance of an IoT network by 16%, and using a clustered architecture at the device layer in conjunction with a parallel architecture at the controller layer boosts performance by 24% overall.
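The intuition behind the two-layer improvement can be captured in a toy latency model: cluster heads aggregate device traffic (fewer, larger messages reach the upper tier), and parallel controllers share the remaining load. The cost units, the aggregation ratio, and both function names are illustrative assumptions, not the paper's evaluation model.

```python
import math

def serial_latency(n_msgs, per_msg_cost=1.0):
    # Baseline: all device messages handled one after another by a single controller.
    return n_msgs * per_msg_cost

def clustered_parallel_latency(n_msgs, n_clusters, n_controllers,
                               per_msg_cost=1.0, aggregate_ratio=0.5):
    # Device layer: each cluster head merges its devices' messages, shrinking
    # the upstream traffic by roughly `aggregate_ratio`.
    per_cluster = math.ceil(n_msgs / n_clusters)
    aggregated = n_clusters * math.ceil(per_cluster * aggregate_ratio)
    # Controller layer: the aggregated traffic is split across parallel controllers.
    per_controller = math.ceil(aggregated / n_controllers)
    return per_controller * per_msg_cost
```

Under this toy model, 1000 device messages through 10 clusters and 4 parallel controllers cost 125 time units instead of 1000; the real percentages reported in the paper come from its own experiments, not from this sketch.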
Fetal brain segmentation and gestational age prediction have long been active research areas in medical image processing. However, both tasks are challenging due to factors like the difficulty of acquiring a proper fetal brain image owing to fetal movement during the scan. With recent advancements in deep learning, many models have been proposed that perform each task individually with good accuracy. In this paper, we present the Multi-Tasking Single Encoder U-Net (MTSE U-Net), a deep learning architecture that performs three tasks on fetal brain images. The first task is segmentation of the fetal brain into its seven components: intracranial space and extra-axial cerebrospinal fluid spaces, gray matter, white matter, ventricles, cerebellum, deep gray matter, and brainstem and spinal cord. The second task is predicting the type of the fetal brain (pathological or neurotypical), and the third is predicting the gestational age of the fetus from its brain. All three tasks are performed by a single model. The fetal brain images can be obtained by segmenting them from fetal magnetic resonance images using any of the previous works on fetal brain segmentation, so our work extends the already existing segmentation literature. The Jaccard similarity and Dice score for the segmentation task are 77% and 82%, respectively; accuracy for the type prediction task is 89%; and the mean absolute error for the gestational age task is 0.83 weeks. Salient region identification by the model is also tested. These results show that a single model can perform multiple related tasks simultaneously with good accuracy, eliminating the need for separate models for each task.
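The Jaccard and Dice metrics reported for the segmentation task can be computed per class as below; the flat binary-mask representation and function name are assumptions for illustration (in practice these are averaged over the seven tissue classes and the test volumes).

```python
def jaccard_dice(pred, target):
    # Jaccard index and Dice score for two binary masks given as flat 0/1 lists.
    inter = sum(p & t for p, t in zip(pred, target))   # |A ∩ B|
    union = sum(p | t for p, t in zip(pred, target))   # |A ∪ B|
    psum, tsum = sum(pred), sum(target)
    jaccard = inter / union if union else 1.0          # empty masks count as perfect
    dice = 2 * inter / (psum + tsum) if (psum + tsum) else 1.0
    return jaccard, dice
```

For a single pair of masks the two metrics are linked by Dice = 2J/(1+J); the paper's 77%/82% figures are class- and volume-averaged, so they need not satisfy that identity exactly.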
Purpose For radiologists, identifying and assessing cancerous lung nodules from CT scans is a difficult and laborious task. As a result, early prediction of lung growths is required in the investigation process, as it increases the chances of successful treatment. To ease this problem, computer-aided diagnostic (CAD) solutions have been deployed. The main purpose of this work is to detect whether nodules are malignant or not and to provide the results with better accuracy.
Methods A recurrent neural network (RNN) is a neural network model that incorporates a feedback loop. In this work, evolutionary algorithms such as the Grey Wolf Optimization (GWO) algorithm combined with RNN techniques are investigated using the MATLAB tool. Statistical attributes are also produced and compared with other RNN combinations using the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO).
Results The proposed method produced very high accuracy, sensitivity, specificity, and precision compared with other state-of-the-art methods. Because of their simplicity and global search capabilities, evolutionary algorithms have shown tremendous promise for feature selection in recent years.
Conclusion The proposed techniques have demonstrated outstanding outcomes in various disciplines, outperforming classical methods. Early detection of lung nodules will aid in determining whether the nodules will become malignant.
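The abstract gives no implementation, but the core Grey Wolf Optimization loop it investigates can be sketched minimally as follows. The function signature, box bounds, and leader-averaging update are illustrative assumptions; in the paper's setting the fitness would wrap the RNN's validation error rather than the toy sphere function used in the check below.

```python
import random

def grey_wolf_minimize(fitness, dim, n_wolves=8, n_iter=40,
                       lb=0.0, ub=1.0, seed=1):
    # Minimal Grey Wolf Optimizer minimising `fitness` over a box (sketch).
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=fitness)                  # alpha, beta, delta lead the pack
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / n_iter                # linearly decreasing exploration factor
        for i in range(3, n_wolves):              # the rest of the pack follows
            new = []
            for j in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    d = abs(C * leader[j] - wolves[i][j])
                    x += leader[j] - A * d        # pull toward each leader
                new.append(min(ub, max(lb, x / 3.0)))
            wolves[i] = new
    wolves.sort(key=fitness)
    return wolves[0], fitness(wolves[0])
```

Because the three leaders are never perturbed within an iteration, the best solution found is monotonically non-worsening, which is the property that makes such wrappers attractive for feature and hyperparameter selection.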
The global healthcare sector continues to grow rapidly and is regarded as one of the fastest-growing sectors in the fourth industrial revolution (Industry 4.0). The majority of the healthcare industry still uses traditional, manual, manpower-based methods that are labor-intensive, time-consuming, and error-prone. This review addresses the current paradigm, the potential for new scientific discoveries, the technological state of readiness, the prospects for supervised machine learning (SML) in various healthcare sectors, and ethical issues. The effectiveness and innovation potential of disease diagnosis, personalized medicine, clinical trials, non-invasive image analysis, drug discovery, patient care services, remote patient monitoring, hospital data, and nanotechnology in various learning-based automation in healthcare, along with the requirement for explainable artificial intelligence (AI) in healthcare, are evaluated. To clarify the potential architecture of non-invasive treatment, a thorough study of medical imaging analysis from a technical point of view is presented. This study also presents new thinking and developments that will push the boundaries and expand the opportunities for healthcare through AI and SML in the near future. Because healthcare is data-heavy and knowledge management is paramount, SML-based developments in biomedicine and healthcare require skills, quality-data consciousness for data-intensive study, and a knowledge-centric health management system. As a result, assessments of merits, demerits, and precautions must take ethics and the other effects of AI and SML into consideration. The overall insight in this paper will help researchers in academia and industry understand and address the future research needed on SML in the healthcare and biomedical sectors.
Bone diseases are common and can result in various musculoskeletal conditions (MC); an estimated 1.71 billion patients suffer from musculoskeletal problems worldwide. Apart from musculoskeletal fractures, femoral neck injuries and knee osteoarthritis are very common bone diseases, and the rate is expected to double in the next 30 years. Proper and timely diagnosis and treatment of a fractured patient are therefore crucial. In contrast, missed fractures are a common diagnostic failure in accident and emergency settings, causing complications and delays in patients' treatment and care. These days, artificial intelligence (AI), and more specifically deep learning (DL), is receiving significant attention for assisting radiologists in bone fracture detection, and DL can be widely used in medical image analysis. Some studies in traumatology and orthopaedics have shown the use and potential of DL in diagnosing fractures and diseases from radiographs. In this systematic review, we provide an overview of the use of DL in bone imaging to help radiologists detect various abnormalities, particularly fractures. We also discuss the challenges and problems faced by DL-based methods and the future of DL in bone imaging.