Please use this identifier to cite or link to this item: https://essuir.sumdu.edu.ua/handle/123456789/97135
Title Transformer based attention guided network for segmentation and hybrid network for classification of liver tumor from CT scan images
Authors Stephe, S.
Kumar, S.B.
Thirumalraj, A.
Dzhyvak, V.
Keywords computed tomography
liver tumor
transformer
vision transformer
gated recurrent unit
global spatial attention
Type Article
Date of Issue 2024
URI https://essuir.sumdu.edu.ua/handle/123456789/97135
Publisher Sumy State University
License Creative Commons Attribution 4.0 International License
Citation Stephe S, Kumar SB, Thirumalraj A, Dzhyvak V. Transformer based attention guided network for segmentation and hybrid network for classification of liver tumor from CT scan images. East Ukr Med J. 2024;12(3):692-710. DOI: https://doi.org/10.21272/eumj.2024;12(3):692-710
Abstract When liver disease alters the pixel characteristics of an image, an ultrasonic filter can flag these changes as potential indicators of malignant development. Because such alterations are more prevalent in alcoholic liver disease, alcohol, rather than the liver disease itself, may be the cause of cirrhosis. Current 2D ultrasound datasets reach an accuracy of 85.9%, whereas a 2D CT dataset reaches 91.02%. This work presents TAGN, a new Transformer-based Attention Guided Network that improves the performance of the semantic segmentation architecture through multi-level feature aggregation. To efficiently learn non-local interactions among encoder features, TAGN incorporates a self-aware attention (SAA) module combining Transformer Self Attention (TSA) and Global Spatial Attention (GSA), both inspired by the Transformer. In addition, the work aggregates upsampled features at distinct semantic scales through extra multi-scale skip connections across decoder blocks, which strengthens the capacity to produce discriminative features from multi-scale context information. For reliable and accurate classification of liver tumors from the segmented images, the study proposes a system that integrates a Vision Transformer (ViT) with a Gated Recurrent Unit (GRU): the ViT extracts salient features from the input image, and the GRU captures the relationships between them. In the experimental analysis, the proposed ViT-GRU model achieved a recall of 95.21%, accuracy of 97.57%, precision of 95.62%, specificity of 98.33%, and an F-score of 95.88%. In segmentation and classification experiments on publicly available datasets, the proposed classifier achieved an overall accuracy of 98.79%. Used appropriately, the proposed strategy helps medical professionals improve the accuracy of liver tumor diagnoses.
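The abstract describes the hybrid classifier only at a high level (ViT features fed to a GRU, followed by a classification head). The following is a minimal sketch of that idea, not the authors' released code: the backbone choice (torchvision ViT-B/16), the GRU hidden size, the bidirectional setup, and the two-class head are all illustrative assumptions.

```python
# Minimal sketch of a ViT + GRU hybrid classifier for CT slices.
# Assumptions (not from the paper): torchvision ViT-B/16 backbone,
# bidirectional GRU with 256 hidden units, binary tumor/no-tumor output.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights


class ViTGRUClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, gru_hidden: int = 256):
        super().__init__()
        # Pretrained ViT backbone; its patch embedding and encoder are reused.
        self.vit = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
        embed_dim = self.vit.hidden_dim  # 768 for ViT-B/16
        # GRU reads the sequence of patch tokens to model dependencies among them.
        self.gru = nn.GRU(embed_dim, gru_hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * gru_hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Patch embedding + class token + transformer encoder (torchvision ViT internals).
        tokens = self.vit._process_input(x)                  # (B, N, D)
        cls = self.vit.class_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1)
        tokens = self.vit.encoder(tokens)                    # (B, N + 1, D)
        # Discard the class token and let the GRU scan the patch sequence.
        _, h_n = self.gru(tokens[:, 1:, :])                  # h_n: (2, B, H)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)              # concat both directions
        return self.head(h)


if __name__ == "__main__":
    model = ViTGRUClassifier()
    logits = model(torch.randn(1, 3, 224, 224))  # one CT slice, replicated to 3 channels
    print(logits.shape)                          # torch.Size([1, 2])
```

In this reading, the ViT supplies per-patch features ("important characteristics") and the GRU aggregates them into a single representation that captures relationships between patches before classification; the exact layer sizes, pretraining, and number of output classes used in the paper are not specified in the abstract.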
Appears in Collections: Східноукраїнський медичний журнал (Eastern Ukrainian Medical Journal)

Files

File Size Format
Stephe_Transformer_computed_tomography_liver_tumor.pdf 1.62 MB Adobe PDF
