Model and Training Method of the Resilient Image Classifier Considering Faults, Concept Drift, and Adversarial Attacks

dc.contributor.authorМоскаленко, В'ячеслав Васильович
dc.contributor.authorМоскаленко, Вячеслав Васильевич
dc.contributor.authorMoskalenko, Viacheslav Vasylovych
dc.contributor.authorKharchenko, V.
dc.contributor.authorМоскаленко, Альона Сергіївна
dc.contributor.authorМоскаленко, Алена Сергеевна
dc.contributor.authorMoskalenko, Alona Serhiivna
dc.contributor.authorПетров, Сергій Олександрович
dc.contributor.authorПетров, Сергей Александрович
dc.contributor.authorPetrov, Serhii Oleksandrovych
dc.date.accessioned2022-11-24T12:34:30Z
dc.date.available2022-11-24T12:34:30Z
dc.date.issued2022
dc.description.abstractModern trainable image recognition models are vulnerable to different types of perturbations; hence, developing resilient intelligent algorithms that reduce the impact of perturbations on model performance remains a relevant concern for safety-critical applications. This paper proposes a model and training method for a resilient image classifier capable of functioning efficiently despite various faults, adversarial attacks, and concept drift. The proposed model has a multi-section structure with a hierarchy of optimized class prototypes and hyperspherical class boundaries, which provides adaptive computation, perturbation absorption, and graceful degradation. The proposed training method applies a complex loss function assembled from its constituent parts in a particular way, depending on the result of perturbation detection and the presence of new labeled and unlabeled data. The training method implements the principles of self-knowledge distillation, maximization of the compactness of class distributions and of the interclass gap, compression of feature representations, and consistency regularization. Consistency regularization makes it possible to utilize both labeled and unlabeled data to obtain a robust model and to implement continuous adaptation. Experiments are performed on the publicly available CIFAR-10 and CIFAR-100 datasets using model backbones built from ResBlock modules of the ResNet50 architecture and from Swin transformer blocks. It is experimentally shown that the proposed prototype-based classifier head provides a higher level of robustness and adaptability than a dense layer-based classifier head. It is also shown that the multi-section structure and self-knowledge distillation conserve resources when processing simple samples under normal conditions and increase computational costs to improve the reliability of decisions when exposed to perturbations.en_US
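The abstract describes a classifier head built from optimized class prototypes with hyperspherical class boundaries. Below is a minimal illustrative sketch in Python (PyTorch), assuming trainable per-class prototypes and per-class radii in a normalized embedding space; all names, shapes, and parameters are hypothetical and do not reproduce the authors' implementation.

# Minimal sketch of a prototype-based head with hyperspherical class boundaries.
# Assumption: one trainable prototype and one trainable radius per class;
# a sample outside every hypersphere can be flagged as perturbed/novel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int, init_radius: float = 1.0):
        super().__init__()
        # One trainable prototype vector per class in the embedding space.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feature_dim))
        # One trainable radius per class defines a hyperspherical decision boundary.
        self.log_radius = nn.Parameter(torch.full((num_classes,), float(init_radius)).log())

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Euclidean distance from each normalized embedding to each class prototype.
        dist = torch.cdist(F.normalize(features, dim=-1),
                           F.normalize(self.prototypes, dim=-1))
        radius = self.log_radius.exp()
        # Positive score: the sample falls inside that class's hypersphere.
        return radius.unsqueeze(0) - dist

head = PrototypeHead(feature_dim=512, num_classes=10)
scores = head(torch.randn(8, 512))            # (batch, classes)
pred = scores.argmax(dim=-1)                  # class decision
rejected = (scores.max(dim=-1).values < 0)    # outside all hyperspheres

In this sketch, a sample lying outside every class hypersphere can be treated as a perturbed or novel input, which loosely corresponds to the perturbation-detection idea mentioned in the abstract.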
dc.identifier.citationMoskalenko, V.; Kharchenko, V.; Moskalenko, A.; Petrov, S. Model and Training Method of the Resilient Image Classifier Considering Faults, Concept Drift, and Adversarial Attacks. Algorithms 2022, 15, 384. https://doi.org/10.3390/a15100384en_US
dc.identifier.orcid0000-0003-3443-3990en
dc.identifier.urihttps://essuir.sumdu.edu.ua/handle/123456789/90070
dc.language.isoenen_US
dc.publisherMDPIen_US
dc.rights.uriCC BY 4.0en_US
dc.subjectimage classificationen_US
dc.subjectrobustnessen_US
dc.subjectresilienceen_US
dc.subjectgraceful degradationen_US
dc.subjectadversarial attacksen_US
dc.subjectfault injectionen_US
dc.subjectconcept driften_US
dc.subjectconvolutional neural networken_US
dc.subjectself-learningen_US
dc.subjectself-knowledge distillationen_US
dc.subjectprototypical classifieren_US
dc.subjectcontrastive-center lossen_US
dc.titleModel and Training Method of the Resilient Image Classifier Considering Faults, Concept Drift, and Adversarial Attacksen_US
dc.typeArticleen_US

Files

Original bundle

Name: Moskalenko_et al._Classifier_Considering_Faults.pdf
Size: 1.66 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 3.96 KB
Format: Item-specific license agreed to upon submission