Polish Journal of Radiology
1/2023, vol. 88
Head and neck radiology
Review paper

Accuracy of artificial intelligence in the detection and segmentation of oral and maxillofacial structures using cone-beam computed tomography images: a systematic review and meta-analysis

Farida Abesi (1), Atena Sadat Jamali (2), Mohammad Zamani (2)

1. Department of Oral and Maxillofacial Radiology, Dental Faculty, Babol University of Medical Sciences, Babol, Iran
2. Student Research Committee, Babol University of Medical Sciences, Babol, Iran
© Pol J Radiol 2023; 88: e256-e263
Online publish date: 2023/05/19

Introduction

Cone-beam computed tomography (CBCT) is an X-ray-based imaging method with a shorter scanning time and lower radiation dose than conventional CT [1,2]. It is commonly used in dental practice to provide high-resolution, three-dimensional visualization of oral and maxillofacial structures and lesions, thereby supporting efficient diagnostic and therapeutic decision-making [3,4]. However, several factors can impair the accuracy with which the acquired scans are interpreted, such as low inter- and intra-observer agreement (particularly among junior and less experienced practitioners) [5,6]; thus, alongside the advantages mentioned above, CBCT imaging also has weaknesses.

Artificial intelligence is a broad, software-based branch of computer science that enables machines to learn and to simulate human behaviour automatically. In recent years it has been proposed as a supplementary tool to potentially increase the diagnostic performance of imaging modalities used in dental practice, such as CBCT [7,8].

Several studies have assessed the diagnostic performance of artificial intelligence integrated with CBCT imaging for oral and maxillofacial anatomical landmarks and lesions, and they have reported differing accuracy values [9-11]. However, no comprehensive study has attempted to reconcile these conflicting data. Therefore, in the present study, we aimed to conduct an up-to-date systematic review and meta-analysis of studies reporting the accuracy of artificial intelligence in the detection and segmentation of oral and maxillofacial structures, with a focus on CBCT images, to provide an overall estimate of its diagnostic performance.

Material and methods

Search strategy and eligibility criteria

This study was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline [12]. We searched the Embase, PubMed, and Scopus databases for reports published from database inception to 31 October 2022, without language restrictions, using the following keywords: (artificial intelligence OR deep learning OR machine learning OR automatic OR automated) AND (cone-beam computed tomography OR CBCT). The search was limited to the Title/Abstract fields. We included original research publications that explored the accuracy of artificial intelligence in the automatic detection or segmentation of oral and maxillofacial anatomical landmarks or lesions on CBCT images. Accuracy was defined as the proportion of correct findings among all findings observed. We excluded reviews, case reports, editorials, and letters to the editor, as well as duplicate publications, reports without extractable data on the study outcome, and studies not published in full text. We also hand-searched the reference lists of the retrieved articles to capture additional papers.
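In classification terms this definition corresponds to accuracy = (TP + TN) / (TP + TN + FP + FN), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives; note that this standard formulation is given here only for illustration, because the primary studies may have computed accuracy at the scan, tooth, or voxel level.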

Study selection and data extraction

We independently assessed the eligibility of the records initially identified in the database search by screening their titles and abstracts with a pre-designed eligibility form. We then retrieved the full texts of the relevant reports for more detailed appraisal. Any discrepancies were resolved by consensus between the reviewers. From the studies finally selected for inclusion we extracted the first author's name, publication year, study location (country), objective, study design, artificial intelligence technique, validation method, sample size, and accuracy value (ranging from 0.00 to 1.00). If a study used several artificial intelligence techniques or validation methods, each was treated as a separate report. Full texts not in English were translated with Google Translate.

Statistical analysis

We pooled the accuracy values extracted from the studies using a random-effects model to provide overall estimates, and we performed subgroup analyses by anatomical landmarks/lesions, detection/segmentation tasks, and artificial intelligence technique. Estimates are presented with 95% confidence intervals (CIs). Between-study heterogeneity was explored with the I2 index, which ranges from 0.0% to 100.0%; a p-value < 0.10 was considered statistically significant [13]. Forest plots were generated to illustrate the results of the meta-analysis, and publication bias was appraised with a funnel plot. We also conducted a meta-regression to assess the potential influence of publication year on the study outcome (p-value < 0.05 considered statistically significant), illustrated with a scatter plot. All statistical analyses were carried out with Comprehensive Meta-Analysis V2 software.
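For readers unfamiliar with this type of pooling, the sketch below illustrates the general idea. It is not the authors' analysis (which was performed in Comprehensive Meta-Analysis V2); it is a minimal Python illustration, under common assumptions, of DerSimonian-Laird random-effects pooling of logit-transformed accuracy proportions together with the I2 heterogeneity statistic, and the study values in the example are hypothetical placeholders rather than data from Table 1.

```python
# Minimal sketch (not the authors' code): DerSimonian-Laird random-effects
# pooling of accuracy proportions on the logit scale, with the I^2 statistic.
import math

def pool_random_effects(props, ns):
    """Pool proportions with a DerSimonian-Laird random-effects model."""
    # Logit-transform each proportion and approximate its within-study variance.
    yi, vi = [], []
    for p, n in zip(props, ns):
        p = min(max(p, 1.0 / (2 * n)), 1 - 1.0 / (2 * n))  # avoid logit(0) or logit(1)
        yi.append(math.log(p / (1 - p)))
        vi.append(1.0 / (n * p) + 1.0 / (n * (1 - p)))

    # Fixed-effect (inverse-variance) estimate and Cochran's Q.
    w = [1.0 / v for v in vi]
    y_fixed = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
    q = sum(wi * (y - y_fixed) ** 2 for wi, y in zip(w, yi))
    k = len(yi)

    # Between-study variance (tau^2) and heterogeneity (I^2, in %).
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

    # Random-effects weights, pooled logit, 95% CI, then back-transform.
    w_re = [1.0 / (v + tau2) for v in vi]
    y_re = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    inv_logit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv_logit(y_re), inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se), i2

# Hypothetical per-study accuracy values and sample sizes (placeholders).
accuracy = [0.93, 0.88, 0.97, 0.91, 0.95]
n_cases = [125, 350, 175, 132, 240]
est, low, high, i2 = pool_random_effects(accuracy, n_cases)
print(f"Pooled accuracy {est:.2f} (95% CI {low:.2f}-{high:.2f}), I2 = {i2:.1f}%")
```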

Results

Search results and study selection

The initial electronic database search yielded a total of 3206 records, of which 3175 were excluded as duplicates or as unsuitable. The full texts of the remaining 31 articles were then evaluated, and 19 studies were ultimately identified as eligible for inclusion in this systematic review and meta-analysis [9-11,14-29]. The PRISMA flowchart of the search strategy and results is presented in Figure 1.

Figure 1

PRISMA flow diagram


Study characteristics

Of the 19 individual studies included in this review, 4 were from China, 4 from Iran, 3 from Belgium, 2 from Turkey, one from Austria, one from South Korea, one from Thailand, one from the USA, and 2 were multicentre studies. All articles were in English, and the studies were published between 2017 and 2022. Deep learning was the most frequently used artificial intelligence technique. Table 1 summarizes the baseline characteristics of the included studies.

Table 1

Baseline characteristics of the included studies

Study | Country | Objective | Study design | Artificial intelligence technique | Validation method | Sample size (n)
Abdolali, 2017 [14] | Iran | Classification of maxillofacial cysts | Development | Support vector machine and sparse discriminant analysis | Three-fold cross-validation | 125 scans (20,000 axial images)
Abdolali, 2019 [15] | Iran, Japan | Maxillofacial lesion detection | Development | Knowledge-based support vector machine and sparse discriminant analysis | Split-sample validation | 1145 scans
Chai, 2021 [16] | China | Ameloblastoma and odontogenic keratocyst diagnosis | Development | Deep learning (convolutional neural network) | Split-sample validation | 350 scans
Fontenele, 2022 [9] | Belgium | Tooth segmentation | Validation | Deep learning (convolutional neural network) | Split-sample validation | 175 scans (500 teeth)
Gerhardt, 2022 [10] | Belgium | Detection and labelling of teeth and small edentulous regions | Validation | Deep learning (convolutional neural network) | Split-sample validation | 175 scans
Haghnegahdar, 2018 [17] | Iran | Diagnosis of temporomandibular joint disorders | Development | K-nearest neighbour | Ten-fold cross-validation | 132 scans; 66 patients (132 joints) with TMD and 66 normal cases (132 joints)
Hu, 2022 [18] | China | Diagnosis of vertical root fracture | Validation | Deep learning (convolutional neural network [ResNet-50, VGG-19, DenseNet-169]) | Five-fold cross-validation | 552 tooth images
Johari, 2017 [11] | Iran | Detection of vertical root fractures | Development | Probabilistic neural network | Three-fold cross-validation | 240 radiographs of extracted teeth (CBCT and PA), 120 intact, 120 fractured
Kirnbauer, 2022 [19] | Austria | Detection of periapical osteolytic lesions | Development and validation | Deep learning (convolutional neural network [SpatialConfiguration-Net]) | Four-fold cross-validation | 144 scans
Kwak, 2020 [20] | China, Finland, South Korea | Segmentation of mandibular canal | Development | Deep learning (convolutional neural network [U-net]) | Split-sample validation | 102 scans (49,094 images)
Lahoud, 2022 [21] | Belgium | Mandibular canal segmentation | Development and validation | Deep learning | Split-sample validation | 235 scans
Lee, 2020 [22] | South Korea | Detection and diagnosis of odontogenic keratocysts, dentigerous cysts, and periapical cysts | Development | Deep learning (convolutional neural network) | Split-sample validation | 2126 images (1140 panoramic and 986 CBCT images)
Lin, 2021 [23] | China | Pulp cavity and tooth segmentation | Validation | Deep learning (convolutional neural network [U-net]) | External validation | 30 teeth
Roongruangsilp, 2021 [24] | Thailand | Dental implant planning | Development | Deep learning (region-based convolutional neural network) | Split-sample validation | 184 scans
Serindere, 2022 [25] | Turkey | Diagnosis of maxillary sinusitis | Development | Deep learning (convolutional neural network) | Five-fold cross-validation | 296 images
Setzer, 2020 [26] | USA | Periapical lesion detection | Development | Deep learning (convolutional neural network) | Five-fold cross-validation | 20 volumes (61 roots)
Sorkhabi, 2019 [27] | Iran | Alveolar bone density classification | Development | Deep learning (convolutional neural network) | Split-sample validation | 83 scans (207 surgery target areas)
Wang, 2021 [28] | China | Segmentation of oral lesions | Validation | Deep learning (convolutional neural network) | External validation | 90 scans
Yilmaz, 2017 [29] | Turkey | Diagnosis of the periapical cyst and keratocystic odontogenic tumour | Development | Support vector machine | Ten-fold cross-validation, split-sample validation, leave-one-out cross-validation | 50 dental scans

Overall accuracy

Nineteen individual studies reported the diagnostic accuracy of artificial intelligence using oral and maxillofacial CBCT images. The lowest and highest accuracy values reported were 0.78 and 1.00, respectively. Analysis of the studies showed that the overall pooled diagnostic accuracy of artificial intelligence was 0.93 (95% CI: 0.91-0.94; I2 = 93.3%, p < 0.001) (Figure 2). The funnel plot was suggestive of publication bias (Figure 3). Meta-regression analysis revealed that the publication year explained the heterogeneity for the outcome (β = –0.137, p = 0.045) (Figure 4).

Figure 2

Diagnostic accuracy of artificial intelligence using oral and maxillofacial cone-beam computed tomography imaging

Figure 3

Funnel plot to assess publication bias across studies assessing diagnostic accuracy of artificial intelligence using oral and maxillofacial cone-beam computed tomography imaging

Figure 4

Scatter plot of the association between publication year and logit event rate for diagnostic accuracy of artificial intelligence using oral and maxillofacial cone-beam computed tomography imaging


Accuracy by anatomical landmarks and lesions

There were 7 individual studies that reported the diagnostic accuracy of artificial intelligence for oral and maxillofacial anatomical landmarks using CBCT images. Analysis of these studies demonstrated that the pooled diagnostic accuracy of artificial intelligence was 0.93 (95% CI: 0.89-0.96; I2 = 90.3%, p < 0.001) for anatomical landmarks. In addition, the diagnostic accuracy of artificial intelligence for oral and maxillofacial lesions was investigated in 12 studies. According to the analysis of these studies, the overall pooled diagnostic accuracy of artificial intelligence was 0.92 (95% CI: 0.90-0.94; I2 = 95.1%, p < 0.001) for lesions. Figure 5 visualizes the results of the subgroup analysis by anatomical landmarks and lesions.

Figure 5

Diagnostic accuracy of artificial intelligence by anatomical landmarks and lesions


Accuracy of detection and segmentation

Figure 6 displays the forest plot of the pooled accuracy of artificial intelligence in detecting and segmenting oral and maxillofacial structures using CBCT images. Analysis of 14 studies reporting the information on the detection accuracy indicated that the pooled accuracy of detection for artificial intelligence was 0.93 (95% CI: 0.91-0.94; I2 = 94.7%, p < 0.001). Also, based on the analysis of 5 studies that assessed the segmentation accuracy of artificial intelligence using the CBCT system, the overall pooled accuracy of segmentation for artificial intelligence was 0.92 (95% CI: 0.85-0.95; I2 = 91.9%, p < 0.001).

Figure 6

Diagnostic accuracy of artificial intelligence by detection and segmentation objectives


Accuracy by artificial intelligence technique

Fifteen studies reported the accuracy of artificial intelligence using deep learning techniques [9-11,16,18-28], and 4 studies reported the accuracy using machine learning techniques [14,15,17,29]. According to these analyses, the pooled diagnostic accuracy of artificial intelligence was 0.93 (95% CI: 0.90-0.95; I2 = 92.7%, p < 0.001) with deep learning and 0.93 (95% CI: 0.89-0.96; I2 = 94.6%, p < 0.001) with machine learning.

Discussion

Owing to recent developments in computer-aided tools, artificial intelligence systems are now used as auxiliary tools for the clinical and paraclinical management of dentomaxillofacial conditions [30,31]. However, the validity and reliability of these applications remain to be determined. To address this question, several studies have investigated the diagnostic accuracy of artificial intelligence for oral and maxillofacial structures with a focus on CBCT imaging, but the reported values varied between studies [8,19-23], calling for a comprehensive study to gather the available data and calculate an overall estimate. In the current systematic review and meta-analysis, we assembled information from 19 studies that reported the accuracy of artificial intelligence in the detection and segmentation of oral and maxillofacial structures using CBCT scans. We found that the overall pooled diagnostic accuracy of artificial intelligence was 0.93, which is an excellent performance, although more recent publications tended to report lower accuracy values. The accuracy remained high in the subgroup analysis by anatomical landmarks and lesions: the pooled diagnostic accuracy was 0.93 for anatomical landmarks and 0.92 for lesions. Likewise, the pooled accuracy values for detection and segmentation were 0.93 and 0.92, respectively.

To the best of our knowledge, this is the first systematic review and meta-analysis to comprehensively assess the available evidence on the detection and segmentation accuracy of artificial intelligence models for oral and maxillofacial structures using CBCT images. Previous reviews either did not conduct a meta-analysis [32] or included very few studies [33]. In the present study, we systematically searched multiple databases with different keywords and then screened the thousands of citations initially identified using strict eligibility criteria. We pooled the data with a random-effects model to provide more conservative estimates, evaluated publication bias with a funnel plot, performed a meta-regression to explore the influence of publication year on the study outcome, and conducted subgroup analyses to minimize the effect of heterogeneity on the study findings.

According to the available evidence, clinicians use artificial intelligence systems in various oral and maxillofacial fields, such as orthodontics, endodontics, implantology, and oral and maxillofacial surgery [31,32]. In orthodontics, deep learning frameworks have been proposed for automatic landmark detection and cluster-based segmentation in cephalometric analysis [34,35]. In endodontics, studies have suggested learning models for detecting and segmenting tooth roots, alveolar bone, and periapical lesions using methods that incorporate knowledge of orofacial anatomy and therefore require fewer images for training [7,11,26]. For dental implant planning, region-based and three-dimensional deep convolutional neural network algorithms have been developed for the qualitative and quantitative appraisal of alveolar bone, with acceptable diagnostic performance [24,27,36]. In oral and maxillofacial surgery, deep learning has been used for diagnosing and classifying temporomandibular joint diseases, for maxillary and mandibular segmentation, and for guiding surgeons, with high diagnostic performance [16,17,23]. The studies included in our review assessed different anatomical regions, such as the maxillary sinuses, temporomandibular joint, dental roots, dental alveoli, periodontium, pulp cavity, and mandibular canal. Various artificial intelligence techniques were used, but deep learning was the most common.

A weakness of the present review was the high heterogeneity between the included studies, which could be explained by differences in study location, purpose, sample size, scanning device and parameters, presence of noise or artifacts, image acquisition protocols, and interobserver or intraobserver agreement. Publication bias may also have contributed to this heterogeneity, and, according to the meta-regression results, publication year explained part of the heterogeneity in the outcome. Altogether, more homogeneous research should be designed and carried out.

Conclusions

The results of this systematic review and meta-analysis revealed excellent accuracy for the detection and segmentation tasks of artificial intelligence systems using oral and maxillofacial CBCT images. Artificial intelligence-based systems seem to have the potential to streamline oral and dental healthcare services and facilitate preventive and personalized dentistry. Positive and negative points and challenges of these applications need to be evaluated in further clinical studies.

Conflict of interest

The authors report no conflict of interest.

References

1. Walter C, Schmidt JC, Rinne CA, et al. Cone beam computed tomography (CBCT) for diagnosis and treatment planning in periodontology: systematic review update. Clin Oral Investig 2020; 24: 2943-2958.
2. Abesi F, Alimohamadi M. Cone Beam Computed Tomography (CBCT) findings of fungal sinusitis in post COVID-19 patient: a case report. Caspian J Intern Med 2022; 13: 307-310.
3. Shukla S, Chug A, Afrashtehfar KI. Role of cone beam computed tomography in diagnosis and treatment planning in dentistry: an update. J Int Soc Prev Community Dent 2017; 7: S125-S136.
4. Abesi F, Motaharinia S, Moudi E, et al. Prevalence and anatomical variations of maxillary sinus septa: a cone-beam computed tomography analysis. J Clin Exp Dent 2022; 14: e689-e693.
5. Fokas G, Vaughn VM, Scarfe WC, et al. Accuracy of linear measurements on CBCT images related to presurgical implant treatment planning: a systematic review. Clin Oral Implants Res 2018; 29 Suppl 16: 393-415.
6. Jafarzadeh M, Ansari S, Sharifishoshtari S. In vitro diagnostic accuracy of cone-beam computed tomography with variable gamma values for detection of vertical root fractures in teeth with prefabricated metal posts. Dent Res J (Isfahan) 2022; 19: 7. doi: 10.4103/1735-3327.336692.
7. Orhan K, Bayrakdar IS, Ezhov M, et al. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J 2020; 53: 680-689.
8. Lahoud P, EzEldeen M, Beznik T, et al. Artificial intelligence for fast and accurate 3-dimensional tooth segmentation on cone-beam computed tomography. J Endod 2021; 47: 827-835.
9. Fontenele RC, Gerhardt MDN, Pinto JC, et al. Influence of dental fillings and tooth type on the performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on CBCT images - a validation study. J Dent 2022; 119: 104069. doi: 10.1016/j.jdent.2022.104069.
10. Gerhardt MDN, Fontenele RC, Leite AF, et al. Automated detection and labelling of teeth and small edentulous regions on cone-beam computed tomography using convolutional neural networks. J Dent 2022; 122: 104139. doi: 10.1016/j.jdent.2022.104139.
11. Johari M, Esmaeili F, Andalib A, et al. Detection of vertical root fractures in intact and endodontically treated premolar teeth by designing a probabilistic neural network: an ex vivo study. Dentomaxillofac Radiol 2017; 46: 20160107. doi: 10.1259/dmfr.20160107.
12. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6: e1000097. doi: 10.1371/journal.pmed.1000097.
13. Higgins JP, Thompson SG, Deeks JJ, et al. Measuring inconsistency in meta-analyses. BMJ 2003; 327: 557-560.
14. Abdolali F, Zoroofi RA, Otake Y, et al. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics. Comput Methods Programs Biomed 2017; 139: 197-207.
15. Abdolali F, Zoroofi RA, Otake Y, et al. A novel image-based retrieval system for characterization of maxillofacial lesions in cone beam CT images. Int J Comput Assist Radiol Surg 2019; 14: 785-796.
16. Chai ZK, Mao L, Chen H, et al. Improved diagnostic accuracy of ameloblastoma and odontogenic keratocyst on cone-beam CT by artificial intelligence. Front Oncol 2021; 11: 793417. doi: 10.3389/fonc.2021.793417.
17. Haghnegahdar AA, Kolahi S, Khojastepour L, et al. Diagnosis of tempromandibular disorders using local binary patterns. J Biomed Phys Eng 2018; 8: 87-96.
18. Hu Z, Cao D, Hu Y, et al. Diagnosis of in vivo vertical root fracture using deep learning on cone-beam CT images. BMC Oral Health 2022; 22: 382. doi: 10.1186/s12903-022-02422-9.
19. Kirnbauer B, Hadzic A, Jakse N, et al. Automatic detection of periapical osteolytic lesions on cone-beam computed tomography using deep convolutional neuronal networks. J Endod 2022; 48: 1434-1440.
20. Kwak GH, Kwak EJ, Song JM, et al. Automatic mandibular canal detection using a deep convolutional neural network. Sci Rep 2020; 10: 5711. doi: 10.1038/s41598-020-62586-8.
21. Lahoud P, Diels S, Niclaes L, et al. Development and validation of a novel artificial intelligence driven tool for accurate mandibular canal segmentation on CBCT. J Dent 2022; 116: 103891. doi: 10.1016/j.jdent.2021.103891.
22. Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis 2020; 26: 152-158.
23. Lin X, Fu Y, Ren G, et al. Micro-computed tomography-guided artificial intelligence for pulp cavity and tooth segmentation on cone-beam computed tomography. J Endod 2021; 47: 1933-1941.
24. Roongruangsilp P, Khongkhunthian P. The learning curve of artificial intelligence for dental implant treatment planning: a descriptive study. Appl Sci 2021; 11: 10159.
25. Serindere G, Bilgili E, Yesil C, et al. Evaluation of maxillary sinusitis from panoramic radiographs and cone-beam computed tomographic images using a convolutional neural network. Imaging Sci Dent 2022; 52: 187-195.
26. Setzer FC, Shi KJ, Zhang Z, et al. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J Endod 2020; 46: 987-993.
27. Sorkhabi MM, Khajeh MS. Classification of alveolar bone density using 3-D deep convolutional neural network in the cone-beam CT images: a 6-month clinical study. Measurement 2019; 148: 106945.
28. Wang X, Meng X, Yan S. Deep learning-based image segmentation of cone-beam computed tomography images for oral lesion detection. J Healthc Eng 2021; 2021: 4603475. doi: 10.1155/2021/4603475.
29. Yilmaz E, Kayikcioglu T, Kayipmaz S. Computer-aided diagnosis of periapical cyst and keratocystic odontogenic tumor on cone beam computed tomography. Comput Methods Programs Biomed 2017; 146: 91-100.
30. Gokdeniz ST, Kamburoğlu K. Artificial intelligence in dentomaxillofacial radiology. World J Radiol 2022; 14: 55-59.
31. Hung K, Montalvao C, Tanaka R, et al. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: a systematic review. Dentomaxillofac Radiol 2020; 49: 20190107. doi: 10.1259/dmfr.20190107.
32. Mureșanu S, Almășan O, Hedeșiu M, et al. Artificial intelligence models for clinical usage in dentistry with a focus on dentomaxillofacial CBCT: a systematic review. Oral Radiol 2023; 39: 18-40.
33. Badr FF, Jadu FM. Performance of artificial intelligence using oral and maxillofacial CBCT images: a systematic review and meta-analysis. Niger J Clin Pract 2022; 25: 1918-1927.
34. Chen R, Ma Y, Chen N, et al. Structure-aware long short-term memory network for 3D cephalometric landmark detection. IEEE Trans Med Imaging 2022; 41: 1791-1801.
35. Abesi F, Haghanifar S, Khafri S, et al. The evaluation of the anatomical variations of osteomeatal complex in cone beam computed tomography images. J Babol Univ Med Sci 2018; 20: 30-34.
36. Abdolali F, Zoroofi RA, Abdolali M, et al. Automatic segmentation of mandibular canal in cone beam CT images using conditional statistical shape model and fast marching. Int J Comput Assist Radiol Surg 2017; 12: 581-593.

Copyright: © Polish Medical Society of Radiology. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License, which allows third parties to download articles and share them with others as long as they credit the authors and the publisher, but without permission to change them in any way or use them commercially.