Kim, Cho, and Kwon: Applications of artificial intelligence in obstetrics

Abstract

Artificial intelligence, which has been applied as an innovative technology in multiple fields of healthcare, analyzes large amounts of data to assist in disease prediction, prevention, and diagnosis, as well as in patient monitoring. In obstetrics, artificial intelligence has been actively applied and integrated into our daily medical practice. This review provides an overview of artificial intelligence systems currently used for obstetric diagnostic purposes, such as fetal cardiotocography, ultrasonography, and magnetic resonance imaging, and demonstrates how these methods have been developed and clinically applied.

Introduction

Medical artificial intelligence (AI), which began to gain attention with the advent of IBM Watson Health, is developing rapidly in various fields, and efforts to refine and apply AI to clinical practice are more active than ever. Technologies using AI are being commercialized in a number of fields owing to the accumulation of big data; in particular, deep learning using artificial neural networks (ANNs) to analyze medical images has been recognized as the technology closest to clinical application. Machine learning is a basic method of AI through which a system uses data, learns from those data, and makes decisions or predictions by itself; however, machine learning requires some guidance. Deep learning is a subtype of machine learning in which ANNs themselves can judge the accuracy of their predictions [1]. AI applications using deep learning have extended beyond computed tomography, magnetic resonance imaging (MRI), ultrasonography (US), and pathology slides to include diagnoses or determinations of disease severity using ocular and endoscopic intestinal images [2-5]. Diverse companies have been developing and commercializing AI-based imaging platforms. The aim of this review is to introduce the AI technologies being researched and developed in the field of obstetrics.

Artificial Intelligence

AI was first introduced in the 1960s with the goal of using complex machines (i.e., computers) to simulate human intelligence. AI technology has advanced through machine learning and deep learning [6]. Machine learning is a type of algorithm applied to process data. As a subtype of AI, machine learning focuses on the ability of machines to receive data and learn for themselves without being explicitly programmed with rules. Machine learning models can learn on their own and then adjust and improve as they process more information. Machine learning algorithms are classified into supervised and unsupervised learning algorithms. Supervised learning can be further divided into classification (e.g., decision trees and support vector machines) and regression algorithms. Unsupervised learning algorithms receive unlabeled samples, and their main goal is to discover the principal patterns or similarities in those samples; these approaches can be further divided into clustering (e.g., K-means or Gaussian mixture models) and dimensionality reduction. Supervised approaches are commonly used when the final goal is set at the time of learning, whereas unsupervised approaches are used as exploratory methods in which the goal emerges after the analysis [7].
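To make this distinction concrete, the following minimal Python sketch (using scikit-learn on synthetic data; the dataset and labels are hypothetical illustrations, not drawn from any study cited here) contrasts a supervised classifier, which learns from labeled examples, with an unsupervised clustering algorithm, which groups unlabeled samples by similarity alone.
```python
# Minimal illustration of supervised vs. unsupervised learning (hypothetical data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # 200 samples, 4 numeric features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels available -> supervised setting

# Supervised: the model learns a mapping from features to known labels.
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("training accuracy:", clf.score(X, y))

# Unsupervised: no labels; the algorithm groups samples by similarity alone.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```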
Deep learning, an advanced form of machine learning, involves models constructed to analyze large amounts of data using ANNs, which are structured into multiple layers of nodes resembling the arrangement of neurons in the brain [1]. An ANN is a mathematical system that can interpret multifactorial data. Its nodes connect through multiple synapse-like links to exchange information and, in doing so, derive the most probable answer. By creating these multiple connections, computers can identify the most probable answers to problems by mimicking cognitive functions such as inference. This complex algorithmic AI software is now used in medicine to analyze large amounts of data, which can help in disease prevention, diagnosis, and patient monitoring. For video and image processing, convolutional neural networks have been applied in deep learning; these networks stack multiple convolutional layers and can therefore integrate image information at progressively deeper levels [8].
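As a rough sketch of how a convolutional neural network stacks convolutional layers over an image input (the architecture, layer sizes, and two-class output below are illustrative assumptions, not the design of any study cited in this review), a minimal PyTorch example:
```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Illustrative two-block CNN for single-channel (grayscale) images."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A batch of four 128x128 grayscale images mapped to class scores.
logits = TinyCNN()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```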

Medicine

In medicine, AI has gained popularity in the field of diagnostic radiology, and its domain of use has been expanding to include many different types of imaging-based diagnoses. AI predictive and classification models have been developed that draw not only on individual radiological findings (e.g., prostate, breast, and cardiac imaging, bone age, and chest radiography) and laboratory results, but also on combinations of these findings with pathologic images, previous medical images, and omics data.
Many AI methods have been used to improve the diagnostic process and to diagnose certain diseases, tasks that humans have long performed. These methods reduce the duration of image acquisition, enable optimization of personnel requirements, and yield diagnostic and economic benefits, including cost savings. One study compared 100 manual biometric measurements to 100 automated measurements and showed a time saving of about 20 seconds and seven steps in each 20-minute anatomic survey [9]. These methods are also valuable to doctors in making diagnoses and reduce medico-legal issues, both directly and indirectly. Unfortunately, in the field of obstetrics and gynecology, the application of AI has been slower, although some AI technologies have been utilized in obstetrics. Improvements in AI are closely related to the amount of available data, which in turn is linked to the frequency of imaging performed in women visiting the hospital.

Fetal Cardiotocography

Cardiotocography (CTG) was an early development in the field of obstetrics. CTG is the most important tool for evaluating fetal well-being through measurements of the fetal heart rate and uterine contractions. The fetal heart rate pattern reflects fetal cardiac and central nervous system responses to hemodynamic changes. Abundant research on CTG has been conducted since 1980. A recent meta-analysis indicated that a 50% reduction in neonatal seizures was associated with continuous CTG monitoring [10]. Despite its clinical importance, it is difficult to ensure interobserver consistency in interpretation, and above all, regular observation is necessary to avoid a long gap between detection of suspicious patterns and intervention. To overcome limitations in the interpretation of CTG by humans, AI using modern computer systems has been applied to CTG interpretation, and many experiments are underway. AI systems are not influenced by human limitations such as fatigue, distraction, bias, poor communication, cognitive overload, or fear of doing harm.
Since the first study of machine-based CTG interpretation by Bassil et al. in 1989, there have been many studies on AI and CTG interpretation, including randomized controlled trials and retrospective cohort studies. Three randomized controlled trials including over 50,000 patients yielded inconsistent outcomes regarding risk identification and the reduction of adverse outcomes. Nevertheless, retrospective studies using traditional machine learning methods showed improved sensitivity in the detection of compromised fetuses (Table 1). In addition to these studies, 21 diagnostic features were extracted based on data from 2,126 CTG examinations, and a deep learning-based automatic CTG classification algorithm showed a sensitivity of 99.716%, a specificity of 97.500%, and an accuracy of 99.503% in 2019 [19]. Another study reported an algorithm that could predict the risk of a pH of 7.15 or less in umbilical cord blood with an accuracy of 98.34%, a sensitivity of 98.22%, and a specificity of 94.87% using CTG data from 552 cases of labor [20]. However, a recent systematic review concluded that machine learning interpretation of CTG during labor did not improve neonatal outcomes in terms of neonatal acidosis, cord blood pH <7.15–7.20, 5-minute Apgar score <7, mode of delivery, neonatal intensive care unit admission, neonatal seizures, or perinatal death, and had limited reliability compared to experts [21]. A plausible reason for the limited efficacy is that the training of machine learning models for CTG was based on human interpretation. Therefore, an alternative approach that does not include human interpretation or guidelines in system development has been investigated in the context of feature engineering theory (Table 1). Based on these technologies, it is expected that a future program that automatically analyzes CTG and informs clinicians of risks with more advanced computer systems will be commercialized and will be useful in clinical practice.
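The classifiers cited above are not specified in detail in this review; as a generic, hedged sketch of the underlying idea, the PyTorch snippet below shows a small one-dimensional convolutional network that maps a fetal heart rate trace to two output scores. The sampling rate, trace length, and architecture are illustrative assumptions, not the design of any cited study.
```python
import torch
import torch.nn as nn

# Generic 1D CNN for classifying a fetal heart rate (FHR) trace (illustrative only).
fhr_classifier = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 2),  # two classes: reassuring vs. suspected compromise
)

# A 20-minute trace sampled at 4 Hz = 4,800 samples (batch of one, one channel).
trace = torch.randn(1, 1, 4800)
print(fhr_classifier(trace).shape)  # torch.Size([1, 2])
```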

Ultrasonography

Above all, AI is being actively applied and studied in obstetrics for the analysis of US, which generates standardized data. US is a safe, non-invasive examination method for prenatal diagnosis. Despite the standardized application of US, measurements are challenging in circumstances such as maternal obesity, motion blurring, missing boundaries, acoustic shadows, speckle noise, and a low signal-to-noise ratio [22]. In addition, manual US screening is slow and susceptible to mistakes, and the images stored in databases are mostly two-dimensional. Therefore, the use of new technologies to improve the primary acquired images or to help extract and standardize measurements is of great importance.
Machine learning was first applied to US images of fetuses several years ago. In particular, it has become possible to acquire and distinguish different fetal body parts through machine learning; therefore, many studies have presented algorithms that automatically extract and measure fetal structures and fetal biometry from US images [23]. A semi-automatic program for fetal ultrasound analysis already exists (semi-automatic because the program performs body measurements using an AI algorithm after the sonographer or doctor selects an appropriate image of each body part). This program is already in service, and several companies are preparing to launch related services. One example involves the automatic acquisition of a standard scan plane, demonstrated by assessing two-dimensional transventricular US images of the fetal brain and three-dimensional transthalamic-plane US to measure the fetal biparietal diameter and head circumference [24,25]. Other studies have reported the efficacy of machine learning in identifying fetal structures and organs to detect congenital abnormalities [26–31]. Table 2 summarizes research on deep learning applications for fetal biometry and gross fetal imaging, including the heart and short cervix.
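To illustrate the "measure after segmentation" step that such programs perform, the sketch below fits an ellipse to a binary fetal head mask and derives simplified biparietal diameter- and head circumference-like values. The mask, pixel spacing, and use of OpenCV's ellipse fit are illustrative assumptions rather than the method of any program or study cited here.
```python
import numpy as np
import cv2  # OpenCV >= 4.x assumed

def head_measurements(mask: np.ndarray, mm_per_pixel: float):
    """Fit an ellipse to a binary head mask and derive BPD/HC-like values (illustrative)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(contour)  # full axis lengths in pixels
    a, b = max(d1, d2) / 2.0, min(d1, d2) / 2.0           # semi-axes
    bpd = 2 * b * mm_per_pixel                            # short axis ~ simplified BPD
    # Ramanujan's approximation of the ellipse perimeter ~ head circumference.
    h = ((a - b) ** 2) / ((a + b) ** 2)
    hc = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h))) * mm_per_pixel
    return bpd, hc

# Toy example: a synthetic elliptical "head" mask at 0.5 mm per pixel.
mask = np.zeros((300, 400), np.uint8)
cv2.ellipse(mask, (200, 150), (120, 90), 15, 0, 360, 1, -1)
print(head_measurements(mask, mm_per_pixel=0.5))
```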
To obtain high-quality images within an appropriate amount of time, personnel should be trained in the skilled procedures involved in the obstetric image scanning process. Freehand ultrasound plane acquisition has been developed, but not yet standardized; however, a recent study demonstrated that a probe guidance system provided a useful navigation signal towards targets such as the standard plane of certain structures [32]. In addition, Noble et al. have been working on deep learning and interactions involved in scanning for more than a decade and developed a multi-cue data capture system for the acquisition of data based on sonographers’ perceptions and actions through the application of a deep learning model [33–35]. This is accomplished by analyzing the pupillary response, voices, and actions of sonographers; furthermore, this sensation and motion tracking system would be accompanied by information about safety issues, such as the thermal index.

Fetal Echocardiography

Fetal echocardiography has been in use for only 15 years; nonetheless, this imaging modality is essential for perinatal care, as it is very useful for diagnosing and monitoring intrauterine growth restriction, twin-to-twin transfusion syndrome, and congenital heart anomalies. Monitoring fetal cardiac function with US is challenging due to involuntary fetal movements, the small fetal heart, the fast fetal heart rate, limited access to the fetus, and the lack of experts in fetal echocardiography. Automatic calculation of the fetal heartbeat has been carried out in many studies, which have extracted the fetal heart rate from CTG using dimensionality reduction [38,41], measured fetal QRS complexes from maternal electrocardiography (ECG) recordings using ANNs [42], and used pulsed-wave Doppler envelope signals extracted from B-mode videos [43]. For cases with congenital heart anomalies, an intelligent navigation method referred to as "FINE" was developed, which can detect four types of abnormalities [44]. Arnaout et al. [39] demonstrated a deep learning method for identifying the five most essential views of the fetal heart and segmenting cardiac structures. They found that hypoplastic left heart syndrome was the anomaly most frequently distinguished from normal structures and tetralogy of Fallot at a gestational age of 18 to 24 weeks (Table 2) [39,45]. Although progress has been made towards obtaining optimal images within a short period of time, with a minimal learning curve, and towards extracting standardized planes from large databases using AI technology, limitations remain in the assessment of appropriate images and subsequent clinical decision-making for the fetal heart compared to other organs [46]. Further studies using appropriate AI technology for different types of congenital heart disease, with more subjects and various US machines, are warranted.

Others

Estimation of gestational age [36] and prediction of preterm birth [47,48], aneuploidy [49,50], and outcomes in asymptomatic women with a short cervical length [40] have been investigated using machine learning algorithms. An effective system has also been developed for predicting fetal brain abnormalities [51]. A recent systematic review demonstrated that a model for the prediction of prematurity using the support vector machine technique performed best among 31 studies analyzed, with an accuracy of 95.7% [52]. Deep learning-based automatic measurement programs for parameters indicating the progression of labor (e.g., the angle of progression) are currently in use. AI-based programs have the advantage of producing more objective results and may be helpful for parameters that are clinically important but subject to interobserver variability. For example, measurements of amniotic fluid are susceptible to variability between measurements, which can affect treatment decisions, and AI-based programs that automatically measure these parameters are being developed (Fig. 1). A recently published paper reported that an automated method based on deep learning was very useful for measuring amniotic fluid [53]. In addition, AI-based programs can be helpful for measurements that require evaluators to be fully trained and experienced, such as nuchal translucency [37,54,55], and these techniques may be combined with a robotic arm that performs the scanning to automatically extract standardized fetal imaging views. Related research is actively underway, and the results are expected to be available in clinical practice in the relatively near future.
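As a sketch of the kind of measurement shown in Fig. 1 (not the published method of the cited study [53]), the snippet below takes a binary segmentation of the amniotic fluid in a single 2D image and returns the deepest vertical fluid pocket by finding the longest uninterrupted run of fluid pixels in any image column; the mask and pixel spacing are hypothetical.
```python
import numpy as np

def deepest_vertical_pocket(fluid_mask: np.ndarray, mm_per_pixel: float) -> float:
    """Longest uninterrupted vertical run of fluid pixels, converted to mm (illustrative)."""
    best = 0
    for col in fluid_mask.T:               # iterate over image columns
        run, longest = 0, 0
        for pixel in col:
            run = run + 1 if pixel else 0  # extend or reset the current run of fluid pixels
            longest = max(longest, run)
        best = max(best, longest)
    return best * mm_per_pixel

# Toy mask: a 40-pixel-deep fluid pocket at 0.3 mm/pixel -> 12.0 mm.
mask = np.zeros((200, 200), dtype=bool)
mask[50:90, 80:120] = True
print(deepest_vertical_pocket(mask, mm_per_pixel=0.3))
```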
In 2020, the International Society of Ultrasound in Obstetrics and Gynecology introduced the latest research trends in obstetrics and gynecology through an "Artificial Intelligence in Imaging" session. The most striking development was that, in addition to systems that extract and measure a desired organ or structure after analyzing a given image (as discussed above), systems have been developed that suggest a suspected diagnosis based on the measured values. The day is not far off when an ultrasound probe placed on a mother's abdomen will not only measure basic parameters but also suggest related diagnoses and further treatment directions.

Magnetic Resonance Imaging

MRI is being actively studied along with US. In the obstetric field, MRI is usually performed to evaluate fetal brain diseases and the severity of placenta previa. In one study, the fetal brain structure was automatically extracted and analyzed from MRI scans of 45 pregnant women, and its volume was automatically measured [56]. In another study, analysis of 59 MRI scans of fetuses with ventriculomegaly using various AI techniques predicted the need for additional treatment after birth, such as cerebrospinal fluid diversion, with 91% accuracy [57]. In other words, AI can provide information on whether related treatments are needed along with a diagnosis based on MRI scans. In addition, MRI applications related to the placenta have been widely studied. Invasive placentation (abnormal placental adhesion) was diagnosed with 100% sensitivity, 88.8% specificity, and 95% accuracy by applying AI techniques to MRI scans of 99 pregnant women diagnosed with placenta previa [58]. MRI scans of 44 pregnant women, including those with twins, were used to accurately measure the volume of the placenta and the distribution of vessels on its surface [59]. These results will provide important information for understanding and treating twin-to-twin transfusion syndrome.
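Once a structure has been segmented, the volume measurements described above reduce to counting labeled voxels and multiplying by the physical voxel volume; the following minimal sketch (hypothetical mask and voxel spacing) illustrates this arithmetic.
```python
import numpy as np

def segmented_volume_ml(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    """Volume of a binary 3D segmentation in millilitres: voxel count x voxel volume."""
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1,000 mm^3

# Toy example: a 50x50x50-voxel structure at 1 x 1 x 2 mm voxels -> 250.0 mL.
mask = np.zeros((128, 128, 64), dtype=bool)
mask[30:80, 30:80, 10:60] = True
print(segmented_volume_ml(mask, (1.0, 1.0, 2.0)))
```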

Conclusion

AI is no longer a temporary social phenomenon or a topic only for specific scientific fields. Instead, it is a technical field that can help improve diagnosis, treatment strategies, and clinical outcomes and overcome various diagnostic problems, including in obstetrics. It would not be an exaggeration to say that it already occupies an important position in our field of care. Using AI methods in medical care could facilitate individualized pregnancy management and improve public health, especially in low- and middle-income countries. In the future, various efforts will be required for different applications in obstetrics, and above all, the generation of data for the development of appropriate models and algorithms in various fields will be of paramount importance. Further studies are needed to reduce bias when creating algorithms and to increase the adaptability of systems, enabling the incorporation of new medical knowledge as new technologies emerge. The authors look forward to ongoing developments in AI applications for obstetrics; above all, we emphasize once again that AI technology is not a substitute for medical staff, but rather serves as an assistant in medical practice.

Notes

Author Contributions

Conceptualization: Cho GJ. Data acquisition: Kwon HS. Data analysis or interpretation: Kim HY. Drafting of the manuscript: Kim HY. Critical revision of the manuscript: Cho GJ, Kwon HS. Approval of the final version of the manuscript: all authors.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

1. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521:436–444.
2. Liu F, Zhou Z, Samsonov A, Blankenbaker D, Larison W, Kanarek A, et al. Deep learning approach for evaluating knee MR images: achieving high diagnostic performance for cartilage lesion detection. Radiology 2018;289:160–169.
3. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017;284:574–582.
4. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–118.
5. Arcadu F, Benmansour F, Maunz A, Willis J, Haskova Z, Prunotto M. Deep learning algorithm predicts diabetic retinopathy progression in individual patients. NPJ Digit Med 2019;2:92.
6. Tan PN, Steinbach M, Karpantne A, Kumar V. Introduction to data mining. 2nd ed. London: Pearson, 2018.

7. Deo RC. Machine Learning in medicine. Circulation 2015;132:1920–1930.
8. Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, et al. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2021;40:7–22.
9. Espinoza J, Good S, Russell E, Lee W. Does the use of automated fetal biometry improve clinical work flow efficiency? J Ultrasound Med 2013;32:847–850.
10. Alfirevic Z, Devane D, Gyte GM, Cuthbert A. Continuous cardiotocography (CTG) as a form of electronic fetal monitoring (EFM) for fetal assessment during labour. Cochrane Database Syst Rev 2017;2:CD006066.
11. Ignatov PN, Lutomski JE. Quantitative cardiotocography to improve fetal assessment during labor: a preliminary randomized controlled trial. Eur J Obstet Gynecol Reprod Biol 2016;205:91–97.
12. Nunes I, Ayres-de-Campos D, Ugwumadu A, Amin P, Banfield P, Nicoll A, et al. Central fetal monitoring with and without computer analysis: a randomized controlled trial. Obstet Gynecol 2017;129:83–90.
13. Brocklehurst P, Field DJ, Juszczak E, Kenyon S, Linsell L, Newburn M, et al. The INFANT trial. Lancet 2017;390:28.
14. Warrick PA, Hamilton EF, Precup D, Kearney RE. Identification of the dynamic relationship between intrapartum uterine pressure and fetal heart rate for normal and hypoxic fetuses. IEEE Trans Biomed Eng 2009;56:1587–1597.
15. Zhao Z, Zhang Y, Deng Y. A comprehensive feature analysis of the fetal heart rate signal for the intelligent assessment of fetal state. J Clin Med 2018;7:223.
16. Ogasawara J, Ikenoue S, Yamamoto H, Sato M, Kasuga Y, Mitsukura Y, et al. Deep neural network-based classification of cardiotocograms outperformed conventional algorithms. Sci Rep 2021;11:13367.
17. Georgieva A, Papageorghiou AT, Payne SJ, Moulden M, Redman CW. Phase-rectified signal averaging for intrapartum electronic fetal heart rate monitoring is related to acidaemia at birth. BJOG 2014;121:889–894.
18. Liu LC, Tsai YH, Chou YC, Jheng YC, Lin CK, Lyu NY, et al. Concordance analysis of intrapartum cardiotocography between physicians and artificial intelligence-based technique using modified one-dimensional fully convolutional networks. J Chin Med Assoc 2021;84:158–164.
19. Iraji MS. Prediction of fetal state from the cardiotocogram recordings using neural network models. Artif Intell Med 2019;96:33–44.
20. Zhao Z, Deng Y, Zhang Y, Zhang Y, Zhang X, Shao L. DeepFHR: intelligent prediction of fetal Acidemia using fetal heart rate signals based on convolutional neural network. BMC Med Inform Decis Mak 2019;19:286.
21. Balayla J, Shrem G. Use of artificial intelligence (AI) in the interpretation of intrapartum fetal heart rate (FHR) tracings: a systematic review and meta-analysis. Arch Gynecol Obstet 2019;300:7–14.
22. Benacerraf BR, Minton KK, Benson CB, Bromley BS, Coley BD, Doubilet PM, et al. Proceedings: Beyond Ultrasound First Forum on improving the quality of ultrasound imaging in obstetrics and gynecology. Am J Obstet Gynecol 2018;218:19–28.
23. Sobhaninia Z, Rafiei S, Emami A, Karimi N, Najarian K, Samavi S, et al. Fetal ultrasound image segmentation for measuring biometric parameters using multi-task deep learning. Annu Int Conf IEEE Eng Med Biol Soc 2019;2019:6545–6548.
24. Yaqub M, Kelly B, Papageorghiou AT, Noble JA. A deep learning solution for automatic fetal neurosonographic diagnostic plane verification using clinical standard constraints. Ultrasound Med Biol 2017;43:2925–2933.
25. Ambroise Grandjean G, Hossu G, Bertholdt C, Noble P, Morel O, Grange G. Artificial intelligence assistance for fetal head biometry: assessment of automated measurement software. Diagn Interv Imaging 2018;99:709–716.
26. Namburete AI, Yaqub M, Kemp B, Papageorghiou AT, Noble JA. Predicting fetal neurodevelopmental age from ultrasound images. Med Image Comput Comput Assist Interv 2014;17:260–267.
27. Yan L, Rong X, Jun O, Iwata H. Automatic fetal body and amniotic fluid segmentation from fetal ultrasound images by encoder-decoder network with inner layers. Annu Int Conf IEEE Eng Med Biol Soc 2017;2017:1485–1488.
28. Rajchl M, Lee MC, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W, et al. DeepCut: object segmentation from bounding box annotations using convolutional neural networks. IEEE Trans Med Imaging 2017;36:674–683.
29. Burgos-Artizzu XP, Perez-Moreno A, Coronado-Gutierrez D, Gratacos E, Palacio M. Evaluation of an improved tool for noninvasive prediction of neonatal respiratory morbidity based on fully automated fetal lung ultrasound analysis. Sci Rep 2019;9:1950.
30. Wang G, Li W, Zuluaga MA, Pratt R, Patel PA, Aertsen M, et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging 2018;37:1562–1573.
31. Burgos-Artizzu XP, Coronado-Gutierrez D, Valenzuela-Alcaraz B, Bonet-Carne E, Eixarch E, Crispi F, et al. Evaluation of deep convolutional neural networks for automatic classification of common maternal fetal ultrasound planes. Sci Rep 2020;10:10200.
32. Droste R, Drukker L, Papageorghiou AT, Noble JA. Automatic probe movement guidance for freehand obstetric ultrasound. Med Image Comput Comput Assist Interv 2020;12263:583–592.
33. Drukker L, Sharma H, Droste R, Alsharid M, Chatelain P, Noble JA, et al. Transforming obstetric ultrasound into data science using eye tracking, voice recording, transducer motion and ultrasound video. Sci Rep 2021;11:14109.
34. Cai Y, Droste R, Sharma H, Chatelain P, Drukker L, Papageorghiou AT, et al. Spatio-temporal visual attention modelling of standard biometry plane-finding navigation. Med Image Anal 2020;65:101762.
35. Sharma H, Drukker L, Papageorghiou AT, Noble JA. Machine learning-based analysis of operator pupillary response to assess cognitive workload in clinical ultrasound imaging. Comput Biol Med 2021;135:104589.
36. Papageorghiou AT, Kemp B, Stones W, Ohuma EO, Kennedy SH, Purwar M, et al. Ultrasound-based gestational-age estimation in late pregnancy. Ultrasound Obstet Gynecol 2016;48:719–726.
37. Sciortino G, Tegolo D, Valenti C. Automatic detection and measurement of nuchal translucency. Comput Biol Med 2017;82:12–20.
38. Sulas E, Ortu E, Urru M, Tumbarello R, Raffo L, Solinas G, et al. Impact of pulsed-wave-Doppler velocity-envelope tracing techniques on classification of complete fetal cardiac cycles. PLoS One 2021;16:e0248114.
39. Arnaout R, Curran L, Zhao Y, Levine JC, Chinn E, Moon-Grady AJ. An ensemble of neural networks provides expert-level prenatal detection of complex congenital heart disease. Nat Med 2021;27:882–891.
40. Bahado-Singh RO, Sonek J, McKenna D, Cool D, Aydas B, Turkoglu O, et al. Artificial intelligence and amniotic fluid multiomics: prediction of perinatal outcome in asymptomatic women with short cervix. Ultrasound Obstet Gynecol 2019;54:110–118.
41. Krupa N, Ali M, Zahedi E, Ahmed S, Hassan FM. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine. Biomed Eng Online 2011;10:6.
42. Lukosevicius M, Marozas V. Noninvasive fetal QRS detection using an echo state network and dynamic programming. Physiol Meas 2014;35:1685–1697.
43. Sulas E, Ortu E, Raffo L, Urru M, Tumbarello R, Pani D. Automatic recognition of complete atrioventricular activity in fetal pulsed-wave Doppler signals. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:917–920.
44. Yeo L, Romero R. Fetal Intelligent Navigation Echocardiography (FINE): a novel method for rapid, simple, and automatic examination of the fetal heart. Ultrasound Obstet Gynecol 2013;42:268–284.
45. Madani A, Arnaout R, Mofrad M, Arnaout R. Fast and accurate view classification of echocardiograms using deep learning. NPJ Digit Med 2018;1:6.
46. Garcia-Canadilla P, Sanchez-Martinez S, Crispi F, Bijnens B. Machine learning in fetal cardiology: what to expect. Fetal Diagn Ther 2020;47:363–372.
47. Rittenhouse KJ, Vwalika B, Keil A, Winston J, Stoner M, Price JT, et al. Improving preterm newborn identification in low-resource settings with machine learning. PLoS One 2019;14:e0198919.
48. Hamilton EF, Dyachenko A, Ciampi A, Maurel K, Warrick PA, Garite TJ. Estimating risk of severe neonatal morbidity in preterm births under 32 weeks of gestation. J Matern Fetal Neonatal Med 2020;33:73–80.
49. Neocleous AC, Nicolaides KH, Schizas CN. First trimester noninvasive prenatal diagnosis: a computational intelligence approach. IEEE J Biomed Health Inform 2016;20:1427–1438.
50. Neocleous AC, Nicolaides KH, Schizas CN. Intelligent noninvasive diagnosis of aneuploidy: raw values and highly imbalanced dataset. IEEE J Biomed Health Inform 2017;21:1271–1279.
51. Xie HN, Wang N, He M, Zhang LH, Cai HM, Xian JB, et al. Using deep-learning algorithms to classify fetal brain ultrasound images as normal or abnormal. Ultrasound Obstet Gynecol 2020;56:579–587.
52. Bertini A, Salas R, Chabert S, Sobrevia L, Pardo F. Using machine learning to predict complications in pregnancy: a systematic review. Front Bioeng Biotechnol 2021;9:780389.
53. Cho HC, Sun S, Hyun CM, Kwon JY, Kim B, Park Y, et al. Automated ultrasound assessment of amniotic fluid index using deep learning. Med Image Anal 2021;69:101951.
54. Moratalla J, Pintoffl K, Minekawa R, Lachmann R, Wright D, Nicolaides KH. Semi-automated system for measurement of nuchal translucency thickness. Ultrasound Obstet Gynecol 2010;36:412–416.
55. Nie S, Yu J, Chen P, Wang Y, Zhang JQ. Automatic detection of standard sagittal plane in the first trimester of pregnancy using 3-D ultrasound data. Ultrasound Med Biol 2017;43:286–300.
56. Khalili N, Turk E, Benders M, Moeskops P, Claessens NH, de Heus R, et al. Automatic extraction of the intracranial volume in fetal and neonatal MR scans using convolutional neural networks. Neuroimage Clin 2019;24:102061.
57. Pisapia JM, Akbari H, Rozycki M, Goldstein H, Bakas S, Rathore S, et al. Use of fetal magnetic resonance image analysis and machine learning to predict the need for postnatal cerebrospinal fluid diversion in fetal ventriculomegaly. JAMA Pediatr 2018;172:128–135.
58. Sun H, Qu H, Chen L, Wang W, Liao Y, Zou L, et al. Identification of suspicious invasive placentation based on clinical MRI data using textural features and automated machine learning. Eur Radiol 2019;29:6152–6162.
59. Torrents-Barrena J, Piella G, Masoller N, Gratacos E, Eixarch E, Ceresa M, et al. Fetal MRI synthesis via balanced auto-encoder based generative adversarial networks. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:2599–2602.

Fig. 1. Artificial intelligence-based automatic amniotic fluid measurement program using deep learning. The amniotic fluid region is automatically extracted from the given image, and the deepest vertical depth of that region (i.e., the amniotic fluid index) is automatically calculated.
Table 1.
Summary of machine learning interpretations of CTG to determine neonatal outcomes

| Study | No. of patients | AI technology | Inclusion criteria | Outcomes |
| --- | --- | --- | --- | --- |
| Randomized controlled trials: baseline, variability, and deceleration-based | | | | |
| Ignatov and Lutomski (2016) [11] | 720 | Quantitative CTG decision-support system; Nexus-obstetrics | Singleton, >18 years of age | Reduced risk in the interventional arm compared to control |
| Nunes et al. (2017) [12] | 7,730 | Omniview-SisPorto | Singleton, >16 years of age, >36 weeks of gestation | A very low rate of acidosis was observed, but there was no statistically significant reduction in the rates of acidosis and obstetric intervention |
| Brocklehurst et al. (2017) (the INFANT trial) [13] | 46,042 | INFANT-K2 | Singleton or twin, >16 years of age, >35 weeks of gestation | Effective in identifying abnormal CTG; clinical outcomes not improved |
| Feature engineering theory: traditional machine learning, retrospective | | | | |
| Warrick et al. (2009) [14] | 220 cases | Support vector machine | Death, HIE, base deficit >12 mmol/L | Detected 50% of pathological cases with an FPR of 7.5% |
| Zhao et al. (2018) [15] | 552 intrapartum CTG recordings | 8-layer deep 2D CNN | - | Classification of normal and pathological CTGs |
| Feature engineering theory: deep learning, retrospective | | | | |
| Ogasawara et al. (2021) [16] | 324 CTG recordings | CNN models | Umbilical artery pH <7.20 or Apgar score at 1 min <7 | AUC 0.73±0.04, early detection of compromised fetuses |
| Georgieva et al. (2014) [17] | 22,790 cases | Phase-rectified signal averaging | Acidemia (pH <7.05) and severe compromise (stillbirth, neonatal death, HIE, NICU admission) | Improved sensitivity and FPR in detecting acidemia/compromise compared with clinical practice |
| Liu et al. (2021) [18] | 3,239 CTG recordings | Fully convolutional networks | 292 singletons, ≥36 weeks of gestation | Higher sensitivity for predicting fetal compromise but a higher FPR compared with clinical practice |

CTG, cardiotocography; AI, artificial intelligence; HIE, hypoxic ischemic encephalopathy; FPR, false-positive rate; CNN, convolutional neural network; NICU, neonatal intensive care unit; AUC, area under the curve.

Table 2.
Deep learning research on obstetric ultrasonography

| Study | Field | Total no. of patients/images | AI technology | Outcomes |
| --- | --- | --- | --- | --- |
| Fetal biometry and gross imaging | | | | |
| Burgos-Artizzu et al. (2020) [31] | Fetal anatomic planes (abdomen, brain, femur, and thorax), cervix | 1,792 patients; 12,400 images | CNN | Similar performance compared to humans but limited fine-grained plane categorization |
| Papageorghiou et al. (2016) [36] | Gestational age estimation | 4,229 patients | Generic algorithm | Head circumference and femur length in the second trimester: accurate estimation of GA |
| Sciortino et al. (2017) [37] | Nuchal translucency | 12 patients; 382 frames | Wavelet and multi-resolution analysis | True-positive rate 99.5%; 64% of measurements with an error equal to 1 pixel |
| Heart | | | | |
| Sulas et al. (2021) [38] | FHR | 25 patients; 43 images, 174,319 PWD segments | 7 envelope-tracing techniques and 23 processing steps | 98% accuracy |
| Arnaout et al. (2021) [39] | Fetal heart imaging at 18-24 weeks of gestation | 107,823 images | Deep learning: segmentation model | AUC of 0.99, 95% sensitivity, 96% specificity |
| Short cervix | | | | |
| Bahado-Singh et al. (2019) [40] | Cervical length (<15 mm) combined with omics, demographic, and clinical data | 26 patients | Deep learning performed best among six machine learning techniques | AUC of 0.890 for delivery at <34 weeks GA; 0.890 for delivery at <28 weeks GA after amniocentesis; 0.792 for NICU admission |

AI, artificial intelligence; CNN, convolutional neural network; GA, gestational age; FHR, fetal heart rate; PWD, pulsed-wave Doppler; AUC, area under the curve; NICU, neonatal intensive care unit.
