Publications

Google Scholar

Kalantar, R., Lin, G., Winfield, J.M., Messiou, C., Lalondrelle, S., Blackledge, M.D. and Koh, D.M., 2021. Automatic segmentation of pelvic cancers using deep learning: state-of-the-art approaches and challenges. Diagnostics, 11(11), p.1964.

Abstract: The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations. [pdf]

Kalantar, R., Messiou, C., Winfield, J.M., Renn, A., Latifoltojar, A., Downey, K., Sohaib, A., Lalondrelle, S., Koh, D.M. and Blackledge, M.D., 2021. CT-Based Pelvic T1-Weighted MR Image Synthesis Using UNet, UNet++ and Cycle-Consistent Generative Adversarial Network (Cycle-GAN). Frontiers in Oncology, 11, p.665807.

Abstract: 

Background: Computed tomography (CT) and magnetic resonance imaging (MRI) are the mainstay imaging modalities in radiotherapy planning. In MR-Linac treatment, manual annotation of organs-at-risk (OARs) and clinical volumes requires significant clinician interaction and is a major challenge. Currently, there is a lack of available pre-annotated MRI data for training supervised segmentation algorithms. This study aimed to develop a deep learning (DL)-based framework to synthesize pelvic T1-weighted MRI from a pre-existing repository of clinical planning CTs.

Methods: MRI synthesis was performed using UNet++ and a cycle-consistent generative adversarial network (Cycle-GAN), and the predictions were compared qualitatively and quantitatively against a baseline UNet model using pixel-wise and perceptual loss functions. Additionally, the Cycle-GAN predictions were evaluated through qualitative expert testing (four radiologists), and a pelvic bone segmentation routine based on a UNet architecture was trained on synthetic MRI using CT-propagated contours and subsequently tested on real pelvic T1-weighted MRI scans.

Results: In our experiments, Cycle-GAN generated sharp images for all pelvic slices, whilst UNet and UNet++ predictions suffered from poorer spatial resolution within deformable soft tissues (e.g. bladder, bowel). Qualitative radiologist assessment showed inter-expert variability in the test scores; the four radiologists correctly identified images as acquired or synthetic with 67%, 100%, 86% and 94% accuracy, respectively. Unsupervised segmentation of pelvic bone on T1-weighted images was successful in a number of test cases.

Conclusion: Pelvic MRI synthesis is a challenging task due to the absence of soft-tissue contrast on CT. Our study showed the potential of deep learning models for synthesizing realistic MR images from CT and transferring cross-domain knowledge, which may help to expand training datasets for the development of MR-only segmentation models. [pdf]
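
The CT-to-MR synthesis described in the Methods is built around a cycle-consistent adversarial objective. The sketch below illustrates that objective in PyTorch under stated assumptions: the TinyGenerator/TinyDiscriminator modules, the least-squares adversarial loss and the lambda_cyc weight of 10 are illustrative placeholders, not the UNet++/Cycle-GAN configurations used in the paper.

    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):
        """Stand-in for a UNet-style image-to-image generator (the real models are far deeper)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())

        def forward(self, x):
            return self.net(x)

    class TinyDiscriminator(nn.Module):
        """Stand-in PatchGAN-style critic scoring how realistic an MR slice looks."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(16, 1, 4, stride=2, padding=1))

        def forward(self, x):
            return self.net(x)

    G_ct2mr, G_mr2ct, D_mr = TinyGenerator(), TinyGenerator(), TinyDiscriminator()
    l1_loss, adv_loss = nn.L1Loss(), nn.MSELoss()  # least-squares GAN objective (an assumption)

    def generator_loss(ct, lambda_cyc=10.0):
        """CT -> MR half of the generator objective: adversarial + cycle-consistency terms."""
        fake_mr = G_ct2mr(ct)        # synthesize an MR slice from the CT slice
        rec_ct = G_mr2ct(fake_mr)    # map the synthetic MR back to CT
        pred = D_mr(fake_mr)
        loss_adv = adv_loss(pred, torch.ones_like(pred))  # try to fool the MR discriminator
        loss_cyc = l1_loss(rec_ct, ct)                    # CT -> MR -> CT should reproduce the input
        return loss_adv + lambda_cyc * loss_cyc

    # Example: a batch of two single-channel 64 x 64 "CT" slices.
    loss = generator_loss(torch.randn(2, 1, 64, 64))

A full training loop would add the symmetric MR -> CT -> MR cycle and the discriminator updates; only the CT -> MR half is shown here for brevity.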

Vaid, S., Kalantar, R. and Bhandari, M., 2020. Deep learning COVID-19 detection bias: accuracy through artificial intelligence. International Orthopaedics, 44(8), pp.1539-1542.

Abstract: 

Background

The accuracy of COVID-19 case detection is posing a conundrum for scientists, physicians, and policy-makers. As of April 23, 2020, 2.7 million cases have been confirmed, over 190,000 people have died, and about 750,000 people are reported to have recovered. Yet, there is no publicly available data on tests that could be missing infections. Complicating matters and furthering anxiety are specific instances of false-negative tests.

Methods

We developed a deep learning model to improve the accuracy of reported cases and to precisely predict the disease from chest X-ray scans. Our model relied on convolutional neural networks (CNNs) to detect structural abnormalities and categorize disease, which were key to uncovering hidden patterns. To do so, a transfer learning approach was deployed to perform detection on anterior-posterior chest radiographs of patients. We used publicly available datasets to achieve this.

Results

Our results show very high accuracy (96.3%) and low loss (0.151 binary cross-entropy) on the public dataset consisting of patients from different countries worldwide. As the confusion matrix indicates, our model is able to accurately identify true negatives (74) and true positives (32); this deep learning model identified three false-positive and one false-negative finding from the healthy patient scans.

Conclusions

Our COVID-19 detection model reduces reliance on manual interpretation by radiologists as it automates the identification of structural abnormalities in patients' CXRs, and our deep learning model is likely to detect true positives and true negatives, and weed out false positives and false negatives, with > 96.3% accuracy. [pdf]
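
As an illustration of the transfer-learning setup described in the Methods, the sketch below fine-tunes an ImageNet-pretrained backbone for binary chest X-ray classification in PyTorch/torchvision. The ResNet-50 backbone, frozen feature extractor, learning rate and single-logit head are assumptions made for the sketch; the paper specifies only that a transfer-learning CNN was trained with a binary cross-entropy loss.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone (ResNet-50 chosen here for illustration)
    # and freeze its convolutional layers so only the new head is trained.
    backbone = models.resnet50(weights="IMAGENET1K_V1")
    for param in backbone.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer with a single-logit COVID / non-COVID head.
    backbone.fc = nn.Linear(backbone.fc.in_features, 1)

    criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy, as reported in the paper
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)  # learning rate is an assumption

    def training_step(images, labels):
        """One optimisation step on a batch of chest radiographs shaped (N, 3, 224, 224)."""
        logits = backbone(images).squeeze(1)
        loss = criterion(logits, labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Example with random tensors standing in for preprocessed radiographs and binary labels.
    print(training_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))

At inference time, applying torch.sigmoid to the logit gives the predicted probability of a positive (COVID-19) scan.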