Child pages
  • Computed Tomography Images from Large Head and Neck Cohort (RADCURE)


Summary

In recent years, advances in machine learning and deep learning have led to novel image processing techniques for medical imaging applications. These methods, in particular deep neural networks, must be trained on vast amounts of image data and are typically not robust to “unclean” or highly heterogeneous data. Computed tomography (CT) is a commonly used medical imaging modality. With advances in AI, CT-derived quantitative features (i.e. radiomic features) have shown promising results in personalized medicine (Ha 2019) and, when combined with machine learning, have potential utility in diagnostic and prognostic applications. Unfortunately, a major drawback of these images is their sensitivity to high-density materials such as metal prostheses or dental fillings; the latter commonly cause dental artifacts, which pose a problem for head and neck CT imaging. Dental fillings have much larger atomic numbers than soft tissues, resulting in significantly higher attenuation of x-ray beams passing through the metal. As a result, these artifacts present as bright and dark streaks in the reconstructed image. These artifacts not only obscure large portions of the image’s reconstructed pixels; studies have also shown that dental artifacts alter radiomic features in CT images (Leijenaar et al. 2015; Ger et al. 2018), affect target volume delineation (Hansen et al. 2017), and degrade radiation therapy dose calculation accuracy (Kim et al. 2006). This suggests a need to account for artifacts during the data cleaning process.
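Because metal attenuates far more strongly than any tissue, dental fillings appear as voxels with extreme Hounsfield unit (HU) values. A minimal sketch of locating candidate metal voxels by HU thresholding is shown below; the 3000 HU cutoff and the synthetic slice are illustrative assumptions, not values from this collection.

```python
import numpy as np

# Hypothetical CT slice in Hounsfield units (HU): soft tissue ~0-100 HU,
# dense bone up to ~1900 HU, metal fillings typically saturate above ~3000 HU.
rng = np.random.default_rng(0)
ct_slice = rng.normal(40, 30, size=(512, 512))   # soft-tissue-like background
ct_slice[200:210, 250:260] = 3500                # simulated dental filling

METAL_HU_THRESHOLD = 3000  # illustrative cutoff, not from the source
metal_mask = ct_slice > METAL_HU_THRESHOLD

print(metal_mask.sum())  # number of candidate metal voxels -> 100
```

The streaks themselves spread well beyond this mask, which is why artifact detection cannot stop at simple thresholding.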

Acknowledgements

We would like to acknowledge the individuals and institutions that have provided data for this collection:

  • University Health Network (UHN), Toronto, Ontario, Canada - Special thanks to Scott Bratman, PhD, Department of Radiation Oncology and Medical Biophysics, University of Toronto.

Data Access

Data Type | Download all or Query/Filter | License

Images and Radiation Therapy Structures (DICOM, XX.X GB) | (Download requires the NBIA Data Retriever) |

Clinical data (CSV) | |


Click the Versions tab for more info about data releases.

Please contact help@cancerimagingarchive.net with any questions regarding usage.

Detailed Description

Image Statistics


Modalities | CT, RTSTRUCT
Number of Patients | 4130
Number of Studies | 4130
Number of Series |
Number of Images |
Images Size (GB) |
Several studies have tried to address this data cleaning challenge using different approaches (Ger et al. 2018; Gjesteby et al. 2016). Recently, a convolutional neural network (CNN) was used to detect patient CT volumes containing artifacts with a precision-recall area under the curve (AUC) of 0.92 and an accuracy of 98.4% (Welch et al. 2020). However, to the authors’ knowledge, no work has been done to differentiate between dental artifacts (DA) of different magnitudes or to quantify how the location of DAs affects quantitative imaging features used to train radiomic models. Furthermore, previous DA detection studies have classified hand-drawn regions of interest (ROI) as DA-positive or DA-negative (REF) but have not examined the correlation between radiomic features in a given ROI and its distance from the DA source. These methods, even if effective at screening datasets for artifacts, could cause vast amounts of data to be unnecessarily marked as unclean, even when the artifacts do not homogeneously affect radiomic features across the patient’s image volume. In this study, we propose a novel two-step combinatorial algorithm to detect DAs on a per-slice basis in CT image datasets. Conventional image processing methods based on histogram thresholding and the CT sinogram are combined with a previously published CNN to create a three-class DA classifier and DA location detector for large radiomic datasets. The algorithm works on patient CT volumes with minimal preprocessing or manual annotation. Finally, we examined the correlation between quantitative imaging features and the physical distance between the DA and the gross tumour volume (GTV).
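The per-slice, three-class labeling idea described above could be sketched as follows. This is a toy proxy only: the voxel-count thresholds (`weak_thresh`, `strong_thresh`) and the 3000 HU metal cutoff are illustrative assumptions, and the published pipeline additionally uses sinogram features and a CNN rather than voxel counts alone.

```python
import numpy as np

def classify_slice_da(ct_slice, weak_thresh=50, strong_thresh=500):
    """Toy three-class dental-artifact (DA) label for one CT slice.

    Counts voxels above a metal HU cutoff as a crude proxy for artifact
    magnitude. All thresholds here are illustrative, not from the study.
    """
    metal_voxels = int((ct_slice > 3000).sum())
    if metal_voxels >= strong_thresh:
        return "strong_DA"
    if metal_voxels >= weak_thresh:
        return "weak_DA"
    return "no_DA"

# Synthetic volume: slice 0 clean, slice 1 small filling, slice 2 large filling.
volume = np.zeros((3, 512, 512))
volume[1, 250:260, 250:260] = 3500   # 100 metal voxels  -> weak
volume[2, 200:240, 200:240] = 3500   # 1600 metal voxels -> strong

labels = [classify_slice_da(s) for s in volume]
print(labels)  # ['no_DA', 'weak_DA', 'strong_DA']
```

Labeling per slice rather than per volume is what lets unaffected slices of a DA-positive patient remain usable for radiomic analysis.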

Inclusion: The dataset used for this study consists of 4130 CT image volumes from head and neck cancer patients treated with definitive radiotherapy (RT) at the University Health Network (UHN) in Toronto, Canada between 2005 and 2007.
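The DA-to-GTV distance analysis mentioned above reduces, in its simplest form, to a centroid-to-centroid distance between two binary masks. A minimal sketch, assuming hypothetical masks and an illustrative voxel spacing, is:

```python
import numpy as np

def centroid(mask):
    """Centre of mass (in voxel indices) of a boolean 3-D mask."""
    return np.argwhere(mask).mean(axis=0)

def da_to_gtv_distance_mm(da_mask, gtv_mask, spacing=(3.0, 1.0, 1.0)):
    """Euclidean centroid-to-centroid distance in millimetres.

    `spacing` is the (z, y, x) voxel size in mm; the values here are
    illustrative, not the acquisition parameters of this collection.
    """
    delta = (centroid(da_mask) - centroid(gtv_mask)) * np.asarray(spacing)
    return float(np.linalg.norm(delta))

# Toy masks: DA near the jaw, GTV lower in the neck.
da = np.zeros((60, 128, 128), dtype=bool)
gtv = np.zeros_like(da)
da[10:12, 60:70, 60:70] = True
gtv[40:44, 60:70, 60:70] = True

print(round(da_to_gtv_distance_mm(da, gtv), 1))  # -> 93.0
```

In practice the GTV mask would come from the RTSTRUCT contours distributed with this collection, and distance could then be correlated against per-ROI radiomic features.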

Citations & Data Usage Policy

Users of this data must abide by the TCIA Data Usage Policy and the Creative Commons Attribution 4.0 International License under which it has been published. Attribution should include references to the following citations:

Data Citation

DOI goes here. 

Publication Citation


TCIA Citation

Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository, Journal of Digital Imaging, Volume 26, Number 6, December, 2013, pp 1045-1057. DOI: 10.1007/s10278-013-9622-7

Other Publications Using This Data

TCIA maintains a list of publications which leverage TCIA data. If you have a manuscript you'd like to add please contact the TCIA Helpdesk.

Version X (Current): Updated yyyy/mm/dd

Data Type | Download all or Query/Filter

Images (DICOM, xx.x GB) | (Requires NBIA Data Retriever.)

Clinical Data (CSV) | Link


