Child pages
  • Computed Tomography Images from Large Head and Neck Cohort (RADCURE)

Summary

Excerpt

The RADCURE dataset was collected clinically for radiation therapy treatment planning and retrospectively reconstructed for quantitative imaging research.

Acquisition and Validation Methods: RADCURE comprises data for 2,745 patients and contains computed tomography (CT) images with corresponding normal and non-normal tissue contours. CT scans were collected using systems from three different manufacturers. Standard clinical imaging protocols were followed, and contours were generated and reviewed at weekly quality assurance rounds. RADCURE imaging and structure set data were extracted from our institution’s radiation treatment planning and oncology systems using an in-house data mining and processing system. Furthermore, the images are linked to the existing Anthology of Outcomes data for each patient, which includes demographic, clinical, and treatment information based on the 7th edition TNM staging system. The median patient age is 63, and 80% of the patients in the final dataset are male. Oropharyngeal cancer makes up 50% of the population, with larynx, nasopharynx, and hypopharynx cancers comprising 25%, 12%, and 5%, respectively. Median follow-up was 5 years, with 60% of the patients alive at last follow-up.
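
For readers who want to verify the cohort composition described above, a minimal sketch in Python is shown below. It assumes a clinical table named RADCURE_clinical.csv with columns such as "Age", "Sex", and "Disease Site"; these names are hypothetical placeholders and should be adjusted to whatever headers the released file actually uses.

```python
import pandas as pd

# Hypothetical file name and column headers; adjust to match the CSV
# released with the collection.
clinical = pd.read_csv("RADCURE_clinical.csv")

print("Patients:", len(clinical))
print("Median age:", clinical["Age"].median())

# Proportions of each category, e.g. roughly 80% male and 50% oropharynx.
print(clinical["Sex"].value_counts(normalize=True))
print(clinical["Disease Site"].value_counts(normalize=True))
```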

Data Format and Usage Notes: During extraction from our institution’s radiation treatment planning and oncology systems, the images and contours were converted to DICOM and RTSTRUCT formats, respectively. To improve the usability of the RTSTRUCT files, individual contour names were standardized for primary tumor volumes and 19 organs at risk. Demographic, clinical, and treatment information is provided as a comma-separated values (CSV) file.
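
As an example of how the standardized structure sets can be inspected, the sketch below lists the contour names stored in a single RTSTRUCT file using pydicom. The file path is a placeholder assumption; StructureSetROISequence and ROIName are standard DICOM attributes.

```python
import pydicom

# Placeholder path to one patient's structure set from the collection.
rtstruct = pydicom.dcmread("RTSTRUCT.dcm")

# Each item in StructureSetROISequence carries one standardized contour name
# (the primary tumor volume or one of the 19 organs at risk).
roi_names = [roi.ROIName for roi in rtstruct.StructureSetROISequence]
print(roi_names)
```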

Potential Applications: The availability of imaging, clinical, demographic, and treatment data makes RADCURE well suited to a variety of quantitative image analysis research initiatives, including the application of machine learning or artificial intelligence methods to expedite routine clinical practices, discover new non-invasive biomarkers, or develop prognostic models.

In recent years, advancements in machine learning and deep learning have led to novel and advanced image processing techniques for medical imaging applications. These methods, in particular deep neural networks, must be trained on vast amounts of image data and are typically not robust to “unclean” or highly heterogeneous data. Computed tomography (CT) is a commonly used medical imaging modality, and CT-derived quantitative features (i.e., radiomic features) have shown promising results in personalized medicine (Ha 2019); when combined with machine learning, they have potential utility in diagnostic and prognostic applications. A major drawback of CT, however, is its sensitivity to high-density materials such as metal prostheses or dental fillings; the latter commonly cause dental artifacts, which are a particular problem in head and neck CT. Dental fillings have much larger atomic numbers than soft tissues, resulting in significantly higher attenuation of x-ray beams passing through the metal. As a result, these artifacts appear as bright and dark streaks in the reconstructed image. They not only obscure large portions of the reconstructed pixels; studies have also shown that dental artifacts alter radiomic features in CT images (Leijenaar et al. 2015; Ger et al. 2018), affect target volume delineation (Hansen et al. 2017), and reduce radiation therapy dose calculation accuracy (Kim et al. 2006). This suggests a need to account for artifacts during the data cleaning process.
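
One common way to account for such artifacts during data cleaning, offered here only as a sketch rather than a procedure used in RADCURE, is to flag axial slices that contain many very high-attenuation voxels, since values above roughly 3000 HU rarely occur in normal tissue. The directory path and both thresholds in the example below are illustrative assumptions.

```python
import numpy as np
import SimpleITK as sitk

def flag_artifact_slices(dicom_dir, hu_threshold=3000.0, min_voxels=50):
    """Return indices of axial slices likely affected by metal/dental artifacts.

    Heuristic only: voxels above ~3000 HU are rare in normal anatomy, so a
    slice containing many of them probably intersects dental fillings or
    other metal. Thresholds are illustrative, not taken from RADCURE itself.
    """
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(sitk.ImageSeriesReader.GetGDCMSeriesFileNames(dicom_dir))
    volume = sitk.GetArrayFromImage(reader.Execute())  # (slices, rows, cols), values in HU
    counts = (volume > hu_threshold).reshape(volume.shape[0], -1).sum(axis=1)
    return np.where(counts >= min_voxels)[0]

# Example usage with a placeholder path to one patient's CT series:
# print(flag_artifact_slices("path/to/one_patient_CT/"))
```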

Acknowledgements

We would like to acknowledge the individuals and institutions that have provided data for this collection:

...