Central reading centers (CRCs) play an important role in the interpretation of imaging data for ophthalmic clinical trials by providing anonymized, standardized image analysis. Many trials currently employ optical coherence tomography (OCT) as an outcome measure because it has become a standard imaging modality, aiding in the diagnosis and evaluation of treatment response for a variety of ophthalmic disorders, including exudative age-related macular degeneration (AMD) and diabetic macular edema (DME).1
Central reading centers develop standard operating procedures (SOPs) to interpret and analyze images using a standardized rubric, and they must make their evaluations independent of other clinical factors, such as exam characteristics, treatment group assignment (control vs treatment), or ophthalmic history.2 The CRC may play many roles during the course of a clinical trial. During the planning phase, the CRC may advise the sponsor on the appropriate imaging protocols to measure the desired endpoints. At study initiation, the CRC may help screen patients and verify image-based eligibility. During the study, the CRC interprets the images and records the data to follow the treatment response. Raw data are delivered to the study sponsor for final analysis. Patient eligibility in clinical trials can be site-driven or CRC-driven. Some protocols require only the site to document and verify enrollment criteria. Other protocols rely on the CRC to screen the baseline images to verify eligibility. In the latter instance, the CRC may deem a patient ineligible if the patient does not meet all image-based eligibility criteria or if exclusion criteria are detected.
Clinicians participating in trials invest significant time and effort in recruitment. This is especially true for trials with strict eligibility criteria, for trials in which each site is expected to enroll only a small number of patients, and for trials offering novel, paradigm-changing treatments. Trials involving novel treatments are exciting for both the clinician and the patient. There is emotional buy-in for the study, and it can be disappointing for patients who are told they are eligible during their visit only to discover later that they are ineligible based on image review by the independent CRC. Clinicians working with these patients should be applauded for their management of these difficult situations.
For most patients screened by a CRC, there is concordance with the clinical site. However, disagreements with image interpretation can occur, and they usually fall into the 4 broad categories discussed below.
SEGMENTATION ERRORS
Segmentation algorithms vary among commercially available OCT machines and are responsible for the variability in automated measurements seen between devices.3,4 Segmentation algorithms also differ in performance depending on the type and quality of the scan performed. In a busy clinical practice, it is not feasible to scroll through entire macular volume scans to identify segmentation errors. Figure 1 shows a case in which manual analysis revealed a substantial increase in maximal macular thickness consistent with a segmentation error. Even if the site had identified this segmentation error, manually adjusting segmentation lines could introduce bias into the study. Through more detailed analysis, including evaluation of segmentation and manual measurement of anatomic features, CRCs gain a more comprehensive view that is not readily available to the clinician.
At times, the site recognizes a segmentation error and requests that the CRC manually correct it. Protocols for handling these errors vary among CRCs; however, it is important to limit both the bias that can be introduced when manually adjusting segmentation lines and the inconsistencies that can occur at future visits if the segmentation lines are not placed in the same location.
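For sites or reading centers with programmatic access to exported thickness data, a simple screening step can narrow down where segmentation has likely failed. The Python sketch below is purely illustrative and rests on assumptions not described above: that the automated segmentation has been reduced to a per-A-scan retinal thickness map, and that a 60-micron slice-to-slice jump is a reasonable flag threshold. It marks adjacent B-scan pairs whose thickness changes abruptly, the same signature as the spurious rise in maximal macular thickness in Figure 1.

```python
import numpy as np

def flag_segmentation_errors(thickness_map, max_jump_um=60.0):
    """Flag B-scans whose automated thickness jumps implausibly.

    thickness_map: 2D array (n_bscans x n_ascans) of retinal thickness
    in microns, derived from the device's automated segmentation.
    max_jump_um: largest slice-to-slice change treated as anatomically
    plausible (an illustrative threshold, not a validated cutoff).
    """
    tmap = np.asarray(thickness_map, dtype=float)
    # Absolute thickness change between each pair of adjacent B-scans.
    jumps = np.abs(np.diff(tmap, axis=0))
    # A segmentation failure usually displaces a whole stretch of the
    # boundary, so the worst jump within each slice pair is a good cue.
    worst = jumps.max(axis=1)
    suspect = np.where(worst > max_jump_um)[0].tolist()
    # Return both members of each suspicious pair for manual review.
    return sorted(set(suspect) | {i + 1 for i in suspect})

# Demo: a synthetic 49-slice volume with one corrupted slice.
rng = np.random.default_rng(0)
volume = 280.0 + 5.0 * rng.standard_normal((49, 512))
volume[20] += 150.0  # simulated segmentation failure on slice 20
print(flag_segmentation_errors(volume))  # [19, 20, 21]
```

A screen like this does not replace review; it only reduces a 49-slice volume to the few slices worth opening.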
ERRORS WITH FOVEAL ALIGNMENT
As with segmentation, different OCT machines use different algorithms to estimate the location of the fovea. Incorrect alignment can result in different values for automated macular thickness measures, as depicted in Figure 2. If, for example, the eligibility cutoff for central subfield thickness (CST) is 300 microns, this patient's eligibility would depend on the subjective placement of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid. As with manual adjustments to segmentation errors, careful consideration should be given to manual adjustments of the foveal location to minimize potential bias.
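The sensitivity of CST to grid placement is easy to demonstrate numerically. The sketch below is a hypothetical Python example, assuming an en-face thickness map with known pixel spacing and a synthetic foveal pit; the numbers are illustrative and do not come from Figure 2. It computes CST as the mean thickness inside the 1-mm-diameter central ETDRS subfield and shows that displacing the presumed fovea by roughly 0.5 mm carries the same eye across a 300-micron cutoff.

```python
import numpy as np

def central_subfield_thickness(thickness_map, center_rc, um_per_px):
    """Mean thickness (microns) in the 1-mm-diameter central ETDRS
    subfield, centered at the presumed (row, col) foveal location.

    thickness_map: 2D en-face array of retinal thickness in microns.
    um_per_px: (row, col) pixel spacing of the map in microns.
    """
    rows, cols = np.indices(thickness_map.shape)
    dr = (rows - center_rc[0]) * um_per_px[0]
    dc = (cols - center_rc[1]) * um_per_px[1]
    mask = dr**2 + dc**2 <= 500.0**2  # 500-micron-radius circle
    return float(thickness_map[mask].mean())

# Synthetic foveal pit: thickness rises away from the true fovea.
grid = np.indices((101, 101))
dist = np.hypot(grid[0] - 50, grid[1] - 50) * 60.0  # 60 um/px
tmap = 250.0 + 0.12 * dist  # thinnest at the center

cst_true = central_subfield_thickness(tmap, (50, 50), (60.0, 60.0))
cst_off = central_subfield_thickness(tmap, (50, 58), (60.0, 60.0))
print(round(cst_true), round(cst_off))  # centered ~290; displaced >300
```

In this synthetic example, the correctly centered grid yields a CST near 290 microns, while the displaced grid yields a value above 300 microns, so eligibility flips on grid placement alone.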
DISCREPANCIES DUE TO REVIEW OF MACULAR VOLUME SCANS VS SELECT RASTER SCANS
Many office-based image review software systems do not allow for efficient review of each slice of a macular volume scan at an imaging workstation. Subtle pathology can sometimes be detected only on more careful review of every slice. These discrepancies in interpretation are often easily remedied once identified by either the clinical site or the CRC; however, this remains a common cause of disagreement between clinical and CRC interpretation of OCT scans.
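Where the viewer itself is the obstacle, one pragmatic workaround is to export every slice for systematic flip-through. The short Python sketch below is hypothetical; it assumes the volume can be obtained from the device as a 3-dimensional intensity array (the array shape and output layout are assumptions) and simply writes each B-scan to its own image file so that no slice is skipped during review.

```python
from pathlib import Path

import numpy as np
from PIL import Image

def export_volume_slices(volume, out_dir):
    """Write every B-scan of an OCT volume as an 8-bit PNG.

    volume: 3D array (n_bscans x depth x width) of raw intensities.
    out_dir: directory that will receive one image per slice.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    vol = np.asarray(volume, dtype=float)
    lo, hi = vol.min(), vol.max()
    for i, bscan in enumerate(vol):
        # Normalize each slice into the 0-255 range for viewing.
        img = ((bscan - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)
        Image.fromarray(img).save(out / f"bscan_{i:03d}.png")

# Demo with a synthetic 49-slice volume.
export_volume_slices(np.random.rand(49, 496, 512), "volume_review")
```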
IMAGE INTERPRETATION WITHOUT CLINICAL INFORMATION
By design, graders at CRCs should not have any knowledge of the demographics, clinical history, treatment history, or clinical exam of each patient. This helps to protect the integrity of the study, but it is also a leading source of disagreement, even after careful review by both the clinical site and the CRC. As clinicians, we review images utilizing all the information available. For example, as shown in Figure 3, a patient with longstanding, extensive geographic atrophy (GA) can show severe loss of tissue at the fovea resembling a full-thickness macular hole. The etiology of this defect is different from that of a true macular hole. However, without the full clinical picture, a grader at a CRC must grade the image as a full-thickness macular hole. The GA is obvious on OCT, but the clinical history is not considered; this same patient could have had a macular hole prior to the development of GA and ended up with a similar appearance.
To maintain consistency, the CRC should not consider the clinical history and should grade the OCT based on appearance alone. Often in these cases, the CRC agrees with the clinical assessment but cannot pass the image because of the strict grading rubric required by the study. Clinicians managing the patient have the complete clinical picture, and their clinical diagnosis therefore should not be superseded by that of the CRC. Disagreements between the CRC and the study site are not meant to call into question the clinical assessment of the managing physician. In most cases, the disagreement stems from the strict grading protocols set up by CRCs, which are designed to limit variability and bias while maximizing reproducibility among multiple graders.
SUMMARY
These categories are common sources of disagreement between clinic-based and CRC assessments of OCT scans. Discrepancies due to incorrect foveal alignment, incorrect segmentation, and incomplete review of the macular volume are more easily reconciled because of their objective nature.
The last category is the most difficult to reconcile because knowledge of the clinical history often influences our interpretation of images. For example, subretinal hyper-reflective material (SHRM) in a patient with longstanding but stable vision loss and no evidence of leakage on fluorescein angiography could be interpreted as scar tissue rather than choroidal neovascularization (CNV). If the CRC is tasked with determining eligibility for this patient and one of the exclusion criteria is the presence of CNV on OCT, the CRC must consider the SHRM to be evidence of CNV and recommend that the patient be excluded. In this scenario, the absence of clinical information is an obvious disadvantage when it comes to making a clinical diagnosis. Other disadvantages include the inability to examine the patient concurrently and the absence of surgical history. These disadvantages relate to making a proper clinical diagnosis but do not affect the ability to objectively identify image features.
Despite these disadvantages, there are benefits to interpreting images based on strict criteria without considering clinical information or history. CRC image interpretation results in more consistent, unbiased data and is vital to preserving the integrity of clinical trials. Clearly defined image features and measurement guidelines allow data to be compared directly and unambiguously between patients and across visits. Intergrader variability is also minimized by relying on strict definitions based only on the image.
When disagreements cannot be resolved, most CRCs have a process to review disputed images as part of their SOPs. During this review process, a senior grader or physician reviews the images, and the clinical history is sometimes revealed by the study site as evidence for its interpretation. However, it is important to emphasize that this review is based solely on the image features and should not factor in the clinical information. The information is also not shared with the graders, so that the masked nature of CRC image grading is maintained. Following this review, the CRC will make a recommendation based on its image interpretation, and the final decision is often left to the study sponsor. RP
REFERENCES
1. Adhi M, Duker JS. Optical coherence tomography: current and future applications. Curr Opin Ophthalmol. 2013;24(3):213-221.
2. Tan CS, Sadda SR. The role of central reading centers: current practices and future directions. Indian J Ophthalmol. 2015;63(5):404-405.
3. Sander B, Al-Abiji HA, Kofod M, Jorgensen TM. Do different spectral domain OCT hardwares measure the same? Comparison of retinal thickness using third-party software. Graefes Arch Clin Exp Ophthalmol. 2015;253(11):1915-1921.
4. Waldstein SM, Gerendas BS, Montuoro A, Simader C, Schmidt-Erfurth U. Quantitative comparison of macular segmentation performance using identical retinal regions across multiple spectral-domain optical coherence tomography instruments. Br J Ophthalmol. 2015;99(6):794-800.