

The importance of spatially-localized and quantified image interpretation in cancer management

Writer & reporter: Dror Nir, PhD

I became involved in the development of quantified imaging-based tissue characterization more than a decade ago. From the start, it was clear to me that what clinicians need would not be answered merely by identifying whether a certain organ harbors cancer. If imaging devices are to play a significant role in future medicine, as a complementary source of information to biomarkers and gene sequencing, the minimum value expected of them is accurate direction of biopsy needles and treatment tools to the malignant locations in the organ. Therefore, in designing the first Prostate HistoScanning ("PHS") version, I went to the trouble of characterizing localized volumes of tissue at the level of approximately 0.1 cc (1 x 1 x 1 mm). Thanks to that, the imaging-interpretation overlay of PHS localizes suspicious lesions with an accuracy of 5 mm within the prostate gland; see Detection, localisation and characterisation of prostate cancer by prostate HistoScanning™.

I then started more ambitious research aimed at exploring the feasibility of identifying sub-structures within the cancer lesion itself. The preliminary results of this exploration were so promising that they surprised not only the clinicians I was working with but also myself. It seems that, using quality ultrasound, one can find imaging biomarkers that allow differentiation of the internal structures of a cancerous lesion. Unfortunately for everyone involved in this work, including me, this scientific effort was interrupted by financial constraints before reaching maturity.

This short introduction explains why I find the publication below important enough to post and bring to your attention.

I hope for your agreement on the matter.

Quantitative Imaging in Cancer Evolution and Ecology

Robert A. Gatenby, MD, Olya Grove, PhD and Robert J. Gillies, PhD

From the Departments of Radiology and Cancer Imaging and Metabolism, Moffitt Cancer Center, 12902 Magnolia Dr, Tampa, FL 33612. Address correspondence to R.A.G. (e-mail: Robert.Gatenby@Moffitt.org).

Abstract

Cancer therapy, even when highly targeted, typically fails because of the remarkable capacity of malignant cells to evolve effective adaptations. These evolutionary dynamics are both a cause and a consequence of cancer system heterogeneity at many scales, ranging from genetic properties of individual cells to large-scale imaging features. Tumors of the same organ and cell type can have remarkably diverse appearances in different patients. Furthermore, even within a single tumor, marked variations in imaging features, such as necrosis or contrast enhancement, are common. Similar spatial variations recently have been reported in genetic profiles. Radiologic heterogeneity within tumors is usually governed by variations in blood flow, whereas genetic heterogeneity is typically ascribed to random mutations. However, evolution within tumors, as in all living systems, is subject to Darwinian principles; thus, it is governed by predictable and reproducible interactions between environmental selection forces and cell phenotype (not genotype). This link between regional variations in environmental properties and cellular adaptive strategies may permit clinical imaging to be used to assess and monitor intratumoral evolution in individual patients. This approach is enabled by new methods that extract, report, and analyze quantitative, reproducible, and mineable clinical imaging data. However, most current quantitative metrics lack spatialness, expressing quantitative radiologic features as a single value for a region of interest encompassing the whole tumor. In contrast, spatially explicit image analysis recognizes that tumors are heterogeneous but not well mixed and defines regionally distinct habitats, some of which appear to harbor tumor populations that are more aggressive and less treatable than others. By identifying regional variations in key environmental selection forces and evidence of cellular adaptation, clinical imaging can enable us to define intratumoral Darwinian dynamics before and during therapy. Advances in image analysis will place clinical imaging in an increasingly central role in the development of evolution-based patient-specific cancer therapy.

© RSNA, 2013

 

Introduction

Cancers are heterogeneous across a wide range of temporal and spatial scales. Morphologic heterogeneity between and within cancers is readily apparent in clinical imaging, and subjective descriptors of these differences, such as necrotic, spiculated, and enhancing, are common in the radiology lexicon. In the past several years, radiology research has increasingly focused on quantifying these imaging variations in an effort to understand their clinical and biologic implications (1,2). In parallel, technical advances now permit extensive molecular characterization of tumor cells in individual patients. This has led to increasing emphasis on personalized cancer therapy, in which treatment is based on the presence of specific molecular targets (3). However, recent studies (4,5) have shown that multiple genetic subpopulations coexist within cancers, reflecting extensive intratumoral somatic evolution. This heterogeneity is a clear barrier to therapy based on molecular targets, since the identified targets do not always represent the entire population of tumor cells in a patient (6,7). It is ironic that cancer, a disease extensively and primarily analyzed genetically, is also the most genetically flexible of all diseases and, therefore, least amenable to such an approach.

Genetic variations in tumors are typically ascribed to a mutator phenotype that generates new clones, some of which expand into large populations (8). However, although identification of genotypes is of substantial interest, it is insufficient for complete characterization of tumor dynamics because evolution is governed by the interactions of environmental selection forces with the phenotypic, not genotypic, properties of populations as shown, for example, by evolutionary convergence to identical phenotypes among cave fish even when they are from different species (9–11). This connection between tissue selection forces and cellular properties has the potential to provide a strong bridge between medical imaging and the cellular and molecular properties of cancers.

We postulate that differences within tumors at different spatial scales (ie, at the radiologic, cellular, and molecular [genetic] levels) are related. Tumor characteristics observable at clinical imaging reflect molecular-, cellular-, and tissue-level dynamics; thus, they may be useful in understanding the underlying evolving biology in individual patients. A challenge is that such mapping across spatial and temporal scales requires not only objective reproducible metrics for imaging features but also a theoretical construct that bridges those scales (Fig 1).


Figure 1a: Computed tomographic (CT) scan of right upper lobe lung cancer in a 50-year-old woman.


Figure 1b: Isoattenuation map shows regional heterogeneity at the tissue scale (measured in centimeters).


Figure 1c & 1d: (c, d) Whole-slide digital images (original magnification, ×3) of a histologic slice of the same tumor at the mesoscopic scale (measured in millimeters) (c) coupled with a masked image of regional morphologic differences showing spatial heterogeneity (d).


Figure 1e: Subsegment of the whole slide image shows the microscopic scale (measured in micrometers) (original magnification, ×50).


Figure 1f: Pattern recognition masked image shows regional heterogeneity. In a, the CT image of non–small cell lung cancer can be analyzed to display gradients of attenuation, which reveals heterogeneous and spatially distinct environments (b). Histologic images in the same patient (c, e) reveal heterogeneities in tissue structure and density on the same scale as seen in the CT images. These images can be analyzed at much higher definition to identify differences in morphologies of individual cells (3), and these analyses reveal clusters of cells with similar morphologic features (d, f). An important goal of radiomics is to bridge radiologic data with cellular and molecular characteristics observed microscopically.

To promote the development and implementation of quantitative imaging methods, protocols, and software tools, the National Cancer Institute has established the Quantitative Imaging Network. One goal of this program is to identify reproducible quantifiable imaging features of tumors that will permit data mining and explicit examination of links between the imaging findings and the underlying molecular and cellular characteristics of the tumors. In the quest for more personalized cancer treatments, these quantitative radiologic features potentially represent nondestructive temporally and spatially variable predictive and prognostic biomarkers that readily can be obtained in each patient before, during, and after therapy.

Quantitative imaging requires computational technologies that can be used to reliably extract mineable data from radiographic images. This feature information can then be correlated with molecular and cellular properties by using bioinformatics methods. Most existing methods are agnostic and focus on statistical descriptions of existing data, without presupposing the existence of specific relationships. Although this is a valid approach, a more profound understanding of quantitative imaging information may be obtained with a theoretical hypothesis-driven framework. Such models use links between observable tumor characteristics and microenvironmental selection factors to make testable predictions about emergent phenotypes. One such theoretical framework is the developing paradigm of cancer as an ecologic and evolutionary process.

For decades, landscape ecologists have studied the effects of heterogeneity in physical features on interactions between populations of organisms and their environments, often by using observation and quantification of images at various scales (12–14). We propose that analytic models of this type can easily be applied to radiologic studies of cancer to uncover underlying molecular, cellular, and microenvironmental drivers of tumor behavior and, specifically, tumor adaptations and responses to therapy (15).

In this article, we review recent developments in quantitative imaging metrics and discuss how they correlate with underlying genetic data and clinical outcomes. We then introduce the concept of using ecology and evolutionary models for spatially explicit image analysis as an exciting potential avenue of investigation.

 

Quantitative Imaging and Radiomics

In patients with cancer, quantitative measurements are commonly limited to measurement of tumor size with one-dimensional (Response Evaluation Criteria in Solid Tumors [or RECIST]) or two-dimensional (World Health Organization) long-axis measurements (16). These measures do not reflect the complexity of tumor morphology or behavior, and in many cases, changes in these measures are not predictive of therapeutic benefit (17). In contrast, radiomics (18) is a high-throughput process in which a large number of shape, edge, and texture imaging features are extracted, quantified, and stored in databases in an objective, reproducible, and mineable form (Figs 1, 2). Once transformed into a quantitative form, radiologic tumor properties can be linked to underlying genetic alterations (the field is called radiogenomics) (19–21) and to medical outcomes (22–27). Researchers are currently working to develop both a standardized lexicon to describe tumor features (28,29) and a standard method to convert these descriptors into quantitative mineable data (30,31) (Fig 3).
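To make the idea of mineable feature data concrete, here is a minimal sketch (Python/NumPy; the array names, bin count, and feature set are illustrative assumptions, not the pipeline used by the authors) that computes a few first-order radiomic features from a segmented tumor and returns them as a single database-ready record:

```python
import numpy as np

def radiomic_features(image, mask, voxel_volume_mm3=1.0):
    """Compute a few first-order radiomic features from a segmented tumor.

    image : 2D/3D array of intensities (e.g., CT attenuation in HU)
    mask  : boolean array of the same shape marking tumor voxels
    """
    voxels = image[mask]                          # intensities inside the tumor only
    counts, _ = np.histogram(voxels, bins=64)
    p = counts[counts > 0] / counts.sum()         # normalized intensity histogram
    return {
        "volume_mm3": float(mask.sum() * voxel_volume_mm3),
        "mean_intensity": float(voxels.mean()),
        "std_intensity": float(voxels.std()),
        "skewness": float(((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3),
        "histogram_entropy": float(-(p * np.log2(p)).sum()),
    }

# Toy usage with synthetic data standing in for a CT volume and its segmentation
rng = np.random.default_rng(0)
img = rng.normal(-600, 150, size=(64, 64, 32))    # lung-like attenuation values
roi = np.zeros(img.shape, dtype=bool)
roi[20:40, 20:40, 10:20] = True
print(radiomic_features(img, roi))                # one mineable record per tumor
```

In a real radiomics pipeline, records of this kind, typically with hundreds of shape, edge, and texture features per tumor, are what get stored and later mined against genetic and outcome data.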


Figure 2: Contrast-enhanced CT scans show non–small cell lung cancer (left) and corresponding cluster map (right). Subregions within the tumor are identified by clustering pixels based on the attenuation of pixels and their cumulative standard deviation across the region. While the entire region of interest of the tumor, lacking the spatial information, yields a weighted mean attenuation of 859.5 HU with a large and skewed standard deviation of 243.64 HU, the identified subregions have vastly different statistics. Mean attenuation was 438.9 HU ± 45 in the blue subregion, 210.91 HU ± 79 in the yellow subregion, and 1077.6 HU ± 18 in the red subregion.

 


Figure 3: Chart shows the five processes in radiomics.

Several recent articles underscore the potential power of feature analysis. After manually extracting more than 100 CT image features, Segal and colleagues found that a subset of 14 features predicted 80% of the gene expression pattern in patients with hepatocellular carcinoma (21). A similar extraction of features from contrast agent–enhanced magnetic resonance (MR) images of glioblastoma was used to predict immunohistochemically identified protein expression patterns (22). Other radiomic features, such as texture, can be used to predict response to therapy in patients with renal cancer (32) and prognosis in those with metastatic colon cancer (33).

These pioneering studies were relatively small because the image analysis was performed manually, and the studies were consequently underpowered. Thus, recent work in radiomics has focused on technical developments that permit automated extraction of image features with the potential for high throughput. Such methods, which rely heavily on novel machine learning algorithms, can more completely cover the range of quantitative features that can describe tumor heterogeneity, such as texture, shape, or margin gradients or, importantly, different environments, or niches, within the tumors.

Generally speaking, texture in a biomedical image is quantified by identifying repeating patterns. Texture analyses fall into two broad categories based on the concepts of first- and second-order spatial statistics. First-order statistics are computed by using individual pixel values, and no relationships between neighboring pixels are assumed or evaluated. Texture analysis methods based on first-order statistics usually involve calculating cumulative statistics of pixel values and their histograms across the region of interest. Second-order statistics, on the other hand, are used to evaluate the likelihood of observing spatially correlated pixels (34). Hence, second-order texture analyses focus on the detection and quantification of nonrandom distributions of pixels throughout the region of interest.
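As a concrete illustration of the difference between the two categories, the following minimal sketch (plain NumPy; the quantization level, offset, and array names are assumptions for illustration) computes first-order statistics directly from the pixel values and a simple gray level co-occurrence matrix (GLCM) for one pixel offset, from which second-order measures such as contrast and energy are derived:

```python
import numpy as np

def first_order_stats(roi):
    """First-order statistics: computed from individual pixel values only."""
    return {"mean": roi.mean(), "variance": roi.var(),
            "min": roi.min(), "max": roi.max()}

def glcm(roi, levels=8, offset=(0, 1)):
    """Gray level co-occurrence matrix for one spatial offset (second-order).

    Pixels are quantized to `levels` gray levels; entry (i, j) counts how often
    a pixel of level i has a neighbor of level j at the given offset.
    """
    q = np.floor((roi - roi.min()) / (np.ptp(roi) + 1e-9) * (levels - 1)).astype(int)
    dr, dc = offset
    a = q[max(0, -dr):q.shape[0] - max(0, dr), max(0, -dc):q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][:a.shape[0], :a.shape[1]]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)       # accumulate co-occurrence counts
    return m / m.sum()                            # normalize to joint probabilities

roi = np.random.default_rng(1).integers(0, 256, size=(32, 32)).astype(float)
P = glcm(roi)
i, j = np.indices(P.shape)
print(first_order_stats(roi))
print("GLCM contrast:", (P * (i - j) ** 2).sum(), "energy:", (P ** 2).sum())
```

The first function ignores where pixels sit; the co-occurrence matrix is what makes the analysis sensitive to nonrandom spatial arrangements of pixel values.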

The technical developments that permit second-order texture analysis in tumors by using regional enhancement patterns on dynamic contrast-enhanced MR images were reviewed recently (35). One such technique that is used to measure heterogeneity of contrast enhancement uses the Factor Analysis of Medical Image Sequences (or FAMIS) algorithm, which divides tumors into regions based on their patterns of enhancement (36). Factor Analysis of Medical Image Sequences–based analyses yielded better prognostic information when compared with region of interest–based methods in numerous cancer types (19–21,37–39), and they were a precursor to the Food and Drug Administration–approved three-time-point method (40). A number of additional promising methods have been developed. Rose and colleagues showed that a structured fractal-based approach to texture analysis improved differentiation between low- and high-grade brain cancers by orders of magnitude (41). Ahmed and colleagues used gray level co-occurrence matrix analyses of dynamic contrast-enhanced images to distinguish benign from malignant breast masses with high diagnostic accuracy (area under the receiver operating characteristic curve, 0.92) (26). Others have shown that Minkowski functional structured methods that convolve images with differently kernelled masks can be used to distinguish subtle differences in contrast enhancement patterns and can enable significant differentiation between treatment groups (42).

It is not surprising that analyses of heterogeneity in enhancement patterns can improve diagnosis and prognosis, as this heterogeneity is fundamentally based on perfusion deficits, which generate significant microenvironmental selection pressures. However, texture analysis is not limited to enhancement patterns. For example, measures of heterogeneity in diffusion-weighted MR images can reveal differences in cellular density in tumors, which can be matched to histologic findings (43). Measures of heterogeneity in T1- and T2-weighted images can be used to distinguish benign from malignant soft-tissue masses (23). CT-based texture features have been shown to be highly significant independent predictors of survival in patients with non–small cell lung cancer (24).

Texture analyses can also be applied to positron emission tomographic (PET) data, where they can provide information about metabolic heterogeneity (25,26). In a recent study, Nair and colleagues identified 14 quantitative PET imaging features that correlated with gene expression (19). This led to an association of metagene clusters to imaging features and yielded prognostic models with hazard ratios near 6. In a study of esophageal cancer, in which 38 quantitative features describing fluorodeoxyglucose uptake were extracted, measures of metabolic heterogeneity at baseline enabled prediction of response with significantly higher sensitivity than any whole region of interest standardized uptake value measurement (22). It is also notable that these extensive texture-based features are generally more reproducible than simple measures of the standardized uptake value (27), which can be highly variable in a clinical setting (44).

 

Spatially Explicit Analysis of Tumor Heterogeneity

Although radiomic analyses have shown high prognostic power, they are not inherently spatially explicit. Quantitative border, shape, and texture features are typically generated over a region of interest that comprises the entire tumor (45). This approach implicitly assumes that tumors are heterogeneous but well mixed. However, spatially explicit subregions of cancers are readily apparent on contrast-enhanced MR or CT images, as perfusion can vary markedly within the tumor, even over short distances, with changes in tumor cell density and necrosis.

An example is shown in Figure 2, which shows a contrast-enhanced CT scan of non–small cell lung cancer. Note that there are many subregions within this tumor that can be identified with attenuation gradient (attenuation per centimeter) edge detection algorithms. Each subregion has a characteristic quantitative attenuation, with a narrow standard deviation, whereas the mean attenuation over the entire region of interest is a weighted average of the values across all subregions, with a correspondingly large and skewed distribution. We contend that these subregions represent distinct habitats within the tumor, each with a distinct set of environmental selection forces.
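A minimal sketch of this kind of subregion analysis is given below (Python, with synthetic data; it clusters pixels on attenuation alone with k-means, whereas the approach described above also uses spatial attenuation-gradient edge detection, so treat it only as an illustration of how subregion statistics differ from whole-ROI statistics):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic "tumor ROI": three intermixed attenuation populations (HU values)
roi_pixels = np.concatenate([rng.normal(-450, 40, 800),
                             rng.normal(-200, 70, 600),
                             rng.normal(100, 30, 400)])

# Whole-ROI statistics hide the structure
print(f"whole ROI: mean={roi_pixels.mean():.1f} HU, sd={roi_pixels.std():.1f} HU")

# Cluster pixels by attenuation into candidate subregions (habitats)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    roi_pixels.reshape(-1, 1))
for k in range(3):
    sub = roi_pixels[labels == k]
    print(f"subregion {k}: mean={sub.mean():.1f} HU, sd={sub.std():.1f} HU, n={sub.size}")
```

Each cluster reports a tight, distinct mean, while the whole-ROI summary is a broad weighted average, which is exactly the contrast highlighted in Figure 2.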

These observations, along with the recent identification of regional variations in the genetic properties of tumor cells, indicate the need to abandon the conceptual model of cancers as bounded organlike structures. Rather than a single self-organized system, cancers represent a patchwork of habitats, each with a unique set of environmental selection forces and cellular evolution strategies. For example, regions of the tumor that are poorly perfused can be populated by only those cells that are well adapted to low-oxygen, low-glucose, and high-acid environmental conditions. Such adaptive responses to regional heterogeneity result in microenvironmental selection and hence, emergence of genetic variations within tumors. The concept of adaptive response is an important departure from the traditional view that genetic heterogeneity is the product of increased random mutations, which implies that molecular heterogeneity is fundamentally unpredictable and, therefore, chaotic. The Darwinian model proposes that genetic heterogeneity is the result of a predictable and reproducible selection of successful adaptive strategies to local microenvironmental conditions.

Current cross-sectional imaging modalities can be used to identify regional variations in selection forces by using contrast-enhanced, cell density–based, or metabolic features. Clinical imaging can also be used to identify evidence of cellular adaptation. For example, if a region of low perfusion on a contrast-enhanced study is necrotic, then an adaptive population is absent or minimal. However, if the poorly perfused area is cellular, then there is presumptive evidence of an adapted proliferating population. While the specific genetic properties of this population cannot be determined, the phenotype of the adaptive strategy is predictable since the environmental conditions are more or less known. Thus, standard medical images can be used to infer specific emergent phenotypes and, with ongoing research, these phenotypes can be associated with underlying genetic changes.

This area of investigation will likely be challenging. As noted earlier, the most obvious spatially heterogeneous imaging feature in tumors is perfusion heterogeneity on contrast-enhanced CT or MR images. It generally has been assumed that the links between contrast enhancement, blood flow, perfusion, and tumor cell characteristics are straightforward. That is, tumor regions with decreased blood flow will exhibit low perfusion, low cell density, and high necrosis. In reality, however, the dynamics are actually much more complex. As shown in Figure 4, when using multiple superimposed sequences from MR imaging of malignant gliomas, regions of tumor that are poorly perfused on contrast-enhanced T1-weighted images may exhibit areas of low or high water content on T2-weighted images and low or high diffusion on diffusion-weighted images. Thus, high or low cell densities can coexist in poorly perfused volumes, creating perfusion-diffusion mismatches. Regions with poor perfusion with high cell density are of particular clinical interest because they represent a cell population that is apparently adapted to microenvironmental conditions associated with poor perfusion. The associated hypoxia, acidosis, and nutrient deprivation select for cells that are resistant to apoptosis and thus are likely to be resistant to therapy (46,47).


Figure 4: Left: Contrast-enhanced T1 image from subject TCGA-02-0034 in The Cancer Genome Atlas–Glioblastoma Multiforme repository of MR volumes of glioblastoma multiforme cases. Right: Spatial distribution of MR imaging–defined habitats within the tumor. The blue region (low T1 postgadolinium, low fluid-attenuated inversion recovery) is particularly notable because it presumably represents a habitat with low blood flow but high cell density, indicating a population presumably adapted to hypoxic acidic conditions.

Furthermore, other selection forces not related to perfusion are likely to be present within tumors. For example, evolutionary models suggest that cancer cells, even in stable microenvironments, tend to speciate into “engineers” that maximize tumor cell growth by promoting angiogenesis and “pioneers” that proliferate by invading normal tissue and co-opting the blood supply. These invasive tumor phenotypes can exist only at the tumor edge, where movement into a normal tissue microenvironment can be rewarded by increased proliferation. This evolutionary dynamic may contribute to distinct differences between the tumor edges and the tumor cores, which frequently can be seen at analysis of cross-sectional images (Fig 5).


Figure 5a: CT images obtained with conventional entropy filtering in two patients with non–small cell lung cancer with no apparent textural differences show similar entropy values across all sections. 


Figure 5b: Contour plots obtained after the CT scans were convolved with the entropy filter. Further subdividing each section in the tumor stack into tumor edge and core regions (dotted black contour) reveals varying textural behavior across sections. Two distinct patterns have emerged, and preliminary analysis shows that the change of mean entropy value between core and edge regions correlates negatively with survival.
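The following sketch illustrates the general idea behind such an entropy-based core-versus-edge comparison (a SciPy-based local entropy filter on a synthetic image; the neighborhood size, bin count, and core/edge definitions are assumptions, not the authors' actual protocol):

```python
import numpy as np
from scipy import ndimage

def local_entropy(image, size=5, bins=16):
    """Replace each pixel with the Shannon entropy of its size x size neighborhood."""
    q = np.floor((image - image.min()) / (np.ptp(image) + 1e-9) * (bins - 1))

    def _entropy(window):
        counts = np.bincount(window.astype(int), minlength=bins)
        p = counts[counts > 0] / counts.sum()
        return -(p * np.log2(p)).sum()

    return ndimage.generic_filter(q, _entropy, size=size)

# Synthetic circular "tumor" on a zero background (placeholder for a CT section)
rng = np.random.default_rng(3)
mask = np.fromfunction(lambda r, c: ((r - 40) ** 2 + (c - 40) ** 2) < 30 ** 2, (80, 80))
img = rng.normal(0, 1, (80, 80)) * mask

core = ndimage.binary_erosion(mask, iterations=8)    # eroded tumor core
edge = mask & ~core                                  # peripheral rim

ent = local_entropy(img)
print("mean entropy, core:", ent[core].mean(), "edge:", ent[edge].mean())
```

The quantity of interest in the figure is the difference in mean entropy between the core and edge regions across sections, which the preliminary analysis relates to survival.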

Interpretation of the subsegmentation of tumors will require computational models to understand and predict the complex nonlinear dynamics that lead to heterogeneous combinations of radiographic features. We have exploited ecologic methods and models to investigate regional variations in cancer environmental and cellular properties that lead to specific imaging characteristics. Conceptually, this approach assumes that regional variations in tumors can be viewed as a coalition of distinct ecologic communities or habitats of cells in which the environment is governed, at least to first order, by variations in vascular density and blood flow. The environmental conditions that result from alterations in blood flow, such as hypoxia, acidosis, immune response, growth factors, and glucose, represent evolutionary selection forces that give rise to local-regional phenotypic adaptations. Phenotypic alterations can result from epigenetic, genetic, or chromosomal rearrangements, and these in turn will affect prognosis and response to therapy. Changes in habitats or the relative abundance of specific ecologic communities over time and in response to therapy may be a valuable metric with which to measure treatment efficacy and emergence of resistant populations.

 

Emerging Strategies for Tumor Habitat Characterization

A method for converting images to spatially explicit tumor habitats is shown in Figure 4. Here, three-dimensional MR imaging data sets from a glioblastoma are segmented. Each voxel in the tumor is defined by a scale that includes its image intensity in different sequences. In this case, the imaging sets are from (a) a contrast-enhanced T1 sequence, (b) a fast spin-echo T2 sequence, and (c) a fluid-attenuated inversion-recovery (or FLAIR) sequence. Voxels in each sequence can be defined as high or low based on their value compared with the mean signal value. By using just two sequences, a contrast-enhanced T1 sequence and a fluid-attenuated inversion-recovery sequence, we can define four habitats: high or low postgadolinium T1 divided into high or low fluid-attenuated inversion recovery. When these voxel habitats are projected into the tumor volume, we find they cluster into spatially distinct regions. These habitats can be evaluated both in terms of their relative contributions to the total tumor volume and in terms of their interactions with each other, based on the imaging characteristics at the interfaces between regions. Similar spatially explicit analysis can be performed with CT scans (Fig 5).
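A minimal sketch of this habitat construction is shown below (NumPy, with toy volumes standing in for co-registered sequences; the thresholding rule and array names are assumptions based on the description above, not the authors' implementation):

```python
import numpy as np

def define_habitats(t1_post, flair, tumor_mask):
    """Assign each tumor voxel to one of four habitats from two MR sequences.

    Voxels are called 'high' or 'low' on each sequence relative to the mean
    signal inside the tumor, then combined into 2 x 2 = 4 habitat labels:
      0: low T1-post  / low FLAIR     2: high T1-post / low FLAIR
      1: low T1-post  / high FLAIR    3: high T1-post / high FLAIR
    """
    t1_high = t1_post > t1_post[tumor_mask].mean()
    fl_high = flair > flair[tumor_mask].mean()
    habitats = np.full(tumor_mask.shape, -1, dtype=int)   # -1 = outside the tumor
    habitats[tumor_mask] = (2 * t1_high + fl_high)[tumor_mask]
    return habitats

# Toy volumes standing in for co-registered contrast-enhanced T1 and FLAIR data
rng = np.random.default_rng(4)
shape = (32, 32, 16)
t1, fl = rng.random(shape), rng.random(shape)
mask = np.zeros(shape, dtype=bool)
mask[8:24, 8:24, 4:12] = True

hab = define_habitats(t1, fl, mask)
volumes = {k: int((hab == k).sum()) for k in range(4)}
print("voxels per habitat:", volumes)   # relative contribution of each habitat
```

On real data, the resulting habitat labels cluster into spatially contiguous regions, whose volumes and interfaces can then be tracked over time and during therapy.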

Analysis of spatial patterns in cross-sectional images will ultimately require methods that bridge spatial scales from microns to millimeters. One possible method is a general class of numeric tools that is already widely used in terrestrial and marine ecology research to link species occurrence or abundance with environmental parameters. Species distribution models (48–51) are used to gain ecologic and evolutionary insights and to predict distributions of species or morphs across landscapes, sometimes extrapolating in space and time. They can easily be used to link the environmental selection forces in MR imaging–defined habitats to the evolutionary dynamics of cancer cells.

Summary

Imaging can have an enormous role in the development and implementation of patient-specific therapies in cancer. The achievement of this goal will require new methods that expand and ultimately replace the current subjective qualitative assessments of tumor characteristics. The need for quantitative imaging has been clearly recognized by the National Cancer Institute and has resulted in formation of the Quantitative Imaging Network. A critical objective of this imaging consortium is to use objective, reproducible, and quantitative feature metrics extracted from clinical images to develop patient-specific imaging-based prognostic models and personalized cancer therapies.

It is increasingly clear that tumors are not homogeneous organlike systems. Rather, they contain regional coalitions of ecologic communities that consist of evolving cancer, stroma, and immune cell populations. The clinical consequence of such niche variations is that spatial and temporal variations of tumor phenotypes will inevitably evolve and present substantial challenges to targeted therapies. Hence, future research in cancer imaging will likely focus on spatially explicit analysis of tumor regions.

Clinical imaging can readily characterize regional variations in blood flow, cell density, and necrosis. When viewed in a Darwinian evolutionary context, these features reflect regional variations in environmental selection forces and can, at least in principle, be used to predict the likely adaptive strategies of the local cancer population. Hence, analyses of radiologic data can be used to inform evolutionary models and then can be mapped to regional population dynamics. Ecologic and evolutionary principles may provide a theoretical framework to link imaging to the cellular and molecular features of cancer cells and ultimately lead to a more comprehensive understanding of specific cancer biology in individual patients.

 

Essentials

  • Marked heterogeneity in genetic properties of different cells in the same tumor is typical and reflects ongoing intratumoral evolution.
  • Evolution within tumors is governed by Darwinian dynamics, with identifiable environmental selection forces that interact with phenotypic (not genotypic) properties of tumor cells in a predictable and reproducible manner; clinical imaging is uniquely suited to measure temporal and spatial heterogeneity within tumors that is both a cause and a consequence of this evolution.
  • Subjective radiologic descriptors of cancers are inadequate to capture this heterogeneity and must be replaced by quantitative metrics that enable statistical comparisons between features describing intratumoral heterogeneity and clinical outcomes and molecular properties.
  • Spatially explicit mapping of tumor regions, for example by superimposing multiple imaging sequences, may permit patient-specific characterization of intratumoral evolution and ecology, leading to patient- and tumor-specific therapies.
  • We summarize current information on quantitative analysis of radiologic images and propose that future quantitative imaging must become spatially explicit to identify intratumoral habitats before and during therapy.

Disclosures of Conflicts of Interest: R.A.G. No relevant conflicts of interest to disclose. O.G. No relevant conflicts of interest to disclose. R.J.G. No relevant conflicts of interest to disclose.

 

Acknowledgments

The authors thank Mark Lloyd, MS; Joel Brown, PhD; Dmitry Goldgof, PhD; and Larry Hall, PhD, for their input to image analysis and for their lively and informative discussions.

Footnotes

  • Received December 18, 2012; revision requested February 5, 2013; revision received March 11; accepted April 9; final version accepted April 29.
  • Funding: This research was supported by the National Institutes of Health (grants U54CA143970-01, U01CA143062, R01CA077575, and R01CA170595).

References

1. Kurland BF, Gerstner ER, Mountz JM, et al. Promise and pitfalls of quantitative imaging in oncology clinical trials. Magn Reson Imaging 2012;30(9):1301–1312.
2. Levy MA, Freymann JB, Kirby JS, et al. Informatics methods to enable sharing of quantitative imaging research data. Magn Reson Imaging 2012;30(9):1249–1256.
3. Mirnezami R, Nicholson J, Darzi A. Preparing for precision medicine. N Engl J Med 2012;366(6):489–491.
4. Yachida S, Jones S, Bozic I, et al. Distant metastasis occurs late during the genetic evolution of pancreatic cancer. Nature 2010;467(7319):1114–1117.
5. Gerlinger M, Rowan AJ, Horswell S, et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med 2012;366(10):883–892.
6. Gerlinger M, Swanton C. How Darwinian models inform therapeutic failure initiated by clonal heterogeneity in cancer medicine. Br J Cancer 2010;103(8):1139–1143.
7. Kern SE. Why your new cancer biomarker may never work: recurrent patterns and remarkable diversity in biomarker failures. Cancer Res 2012;72(23):6097–6101.
8. Nowell PC. The clonal evolution of tumor cell populations. Science 1976;194(4260):23–28.
9. Greaves M, Maley CC. Clonal evolution in cancer. Nature 2012;481(7381):306–313.
10. Vincent TL, Brown JS. Evolutionary game theory, natural selection and Darwinian dynamics. Cambridge, England: Cambridge University Press, 2005.
11. Gatenby RA, Gillies RJ. A microenvironmental model of carcinogenesis. Nat Rev Cancer 2008;8(1):56–61.
12. Bowers MA, Matter SF. Landscape ecology of mammals: relationships between density and patch size. J Mammal 1997;78(4):999–1013.
13. Dorner BK, Lertzman KP, Fall J. Landscape pattern in topographically complex landscapes: issues and techniques for analysis. Landscape Ecol 2002;17(8):729–743.
14. González-García I, Solé RV, Costa J. Metapopulation dynamics and spatial heterogeneity in cancer. Proc Natl Acad Sci U S A 2002;99(20):13085–13089.
15. Patel LR, Nykter M, Chen K, Zhang W. Cancer genome sequencing: understanding malignancy as a disease of the genome, its conformation, and its evolution. Cancer Lett 2012 Oct 27. [Epub ahead of print]
16. Jaffe CC. Measures of response: RECIST, WHO, and new alternatives. J Clin Oncol 2006;24(20):3245–3251.
17. Burton A. RECIST: right time to renovate? Lancet Oncol 2007;8(6):464–465.
18. Lambin P, Rios-Velazquez E, Leijenaar R, et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer 2012;48(4):441–446.
19. Nair VS, Gevaert O, Davidzon G, et al. Prognostic PET 18F-FDG uptake imaging features are associated with major oncogenomic alterations in patients with resected non-small cell lung cancer. Cancer Res 2012;72(15):3725–3734.
20. Diehn M, Nardini C, Wang DS, et al. Identification of noninvasive imaging surrogates for brain tumor gene-expression modules. Proc Natl Acad Sci U S A 2008;105(13):5213–5218.
21. Segal E, Sirlin CB, Ooi C, et al. Decoding global gene expression programs in liver cancer by noninvasive imaging. Nat Biotechnol 2007;25(6):675–680.
22. Tixier F, Le Rest CC, Hatt M, et al. Intratumor heterogeneity characterized by textural features on baseline 18F-FDG PET images predicts response to concomitant radiochemotherapy in esophageal cancer. J Nucl Med 2011;52(3):369–378.
23. Pang KK, Hughes T. MR imaging of the musculoskeletal soft tissue mass: is heterogeneity a sign of malignancy? J Chin Med Assoc 2003;66(11):655–661.
24. Ganeshan B, Panayiotou E, Burnand K, Dizdarevic S, Miles K. Tumour heterogeneity in non-small cell lung carcinoma assessed by CT texture analysis: a potential marker of survival. Eur Radiol 2012;22(4):796–802.
25. Asselin MC, O’Connor JP, Boellaard R, Thacker NA, Jackson A. Quantifying heterogeneity in human tumours using MRI and PET. Eur J Cancer 2012;48(4):447–455.
26. Ahmed A, Gibbs P, Pickles M, Turnbull L. Texture analysis in assessment and prediction of chemotherapy response in breast cancer. J Magn Reson Imaging 2012. doi:10.1002/jmri.23971. Published online December 13, 2012.
27. Kawata Y, Niki N, Ohmatsu H, et al. Quantitative classification based on CT histogram analysis of non-small cell lung cancer: correlation with histopathological characteristics and recurrence-free survival. Med Phys 2012;39(2):988–1000.
28. Rubin DL. Creating and curating a terminology for radiology: ontology modeling and analysis. J Digit Imaging 2008;21(4):355–362.
29. Opulencia P, Channin DS, Raicu DS, Furst JD. Mapping LIDC, RadLex™, and lung nodule image features. J Digit Imaging 2011;24(2):256–270.
30. Channin DS, Mongkolwat P, Kleper V, Rubin DL. The Annotation and Image Mark-up project. Radiology 2009;253(3):590–592.
31. Rubin DL, Mongkolwat P, Kleper V, Supekar K, Channin DS. Medical imaging on the semantic web: annotation and image markup. Presented at the AAAI Spring Symposium Series, Semantic Scientific Knowledge Integration, Palo Alto, Calif, March 26–28, 2008.
32. Goh V, Ganeshan B, Nathan P, Juttla JK, Vinayan A, Miles KA. Assessment of response to tyrosine kinase inhibitors in metastatic renal cell cancer: CT texture as a predictive biomarker. Radiology 2011;261(1):165–171.
33. Miles KA, Ganeshan B, Griffiths MR, Young RC, Chatwin CR. Colorectal cancer: texture analysis of portal phase hepatic CT images as a potential marker of survival. Radiology 2009;250(2):444–452.
34. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern 1973;3(6):610–621.
35. Yang X, Knopp MV. Quantifying tumor vascular heterogeneity with dynamic contrast-enhanced magnetic resonance imaging: a review. J Biomed Biotechnol 2011;2011:732848.
36. Frouin F, Bazin JP, Di Paola M, Jolivet O, Di Paola R. FAMIS: a software package for functional feature extraction from biomedical multidimensional images. Comput Med Imaging Graph 1992;16(2):81–91.
37. Frouge C, Guinebretière JM, Contesso G, Di Paola R, Bléry M. Correlation between contrast enhancement in dynamic magnetic resonance imaging of the breast and tumor angiogenesis. Invest Radiol 1994;29(12):1043–1049.
38. Zagdanski AM, Sigal R, Bosq J, Bazin JP, Vanel D, Di Paola R. Factor analysis of medical image sequences in MR of head and neck tumors. AJNR Am J Neuroradiol 1994;15(7):1359–1368.
39. Bonnerot V, Charpentier A, Frouin F, Kalifa C, Vanel D, Di Paola R. Factor analysis of dynamic magnetic resonance imaging in predicting the response of osteosarcoma to chemotherapy. Invest Radiol 1992;27(10):847–855.
40. Furman-Haran E, Grobgeld D, Kelcz F, Degani H. Critical role of spatial resolution in dynamic contrast-enhanced breast MRI. J Magn Reson Imaging 2001;13(6):862–867.
41. Rose CJ, Mills SJ, O’Connor JPB, et al. Quantifying spatial heterogeneity in dynamic contrast-enhanced MRI parameter maps. Magn Reson Med 2009;62(2):488–499.
42. Canuto HC, McLachlan C, Kettunen MI, et al. Characterization of image heterogeneity using 2D Minkowski functionals increases the sensitivity of detection of a targeted MRI contrast agent. Magn Reson Med 2009;61(5):1218–1224.
43. Lloyd MC, Allam-Nandyala P, Purohit CN, Burke N, Coppola D, Bui MM. Using image analysis as a tool for assessment of prognostic and predictive biomarkers for breast cancer: how reliable is it? J Pathol Inform 2010;1:29–36.
44. Kumar V, Nath K, Berman CG, et al. Variance of SUVs for FDG-PET/CT is greater in clinical practice than under ideal study settings. Clin Nucl Med 2013;38(3):175–182.
45. Walker-Samuel S, Orton M, Boult JK, Robinson SP. Improving apparent diffusion coefficient estimates and elucidating tumor heterogeneity using Bayesian adaptive smoothing. Magn Reson Med 2011;65(2):438–447.
46. Thews O, Nowak M, Sauvant C, Gekle M. Hypoxia-induced extracellular acidosis increases p-glycoprotein activity and chemoresistance in tumors in vivo via p38 signaling pathway. Adv Exp Med Biol 2011;701:115–122.
47. Thews O, Dillenburg W, Rösch F, Fellner M. PET imaging of the impact of extracellular pH and MAP kinases on the p-glycoprotein (Pgp) activity. Adv Exp Med Biol 2013;765:279–286.
48. Araújo MB, Peterson AT. Uses and misuses of bioclimatic envelope modeling. Ecology 2012;93(7):1527–1539.
49. Larsen PE, Gibbons SM, Gilbert JA. Modeling microbial community structure and functional diversity across time and space. FEMS Microbiol Lett 2012;332(2):91–98.
50. Shenton W, Bond NR, Yen JD, Mac Nally R. Putting the “ecology” into environmental flows: ecological dynamics and demographic modelling. Environ Manage 2012;50(1):1–10.
51. Clark MC, Hall LO, Goldgof DB, Velthuizen R, Murtagh FR, Silbiger MS. Automatic tumor segmentation using knowledge-based techniques. IEEE Trans Med Imaging 1998;17(2):187–201.



Gaps, Tensions, and Conflicts in the FDA Approval Process: Implications for Clinical Practice

Reporter: Aviva Lev-Ari, PhD, RN

 

FDA 510(k) Approval Process


Medical Devices: Gaps, Tensions, and Conflicts in the FDA Approval Process: Medical Devices

Author: Richard A. Deyo, MD, MPH, Departments of Medicine and Health Services and the Center for Cost and Outcomes Research, University of Washington, Seattle

The FDA’s approach to approving medical devices differs substantially from the approach to drugs, being in some ways both more complex and less stringent.[13] The FDA’s authority over devices dates only to 1976. Device legislation was a response, in part, to public outcry over some well-publicized device failures. The most prominent was the Dalkon Shield—an intrauterine contraceptive device associated with serious infections.[14] In contrast, the FDA’s authority over drugs dates to 1938, although it existed in weaker form starting in 1906.[15]

With few exceptions, given the timing of the FDA’s authority, devices introduced before 1976 were never required to undergo rigorous evaluation of safety and efficacy. With the huge volume of “things” that suddenly fell under its purview, the FDA had to prioritize its resources and efforts.

One way of prioritizing was to focus first on safety. Evaluation of effectiveness, in many cases, was reduced to engineering performance: does the device hold up under its intended uses, does it deliver an electric current as advertised? The potential benefits for relieving pain, improving function, or ameliorating disease did not generally have to be demonstrated.

Another way of prioritizing was to assign categories of risk associated with the devices. Rubber gloves seemed less risky than cardiac pacemakers, for example. So the agency assigned devices to 1 of 3 levels of scrutiny. Class I devices have low risk; oversight, performed mainly by industry itself, is to maintain high manufacturing quality standards, assure proper labeling, and prevent adulteration. Latex gloves are an example.

At the other extreme, class III devices are the highest risk. These include many implantable devices, things that are life-supporting, and diagnostic and treatment devices that pose substantial risk. Artificial heart valves and electrical catheters for ablating arrhythmogenic foci in the heart are examples. This class also includes any new technology that the FDA does not recognize or understand. New components or materials, for example, may suggest to FDA that it should perform a more formal evaluation. In general, these devices require a “premarket approval,” including data on performance in people (not just animals), extensive safety information, and extensive data on effectiveness. This evaluation comes closest to that required of drugs. In fact, Dr. Kessler says, these applications “look a lot like drug applications: big stacks of paper. They almost always require clinical data—almost always. And they often require randomized trials. Not always, but often” (L. Kessler, personal communication). These devices are often expensive and sometimes controversial because of their costs.

Class II devices are perhaps the most interesting. They comprise an intermediate group, generally requiring only performance standards. Examples would be biopsy forceps, surgical lasers, and some hip prostheses. The performance standards focus on the engineering characteristics of the device: does it deliver an electrical stimulus if it claims to, and is it in a safe range? Is it made of noncorrosive materials? Most of these devices get approved by the “510(k)” mechanism. The 510(k) approval requires demonstrating “substantial equivalence” to a device marketed before 1976. “And,” says Kessler, “the products that have been pushed through 510(k) are astonishing” (L. Kessler, personal communication).

Kessler points out, “For the first 5 to 10 years after 1976, this approach made sense. But in 2001, 25 years after the Medical Device Amendment, does it make sense? There was a lot of stuff on the market that wasn’t necessarily great in 1975—why would you put it back on the market now?” (L. Kessler, personal communication). The new device need not prove superiority to the older product—just functional equivalence. If a company wants to tout a new device as a breakthrough, why would it claim substantial equivalence to something 25 years old?

The reason is that the 510(k) process is easier and cheaper than seeking a premarket approval. The 510(k) process usually does not require clinical research. In the mid-1990s, a 510(k) application on average required 3 months for approval, and about $13 million. A premarket approval required, on average, about a year and $36 million. Both are modest compared with new drug approvals. The process by which the agency decides if something is “equivalent enough” to be approved by the 510(k) mechanism is subjective.

Because pre-1976 devices were not subject to any rigorous tests of clinical effectiveness, a newly approved device may be equivalent to something that has little or no therapeutic value. Doctors, patients, and payers therefore often have little ability to judge the value of new devices. As an example, the FDA still receives 510(k) applications for intermittent positive pressure breathing machines.[12] Yet a thorough review by the federal Agency for Health Care Policy and Research found that these devices offer no important benefits.[16]

How much do manufacturers take advantage of the easier 510(k) approach? Since 1976, nearly 98% of new devices entering the market in class II or III have been approved through the 510(k) process.[13] In 2002, the FDA reported 41 premarket approvals and 3708 approvals through the 510(k) process.[17]


Dr. Richard A. Deyo published an article on this topic in 2004. His observations and references are most valuable for our blog.

For the full article, go to:

JABFP March–April 2004 Vol.17 No.2 http://www.science.smith.edu/departments/Biochem/Chm_357/Articles/Drug%20Approval.pdf

 

HEALTH CARE POLICY

Author:  Richard A. Deyo, MD, MPH

Despite many successes, drug approval at the Food and Drug Administration (FDA) is subject to gaps, internal tensions, and conflicts of interest. Recalls of drugs and devices and studies demonstrating advantages of older drugs over newer ones highlight the importance of these limitations. The FDA does not compare competing drugs and rarely requires tests of clinical efficacy for new devices. It does not review advertisements before use, assess cost-effectiveness, or regulate surgery (except for devices). Many believe postmarketing surveillance of drugs and devices is inadequate. A source of tension within the agency is pressure for speedy approvals. This may have resulted in “burn-out” among medical officers and has prompted criticism that safety is ignored. Others argue, however, that the agency is unnecessarily slow and bureaucratic. Recent reports identify conflicts of interest (stock ownership, consulting fees, research grants) among some members of the FDA’s advisory committees. FDA review serves a critical function, but physicians should be aware that new drugs may not be as effective as old ones; that new drugs are likely to have undiscovered side effects at the time of marketing; that direct-to-consumer ads are sometimes misleading; that new devices generally have less rigorous evidence of efficacy than new drugs; and that value for money is not considered in approval. J Am Board Fam Pract 2004;17: 142–9.

The process of drug development and approval by the United States Food and Drug Administration (FDA) was recently reviewed by Lipsky and Sharp.1 Using clinical literature and web sites addressing FDA procedures, that review concisely described the FDA’s history, the official approval process, and recent developments in drug approval. However, it did not delve into common misconceptions about the FDA, tensions within the agency, or conflicts of interest in the drug approval process. The rapidly growing business of medical device development, distinct from the drug approval process, also was not addressed. Although most aspects of the FDA review process are highly successful, its limitations deserve careful consideration, because they may have important implications for choosing treatments in practice.

Recent recalls of drugs and devices call attention to limitations of the approval process.2–4 Recent news about complications of hormone replacement therapy5,6 and new data supporting the superiority of diuretic therapy over newer, more expensive alternatives for hypertension7 emphasize gaps in the process. Clinicians should be aware of regulatory limitations as they prescribe treatments and counsel patients, so they have realistic ideas about what FDA approval does and does not mean.

Because controversies relating to internal conflicts or political issues are infrequently reported in scientific journals, this discussion draws not only on scientific articles, but also internet resources, news accounts, and interviews. The goal was not to be exhaustive, but to provide examples of tensions, conflicts, and gaps in the FDA process. As Lipsky and Sharp noted, the FDA approves new drugs and devices (as well as assuring that foods and cosmetics are safe). It monitors over $1 trillion worth of products, which represents nearly a fourth of consumer spending.1 In the medical arena, the basic goal of the FDA is to prevent the marketing of treatments that are ineffective or harmful.

However, the agency faces limitations that result from many factors, including the agency’s legal mandate, pressures from industry, pressures from advocacy groups, funding constraints, and varied political pressures.

Pressures for Approval

Perhaps the biggest challenge and source of friction for the FDA is the speed of approvals for drugs and devices. Protecting the public from ineffective or harmful products would dictate a deliberate, cautious, thorough process. On the other hand, getting valuable new technology to the public—to save lives or improve quality of life—would argue for a speedy process. Some consumer protection groups claim the agency is far too hasty and lenient, bending to drug and device company pressure. On the other hand, manufacturers argue that the agency drags its feet and kills people waiting for new cures. Says Kessler: “That’s been the biggest fight between the industry, the Congress, and the FDA over the past decade: getting products out fast” (L. Kessler, personal communication).

To speed up the review process, Congress passed a law in 1992 that allowed the FDA to collect “user fees” from drug companies. This was in part a response to AIDS advocates, who demanded quick approval of experimental drugs that might offer even a ray of hope. These fees, over $300,000 for each new drug application, now account for about half the FDA’s budget for drug evaluation, and 12% of the agency’s overall $1.3 billion budget.18 The extra funds have indeed accelerated the approval process. By 1999, average approval time had dropped by about 20 months, to an average of a year. In 1988, only 4% of new drugs introduced worldwide were approved first by the FDA. By 1998, FDA was first in approving two thirds of new drugs introduced worldwide. The percentage of applications ultimately approved had also increased substantially.18 Nonetheless, industry complained that approval times slipped to about 14 months in 2001.19

In 2002, device makers announced an agreement with the FDA for similar user fees to expedite approval of new devices, and Congressional approval followed with the Medical Device User Fee and Modernization Act.20 Critics, such as 2 former editors of the New England Journal of Medicine, argue that the user fees create an obvious conflict of interest. So much of the FDA budget now comes from the industry it regulates that the agency must be careful not to alienate its corporate “sponsors.”21

FDA officials believe they remain careful but concede that user fees have imposed pressures that make review more difficult, according to The Wall Street Journal.22 An internal FDA report in 2002 indicated that a third of FDA employees felt uncomfortable expressing “contrary scientific opinions” to the conclusions reached in drug trials. Another third felt that negative actions against applications were “stigmatized.”

The report also said some drug reviewers stated “that decisions should be based more on science and less on corporate wishes.”22 The Los Angeles Times reported that agency drug reviewers felt that if drugs were not approved, drug companies would complain to Congress, which might retaliate by failing to renew the user fees18 (although they were just re-approved in summer 2002). This in turn would hamstring FDA operations and probably cost jobs.

Another criticism is that the approval process has allowed many dangerous drugs to reach the market. A recent analysis showed that of all new drugs approved from 1975 to 1999, almost 3% were subsequently withdrawn for safety reasons, and 8% acquired “black box warnings” of potentially serious side effects. Projections based on the pace of these events suggested that 1 in 5 approved drugs would eventually receive a black box warning or be withdrawn. The authors of the analysis, from Harvard Medical School and Public Citizen Health Research Group, suggested that the FDA should raise the bar for new drug approval when safe and effective treatments are already available or when the drug is for a non–life-threatening condition.2

According to The Los Angeles Times, 7 drugs withdrawn between 1993 and 2000 had been approved while the FDA disregarded “danger signs or blunt warnings from its own specialists. Then, after receiving reports of significant harm to patients, the agency was slow to seek withdrawals.” These drugs were suspected in 1002 deaths reported to FDA. None were life-saving drugs. They included, for example, one for heartburn (cisapride), a diet pill (dexfenfluramine), and a painkiller (bromfenac). The Times reported that the 7 drugs had US sales of $5 billion before they were recalled.18

After analysis, FDA officials concluded that the accelerated drug approval process is unrelated to the drug withdrawals. They pointed out that the number of drugs on the market has risen dramatically, the number of applications has increased, and the population is using more medications.3  More withdrawals are not surprising, in their view. Dr. Janet Woodcock, director of the FDA’s drug review center and one of the analysts, argued that “All drugs have risks; most of them have serious risks.”

She believes the withdrawn drugs were valuable and that their removal from the market was a loss, even if the removal was necessary, according to The Los Angeles Times.18 Nonetheless, many believe the pressures for approval are so strong that they contribute to employee burnout at FDA. In August 2002, The Wall Street Journal reported that 15% of the agency’s medical officer jobs were unfilled.22 Their attrition rate is higher than for medical officers at the National Institutes of Health or the Centers for Disease Control and Prevention. The Journal reported that the reasons, among others, included pressure to increase the pace of drug approvals and an atmosphere that discourages negative actions on drug applications.

Attrition caused by employee “burnout” is now judged to threaten the speed of the approval process. In 2000, even Dr. Woodcock acknowledged a “sweatshop environment that’s causing high staffing turnover.”18 FDA medical and statistical staff have echoed the need for speed and described insufficient time to master details.18,19 An opposing view of FDA function is articulated in an editorial in The Wall Street Journal by Robert Goldberg of the Manhattan Institute. He wrote that the agency “protects people from the drugs that can save their lives” and needs to shift its role to “speedily put into the market place… new miracle drugs and technologies….” He argued that increasing approval times for new treatments are a result of “careless scientific reasoning” and “bureaucratic incompetence,” and that the FDA should monitor the impact of new treatments after marketing rather than wait for “needless clinical trials” that delay approvals.23

Thus, the FDA faces a constant “damned if it does, damned if it doesn’t” environment. No one has undertaken a comprehensive study of the speed of drug or device approval to determine the appropriate metrics for this process, much less the optimal speed. It remains unclear how best to balance the benefits of making new products rapidly available with the risks of unanticipated complications and recalls.

Postmarketing Surveillance of New Products

Although user fees have facilitated pre-approval evaluation of new drugs, the money cannot be used to evaluate the safety of drugs after they are marketed. Experts point out that approximately half of approved drugs have serious side effects that are not known before approval, and only postmarketing surveillance can detect them. But in the opinion of some, the FDA lacks the mandate, the money, and the staff to provide effective and efficient surveillance of the more than 5,000 drugs already on the market.24 Although reporting of adverse effects by manufacturers is mandatory, late reporting and non-reporting by drug companies are major problems. Some companies have been prosecuted for failure to report, and the FDA has issued several warning letters as a result of late reporting. Spontaneous reporting by practitioners is estimated to capture only 1% to 13% of serious adverse events.25 Widespread promotion of new drugs before some of their serious effects are known increases patients’ exposure to these unknown risks. It is estimated that nearly 20 million patients (almost 10% of the US population) were exposed to the 5 drugs that were recalled in 1997 and 1998 alone.26 The new law allowing user fees from device manufacturers does not have the same restriction on postmarketing surveillance that has hampered drug surveillance.

Conflicts of Interest in the Approval Process

Another problem that has recently come to light in the FDA approval process is conflict of interest on the part of some members of the agency’s 18 drug advisory committees. These committees include about 300 members, and are influential in recommending whether drugs should be approved, whether they should remain on the market, how drug studies should be designed, and what warning labels should say. The decisions of these committees have enormous financial implications for drug makers.

A report by USA Today indicated that roughly half the experts on these panels had a direct financial interest in the drug or topic they were asked to evaluate. The conflicts of interest included stock ownership, consulting fees, and research grants from the companies whose products they were evaluating. In some cases, committee members had helped to develop the drugs they were evaluating. Although federal law tries to restrict the use of experts with conflicts of interest, USA Today reported that FDA had waived the rule more than 800 times between 1998 and 2000.

FDA does not reveal the magnitude of any financial interest or the drug companies involved.27 Nonetheless, USA Today reported that in considering 159 Advisory Committee meetings from 1998 through the first half of 2000, at least one member had a financial conflict of interest 92% of the time. Half or more of the members had conflicts at more than half the meetings. At 102 meetings that dealt specifically with drug approval, 33% of committee members had conflicts.27 The Los Angeles Times reported that such conflicts were present at committee reviews of some recently withdrawn drugs.18

The FDA official responsible for waiving the conflict-of-interest rules pointed out that the same experts who consult with industry are often the best for consulting with the FDA, because of their knowledge of certain drugs and diseases. But according to a summary of the USA Today survey reported in the electronic American Health Line, “even consumer and patient representatives on the committees often receive drug company money.”28 In 2001, Congressional staff from the House Government Reform Committee began examining the FDA advisory committees to determine whether conflicts of interest were affecting the approval process.29

Conclusion

Despite derogatory comments from some politicians and some in the industries it regulates, the FDA does a credible job of trying to protect the public while quickly reviewing new drugs and devices. However, pressures for speed, conflicts of interest in decision-making, constrained legislative mandates, inadequate budgets, and often limited surveillance after products enter the market mean that scientific considerations are only part of the regulatory equation. These limitations can lead to misleading advertising of new drugs; promotion of less effective over more effective treatments; delays in identifying treatment risks; and perhaps unnecessary exposure of patients to treatments whose risks outweigh their benefits.

Regulatory approval provides many critical functions. However, it does not in itself help clinicians to identify the best treatment strategies. Physicians should be aware that new drugs may not be as effective as old ones; that new drugs are likely to have undiscovered side effects at the time they are marketed; that direct-to-consumer ads are sometimes misleading; that new devices generally have less rigorous evidence of efficacy than new drugs; and that value for money is not considered in the approval process. If clinicians are to practice evidence-based and cost-effective medicine, they must use additional skills and resources to evaluate new treatments. Depending exclusively on the regulatory process may lead to suboptimal care.

REFERENCES

1. Lipsky MS, Sharp LK. From idea to market: the drug approval process. J Am Board Fam Pract 2001;14:362–7.

2. Lasser KE, Allen PD, Woolhandler SJ, Himmelstein DU, Wolfe SM, Bor DH. Timing of new black box warnings and withdrawals for prescription medications. JAMA 2002;287:2215–20.

3. Friedman MA, Woodcock J, Lumpkin MM, Shuren JE, Hass AE, Thompson LJ. The safety of newly approved medicines: do recent market removals mean there is a problem? JAMA 1999;281:1728–34.

4. Maisel WH, Sweeney MO, Stevenson WG, Ellison KE, Epstein LM. Recalls and safety alerts involving pacemakers and implantable cardioverter-defibrillator devices. JAMA 2001;286:793–9.

5. Rossouw JE, Anderson GL, Prentice RL, et al. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results from the Women’s Health Initiative randomized controlled trial. JAMA 2002;288:321–33.

6. Grady D, Herrington D, Bittner V, et al. Cardiovascular disease outcomes during 6.8 years of hormone therapy: Heart and Estrogen/progestin Replacement Study Follow-up (HERS II). JAMA 2002;288:49–57.

7. ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group. The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA 2002;288:2981–97.

8. Echt DS, Liebson PR, Mitchell LB, et al. Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The Cardiac Arrhythmia Suppression Trial. N Engl J Med 1991;324:781–8.

9. Moore TJ. Deadly medicine: why tens of thousands of heart patients died in America’s worst drug disaster. New York: Simon and Schuster; 1995.

10. Petersen M. Diuretics’ value drowned out by trumpeting of newer drugs. The New York Times 2002 Dec 18;Sect. A:32.

11. Gorelick PB, Richardson D, Kelly M, et al. Aspirin and ticlopidine for prevention of recurrent stroke in black patients: a randomized trial. JAMA 2003;289:2947–57.

12. Gahart MT, Duhamel LM, Dievler A, Price R. Examining the FDA’s oversight of direct-to-consumer advertising. Health Aff (Millwood) 2003;Suppl:W3-120–3.

13. Ramsey SD, Luce BR, Deyo R, Franklin G. The limited state of technology assessment for medical devices: facing the issues. Am J Manag Care 1998;4 Spec No:SP188–99.

14. Merrill RA. Modernizing the FDA: an incremental revolution. Health Aff (Millwood) 1999;18:96–111.

15. Milestones in US food and drug law history. United States Food and Drug Administration. Available at: http://www.fda.gov/opacom/backgrounders/miles.html; accessed 8/19/02.

16. Handelsman H. Intermittent positive pressure breathing (IPPB) therapy. Health Technol Assess Rep 1991;(1):1–9.

17. FDA Center for Devices and Radiological Health. Office of Device Evaluation annual report 2002. Available at: http://www.fda.gov/cdrh/annual/fy2002/ode/index.html.

18. Willman D. How a new policy led to seven deadly drugs. The Los Angeles Times 2000 Dec 20;Sect. A:1.

19. Adams C, Hensley S. Health and technology: drug makers want FDA to move quicker. The Wall Street Journal 2002 Jan 29;Sect. B:12.

20. Adams C. FDA may start assessing fees on makers of medical devices. The Wall Street Journal 2002 May 21;Sect. D:6.

21. Angell M, Relman AS. Prescription for profit. The Washington Post 2001 Jun 20;Sect. A:27.

22. Adams C. FDA searches for an elixir for agency’s attrition rate. The Wall Street Journal 2002 Aug 19;Sect. A:4.

23. Goldberg R. FDA needs a dose of reform. The Wall Street Journal 2002 Sep 30;Sect. A:16. Available at: http://www.aei.brookings.org/policy/page.php?id=113.

24. Moore TJ, Psaty BM, Furberg CD. Time to act on drug safety. JAMA 1998;279:1571–3.

25. Ahmad SR. Adverse drug event monitoring at the Food and Drug Administration: your report can make a difference. J Gen Intern Med 2003;18:57–60.

26. Wood AJJ. The safety of new medicines: the importance of asking the right questions. JAMA 1999;281:1753–4.

27. Cauchon D. FDA advisers tied to industry. USA Today 2000 Sep 25;Sect. A:1.

28. Cauchon D. Number of drug experts available is limited. Many waivers granted for those who have conflicts of interest. USA Today 2000 Sep 25;Sect. A:10.

29. Gribbin A. House investigates panels involved with drug safety. Mismanagement claims spur action. The Washington Times 2001 Jun 18;Sect. A:1.

 

 

 
