

Reporter: Aviva Lev-Ari, PhD, RN

International Consortium Finds 15 Novel Risk Loci for Coronary Artery Disease

“lipid metabolism and inflammation as key biological pathways involved in the genetic pathogenesis of CAD”

Themistocles Assimes from Stanford University Medical Center said in a statement that these findings begin to clear up its role. “Our network analysis of the top approximately 240 genetic signals in this study seems to provide evidence that genetic defects in some pathways related to inflammation are a cause,” he said.

On this Open Access Online Scientific Journal, lipid metabolism and inflammation have been covered in the following entries.

However, it is only the 15 Novel Risk Loci for Coronary Artery Disease, published on 12/3/2012, that provide the genomic loci and the genetic explanation for the empirical results obtained in the recent research on cardiovascular diseases presented in the second half of this post, below.

Special Considerations in Blood Lipoproteins, Viscosity, Assessment and Treatment

http://pharmaceuticalintelligence.com/2012/11/28/special-considerations-in-blood-lipoproteins-viscosity-assessment-and-treatment/

What is the role of plasma viscosity in hemostasis and vascular disease risk?

http://pharmaceuticalintelligence.com/2012/11/28/what-is-the-role-of-plasma-viscosity-in-hemostasis-and-vascular-disease-risk/

PIK3CA mutation in Colorectal Cancer may serve as a Predictive Molecular Biomarker for adjuvant Aspirin therapy

http://pharmaceuticalintelligence.com/2012/11/28/pik3ca-mutation-in-colorectal-cancer-may-serve-as-a-predictive-molecular-biomarker-for-adjuvant-aspirin-therapy/

Peroxisome proliferator-activated receptor (PPAR-gamma) Receptors Activation: PPARγ transrepression for Angiogenesis in Cardiovascular Disease and PPARγ transactivation for Treatment of Diabetes

http://pharmaceuticalintelligence.com/2012/11/13/peroxisome-proliferator-activated-receptor-ppar-gamma-receptors-activation-pparγ-transrepression-for-angiogenesis-in-cardiovascular-disease-and-pparγ-transactivation-for-treatment-of-dia/

Positioning a Therapeutic Concept for Endogenous Augmentation of cEPCs — Therapeutic Indications for Macrovascular Disease: Coronary, Cerebrovascular and Peripheral

http://pharmaceuticalintelligence.com/2012/08/29/positioning-a-therapeutic-concept-for-endogenous-augmentation-of-cepcs-therapeutic-indications-for-macrovascular-disease-coronary-cerebrovascular-and-peripheral/

Cardiovascular Risk Inflammatory Marker: Risk Assessment for Coronary Heart Disease and Ischemic Stroke – Atherosclerosis.

http://pharmaceuticalintelligence.com/2012/10/30/cardiovascular-risk-inflammatory-marker-risk-assessment-for-coronary-heart-disease-and-ischemic-stroke-atherosclerosis/

The Essential Role of Nitric Oxide and Therapeutic NO Donor Targets in Renal Pharmacotherapy

http://pharmaceuticalintelligence.com/2012/11/26/the-essential-role-of-nitric-oxide-and-therapeutic-no-donor-targets-in-renal-pharmacotherapy/

Nitric Oxide Function in Coagulation

http://pharmaceuticalintelligence.com/2012/11/26/nitric-oxide-function-in-coagulation/

15 Novel Risk Loci for Coronary Artery Disease

December 03, 2012

NEW YORK (GenomeWeb News) – A large-scale association analysis of coronary artery disease has detected 15 new loci associated with risk of the disease, bringing the total number of known risk alleles to 46. As the international CARDIoGRAMplusC4D Consortium reported in Nature Genetics yesterday, the study also found that lipid metabolism and inflammation pathways may play a part in coronary artery disease pathogenesis.

“The number of genetic variations that contribute to heart disease continues to grow with the publication of each new study,” Peter Weissberg from the British Heart Foundation, a co-sponsor of the study, said in a statement. “This latest research further confirms that blood lipids and inflammation are at the heart of the development of atherosclerosis, the process that leads to heart attacks and strokes.”

For its study, the consortium, which was comprised of more than 180 researchers, performed a meta-analysis of data from the 22,233 cases and 64,762 controls of the CARDIoGRAM genome-wide association study and of the 41,513 cases and 65,919 controls from 34 additional studies of people of European and South Asian descent. Using the custom Metabochip array from Illumina, the team tested SNPs for disease association in those populations. The SNPs that reached significance in that stage of the study were then replicated using data from a further four studies.
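At its core, a meta-analysis of this kind pools per-study effect estimates for each SNP with inverse-variance weights and tests the pooled estimate against the conventional genome-wide significance threshold of 5 × 10^-8. The following Python fragment is a minimal sketch of that fixed-effect calculation; the per-study log odds ratios and standard errors are invented for illustration and are not taken from the consortium's pipeline.

```python
import numpy as np
from scipy import stats

def fixed_effect_meta(betas, ses):
    """Pool per-study log odds ratios for one SNP with inverse-variance weights."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                          # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)      # pooled effect
    se = np.sqrt(1.0 / np.sum(w))             # pooled standard error
    z = beta / se
    p = 2.0 * stats.norm.sf(abs(z))           # two-sided p-value
    return beta, se, p

# Illustrative per-study estimates for a single SNP (not real consortium data)
beta, se, p = fixed_effect_meta([0.08, 0.11, 0.06], [0.020, 0.030, 0.045])
print(f"pooled OR = {np.exp(beta):.3f}, p = {p:.2e}, "
      f"genome-wide significant: {p < 5e-8}")
```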

From this, the team identified 15 new loci with genome-wide significance for risk of coronary artery disease, in addition to known risk loci.

The consortium also reported an additional 104 SNPs that appeared to be associated with coronary artery disease but did not meet the cut-off for genome-wide significance.

Then looking to other known risk factors for coronary artery disease, like blood pressure and diabetes, the researchers assessed whether any of those risk factors were associated with the risk loci. Of the 45 known risk loci, 12 were associated with blood lipid content and five with blood pressure. And while people with type 2 diabetes have a higher risk of developing coronary artery disease, none of the known risk loci were linked to diabetic traits.

An analysis of the pathways that SNPs linked to coronary artery disease fall in revealed that many of them are involved in lipid metabolism and inflammation pathways — 10 risk loci were found to be involved in lipid metabolism. “Our network analysis identified lipid metabolism and inflammation as key biological pathways involved in the genetic pathogenesis of CAD,” the researchers wrote in the paper. “Indeed, there was significant crosstalk between the lipid metabolism and inflammation pathways identified.”
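A pathway analysis of this sort commonly reduces to an over-representation test: given the genes mapped to the associated signals, is a pathway such as lipid metabolism hit more often than chance would predict? The sketch below shows a hypergeometric version of that test; all of the gene counts are placeholders, not figures from the paper.

```python
from scipy.stats import hypergeom

# Illustrative counts only (not taken from the study):
#   M = genes in the genome-wide background
#   n = genes annotated to the pathway (e.g., lipid metabolism)
#   N = genes mapped to CAD-associated signals
#   k = associated genes that fall in the pathway
M, n, N, k = 20_000, 200, 240, 10

# Probability of seeing k or more pathway genes among the N associated genes by chance
p_enrichment = hypergeom.sf(k - 1, M, n, N)
print(f"pathway enrichment p-value: {p_enrichment:.3e}")
```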

The role of inflammation in coronary artery disease has been up for debate — a debate centering on whether it is a cause or a consequence of the disease — and study author Themistocles Assimes from Stanford University Medical Center said in a statement that these findings begin to clear up its role. “Our network analysis of the top approximately 240 genetic signals in this study seems to provide evidence that genetic defects in some pathways related to inflammation are a cause,” he said.


SOURCE:

http://www.genomeweb.com//node/1159041?hq_e=el&hq_m=1424172&hq_l=3&hq_v=09187c3305

 

GWAS, Meta-Analyses Uncover New Coronary Artery Disease Risk Loci

March 07, 2011

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb News) – Three new studies — including the largest meta-analysis yet of coronary artery disease — have identified dozens of coronary artery disease risk loci in European, South Asian, and Han Chinese populations. All three papers appeared online yesterday in Nature Genetics.

For the first meta-analysis, members of a large international consortium known as the Coronary Artery Disease Genome-wide Replication and Meta-Analysis study, or CARDIoGRAM, sifted through data on more than 135,000 individuals from the UK, US, Europe, Iceland, and Canada. In so doing, they tracked down nearly two-dozen new and previously reported coronary artery disease risk loci.

Because only a few of these loci have been linked to other heart disease-related risk factors such as high blood pressure, those involved say the work points to yet unexplored heart disease pathways.

“[W]e have discovered several new genes not previously known to be involved in the development of coronary heart disease, which is the main cause of heart attacks,” co-corresponding author Nilesh Samani, a cardiology researcher affiliated with the University of Leicester and Glenfield Hospital, said in a statement. “Understanding how these genes work, which is the next step, will vastly improve our knowledge of how the disease develops, and could ultimately help to develop new treatments.”

Samani and his co-workers identified the loci by bringing together data on 22,233 individuals with coronary artery disease and 64,762 unaffected controls. The participants, all of European descent, had been sampled through 14 previous genome-wide association studies and genotyped at an average of about 2.5 million SNPs each. The team then assessed the top candidate SNPs found in this initial analysis in another 56,582 individuals (roughly half of whom had coronary artery disease).
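The per-SNP statistics feeding such a meta-analysis typically come from a logistic regression of case/control status on allele dosage (0, 1 or 2 copies of the risk allele), usually with ancestry covariates added. The fragment below is a simplified, hedged illustration using statsmodels on simulated genotypes; it is not the consortium's actual analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_subjects = 5_000
dosage = rng.binomial(2, 0.30, size=n_subjects)        # 0/1/2 copies of the risk allele
logit = -1.0 + 0.15 * dosage                            # simulated modest per-allele effect
status = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # 1 = case, 0 = control

X = sm.add_constant(dosage.astype(float))               # intercept + allele dosage
fit = sm.Logit(status, X).fit(disp=0)
print(f"per-allele OR = {np.exp(fit.params[1]):.2f}, p = {fit.pvalues[1]:.2e}")
```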

The search not only confirmed associations between coronary artery disease and 10 known loci, but also uncovered associations with 13 other loci. All but three of these were distinct from loci previously implicated in other heart disease risk factors such as hypertension or cholesterol levels, researchers noted.

Consequently, those involved in the study say that exploring the biological functions of the newly detected genes could offer biological clues about how heart disease develops — along with strategies for preventing and treating it.

The genetic complexity of coronary artery disease being revealed by such studies has diagnostic implications as well, according to some.

“Each new gene identified brings us a small step closer to understanding the biological mechanisms of cardiovascular disease development and potential new treatments,” British Heart Foundation Medical Director Peter Weissberg, who was not directly involved in the new studies, said in a statement. “However, as the number of genes grows, it takes us further away from the likelihood that a simple genetic test will identify those most at risk of suffering a heart attack or a stroke.”

Meanwhile, researchers involved with the Coronary Artery Disease Genetics Consortium did their own meta-analysis using data collected from four GWAS to find five coronary artery disease-associated loci in European and South Asian populations.

The group initially looked at 15,420 individuals with coronary artery disease — including 6,996 individuals from South Asia and 8,424 from Europe — and 15,062 unaffected controls. Participants were genotyped at nearly 575,000 SNPs using Illumina BeadChips. Most South Asian individuals tested came from India and Pakistan, researchers noted, while European samples came from the UK, Italy, Sweden, and Germany.

For the validation phase of the study, the team focused in on 59 SNPs at 50 loci from the discovery group that seemed most likely to yield authentic new disease associations. These variants were assessed in 10 replication groups comprised of 21,408 individuals with coronary artery disease and 19,185 individuals without coronary artery disease.

All told, researchers found five loci that seem to influence coronary artery disease risk in the European and South Asian populations: one locus each on chromosomes 7, 11, and 15, along with a pair of loci on chromosome 10.

The team didn’t see significant differences in the frequency or effect sizes of these newly identified variants between the European and South Asian populations, though they emphasized that their approach may have missed some potential risk variants, particularly in those of South Asian descent.

“[C]urrent genome-wide arrays may not capture all important variants in South Asians,” they explained. “Nevertheless, all of the known and new variants were significantly associated with [coronary artery disease] risk in both the European and South Asian populations in the current study, indicating the importance of genes associated with [coronary artery disease] beyond the European ancestry groups in which they were first defined.”

Finally, using a three-stage discovery, validation, and replication GWAS approach, Chinese researchers identified a single coronary artery disease risk variant in the Han Chinese population.

In the first phase of that study, researchers tested samples from 230 cases and 230 controls from populations in Beijing and in China’s Hubei province that were genotyped at Genentech and CapitalBio using Affymetrix Human SNP5.0 arrays.

From the nearly three-dozen SNPs identified in the first stage of the study, they narrowed in on nine suspect variants. After finding linkage disequilibrium between two of the variants, they did validation testing on eight of these in 572 individuals with coronary artery disease and 436 unaffected controls, all from Hubei province.

That analysis implicated a single chromosome 6 SNP called rs6903956 in coronary artery disease — a finding the team ultimately replicated in another group of 2,668 coronary artery disease cases and 3,917 controls from three independent populations in Hubei, Shandong province, and northern China.

The team’s subsequent experiments suggest that the newly detected polymorphism, which falls within a putative gene called C6orf105 on chromosome 6, curbs the expression of this gene. The functional consequences of this shift in expression, if any, are yet to be determined.

Because C6orf105 shares some identity and homology with an androgen hormone inducible gene known as AIG1, those involved in the study argue that it may be worthwhile to investigate possible ties between C6orf105 expression, androgen signaling, and coronary artery disease.

“Androgen has previously been reported to be associated [with] the pathogenesis of atherosclerosis,” they wrote. “Future studies are needed to explore whether C6orf105 expression can be induced by androgen and to further determine the potential mechanism of [coronary artery disease] associated with decreased C6orf105 expression.”

 SOURCE:


Reporter: Aviva Lev-Ari, PhD, RN

Computational Genomics Center: New Unification of Computational Technologies at Stanford

Word Cloud by Zach Day

Stanford Launches Computational Genomics Center

December 03, 2012

NEW YORK (GenomeWeb News) – Stanford University has launched a new genomics research center that will foster collaboration across its seven schools and harness new computational technologies, it said today.

The Stanford Center for Computational, Evolutionary and Human Genomics, headed by the university’s School of Medicine and School of Humanities and Sciences, has been authorized for five years of funding, the university said.

Created with the goal of spurring and nurturing cross-cutting research collaborations, the new center will be open to all university faculty and labs. It will provide support for small project grants and computational genomics analysis services for member labs, faculty, students, and staff.

The center also will consult with academic institutions, industry, government, and research organizations on collaborations, will support graduate and postdoctoral students, and in its first year will launch public outreach programs in three areas – genomics and social systems, medical genomics, and agricultural, ecological, and environmental genomics. The center’s focus, regardless of the particulars of the project at hand, will be on using expertise and methods for sorting through, integrating, and analyzing large-scale data sets.

Stanford Professor Carlos Bustamante, who also is one of the center’s two founding directors, told GenomeWeb Daily News today that the university has not yet set the funding amount for the center but has committed to five years of support, which will be “sufficient to catalyze all of the programs that we want to get started.” Ultimately, the center will seek funding from beyond the university, he noted.

“The incredible thing about a place like Stanford is that we’ve got the medical school co-located with the main campus, the traditional arts and sciences and humanities programs, and an exceptional engineering school, so we really are looking to create interdisciplinary programs that cut across traditional academic boundaries,” Bustamante said.

He explained that the new center will pursue and support projects that cut a broad swath across Stanford’s academic research areas, including paleo-anthropology, population genetics, agriculture, climate science, and biomedicine, as well as pursue bioethical questions that have arisen alongside human genomic science.

For example, Bustamante said, the research may involve integrating genetics and history studies.

“How can we use technologies from genomics to improve our understanding of the great human diaspora? That’s an area that [Founding Director and Stanford Biology Professor] Mark Feldman and I have been interested in for years.

“But now we can begin to do things that are cross-cutting in, say, funding archaeology students that want to study ancient DNA, or beginning to do projects that have to do with race, genetics, and ethnicity,” he said. “Now we can fund graduate students and post-docs to really work on interdisciplinary issues that are very hard to fund through traditional mechanisms.”

Bustamante pointed out that Stanford has “a tremendous amount of expertise in machine learning and statistical learning,” and the center will try to bring people and projects together with clinicians who are pursuing cutting-edge projects in a wide array of fields, such as cancer genomics.

“Traditionally, these people would know about each other but they haven’t necessarily had the mechanisms to initiate [joint] pilot projects and collaborations,” Bustamante said, and that is where the new center might fit in.

One of the key aims of the center also is to forge collaborations between biomedical researchers and those in the humanities and social sciences.

For example, one of the center’s executive committee members, Stanford Biology Professor Noah Rosenberg, is co-directing a program focused on Jewish genetics and Jewish history. Another executive member, Professor Dmitri Petrov, will head a year-long project focused on ecological genetics.

Bustamante, who previously was a researcher at Cornell University, said he expects that the center will branch out into agricultural genomics as well.

“Genomics is transforming agriculture. It is probably where genomics is having some of its biggest impacts,” he said.

Aside from the wide range of research areas that the new center may support, it will have one core mission, Bustamante told GWDN.

“It really is, first and foremost, a center focused on computational analysis, both in terms of developing methods and computing on big data. That is a particular expertise of those of us involved in launching the center.”


Diagnosing Lung Cancer in Exhaled Breath using Gold Nanoparticles

Reporter-curator: Tilda Barliya PhD

Authors: Gang Peng, Ulrike Tisch, Orna Adams, Meggie Hakim, Nisrean Shehada, Yoav Y. Broza, Salem Billan, Roxolyana Abdah-Bortnyak, Abraham Kuten & Hossam Haick. (NATURE NANOTECHNOLOGY | VOL 4 | OCTOBER 2009)

Abstract:

Conventional diagnostic methods for lung cancer are unsuitable for widespread screening, because they are expensive and occasionally miss tumours. Gas chromatography/mass spectrometry studies have shown that several volatile organic compounds, which normally appear at levels of 1–20 ppb in healthy human breath, are elevated to levels between 10 and 100 ppb in lung cancer patients. Here we show that an array of sensors based on gold nanoparticles can rapidly distinguish the breath of lung cancer patients from the breath of healthy individuals in an atmosphere of high humidity. In combination with solid-phase microextraction, gas chromatography/mass spectrometry was used to identify 42 volatile organic compounds that represent lung cancer biomarkers. Four of these were used to train and optimize the sensors, demonstrating good agreement between patient and simulated breath samples. Our results show that sensors based on gold nanoparticles could form the basis of an inexpensive and non-invasive diagnostic tool for lung cancer. (http://www.nature.com/nnano/journal/v4/n10/abs/nnano.2009.235.html) (lnbd.technion.ac.il/NanoChemistry/SendFile.asp?DBID=1…1…)

Introduction:

Lung cancer accounts for 28% of cancer-related deaths; approximately 1.3 million people die from it worldwide every year. Breath testing is a fast, non-invasive diagnostic method that links specific volatile organic compounds (VOCs) in exhaled breath to medical conditions. Gas chromatography/mass spectrometry (GC-MS), ion flow tube mass spectrometry, laser absorption spectrometry, infrared spectroscopy, polymer-coated surface acoustic wave sensors and coated quartz crystal microbalance sensors have been used for this purpose. However, these techniques are expensive, slow, require complex instruments and, furthermore, require pre-concentration of the biomarkers (that is, treating the biomarkers by a process that increases their relative concentration to a level that can be detected by the specific technique) to improve detection.

Here, we report a simple, inexpensive, portable sensing technology to distinguish the breath of lung cancer patients from healthy subjects without the need to pre-treat the exhaled breath in any way (see also refs 14–16 for the diagnosis of lung cancer by sensing technology that is based on arrays of polymer/carbon black sensors). Our study consisted of four phases and included volunteers aged 28–60 years. Samples were collected from 56 healthy controls and 40 lung cancer patients after clinical diagnosis using conventional methods and before chemotherapy or other treatment.

In the first phase, we collected exhaled alveolar breath of lung cancer patients and healthy subjects using an ‘offline’ method. This method was designed to avoid potential errors arising from the failure to distinguish endogenous compounds from exogenous ones in the breath and to exclude nasal entrainment of the gas. Exogenous VOCs can be either directly absorbed through the lung via the inhaled breath or indirectly through the blood or skin. Endogenous VOCs are generated by cellular biochemical processes in the body and may provide insight into the body’s function.

In the second phase, we identified the VOCs that can serve as biomarkers for lung cancer in the breath samples and determined their relative compositions, using GC-MS in combination with solid-phase microextraction (SPME). GC-MS analysis identified 300–400 different VOCs per breath sample, with >87% reproducibility for a specific volunteer examined multiple times over a period of six months. Forward stepwise discriminant analysis identified 33 common VOCs that appear in at least 83% of the patients but in fewer than 83% of the healthy subjects.
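Forward stepwise discriminant analysis of this kind can be approximated with a forward feature-selection wrapper around a linear discriminant classifier. The sketch below uses scikit-learn on a placeholder matrix of VOC abundances; the data, dimensions and selected-feature count are illustrative only and do not reproduce the authors' statistics.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(1)
# Placeholder data: 96 breath samples x 120 VOC abundances, labels 0 = healthy, 1 = cancer
X = rng.lognormal(size=(96, 120))
y = rng.integers(0, 2, size=96)

lda = LinearDiscriminantAnalysis()
selector = SequentialFeatureSelector(
    lda, n_features_to_select=33, direction="forward", cv=5)
selector.fit(X, y)

selected = np.flatnonzero(selector.get_support())
print(f"{selected.size} VOC features selected, first few indices: {selected[:5]}")
```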

The compounds that were observed in both healthy breath and lung cancer breath were present not only at different concentrations but also in distinctly different mixture compositions.

Further forward stepwise discriminant analysis revealed nine uncommon VOCs that appear in at least 83% of the patients but not in the majority (83%) of healthy subjects. This additional class of VOCs has not been recognized in earlier GC-MS studies.

In spite of these advances in the GC-MS analysis, these data certainly do not account for all the VOCs present in the exhaled breath samples, because the pre-concentration technique can be thought of as a solid phase that extracts only part of the analytes present in the examined phase and, subsequently, releases only part of the extracted analytes.

So, it is likely that the actual mixture of VOCs to which, for example, an array of gold nanoparticle sensors would be responding  is different from that obtained by GC-MS.

In the third phase of this study we designed an array of nine cross-reactive chemiresistors, in which each sensor was widely responsive to a variety of odorants, for the detection of lung cancer by means of breath testing. We used chemiresistors based on assemblies of 5-nm gold nanoparticles with different organic functionalities (dodecanethiol, decanethiol, 1-butanethiol, 2-ethylhexanethiol, hexanethiol, tert-dodecanethiol, 4-methoxy-toluenethiol, 2-mercaptobenzoxazole and 11-mercapto-1-undecanol).

Chemiresistors based on functionalized gold nanoparticles combine the advantages of organic specificity with the robustness and processability of inorganic materials.

The response of the nine-sensor array to both healthy and lung cancer breath samples was analysed using principal component analysis. It can be seen that there is no overlap of the lung cancer and healthy patterns.
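Principal component analysis of the array responses is the standard way to visualise whether the two groups separate in a low-dimensional projection. The following is a minimal scikit-learn sketch on a placeholder (samples × 9 sensors) response matrix, meant only to illustrate the analysis, not to reproduce the paper's figure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Placeholder responses: 96 breath samples x 9 chemiresistor sensors
responses = rng.normal(size=(96, 9))
labels = rng.integers(0, 2, size=96)          # 0 = healthy, 1 = lung cancer

scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(responses))

for cls, name in [(0, "healthy"), (1, "cancer")]:
    print(f"{name}: mean PC1 = {scores[labels == cls, 0].mean():+.2f}")
```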

The PCA of the healthy control group revealed that the set of gold nanoparticle sensors was not influenced by characteristics such as gender, age or smoking habits, thus strengthening the ability of the sensors to discriminate between healthy and cancerous breath. Experiments with a wider population of volunteers to thoroughly probe the influence of diet, alcohol consumption, metabolic state and genetics are under way and will be published elsewhere.

Summary:

To summarize, we have demonstrated that an array of chemiresistors based on functionalized gold nanoparticles in combination with pattern recognition methods can distinguish between the breath of lung cancer patients and healthy controls, without the need for dehumidification or pre-concentration of the lung cancer biomarkers. Our results show great promise for fast, easy and cost-effective diagnosis and screening of lung cancer. The developed devices are expected to be relatively inexpensive, portable and amenable to use in widespread screening, making them potentially valuable in saving millions of lives every year. Given the impact of the rising incidence of cancer on health budgets worldwide, the proposed technology will be a significant saving for both private and public health expenditures. The potential exists for using the proposed technology to diagnose other conditions and diseases, which could mean additional cost reductions and enhanced opportunities to save lives.

Ref:

1. Gang Peng, Ulrike Tisch, Orna Adams, Meggie Hakim, Nisrean Shehada, Yoav Y. Broza, Salem Billan, Roxolyana Abdah-Bortnyak, Abraham Kuten & Hossam Haick. Diagnosing lung cancer in exhaled breath using gold nanoparticles. Nature Nanotechnology 4, 669–673 (2009). http://www.nature.com/nnano/journal/v4/n10/abs/nnano.2009.235.html

2. http://lungcancer.about.com/od/diagnosisoflungcancer/a/diagnosislungca.htm

3. http://metabolomx.com/2011/12/15/metabolomx-test-detects-lung-cancer-from-breath/

4. http://www.chestnet.org/accp/pccsu/medical-applications-exhaled-breath-analysis-and-testing?page=0,3

 


Personalized Medicine: Cancer Cell Biology and Minimally Invasive Surgery (MIS)

Curator: Aviva Lev-Ari, PhD, RN

In the field of Cancer Research, Translational Medicine will become Personalized Medicine when each of the cancer types below has a Genetic Marker allowing the Clinical Team to use the marker for:

  • prediction of a Patient’s reaction to Drug induction
  • design of Clinical Trials to validate drug efficacy on a small subset of patients predicted to react favorably to the drug regimen, increasing validity and reliability (a minimal sketch of such a marker-stratified comparison follows this list)
  • genetic identification of patients who have no need for a drug to be administered if non-sensitivity to the drug has been predicted
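One simple way to see what such a predictive marker contributes in a trial is to compare response rates between marker-positive and marker-negative patients. The sketch below, referenced in the list above, runs a Fisher's exact test on made-up counts; it is an illustration of the idea, not an analysis of any real trial.

```python
from scipy.stats import fisher_exact

# Illustrative 2x2 counts only (not real trial data):
#             responders   non-responders
table = [[18, 7],    # marker-positive patients
         [5, 20]]    # marker-negative patients

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```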

An urgent need currently exists for the identification of Genetic Markers to predict a Patient’s reaction to Drug Induction for the following types of Cancer:

 

The executive task of the clinician remains to assess the differentiation in Tumor Response to Treatment.

A review of the limitations of the tools currently used by clinicians is to be found in:

Brücher BLDM, Bilchik A, Nissan A, Avital I & Stojadinovic A. Can tumor response to therapy be predicted, thereby improving the selection of patients for cancer treatment? Future Oncology August 2012; 8(8): 903–906. DOI: 10.2217/fon.12.78

The heterogeneity is a problem that will take at least another decade to unravel because of the number of signaling pathways and the crosstalk that is specifically at issue.

It is suggested that the new modality should be based on individualized histopathology as well as tumor molecular, genetic and functional characteristics, and individual patients’ characteristics. The new modality should be based on empirical evidence that translates into relevant and meaningful clinical outcome data.

Cancer is a particularly difficult tissue pathology to treat. In “Tumor response criteria: are they appropriate?” that concern is addressed as follows:

“This becomes a conundrum of sorts in an era of ‘minimally invasive treatment’. One frequently encountered example is that of a patient with chronic gastric reflux and an ultrasound-staged T3N1 distal esophageal adenocarcinoma, who had complete sonographic tumor response to neoadjuvant chemoradiation. The physician may declare that, the tumor having disappeared, the patient requires no further treatment. The surgical oncologist recommends resection, recognizing the fact that up to 20% or more of these complete responders will have identifiable nests of tumor beyond the mucosal scar within the specimen – in other words: residual tumor. In other cases, patients with clinical, sonographic, functional (PET) and histopathological ‘complete’ tumor response to induction therapy experience recurrence within the first 2 years of resection, reminding us of the intricacy and enigma of tumor biology. We have yet to develop the tools needed to consistently delineate the response of a tumor to multimodality therapy.”

This described reality in the Oncology Operating Room is coupled with new trends in invasive treatment of tumor resection.

Minimally Invasive Surgery (MIS), compared with conventional surgical dissection of cancer tissue with its known pathophysiology of recurrence and remission cycles, has short-term advantages. However, in many cases MIS is not the right surgical decision, yet it is applied for a set of patient-centered care considerations, even while the future behavior of the tumor and its response to therapeutics remain unknown and therapy outcomes remain uncertain.

An increase in the desirable outcomes of MIS as a treatment modality will be strongly assisted in the future by anticipated progress in the fields of Cancer Research, Translational Medicine and Personalized Medicine, when each of the cancer types above will already have a Genetic Marker allowing the Clinical Team to use the marker(s) for:

  • prediction of a Patient’s reaction to Drug induction
  • design of Clinical Trials to validate drug efficacy on a small subset of patients predicted to react favorably to the drug regimen, increasing validity and reliability
  • genetic identification of patients who have no need for a drug to be administered if non-sensitivity to the drug has been predicted by the genetic marker.

REFERENCES

Tumor response criteria: are they appropriate?

Björn LDM Brücher, Anton Bilchik, Aviram Nissan, Itzhak Avital & Alexander Stojadinovic

 
Treatment for cure is not the endpoint, but the best that can be done is to extend the time of survival to a realistic long term goal and retain a quality of life.
 
Brücher BLDM, Piso P, Verwaal V et al. Peritoneal carcinomatosis: overview and basics. Cancer Invest. 30(3), 209–224 (2012).

Brücher BLDM, Swisher S, Königsrainer A et al. Response to preoperative therapy in upper gastrointestinal cancers. Ann. Surg. Oncol. 16(4), 878–886 (2009).

Miller AB, Hoogstraten B, Staquet M, Winkler A. Reporting results of cancer treatment. Cancer 47(1), 207–214 (1981).
 
 
 

Other research papers on Cancer and Cancer Therapeutics were published on this Scientific Web site as follows:

What can we expect of tumor therapeutic response?

PIK3CA mutation in Colorectal Cancer may serve as a Predictive Molecular Biomarker for adjuvant Aspirin therapy

Nanotechnology Tackles Brain Cancer

Response to Multiple Cancer Drugs through Regulation of TGF-β Receptor Signaling: a MED12 Control

Personalized medicine-based cure for cancer might not be far away

GSK for Personalized Medicine using Cancer Drugs needs Alacris systems biology model to determine the in silico effect of the inhibitor in its “virtual clinical trial”

Lung Cancer (NSCLC), drug administration and nanotechnology

Non-small Cell Lung Cancer drugs – where does the Future lie?

Cancer Innovations from across the Web

arrayMap: Genomic Feature Mining of Cancer Entities of Copy Number Abnormalities (CNAs) Data

How mobile elements in “Junk” DNA promote cancer. Part 1: Transposon-mediated tumorigenesis.

Cancer Genomics – Leading the Way by Cancer Genomics Program at UC Santa Cruz

Closing the gap towards real-time, imaging-guided treatment of cancer patients.

mRNA interference with cancer expression

Search Results for ‘cancer’ on this web site

Cancer Genomics – Leading the Way by Cancer Genomics Program at UC Santa Cruz

Closing the gap towards real-time, imaging-guided treatment of cancer patients.

Lipid Profile, Saturated Fats, Raman Spectrosopy, Cancer Cytology

mRNA interference with cancer expression

Pancreatic cancer genomes: Axon guidance pathway genes – aberrations revealed

Biomarker tool development for Early Diagnosis of Pancreatic Cancer: Van Andel Institute and Emory University

Is the Warburg Effect the cause or the effect of cancer: A 21st Century View?

Crucial role of Nitric Oxide in Cancer

Targeting Glucose Deprived Network Along with Targeted Cancer Therapy Can be a Possible Method of Treatment

 

See comment written for:

Knowing the tumor’s size and location, could we target treatment to THE ROI by applying…..

http://pharmaceuticalintelligence.com/2012/10/16/knowing-the-tumors-size-and-location-could-we-target-treatment-to-the-roi-by-applying-imaging-guided-intervention/

24 Responses

  1. GREAT work.

    I’ll read and comment later on

  2. Highlights of The 2012 Johns Hopkins Prostate Disorders White Paper include:

    A promising new treatment for men with frequent nighttime urination.
    Answers to 8 common questions about sacral nerve stimulation for lower urinary tract symptoms.
    Surprising research on the link between smoking and prostate cancer recurrence.
    How men who drink 6 cups of coffee a day or more may reduce their risk of aggressive prostate cancer.
    Should you have a PSA screening test? Answers to important questions on the controversial USPSTF recommendation.
    Watchful waiting or radical prostatectomy for men with early-stage prostate cancer? What the research suggests.
    A look at state-of-the-art surveillance strategies for men on active surveillance for prostate cancer.
    Locally advanced prostate cancer: Will you benefit from radiation and hormones?
    New drug offers hope for men with metastatic castrate-resistant prostate cancer.
    Behavioral therapy for incontinence: Why it might be worth a try.

    You’ll also get the latest news on benign prostatic enlargement (BPE), also known as benign prostatic hyperplasia (BPH) and prostatitis:
    What’s your Prostate Symptom Score? Here’s a quick quiz you can take right now to determine if you should seek treatment for your enlarged prostate.
    Your surgical choices: a close look at simple prostatectomy, transurethral prostatectomy and open prostatectomy.
    New warnings about 5-alpha-reductase inhibitors and aggressive prostate cancer.

  3. Promising technique.

    INCORE pointed out in detail the general problem of judging response and the still-missing quality of standardization:

    http://www.futuremedicine.com/doi/abs/10.2217/fon.12.78?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dwww.ncbi.nlm.nih.gov

    I have done research in response evaluation and prediction for about 15 years now and, being honest: neither the clinical nor the molecular biological data proved significant benefit in changing a strategy in patient diagnosis and/or treatment. I would state: this brings us back to the ground and not up in the sky. Additionally it means: we have to work harder on that, and the WHO has to take responsibility: clinicians use a response classification without knowing that it is related to just “ONE” experiment from the ’70s and that this experiment has never been rescrutinized (please read the Editorial I provided) – we have used a clinical response classification worldwide for more than 30 years (Miller et al. Cancer 1981), but it is useless!

  4. Dr. BB

    Thank you for your comment.
    Dr. Nir will reply to your comment.
    Regarding the Response Classification in use, it seems that the College of Oncology should champion a task force to revisit the Best Practice in use in this domain and issue a revised version, or launch a new effort toward a new classification system for Clinical Response to treatment in Cancer.

  5. I’m sorry that I was looking for this paper again earlier and didn’t find it. I answered my view on your article earlier.

    This is a method demonstration, but not a proof of concept by any means. It adds to the cacophony of approaches and, in a much larger study, would prove to be beneficial in treatment, but not a cure for serious prostate cancer, because it is unlikely that it can get beyond the margin, and also because there is overtreatment at the PSA cutoff of 4.0. There is now a proved prediction model that went to press some 4 months ago. I think that the pathologist has to see the tissue, and the standard in pathology now is that for any result that is cancer, two pathologists or a group sitting together should see it. It’s not an easy diagnosis.

    Björn LDM Brücher, Anton Bilchik, Aviram Nissan, Itzhak Avital, & Alexander Stojadinovic. Tumor response criteria: are they appropriate? Future Oncol. (2012) 8(8), 903–906. 10.2217/FON.12.78. ISSN 1479-6694.

    ..Tumor heterogeneity is a ubiquitous phenomenon. In particular, there are important differences among the various types of gastrointestinal (GI) cancers in terms of tumor biology, treatment response and prognosis.

    ..This forms the principal basis for targeted therapy directed by tumor-specific testing at either the gene or protein level. Despite rapid advances in our understanding of targeted therapy for GI cancers, the impact on cancer survival has been marginal.

    ..Can tumor response to therapy be predicted, thereby improving the selection of patients for cancer treatment?

    ..In 2000 the NCI, with the European Organization for Research and Treatment of Cancer, proposed a replacement of 2D measurement with a decrease in the largest tumor diameter by 30% in one dimension. Tumor response as so defined would translate into a 50% decrease for a spherical lesion.

    ..We must rethink how we may better determine treatment response in a reliable, reproducible way that is aimed at individualizing the therapy of cancer patients.

    ..we must change the tools we use to assess tumor response. The new modality should be based on empirical evidence that translates into relevant and meaningful clinical outcome data.

    ..This becomes a conundrum of sorts in an era of ‘minimally invasive treatment’.

    ..integrated multidisciplinary panel of international experts – not sure that that will do it

    Several years ago I heard Stamey present the totality of his work at Stanford, with great disappointment over hsPSA, which they pioneered. The outcomes were disappointing.

    I had published a review of all of our cases reviewed for 1 year with Marguerite Pinto.
    There’s a reason that the physicians line up outside of her office for her opinion.
    The review showed that a PSA over 24 ng/ml is predictive of bone metastasis. Any result over 10 was as likely to be prostatitis, BPH or cancer.

    In the next study, with Gustave Davis, I used a bivariate ordinal regression to predict lymph node metastasis from the PSA and the Gleason score. It was better than any univariate model, but there was no follow-up.

    I reviewed a paper for Clin Biochemistry (Elsevier) on a new method for PSA, very different than what we are familiar with. It was the most elegant paper I have seen in the treatment of the data. The model could predict post procedural time to recurrence to 8 years.

    • I hope we are in agreement on the fact that imaging guided interventions are needed for better treatment outcome. The point I’m trying to make in this post is that people are investing in developing imaging guided intervention and it is making progress.

      Overdiagnosis and overtreatment are another issue altogether. I think that many of my other posts deal with that.

  6. Tumor response criteria: are they appropriate?
    Future Oncology 2012; 8(8): 903–906. DOI: 10.2217/fon.12.78
    Björn LDM Brücher, Anton Bilchik, Aviram Nissan, Itzhak Avital & Alexander Stojadinovic
    Tumor heterogeneity is problematic because of the metabolic variety among types of gastrointestinal (GI) cancers, which confounds treatment response and prognosis.
    This is in response to … a group of investigators from Sunnybrook Health Sciences Centre, University of Toronto, Ontario, Canada, who evaluated the feasibility and safety of magnetic resonance (MR) imaging–controlled transurethral ultrasound therapy for prostate cancer in humans. Their study’s objective was to prove that using real-time MRI guidance of HIFU treatment is possible and that it guarantees that the location of ablated tissue indeed corresponds to the locations planned for treatment.
    1. There is a difference between esophageal and gastric neoplasms, both biologically and in expected response, even given variability within a class. The expected time to recurrence is usually longer in the latter case, but the confounders are age at time of discovery, biological time of detection, presence of lymph node and/or distant metastasis, and microscopic vascular invasion.
    2. There is a long latent period in abdominal cancers before discovery, unless a lesion is found incidentally in surgery for another reason.
    3. The undeniable reality is that it is not difficult to identify the main lesion, but it is difficult to identify adjacent epithelium that is at risk (transitional or pretransitional). Pathologists have a very good idea about precancerous cervical neoplasia.

    The heterogeneity rests within each tumor and between the primary and metastatic sites, which is expected to be improved by targeted therapy directed by tumor-specific testing. Despite rapid advances in our understanding of targeted therapy for GI cancers, the impact on cancer survival has been marginal.

    The heterogeneity is a problem that will take at least another decade to unravel because of the number of signaling pathways and the crosstalk that is specifically at issue.

    I must refer back to the work of Frank Dixon, Herschel Sidransky, and others, who did much to develop a concept of neoplasia occurring in several stages – minimal deviation and fast growing. These have differences in growth rates, anaplasia, and biochemistry. This resembles the multiple “hit” theory that is described in “systemic inflammatory” disease leading to a final stage, as in sepsis and septic shock.
    In 1931, Otto Warburg received the Nobel Prize for his work on respiration. He postulated that cancer cells become anaerobic compared with their normal counterparts, which use aerobic respiration to meet most energy needs. He attributed this to “mitochondrial dysfunction.” In fact, we now think that in response to oxidative stress the mitochondrion relies on the Lynen cycle to make more cells, and the major source of energy becomes glycolytic, which is at the expense of the lean body mass (muscle), which produces gluconeogenic precursors from muscle proteolysis (cancer cachexia). There is a loss of about 26 ATP ~Ps in the transition.
    The mitochondrial gene expression system includes the mitochondrial genome, mitochondrial ribosomes, and the transcription and translation machinery needed to regulate and conduct gene expression as well as mtDNA replication and repair. Machinery involved in energetics includes the enzymes of the Kreb’s citric acid or TCA (tricarboxylic acid) cycle, some of the enzymes involved in fatty acid catabolism (β-oxidation), and the proteins needed to help regulate these systems. The inner membrane is central to mitochondrial physiology and, as such, contains multiple protein systems of interest. These include the protein complexes involved in the electron transport component of oxidative phosphorylation and proteins involved in substrate and ion transport.
    Mitochondrial roles in, and effects on, cellular homeostasis extend far beyond the production of ATP, but the transformation of energy is central to most mitochondrial functions. Reducing equivalents are also used for anabolic reactions. The energy produced by mitochondria is most commonly thought of to come from the pyruvate that results from glycolysis, but it is important to keep in mind that the chemical energy contained in both fats and amino acids can also be converted into NADH and FADH2 through mitochondrial pathways. The major mechanism for harvesting energy from fats is β-oxidation; the major mechanism for harvesting energy from amino acids and pyruvate is the TCA cycle. Once the chemical energy has been transformed into NADH and FADH2 (also discovered by Warburg and the basis for a second Nobel nomination in 1934), these compounds are fed into the mitochondrial respiratory chain.
    The hydroxyl free radical is extremely reactive. It will react with most, if not all, compounds found in the living cell (including DNA, proteins, lipids and a host of small molecules). The hydroxyl free radical is so aggressive that it will react within 5 (or so) molecular diameters from its site of production. The damage caused by it, therefore, is very site specific. The reactions of the hydroxyl free radical can be classified as hydrogen abstraction, electron transfer, and addition.
    The formation of the hydroxyl free radical can be disastrous for living organisms. Unlike superoxide and hydrogen peroxide, which are mainly controlled enzymatically, the hydroxyl free radical is far too reactive to be restricted in such a way – it will even attack antioxidant enzymes. Instead, biological defenses have evolved that reduce the chance that the hydroxyl free radical will be produced and, as nothing is perfect, to repair damage.
    Currently, some endogenous markers are being proposed as useful measures of total “oxidative stress” e.g., 8-hydroxy-2’deoxyguanosine in urine. The ideal scavenger must be non-toxic, have limited or no biological activity, readily reach the site of hydroxyl free radical production (i.e., pass through barriers such as the blood-brain barrier), react rapidly with the free radical, be specific for this radical, and neither the scavenger nor its product(s) should undergo further metabolism.
    Nitric oxide has a single unpaired electron in its π*2p antibonding orbital and is therefore paramagnetic. This unpaired electron also weakens the overall bonding seen in diatomic nitrogen molecules so that the nitrogen and oxygen atoms are joined by only 2.5 bonds. The structure of nitric oxide is a resonance hybrid of two forms.
    In living organisms nitric oxide is produced enzymatically. Microbes can generate nitric oxide by the reduction of nitrite or oxidation of ammonia. In mammals nitric oxide is produced by stepwise oxidation of L-arginine catalyzed by nitric oxide synthase (NOS). Nitric oxide is formed from the guanidino nitrogen of the L-arginine in a reaction that consumes five electrons and requires flavin adenine dinucleotide (FAD), flavin mononucleotide (FMN) tetrahydrobiopterin (BH4), and iron protoporphyrin IX as cofactors. The primary product of NOS activity may be the nitroxyl anion that is then converted to nitric oxide by electron acceptors.
    The thiol-disulfide redox couple is very important to oxidative metabolism. GSH is a reducing cofactor for glutathione peroxidase, an antioxidant enzyme responsible for the destruction of hydrogen peroxide. Thiols and disulfides can readily undergo exchange reactions, forming mixed disulfides. Thiol-disulfide exchange is biologically very important. For example, GSH can react with protein cystine groups and influence the correct folding of proteins, and GSH may play a direct role in cellular signaling through thiol-disulfide exchange reactions with membrane-bound receptor proteins (e.g., the insulin receptor complex), transcription factors (e.g., nuclear factor κB), and regulatory proteins in cells. Conditions that alter the redox status of the cell can have important consequences on cellular function.
    So the complexity of life is not yet unraveled.

    Can tumor response to therapy be predicted, thereby improving the selection of patients for cancer treatment?
    The goal is not just complete response. Histopathological response seems to be related to post-treatment histopathological assessment, but it is not free from the challenge of accurately determining treatment response, as this method cannot delineate whether or not there are residual cancer cells. Functional imaging to assess metabolic response by 18-fluorodeoxyglucose PET also has its limits, as the results are impacted significantly by several variables:

    • tumor type
    • sizing
    • doubling time
    • anaplasia?
    • extent of tumor necrosis
    • type of antitumor therapy and the time when response was determined.
    The new modality should be based on individualized histopathology as well as tumor molecular, genetic and functional characteristics, and individual patients’ characteristics, a greater challenge in an era of ‘minimally invasive treatment’.
    This listing suggests that for every cancer the following data has to be collected (except doubling time). If there are five variables, the classification based on these alone would calculate to be very sizable based on Eugene Rypka’s feature extraction and classification. But looking forward, time to remission and disease free survival are additionally important. Treatment for cure is not the endpoint, but the best that can be done is to extend the time of survival to a realistic long term goal and retain a quality of life.

    Brücher BLDM, Piso P, Verwaal V et al. Peritoneal carcinomatosis: overview and basics. Cancer Invest. 30(3), 209–224 (2012).
    Brücher BLDM, Swisher S, Königsrainer A et al. Response to preoperative therapy in upper gastrointestinal cancers. Ann. Surg. Oncol. 16(4), 878–886 (2009).
    Miller AB, Hoogstraten B, Staquet M, Winkler A. Reporting results of cancer treatment. Cancer 47(1), 207–214 (1981).
    Therasse P, Arbuck SG, Eisenhauer EA et al. New guidelines to evaluate the response to treatment in solid tumors. European Organization for Research and Treatment of Cancer, National Cancer Institute of the United States, National Cancer Institute of Canada. J. Natl Cancer Inst. 92(3), 205–216 (2000).
    Brücher BLDM, Becker K, Lordick F et al. The clinical impact of histopathological response assessment by residual tumor cell quantification in esophageal squamous cell carcinomas. Cancer 106(10), 2119–2127 (2006).

    • Dr. Larry,

      Thank you for this comment.

      Please carry it as a stand-alone post; Dr. Ritu will refer to it and reference it in her FORTHCOMING post on Tumor Response, which will integrate multiple sources.

      Please execute my instruction

      Thank you

    • Thank you Larry for this educating comment. It explains very well why the Canadian investigators did not try to measure therapy response!

      What they have demonstrated is the technological feasibility of coupling a treatment device to an imaging device and using that to guide the treatment to the right place.

      The issue of “choice of treatment” to which you are referring is not within the scope of this publication.
      The point is: if one treatment modality can be guided, others can as well! This should encourage others to try and develop imaging-based treatment guidance systems.

  7. The crux of the matter in terms of capability is that the cancer tissue, adjacent tissue, and the fibrous matrix are all in transition to the cancerous state. It is taught to resect leaving “free margin”, which is better aesthetically, and has had success in breast surgery. The dilemma is that the patient may return, but how soon?

    • Correct. The philosophy behind lumpectomy is preserving quality of life. It was Prof. Veronesi (IEO) who introduced this method 30 years ago, noticing that in the majority of cases the patient will die from something else before presenting recurrence of breast cancer.

      It is well established that when the resection margins are declared by a pathologist (as good as he/she could be) as “free of cancer”, the probability of recurrence is much lower than otherwise.

  8. Dr. Larry,

    To assist Dr. Ritu, PLEASE carry ALL your comments above into a stand alone post and ADD to it your comment on my post on MIS

    Thank you

  9. Great post! Dr. Nir, can the ultrasound be used in conjunction with PET scanning as well to determine a spatial and functional map of the tumor? With a disease like serous ovarian cancer we typically see an intraperitoneal carcinomatosis, and it appears that clinicians want to use fluorogenic probes and fiberoptics to visualize the numerous nodules located within the cavity. Also, is the technique being used mainly for surgery or image-guided radiotherapy, or can you use this for detecting response to various chemotherapeutics, including immunotherapy?

    • Ultrasound can be, and actually is, used in conjunction with PET scanning in many cases. The choice of using ultrasound is always left to the practitioner! Being a non-invasive, low-cost procedure makes the use of ultrasound a non-issue. The downside is that because it is so easy to access and operate, nobody bothers to develop rigorous guidelines about using it, and the benefits remain the property of individuals.

      In regards to the possibility of screening for ovarian cancer and characterising pelvic masses using ultrasound I can refer you to scientific work in which I was involved:

      1. VAES (E.), MANCHANDA (R), AUTIER, NIR (R), NIR (D.), BLEIBERG (H.), ROBERT (A.), MENON (U.). Differential diagnosis of adnexal masses: Sequential use of the Risk of Malignancy Index and a novel computer aided diagnostic tool. Published in Ultrasound in Obstetrics & Gynecology. Issue 1 (January). Vol. 39. Page(s): 91-98.

      2. VAES (E.), MANCHANDA (R), NIR (R), NIR (D.), BLEIBERG (H.), AUTIER (P.), MENON (U.), ROBERT (A.). Mathematical models to discriminate between benign and malignant adnexal masses: potential diagnostic improvement using Ovarian HistoScanning. Published in International Journal of Gynecologic Cancer (IJGC). Issue 1. Vol. 21. Page(s): 35-43.

      3. LUCIDARME (O.), AKAKPO (J.-P.), GRANBERG (S.), SIDERI (M.), LEVAVI (H.), SCHNEIDER (A.), AUTIER (P.), NIR (D.), BLEIBERG (H.). A new computer aided diagnostic tool for non-invasive characterisation of malignant ovarian masses: Results of a multicentre validation study. Published in European Radiology. Issue 8. Vol. 20. Page(s): 1822-1830.

      Dror Nir, PhD
      Managing partner

      BE: +32 (0) 473 981896
      UK: +44 (0) 2032392424

      web: http://www.radbee.com/
      blogs: http://radbee.wordpress.com/ ; http://www.MedDevOnIce.com

  10. Totally true, and I am very thankful for these brilliant comments.

    Remember: 10 years ago every cancer researcher stated: “look at the tumor cells only – forget the stroma”. The era of laser-capture tumor-cell microdissection started. Now, everyone knows it is a system we are looking at, and viewing and analyzing tumor cells only is really not enough.

    So if we were honest, we would have to declare that all data produced 13–8 years ago dealing with laser capture microdissection would need re-scrutinization, because the influence of the stroma was “forgotten”. I’d better not try thinking about the wasted millions of dollars.

    If we keep on being honest: the surgeon looks at the “free margin” in a kind of reductionist model, while the pathologist is more the control instance. I personally see the pathologist as “the control instance” of surgical quality. Therefore, it is not the wish of the surgeon that is important, but the objective way of looking into problems or challenges. Can a pathologist always state whether an R0 resection has been performed?

    The use of the Resectability Classification:
    There have been many surrogate marker analyses – nothing new. But never has a really substantial, well-thought-through, structured analysis been done: mm by mm, and afterwards analyzed by ROC analysis. But against which gold standard? If you perform a ROC analysis statistically, you need a gold standard to compare to. Therefore, what is the real R0 resection? It has not been proven. It has just been stated, in this or that tumor entity, that this or that margin-free distance in mm is enough, and it has been declared “the real R0 classification”. In some organs it is very difficult, and we all (surgeons, pathologists, clinicians) know that we always get to the limit if we try interpreting the R-classification within the 3rd dimension. Often it is just declared and stated.

    Otherwise: if lymph nodes are negative it does not mean the lymph nodes are really negative, because up to 38% of, for example, upper GI cancers have histologically negative but immunohistochemically positive lymph nodes. This has also been shown by Stojadinovic et al. analyzing ultrastaging in colorectal cancer. So the 4th dimension of cancer – the lymph nodes / the lymphatic vessel invasion – is much more important than just a TNM classification, which unfortunately often does not reflect real tumor biology.

    As we see: cancer has multifactorial causes, and it is necessary to take up the challenge of performing highly sophisticated research in a multifactorial and multidisciplinary manner.

    Again, my deep and heartfelt thanks for this productive and excellent discussion!

    • Dr. BB,

      Thank you for your comment.

      Multidisciplinary perspectives have illuminated the discussion on the pages of this Journal.

      Eager to review Dr. Ritu’s forthcoming paper – the topic has a life of its own and is embodied in your statement:

      “the 4th dimension of cancer – the lymph nodes / the lymphatic vessel invasion are much more important than just a TNM classification, which unfortunately does often not reflect real tumor biology.”

    • Thank you, BB, for your comment. You have touched on the core limitation of healthcare professionals: how do we know that we know!

      Do we have a reference for each of the tests we perform?

      Do we have objective and standardised quality measures?

      Do we see what is out there, or are we imagining?

      The good news: every day we can “think” that we learned something new. We should be happy with that, even if it means that we learned that yesterday’s truth is not true anymore, and even if we are likely to be wrong again…:)

      But still, in the last decades, lots of progress was made….

  11. Dr. Nir,
    I thoroughly enjoyed reading your post as well as the comments that your post has attracted. There were different points of view, and each one was supported with relevant examples from the literature. Here are my two cents on the discussion:
    The paper that you have discussed had the objective of finding out whether real-time MRI guidance of treatment was even possible and, if yes, whether the treatment could be performed accurately in the location of the ROI. The data reveal they were quite successful in accomplishing their objective, and of course that gives hope to imaging-based targeted therapies.
    Whether the ROI is defined properly, and whether it accounts for real tumor cure, is a different question. The role of pathologists and the histological analysis they bring to the table cannot be ruled out, and the absence of a defined line between the tumor and the stromal region in its vicinity is well documented. However, that cannot rule out the value and scope of imaging-based detection and targeted therapy. After all, it is seminal in guiding minimally invasive surgery. As another arm of personalized medicine-based cure for cancer, molecular biologists at MD Anderson have suggested molecular and genetic profiling of the tumor to determine genetic aberrations, on the basis of which matched therapy could be recommended to patients. When a phase I trial was conducted, the results obtained were encouraging, and the survival rate was better in matched-therapy patients compared to unmatched patients. Therefore, every time there is more to consider when treating a cancer patient, and who knows: a combination of the views of oncologists, pathologists, molecular biologists, geneticists, and surgeons could devise improved protocols for diagnosis and treatment. It is always going to be complicated, and generalizations will never give an answer. Smart interpretations of therapies, imaging-based or otherwise, will always be required!

    Ritu

    • Dr. Nir,
      One of your earlier comments mentioned the non-invasiveness of ultrasound and, thus, its prevalence in use for diagnosis.

      This may be true for other or all areas, with the exception of mammography screening. In this field, an ultrasound is performed only if a suspected area of calcification or a lump has been detected in routine mammography or in an ad hoc, patient-initiated mammogram secondary to a patient complaint of pain or a patient report of a suspected lump.

      Ultrasound in this field represents escalation and review by two radiologists.

      It is in routine use for breast biopsy.

    • Thanks Ritu for this supporting comment. The worst enemy of finding solutions is doing nothing while using the excuse of looking for the “ultimate solution”. Personally, I believe in combining methods and improving clinical assessment based on information fusion. Being able to predict, and then track in a timely manner, the response to treatment is a major issue that affects survival and costs!

Judging the ‘Tumor response’-there is more food for thought

http://pharmaceuticalintelligence.com/2012/12/04/judging-the-tumor-response-there-is-more-food-for-thought/

13 Responses

  1. Dr. Sanexa,
    You have brought up an interesting and very clinically relevant point: 1) what is the best measurement of response, and 2) how perspectives among oncologists and other professionals differ on these issues given their expertise in their respective subspecialties (immunologist versus oncologist). The advent of functional measurements of tumors (PET etc.) seems extremely important in therapeutic use AND in the development of these types of compounds, since usually a response presents (in cases of solid tumors) as either a lack of growth of the tumor or tumor shrinkage. Did the authors include an in-depth discussion of the rapidity of onset of resistance with these types of compounds?
    Thanks for the posting.

  2. Dr. Williams,
    Thanks for your comment on the post. The editorial brings attention to the view that although PET and other imaging methods provide vital information on tumor growth and shrinkage in response to a therapy, there are more aspects to consider, including the genetic and molecular characteristics of the tumor.
    It was an editorial review, and the authors did not include any in-depth discussion of the rapidity of onset of resistance with these types of compounds, as the focus was primarily on interpreting tumor response.
    I am glad you found the contents of the write-up informative.
    Thanks again!
    Ritu

  3. Thank you for your wonderful comment and interpretation. Dr. Sanexa made a brilliant comment.

    May I allow myself to put my finger deeper into this wound? Cancer patients deserve it.

    It has already been pointed out by international experts from Munich, Tokyo, Hong Kong and Houston, dealing with upper GI cancer, that the current response criteria are not appropriate; moreover, the clinical response criteria in use seem to function as an alibi rather than helping to differentiate and/or discriminate tumor biology (Ann Surg Oncol 2009):

    http://www.ncbi.nlm.nih.gov/pubmed/19194759

    The response data in a phase II trial (one tumor entity, one histology, one treatment, one group) revealed that clinical response evaluation according to the WHO criteria is not appropriate for determining response:

    http://www.ncbi.nlm.nih.gov/pubmed/15498642

    Of course, there was a time when it seemed to be useful, and this also has to be respected.

    There is another challenge: using a ROC statistically and deriving thresholds. This was, is, and always will be “a clinical decision only” and not the decision of the statistician. The clinician tells the statistician what decision he wants to make – the responsibility is enormous. Getting back to the roots:
    After the main results of the Munich group had been published in 2001 (Ann Surg) and 2004 (J Clin Oncol):

    http://www.ncbi.nlm.nih.gov/pubmed/11224616

    http://www.ncbi.nlm.nih.gov/pubmed/14990646

    the first reaction in the community was: too difficult, can’t be, not re-evaluated, etc. However, all evaluated cut-offs / thresholds were later proven to be the real and best ones by the MD Anderson Cancer Center in Houston, Texas. Jaffer Ajani – a great and critical oncologist – pushed that together with Steve Swisher, and they found the same results. Then the upper GI stakeholders went an uncommon way in science: they re-scrutinized their findings. Meanwhile, the gold standard using histopathology as the basis criterion had been published in Cancer 2006.

    http://www.ncbi.nlm.nih.gov/pubmed/16607651

    Not every author who was on the author list in 2001 and 2004 wanted to be a part of this analysis and publication! Why? Everyone should judge that for himself.

    The data of this analysis were submitted to the New England Journal of Medicine. In the second review stage, the manuscript was rejected. The Ann Surg Oncol accepted the publication: the re-scrutinized data resulted in another interesting finding: in the future, maybe “one PET scan” might be appropriate for predicting the patient’s response.

    Where are we now?

    The level of evidence behind the response criteria is very low: Miller’s publication (Cancer 1981) was based on “one single” experiment from Moertel (Cancer 1976). During that time, there was no definition of “experience” as opposed to “oncologists”; these terms were not in use at that time.

    Additionally, they resulted in a (scientifically weak) change of the classification, published by Therasse (J Natl Cancer Inst 2000). Targeted therapy has not resulted in a change so far. In 2009, the international upper GI experts sent their Ann Surg Oncol 2009 publication to the WHO, but without any kind of reaction.

    Molecular biological predictive markers used within the last 10 years all seem to have potential.

    http://www.ncbi.nlm.nih.gov/pubmed/20012971

    http://www.ncbi.nlm.nih.gov/pubmed/18704459

    http://www.ncbi.nlm.nih.gov/pubmed/17940507

    http://www.ncbi.nlm.nih.gov/pubmed/17354029

    But experts are aware: the real barrier-breaking step has not been taken so far. Additionally, in trying to evaluate and/or predict response, it is very important that different tumor entities with different survival and tumor biology are not mixed together. Those data are, from my perspective, not helpful, but maybe that is my own bias (!) in my view.

    INCORE, the International Consortium of Research Excellence of the Theodor-Billroth-Academy, was invited to publish the Editorial in Future Oncology 2012. The consortium pointed out that, living within an era of ‘proof of principle’ and also trying to work out levels of evidence in medicine, this is “the duty and responsibility” of every clinician, but also of the societies and institutions, and also of the WHO.

    Complete remission is not the only goal, as experts dealing with ‘response research’ are aware. It is so frustrating for patients and clinicians: there is a proportion of patients with complete remission who develop early recurrence! This reflects that complete remission cannot function as the only criterion describing response!

    Again, my heartfelt thanks that Dr. Sanexa discussed this issue in detail.
    I hope I found the way to explain the development and evaluation of response criteria properly and in a differentiated way. From the perspective of INCORE:

      “an interdisciplinary initiative with all key stakeholders and disciplines represented is imperative to make predictive and prognostic individualized tumor response assessment a modern-day reality. The integrated multidisciplinary panel of international experts need to define how to leverage existing data, tissue and testing platforms in order to predict individual patient treatment response and prognosis.”

  4. Dr. Brucher,

    First of all, thanks for expressing your views on ‘tumor response’ in a comprehensive way. You are the first author of the editorial review and one of the prominent people who have taken part in the process of defining tumor response, and I am glad that you decided to write a comment on the write-up.
    The topic has been explained well, in an immaculate manner, and it further clarifies the need for the perfect markers that would be able to evaluate and predict tumor response. There are, as you mentioned, some molecular markers available, including VEGF and cyclins, that have been brought into focus in the context of squamous cell carcinoma.

    It would be great if you could be the guest author for our blog and we could publish your opinion (comment on this blog post) as a separate post. Please let us know if it is OK with you.

    Thanks again for your comment
    Ritu

  5. Thank you all for the compelling discussions above.

    Please review the two sources on the topic that I placed at the bottom of the post above, as posted on this Scientific Journal.

    All comments made to both entries are part of this discussion. I am referring to Dr. Nir’s post on size of tumor, to BB’s comment on Nir’s post, to Larry’s Pathologist view on Tumors, and to my post on remission and minimally invasive surgery (MIS).

    Great comments by Dr. Williams and BB, and a wonderful topic exposition by Dr. Ritu.

  6. Aviva,
    That’s a great idea. I will combine all the sources referred to by you, the post on tumor imaging by Dr. Nir, and the comments made on these posts, including Dr. Brucher’s comments, in a new post.
    Thanks
    Ritu

    • Great idea; ask Larry, he has written two very long, important comments on this topic, one on Nir’s post and another one (ask him where, if it is not on the MIS post). GREAT work, Ritu, integration is very important. Dr. Williams is one of our Gems.

    • Assessing tumour response is not an easy task! Because tumours don’t change, but happily our knowledge (about them) does really change, is ever-changing (thank god!). In the past we had the RECIST criteria, then the modified RECIST criteria, because of GIST and other tumors. At this very moment, these are clearly insufficient. We need more, new, validated criteria facing the reality of nowadays. A great, enormous post, Dr Ritu! Congratulations!


Read Full Post »

Special Considerations in Blood Lipoproteins, Viscosity, Assessment and Treatment


Author: Larry H. Bernstein, MD, FCAP

and

Curator: Aviva Lev-Ari, PhD, RN

This is Part II of a two-part series on blood flow and shear stress effects on hemostasis and vascular disease.

See Part I on viscosity, triglycerides and LDL, and thrombotic risk.

 

Hemostatic Factors in Thrombophilia

Objectives.—To review the state of the art relating to elevated hemostatic factor levels as a potential risk factor for thrombosis, as reflected by the medical literature and the consensus opinion of recognized experts in the field, and to make recommendations for the use of specific measurements of hemostatic factor levels in the assessment of thrombotic risk in individual patients.

Data Sources.—Review of the medical literature, primarily from the last 10 years.

Data Extraction and Synthesis.—After an initial assessment of the literature, key points were identified. Experts were assigned to do an in-depth review of the literature and to prepare a summary of their findings and recommendations.

A draft manuscript was prepared and circulated to every participant in the College of American Pathologists Conference XXXVI: Diagnostic Issues in Thrombophilia prior to the conference. Each of the key points and associated recommendations was then presented for discussion at the conference. Recommendations were accepted if a consensus of the 27 experts attending the conference was reached. The results of the discussion were used to revise the manuscript into its final form.

Consensus was reached on 8 recommendations concerning the use of hemostatic factor levels in the assessment of thrombotic risk in individual patients.

The underlying premise for measuring elevated coagulation factor levels is that if the average level of the factor is increased in the patient long-term, then the patient may be at increased risk of thrombosis long-term. Both risk of thrombosis and certain factors increase with age (eg, fibrinogen, factor VII, factor VIII, factor IX, and von Willebrand factor). Are these effects linked or do we need age specific ranges? Do acquired effects like other diseases or medications affect factor levels, and do the same risk thresholds apply in these instances? How do we assure that the level we are measuring is a true indication of the patient’s average baseline level and not a transient change? Fibrinogen, factor VIII, and von Willebrand factor are all strong acute-phase reactants.

Risk of bleeding associated with coagulation factor levels increases with roughly log decreases in factor levels. Compared to normal (100%), 60% to 90% decreases in a coagulation factor may be associated with excess bleeding with major trauma, 95% to 98% decreases with minor trauma, and >99% decreases with spontaneous hemorrhage. In contrast, the difference between low risk and high risk for thrombosis may be separated by as little as 15% above normal.

It may be possible to define relative cutoffs for specific factors, for example, 50% above the mean level determined locally in healthy subjects for a certain factor. Before coagulation factor levels can be routinely used to assess individual risk, work must be done to better standardize and calibrate the assays used.
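As an illustration only, a relative cutoff of the kind described above (for example, 50% above the mean level determined locally in healthy subjects) could be applied as in the following minimal Python sketch; the function names, the factor VIII example values, and the decision rule are hypothetical and are not part of the CAP recommendations.

from statistics import mean

def relative_cutoff(healthy_levels, fraction_above_mean=0.50):
    # cutoff set a fixed fraction (here 50%) above the local healthy-subject mean
    return mean(healthy_levels) * (1.0 + fraction_above_mean)

def exceeds_cutoff(measured_level, cutoff):
    # True if the measured factor level exceeds the locally derived cutoff
    return measured_level > cutoff

# hypothetical factor VIII activity values (% of normal) from a local healthy cohort
healthy_fviii = [92, 105, 98, 110, 101, 95, 99, 104]
cutoff = relative_cutoff(healthy_fviii)      # about 150% of the local mean
print(cutoff, exceeds_cutoff(165, cutoff))   # a level of 165% would be flagged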

Detailed discussion of the rationale for each of these recommendations is presented in the article. This is an evolving area of research. While routine use of factor level measurements is not recommended, improvements in assay methodology and further clinical studies may change these recommendations in the future.

Chandler WL, Rodgers GM, Sprouse JT, Thompson AR.  Elevated Hemostatic Factor Levels as Potential Risk Factors for Thrombosis.  Arch Pathol Lab Med. 2002;126:1405–1414

Model System for Hemostatic Behavior

This study explores the behavior of a model system in response to perturbations in

  • tissue factor
  • thrombomodulin surface densities
  • tissue factor site dimensions
  • wall shear rate.

The classic time course is characterized by

  • initiation and
  • amplification of thrombin generation
  • the existence of threshold-like responses

This author defines a new parameter, the “effective prothrombotic zone”, and its dependence on model parameters. It was found that prothrombotic effects may extend significantly beyond the dimensions of the spatially discrete site of tissue factor expression in both axial and radial directions. Furthermore, he takes advantage of the finite element modeling approach to explore the behavior of systems containing multiple spatially distinct sites of TF expression in a physiologic model. The computational model is applied to assess individualized thrombotic risk from clinical data on plasma coagulation factor levels. He proposes a systems-based parameter, associated with deep venous thrombosis, obtained using computational methods in combination with biochemical panels to predict hypercoagulability in high-risk populations.

 

The Vascular Surface

The ‘resting’ endothelium synthesizes and presents a number of antithrombogenic molecules including

  • heparan sulfate proteoglycans
  • ecto-adenosine diphosphatase
  • prostacyclin
  • nitric oxide
  • thrombomodulin.

In response to various stimuli

  • inflammatory mediators
  • hypoxia
  • oxidative stress
  • fluid shear stress

the cell surface becomes ‘activated’ and serves to organize membrane-associated enzyme complexes of coagulation.

Fluid Phase Models of Coagulation

Leipold et al. developed a model of the tissue factor pathway as a design aid for the development of exogenous serine protease inhibitors. In contrast, Guo et al. focused on the reactions of the contact, or intrinsic pathway, to study parameters relevant to material-induced thrombosis, including procoagulant surface area.

Alternative approaches to modeling the coagulation cascade have been pursued including the use of stochastic activity networks to represent the intrinsic, extrinsic, and common pathways through fibrin formation and a kinetic Monte Carlo simulation of TF-initiated thrombin generation. Generally, fluid phase models of the kinetics of coagulation are both computationally and experimentally less complex. As such, the computational models are able to incorporate a large number of species and their reactions, and empirical data is often available for regression analysis and model validation. The range of complexity and motivations for these models is wide, and the models have been used to describe various phenomena including the ‘all-or-none’ threshold behavior of thrombin generation. However, the role of blood flow in coagulation is well recognized in promoting the delivery of substrates to the vessel wall and in regulating the thrombin response by removing activated clotting factors.
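To make the “all-or-none” threshold behavior concrete, the following is a minimal, purely illustrative caricature of a well-mixed (fluid-phase) kinetic scheme: prothrombin is activated by a decaying trigger plus a positive-feedback term, while an inhibitor removes thrombin at a constant rate. It is not any of the published models cited above, and all rate constants are invented for illustration.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, s0, tau=2.0, k_fb=2.0, K=0.1, k_inh=1.0):
    P, T = y                                          # normalized prothrombin and thrombin
    stimulus = s0 * np.exp(-t / tau)                  # decaying trigger (a crude stand-in for TF exposure)
    v = (stimulus + k_fb * T**2 / (K**2 + T**2)) * P  # activation: trigger plus positive feedback
    return [-v, v - k_inh * T]                        # prothrombin consumed; thrombin removed by inhibition

y0 = [1.0, 0.0]
for s0 in (0.001, 0.02):                              # sub- versus supra-threshold stimulus
    sol = solve_ivp(rhs, (0.0, 40.0), y0, args=(s0,), max_step=0.05)
    print("stimulus", s0, "-> peak thrombin about", round(sol.y[1].max(), 4))

With the weaker stimulus the thrombin transient stays negligible, while the stronger one ignites a full burst, illustrating the threshold-like response described above.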

Flow Based Models of Coagulation

In 1990, Basmadjian presented a mathematical analysis of the effect of flow and mass transport on a single reactive event at the vessel wall and consequently laid the foundation for the first flow-based models of coagulation. It was proposed that for vessels greater than 0.1 mm in diameter, reactive events at the vessel wall could be adequately described by the assumption of a concentration boundary layer very close to the reactive surface, within which the majority of concentration changes took place. The height of the boundary layer and the mass transfer coefficient that described transport to and from the vessel wall were shown to stabilize on a time scale much shorter than the time scale over which concentration changes were empirically observed. Thus, the vascular space could be divided into two compartments, a boundary volume and a bulk volume, and furthermore, changes within the bulk phase could be considered negligible, thereby reducing the previously intractable problem to a pseudo-one compartment model described by a system of ordinary differential equations.
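In schematic form (notation ours, not Basmadjian's), such a pseudo-one-compartment description replaces the full transport problem with an ordinary differential equation for the concentration C_b in the thin boundary volume V_b adjacent to the reactive wall:

\frac{dC_b}{dt} = \frac{k_m A}{V_b}\left(C_\infty - C_b\right) + R(C_b)

where C_\infty is the essentially constant bulk concentration, k_m the mass transfer coefficient, A the reactive surface area, and R(C_b) the net rate of production or consumption by the wall reactions. This is only a sketch of the general form; the specific species and rate laws of the published models differ.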

Basmadjian et al. subsequently published a limited model of six reactions, including two positive feedback reactions and two inhibitory reactions, of the common pathway of coagulation triggered by exogenous factor IXa under flow. As a consequence of the definition of the mass transfer coefficient, the kinetic parameters were dependent on the boundary layer height. Furthermore, the model did not explicitly account for intrinsic tenase or prothrombinase formation, but rather derived a rate expression for reaction in the presence of a cofactor. The major finding of the study was the predicted effect of increased mass transport to enhance thrombin generation by decreasing the induction time up to a critical mass transfer rate, beyond which transport significantly decreased peak thrombin levels thereby reducing overall thrombin production.

Kuharsky and Fogelson formulated a more comprehensive, pseudo-one compartment model of tissue factor-initiated coagulation under flow, which included the description of 59 distinct fluid- and surface-bound species. In contrast to the Baldwin-Basmadjian model, which defined a mass transfer coefficient as a rate of transport to the vessel surface, the Kuharsky-Fogelson model defined the mass transfer coefficient as a rate of transport into the boundary volume, thus eliminating the dependence of kinetic parameters on transport parameters. The computational study focused on the threshold response of thrombin generation to the availability of membrane binding sites. Additionally, the model suggested that adhered platelets may play a role in blocking the activity of the TF/ VIIa complex. Fogelson and Tania later expanded the model to include the protein C and TFPI pathways.

Modeling of surface-associated reactions under flow uses the finite element method (FEM), which is a technique for solving partial differential equations by dividing the vascular space into a finite number of discrete elements. Hall et al. used FEM to simulate factor X activation over a surface presenting TF in a parallel-plate flow reactor. The steady-state model was defined by the convection-diffusion equation and Michaelis-Menten reaction kinetics at the surface. The computational results were compared to experimental data for the generation of factor Xa by cultured rat vascular smooth muscle cells expressing TF.
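Written out schematically (symbols ours, not taken from Hall et al.), the type of boundary-value problem described here couples convective-diffusive transport in the channel to an enzymatic flux at the TF-bearing wall:

\mathbf{u}\cdot\nabla c = D\,\nabla^{2} c \quad \text{in the flow channel}, \qquad -D\,\frac{\partial c}{\partial n}\bigg|_{\text{TF surface}} = \frac{V_{\max}\, c}{K_m + c}

where c is the factor X concentration, \mathbf{u} the fully developed velocity field, D the diffusivity, and V_{\max}, K_m the Michaelis-Menten parameters of the surface TF/VIIa reaction; the flux at the wall corresponds to factor Xa generation.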

Based on discrepancies between numerical and experimental studies, the catalytic activity of the TF/ VIIa complex may be shear-dependent. Towards the overall objective of developing an antithrombogenic biomaterial, Tummala and Hall studied the kinetics of factor Xa inhibition by surface-immobilized recombinant TFPI under unsteady flow conditions. Similarly, Byun et al. investigated the association and dissociation kinetics of ATIII inactivation of thrombin accelerated by surface-immobilized heparin under steady flow conditions. To date, finite element models that detail surface-bound reactions under flow have been restricted to no more than a single reaction catalyzed by a single surface-immobilized species.

 

Models of Coagulation Incorporating Spatial Parameter

Major findings include the roles of these specific coagulation pathways in the

  • initiation
  • amplification
  • termination phases of coagulation.

Coagulation near the activating surface was determined by TF/VIIa catalyzed factor Xa production, which was rapidly inhibited close to the wall. In contrast, factor IXa diffused farther from the surface, and thus factor Xa generation and clot formation away from the reactive wall was dependent on intrinsic tenase (IXa/ VIIIa) activity. Additionally, the concentration wave of thrombin propagated away from the activation zone at a rate which was dependent on the efficiency of inhibitory mechanisms.

Experimental and ‘virtual’ addition of plasma-phase thrombomodulin resulted in dose-dependent termination of thrombin generation and provided evidence of spatial localization of clot formation by TM with final clot lengths of 0.2-2 mm under diffusive conditions.

These studies provide an interesting analysis of the roles of specific factors in relation to space due to diffusive effects, but neglect the essential role of blood flow in the transport analysis. Additionally, the spatial dynamics of clot localization by thrombomodulin would likely be affected by restricting the inhibitor to its physiologic site on the vessel surface.

Finite Element Modeling

Finite element method (FEM) is a numerical technique for solving partial differential equations. Originally proposed in the 1940s to approach structural analysis problems in civil engineering, FEM now finds application in a wide variety of disciplines. The computational method relies on mesh discretization of a continuous domain which subdivides the space into a finite number of ‘elements’. The physics of each element are defined by its own set of physical properties and boundary conditions, and the simultaneous solution of the equations describing the individual elements approximate the behavior of the overall domain.
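As a toy illustration of what mesh discretization and assembly of element equations mean in practice, the short sketch below solves a one-dimensional Poisson problem (-u'' = 1 on [0, 1], with u(0) = u(1) = 0) using piecewise-linear elements. It is deliberately far simpler than the convection-diffusion-reaction models discussed above; all names and values are illustrative.

import numpy as np

n_el = 20                                 # number of elements
nodes = np.linspace(0.0, 1.0, n_el + 1)   # mesh nodes
h = np.diff(nodes)                        # element lengths

K = np.zeros((n_el + 1, n_el + 1))        # global stiffness matrix
F = np.zeros(n_el + 1)                    # global load vector

for e in range(n_el):                     # assemble element contributions
    ke = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = (h[e] / 2.0) * np.array([1.0, 1.0])                  # element load for f = 1
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    F[idx] += fe

u = np.zeros(n_el + 1)                    # apply u(0) = u(1) = 0 and solve the interior system
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

print(u.max(), "vs exact maximum 1/8 = 0.125")   # exact solution is u(x) = x(1 - x)/2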

Sumanas W. Jordan, PhD Thesis. A Mathematical Model of Tissue Factor-Induced Blood Coagulation: Discrete Sites of Initiation and Regulation under Conditions of Flow.

Doctor of Philosophy in Biomedical Engineering. Emory University, Georgia Institute of Technology. May 2010.  Under supervision of: Dr. Elliot L. Chaikof, Departments of Surgery and Biomedical Engineering.

[Image: Blood Coagulation (Thrombin) and Protein C Pathways (Photo credit: Wikipedia)]

[Image: Coagulation cascade (Photo credit: Wikipedia)]

 

Cardiovascular Physiology: Modeling, Estimation and Signal Processing

With cardiovascular diseases being among the main causes of death in the world, quantitative modeling, assessment and monitoring of cardiovascular dynamics, and functioning play a critical role in bringing important breakthroughs to cardiovascular care. Quantification of cardiovascular physiology and its control mechanisms from physiological recordings, by use of mathematical models and algorithms, has been proved to be of important value in understanding the causes of cardiovascular diseases and assisting the diagnostic and prognostic process. This E-Book is derived from the Frontiers in Computational Physiology and Medicine Research Topic entitled “Engineering Approaches to Study Cardiovascular Physiology: Modeling, Estimation and Signal Processing.”

There are two review articles. The first review article by Chen et al. (2012) presents a unified point process probabilistic framework to assess heart beat dynamics and autonomic cardiovascular control. Using clinical recordings of healthy subjects during Propofol anesthesia, the authors demonstrate the effectiveness of their approach by applying the proposed paradigm to estimate

  • instantaneous heart rate (HR),
  • heart rate variability (HRV),
  • respiratory sinus arrhythmia (RSA)
  • baroreflex sensitivity (BRS).
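For orientation, the fragment below computes simple time-domain surrogates of the first two quantities listed above (instantaneous HR and two standard HRV indices) from a short synthetic series of RR intervals. It is only a hedged illustration of the kind of quantities being estimated; it is not the point-process probabilistic framework of Chen et al., and the RR values are invented.

import numpy as np

rr = np.array([0.82, 0.85, 0.81, 0.86, 0.84, 0.88, 0.83, 0.85])   # RR intervals in seconds (synthetic)

hr = 60.0 / rr                                       # instantaneous heart rate, beats per minute
sdnn = np.std(rr, ddof=1) * 1000.0                   # SDNN: overall variability, in ms
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0  # RMSSD: beat-to-beat variability, in ms

print("mean HR", round(hr.mean(), 1), "bpm; SDNN", round(sdnn, 1), "ms; RMSSD", round(rmssd, 1), "ms")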

The second review article, contributed by Zhang et al. (2011), provides a comprehensive overview of tube-load model parameter estimation for monitoring arterial hemodynamics.

The remaining eight original research articles can be mainly classified into two categories. The two articles from the first category emphasize modeling and estimation methods. In particular, the paper “Modeling the autonomic and metabolic effects of obstructive sleep apnea: a simulation study” by Cheng and Khoo (2012), combines computational modeling and simulations to study the autonomic and metabolic effects of obstructive sleep apnea (OSA).

The second paper, “Estimation of cardiac output and peripheral resistance using square-wave-approximated aortic flow signal” by Fazeli and Hahn (2012), presents a model-based approach to estimate cardiac output (CO) and total peripheral resistance (TPR), and validates the proposed approach via in vivo experimental data from animal subjects.

The six articles in the second category focus on the application of signal processing techniques and statistical tools to analyze cardiovascular or physiological signals in practical applications. The paper “Modulation of the sympatho-vagal balance during sleep: frequency domain study of heart rate variability and respiration” by Cabiddu et al. (2012) uses spectral and cross-spectral analysis of heartbeat and respiration signals to assess autonomic cardiac regulation and cardiopulmonary coupling variations during different sleep stages in healthy subjects.

The paper “Increased non-gaussianity of heart rate variability predicts cardiac mortality after an acute myocardial infarction” by Hayano et al. (2011) uses a new non-Gaussian index of HRV to assess cardiac mortality in 670 post-acute myocardial infarction (AMI) patients. The paper “Non-gaussianity of low frequency heart rate variability and sympathetic activation: lack of increases in multiple system atrophy and Parkinson disease” by Kiyono et al. (2012) applies a non-Gaussian index to assess HRV in patients with multiple system atrophy (MSA) and Parkinson disease and reports the relation between the non-Gaussian intermittency of the heartbeat and increased sympathetic activity. The paper “Information domain approach to the investigation of cardio-vascular, cardio-pulmonary, and vasculo-pulmonary causal couplings” by Faes et al. (2011) proposes an information domain approach to evaluate nonlinear causality among heartbeat, arterial pressure, and respiration measures during tilt testing and paced breathing protocols. The paper “Integrated central-autonomic multifractal complexity in the heart rate variability of healthy humans” by Lin and Sharif (2012) uses a relative multifractal complexity measure to assess HRV in healthy humans and discusses the related implications in central autonomic interactions. Lastly, the paper “Time scales of autonomic information flow in near-term fetal sheep” by Frasch et al. (2012) analyzes the autonomic information flow (AIF) with Kullback–Leibler entropy in fetal sheep as a function of vagal and sympathetic modulation of fetal HRV during atropine and propranolol blockade.

In summary, this Research Topic attempts to give a general panorama of possible state-of-the-art modeling methodologies, practical tools in signal processing and estimation, as well as several important clinical applications, which can altogether help deepen our understanding of heart physiology and pathology and further lead to new scientific findings. We hope that the readership of Frontiers will appreciate this collected volume and enjoy reading the presented contributions. Finally, we are grateful to all contributing authors, reviewers, and editorial staff, who have all put tremendous effort into making this E-Book a reality.

Cabiddu, R., Cerutti, S., Viardot, G., Werner, S., and Bianchi, A. M. (2012). Modulation of the sympatho-vagal balance during sleep: frequency domain study of heart rate variability and respiration. Front. Physio. 3:45. doi: 10.3389/fphys.2012.00045

Chen, Z., Purdon, P. L., Brown, E. N., and Barbieri, R. (2012). A unified point process probabilistic framework to assess heartbeat dynamics and autonomic cardiovascular control. Front. Physio. 3:4. doi: 10.3389/fphys.2012.00004

Cheng, L., and Khoo, M. C. K. (2012). Modeling the autonomic and metabolic effects of obstructive sleep apnea: a simulation study. Front. Physio. 2:111. doi: 10.3389/fphys.2011.00111

Faes, L., Nollo, G., and Porta, A. (2011). Information domain approach to the investigation of cardio-vascular, cardio-pulmonary, and vasculo-pulmonary causal couplings. Front. Physio. 2:80. doi: 10.3389/fphys.2011.00080

Fazeli, N., and Hahn, J.-O. (2012). Estimation of cardiac output and peripheral resistance using square-wave-approximated aortic flow signal. Front. Physio. 3:298. doi: 10.3389/fphys.2012.00298

Frasch, M. G., Frank, B., Last, M., and Müller, T. (2012). Time scales of autonomic information flow in near-term fetal sheep. Front. Physio. 3:378. doi: 10.3389/fphys.2012.00378

Hayano, J., Kiyono, K., Struzik, Z. R., Yamamoto, Y., Watanabe, E., Stein, P. K., et al. (2011). Increased non-gaussianity of heart rate variability predicts cardiac mortality after an acute myocardial infarction. Front. Physio. 2:65. doi: 10.3389/fphys.2011.00065

Kiyono, K., Hayano, J., Kwak, S., Watanabe, E., and Yamamoto, Y. (2012). Non-Gaussianity of low frequency heart rate variability and sympathetic activation: lack of increases in multiple system atrophy and Parkinson disease. Front. Physio. 3:34. doi: 10.3389/fphys.2012.00034

Lin, D. C., and Sharif, A. (2012). Integrated central-autonomic multifractal complexity in the heart rate variability of healthy humans. Front. Physio. 2:123. doi: 10.3389/fphys.2011.00123

Zhang, G., Hahn, J., and Mukkamala, R. (2011). Tube-load model parameter estimation for monitoring arterial hemodynamics. Front. Physio. 2:72. doi: 10.3389/fphys.2011.00072

Citation: Chen Z and Barbieri R (2012) Editorial: engineering approaches to study cardiovascular physiology: modeling, estimation, and signal processing. Front. Physio. 3:425. doi: 10.3389/fphys.2012.00425

Fluctuations of cerebral blood flow and metabolic demand following hypoxia in the neonatal brain

Most of the research investigating the pathogenesis of perinatal brain injury following hypoxia-ischemia has focused on excitotoxicity, oxidative stress and an inflammatory response, with the response of the developing cerebrovasculature receiving less attention. This is surprising, as devastating and permanent injuries such as germinal matrix-intraventricular haemorrhage (GM-IVH) and perinatal stroke are of vascular origin, and periventricular leukomalacia (PVL) may also arise from poor perfusion of the white matter. This highlights that cerebrovascular injury following hypoxia could be primarily responsible for the injury seen in the brains of many infants diagnosed with hypoxic-ischemic encephalopathy (HIE).

The highly dynamic nature of the cerebral blood vessels in the fetus, and the fluctuations of cerebral blood flow and metabolic demand that occur following hypoxia suggest that the response of blood vessels could explain both regional protection and vulnerability in the developing brain.

This review discusses the current concepts on the pathogenesis of perinatal brain injury, the development of the fetal cerebrovasculature and the blood brain barrier (BBB), and key mediators involved with the response of cerebral blood vessels to hypoxia.

Baburamani AA, Ek CJ, Walker DW and Castillo-Melendez M. Vulnerability of the developing brain to hypoxic-ischemic damage: contribution of the cerebral vasculature to injury and repair? Front. Physio. 2012;  3:424. doi: 10.3389/fphys.2012.00424

Remodeling of coronary and cerebral arteries and arterioles

Effects of hypertension on arteries and arterioles often manifest first as a thickened wall, with associated changes in passive material properties (e.g., stiffness) or function (e.g., cellular phenotype, synthesis and removal rates, and vasomotor responsiveness). Less is known, however, regarding the relative evolution of such changes in vessels from different vascular beds.

We used an aortic coarctation model of hypertension in the mini-pig to elucidate spatiotemporal changes in geometry and wall composition (including layer-specific thicknesses as well as presence of collagen, elastin, smooth muscle, endothelial, macrophage, and hematopoietic cells) in three different arterial beds, specifically aortic, cerebral, and coronary, and vasodilator function in two different arteriolar beds, the cerebral and coronary.

Marked geometric and structural changes occurred in the thoracic aorta and left anterior descending coronary artery within 2 weeks of the establishment of hypertension and continued to increase over the 8-week study period. In contrast, no significant changes were observed in the middle cerebral arteries from the same animals. Consistent with these differential findings at the arterial level, we also found a diminished nitric oxide-mediated dilation to adenosine at 8 weeks of hypertension in coronary arterioles, but not cerebral arterioles.

These findings, coupled with the observation that temporal changes in wall constituents and the presence of macrophages differed significantly between the thoracic aorta and coronary arteries, confirm a strong differential progressive remodeling within different vascular beds.

These results suggest a spatiotemporal progression of vascular remodeling, beginning first in large elastic arteries and delayed in distal vessels.

Hayenga HN, Hu J-J, Meyer CA, Wilson E, Hein TW, Kuo L and Humphrey JD  Differential progressive remodeling of coronary and cerebral arteries and arterioles in an aortic coarctation model of hypertension. Front. Physio. 2012; 3:420. doi: 10.3389/fphys.2012.00420

C-reactive protein and oxidant-mediated release of pro-thrombotic factors

Inflammation and the generation of reactive oxygen species (ROS) have been implicated in the initiation and progression of atherosclerosis. Although C-reactive protein (CRP) has traditionally been considered to be a biomarker of inflammation, recent in vitro and in vivo studies have provided evidence that CRP, itself, exerts pro-thrombotic effects on vascular cells and may thus play a critical role in the development of atherothrombosis. Of particular importance is that CRP interacts with Fcγ receptors on cells of the vascular wall giving rise to the release of pro-thrombotic factors. The present review focuses on distinct sources of CRP-mediated ROS generation as well as the pivotal role of ROS in CRP-induced tissue factor expression. These studies provide considerable insight into the role of the oxidative mechanisms in CRP-mediated stimulation of pro-thrombotic factors and activation of platelets. Collectively, the available data provide strong support for ROS playing an important intermediary role in the relationship between CRP and atherothrombosis.

Zhang Z, Yang Y, Hill MA and Wu J.  Does C-reactive protein contribute to atherothrombosis via oxidant-mediated release of pro-thrombotic factors and activation of platelets? Front. Physio.  2012; 3:433. doi: 10.3389/fphys.2012.00433

CRP association with Peripheral Vascular Disease

To determine whether the increase in plasma levels of C-Reactive Protein (CRP), a non-specific reactant in the acute phase of systemic inflammation, is associated with clinical severity of peripheral arterial disease (PAD).

This is a cross-sectional study at a referral hospital center of institutional practice in Madrid, Spain. These investigators took a stratified random sampling of 3370 patients with symptomatic PAD from the outpatient vascular laboratory database in 2007 in the order of their clinical severity:

  • the first group comprised patients with mild chronological clinical severity who did not require surgical revascularization,
  • the second group consisted of patients with moderate clinical severity who had undergone only one surgical revascularization procedure, and
  • the third group consisted of patients who were severely affected and had undergone two or more surgical revascularization procedures of the lower extremities in different areas or needed late re-interventions.

The Neyman allocation was used to calculate the sample size with a fixed relative error of 0.1.
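For context, the standard Neyman allocation rule for a stratified sample assigns to stratum h a share of the total sample size n in proportion to the stratum's size and variability (notation ours; the variance estimates actually used by the authors are not reported in this excerpt):

n_h = n \cdot \frac{N_h S_h}{\sum_i N_i S_i}

where N_h and S_h are the population size and standard deviation of stratum h.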

A homogeneity analysis between groups and a unifactorial analysis of comparison of medians for CRP were done.

The groups were homogeneous for

  • age
  • smoking status
  • Arterial Hypertension
  • diabetes mellitus
  • dyslipidemia
  • homocysteinemia, and
  • specific markers of inflammation.

In the unifactorial analysis of multiple comparisons of medians according to Scheffé, it was observed that

the median values of CRP plasma levels were increased in association with higher clinical severity of PAD

  • 3.81 mg/L [2.14–5.48] vs.
  • 8.33 [4.38–9.19] vs.
  • 12.83 [9.5–14.16]; p < 0.05

CRP being the only one of the factors tested to show this association.

Plasma levels of CRP are associated with not only the presence of atherosclerosis but also with its chronological clinical severity.

De Haro J, Acin F, Medina FJ, Lopez-Quintana A, and  March JR.  Relationship Between the Plasma Concentration of C-Reactive Protein and Severity of Peripheral Arterial Disease.
Clinical Medicine: Cardiology 2009;3: 1–7

Hemostasis induced by hyperhomocysteinemia

Elevated concentration of homocysteine (Hcy) in human tissues, defined as hyperhomocysteinemia, has been correlated with some diseases, such as

  • cardiovascular
  • neurodegenerative
  • kidney disorders

L-Homocysteine (Hcy) is an endogenous amino acid, containing a free thiol group, which in healthy cells is involved in methionine and cysteine synthesis/resynthesis. Indirectly, Hcy participates in methyl, folate, and cellular thiol metabolism. Approximately 80% of total plasma Hcy is protein-bound, and only a small amount exists as a free reduced Hcy (about 0.1 μM). The majority of the unbound fraction of Hcy is oxidized, and forms dimers (homocystine) or mixed disulphides consisting of cysteine and Hcy.

Two main pathways of Hcy biotoxicity are discussed:

  1. Hcy-dependent oxidative stress – generated during oxidation of the free thiol group of Hcy. Hcy binds via a disulphide bridge with

—     plasma proteins

—     or with other low-molecular plasma  thiols

—     or with a second Hcy molecule.

Accumulation of oxidized biomolecules alters the biological functions of many cellular pathways.

  2. Hcy-induced protein structure modifications, named homocysteinylation.

Two main types of homocysteinylation exist: S-homocysteinylation and N-homocysteinylation; both considered as posttranslational protein modifications.

a)      S-homocysteinylation occurs when Hcy reacts, by its free thiol group, with another free thiol derived from a cysteine residue in a protein molecule.

These changes can alter the thiol-dependent redox status of proteins.

b)      N-homocysteinylation takes place after acylation of the free ε-amino lysine groups of proteins by the most reactive form of Hcy — its cyclic thioester (Hcy thiolactone — HTL), representing up to 0.29% of total plasma Hcy.

Homocysteine occurs in human blood plasma in several forms, including the most reactive one, the homocysteine thiolactone (HTL) — a cyclic thioester, which represents up to 0.29% of total plasma Hcy. In human blood, N-homocysteinylated (N-Hcy-protein) and S-homocysteinylated proteins (S-Hcy-protein) such as NHcy-hemoglobin, N-(Hcy-S-S-Cys)-albumin, and S-Hcyalbumin are known. Other pathways of Hcy biotoxicity might be apoptosis and excitotoxicity mediated through glutamate receptors. The relationship between homocysteine and risk appears to hold for total plasma concentrations of homocysteine between 10 and 30 μM.

Different forms of homocysteine present in human blood.

*Total level of homocysteine — the term “total homocysteine” describes the pool of homocysteine released by reduction of all disulphide bonds in the sample (Perla-Kajan et al., 2007; Zimny, 2008; Manolescu et al., 2010, modified).

The form of Hcy and its concentration in human blood:

  • Homocysteine thiolactone (HTL): 0–35 nM
  • Protein N-linked homocysteine (N-Hcy-hemoglobin, N-(Hcy-S-S-Cys)-albumin): about 15.5 μM (12.7 μM and 2.8 μM, respectively)
  • Protein S-linked homocysteine (S-Hcy-albumin): about 7.3 μM*
  • Homocystine (Hcy-S-S-Hcy) and mixed disulphides formed with cysteine (Hcy-S-S-Cys): about 2 μM*
  • Free reduced Hcy: about 0.1 μM*

As early as in the 1960s it was noted that the risk of atherosclerosis is markedly increased in patients with homocystinuria, an inherited disease resulting from homozygous CBS deficiency and characterized by episodes of

—     thromboembolism

—     mental retardation

—     lens dislocation

—     hepatic steatosis

—     osteoporosis.

—     very high concentrations of plasma homocysteine and methionine.

Patients with homocystinuria have very severe hyperhomocysteinemia, with plasma homocysteine concentration reaching even 400 μM, and represent a very small proportion of the population (approximately 1 in 200,000 individuals). Heterozygous lack of CBS, CBS mutations and polymorphism of the methylenetetrahydrofolate reductase gene are considered to be the most probable causes of hyperhomocysteinemia.

The effects of hyperhomocysteinemia include the complex process of hemostasis, which regulates the properties of blood flow. Interactions of homocysteine and its different derivatives, including homocysteine thiolactone, with the major components of hemostasis are:

  • endothelial cells
  • platelets
  • fibrinogen
  • plasminogen

Elevated plasma Hcy (>15 μM; Hcy) is associated with an increased risk of cardiovascular diseases

  • thrombosis
  • thrombosis related diseases
  • ischemic brain stroke (independent of other, conventional risk factors of this disease)

Every increase of 2.5 μM in plasma Hcy may be associated with an increase in stroke risk of about 20%. Total plasma Hcy levels above 20 μM are associated with a nine-fold increase in myocardial infarction and stroke risk, in comparison to concentrations below 9 μM. An increase in Hcy concentration has also been found in other human pathologies, including neurodegenerative diseases.

Modifications of hemostatic proteins (N-homocysteinylation or S-homocysteinylation) induced by Hcy or its thiolactone seem to be the main cause of homocysteine biotoxicity in hemostatic abnormalities.

Hcy and HTL may act as oxidants, but various polyphenolic antioxidants are able to inhibit the oxidative damage induced by Hcy or HTL. Therefore, we have to consider the role of phenolic antioxidants in hyperhomocysteinemia-induced changes in hemostasis.

The synthesis of homocysteine thiolactone is associated with the activation of the amino acid by aminoacyl-tRNA synthetase (AARS). Hcy may also undergo erroneous activation, e.g. by methionyl-t-RNA synthetase (MetRS). In the first step of conversion of Hcy to HTL, MetRS misactivates Hcy giving rise to homocysteinyl-adenylate. In the next phase, the homocysteine side chain thiol group reacts with the activated carboxyl group and HTL is produced. The level of HTL synthesis in cultured cells depends on Hcy and Met levels.

Hyperhomocysteinemia and Changes in Fibrinolysis and Coagulation Process

The fibrinolytic activity of blood is regulated by specific inhibitors; the inhibition of fibrinolysis takes place at the level of plasminogen activation (by PA-inhibitors: plasminogen activator inhibitor type-1, -2; PAI-1 or PAI-2) or at the level of plasmin activity (mainly by α2-antiplasmin). Hyperhomocysteinemia disturbs hemostasis and shifts the hemostatic mechanisms in favor of thrombosis. The recent reports indicate that the prothrombotic state observed in hyperhomocysteinemia may arise not only due to endothelium dysfunction or blood platelet and coagulation activation, but also due to impaired fibrinolysis. Hcy-modified fibrinogen is more resistant to the fibrinolytic action. Oral methionine load increases total Hcy, but may diminish the fibrinolytic activity of the euglobulin plasma fraction. Homocysteine-lowering therapies may increase fibrinolytic activity, thereby, prevent atherothrombotic events in patients with cardiovascular diseases after the first myocardial infarction.

Homocysteine — Fibronectin Interaction and its Consequences

Fibronectin (Fn) plays key roles in

  • cell adhesion
  • migration
  • embryogenesis
  • differentiation
  • hemostasis
  • thrombosis
  • wound healing
  • tissue remodeling

Interaction of FN with fibrin, mediated by factor XIII transglutaminase, is thought to be important for cell adhesion or cell migration into fibrin clots. After tissue injury, a blood clot formation serves the dual role of restoring vascular integrity and serving as a temporary scaffold for the wound healing process. Fibrin and plasma FN, the major protein components of blood clots, are essential to perform these functions. In the blood clotting process, after fibrin deposition, plasma FN-fibrin matrix is covalently crosslinked, and it then promotes fibroblast adhesion, spreading, and migration into the clot.

Homocysteine binds to several human plasma proteins, including fibronectin. If homocysteine binds to fibronectin via a disulphide linkage, this binding results in a functional change, namely, the inhibition of fibrin binding by fibronectin. This inhibition may lead to a prolonged recovery from a thrombotic event and contribute to vascular occlusion.

Grape seeds are one of the richest plant sources of phenolic substances, and grape seed extract reduces the toxic effect of Hcys and HTL on fibrinolysis. The grape seed extract (12.5–50 μg/ml) supported plasminogen-to-plasmin conversion inhibited by Hcys or HTL. In vitro experiments showed, in the presence of grape seed extract (at the highest tested concentration, 50 μg/ml), an increase of about 78% (for human plasminogen treated with Hcys) and 56% (for human plasma treated with Hcys). Thus, in the in vitro model system, the grape seed extract (12.5–50 μg/ml) diminished the reduction of thiol groups and of lysine ε-amino groups in plasma proteins treated with Hcys (0.1 mM) or HTL (1 μM). In the presence of the grape seed extract at the concentration of 50 μg/ml, the level of reduction of thiol groups reached about 45% (for plasma treated with Hcys) and about 15% (for plasma treated with HTL).

Very similar protective effects of the grape seed extract were observed in the measurements of lysine ε-amino groups in plasma proteins treated with Hcys or HTL. These results indicated that the extract from berries of Aronia melanocarpa (a rich source of phenolic substances) reduces the toxic effects of Hcy and HTL on the hemostatic properties of fibrinogen and plasma. These findings indicate a possible protective action of the A. melanocarpa extract in hyperhomocysteinemia-induced cardiovascular disorders. Moreover, the extract from berries of A. melanocarpa, due to its antioxidant action, significantly attenuated the oxidative stress (assessed by measuring the total antioxidant status — TAS) in plasma in a model of hyperhomocysteinemia.

Proposed model for the protective role of phenolic antioxidants on selected elements of hemostasis during hyperhomocysteinemia.

Various antioxidants present in the human diet, including phenolic compounds, may reduce the toxic effects of Hcy or its derivatives on hemostasis. These findings give hope for the development of dietary supplements capable of preventing the thrombosis that occurs under pathological conditions, also observed in hyperhomocysteinemia, such as plasma procoagulant activity and oxidative stress.

Malinowska J,  Kolodziejczyk J and Olas B. The disturbance of hemostasis induced by hyper-homocysteinemia; the role of antioxidants. Acta Biochimica Polonica 2012; 59(2): 185–194.

Lipoprotein (a)

Lipoprotein (a) (Lp(a)), first described in 1963 by Berg, belongs to the lipoproteins with the strongest atherogenic effect. Its importance for the development of various atherosclerotic vasculopathies (coronary heart disease, ischemic stroke, peripheral vasculopathy, abdominal aneurysm) was recognized considerably later.

Lipoprotein(a) (Lp(a)), an established risk marker of cardiovascular diseases, is independent of other risk markers. The main difference of Lp(a) compared to low density lipoprotein (LDL) is the apo(a) residue, covalently bound to apoB by a disulfide bridge. Apo(a) synthesis is performed in the liver, probably followed by extracellular assembly to the apoB location of the LDL.

 

[Schematic: apo(a), via its kringle IV type 9, linked by a disulfide bridge (S-S) to apoB-100 of the LDL particle]

Apo(a) has been detected bound to triglyceride-rich lipoproteins (Very Low Density Lipoproteins; VLDL). Corresponding to the structural similarity to LDL, both particles are very similar to each other with regard to their composition. Apo(a) is a glycoprotein that shows a large genetic polymorphism caused by variation in the number of kringle-IV-type-2 repeats of the protein, and it is characterized by a structural homology to plasminogen. Reflecting this structural homology, apo(a) shares its gene localization on chromosome 6 with plasminogen. The kringle repeats present a particularly characteristic structure with high similarity to kringle IV (K IV) of plasminogen. Apo(a) also has a kringle V structure of plasminogen and a protease domain which, in contrast to that of plasminogen, cannot be activated. At least 30 genetically determined apo(a) isoforms have been identified in man.

Features:

  • Non-covalent binding of kringle IV types 7 and 8 of apo(a) to apoB
  • Disulfide bond at Cys4326 of apoB (near its receptor binding domain) and the only free cysteine group in K-IV type 9 (Cys4057) of apo(a)
  • Binding to fibrin and cell membranes
  • Enhancement by small isoforms; high concentrations compared to plasminogen and homocysteine
  • Binding to different lysine-rich components of the coagulation system (e.g. TFPI)
  • Intense homology to plasminogen but no protease activity

The synthesis of Lp(a), which thus occurs as part of an assembly, is a two-step process.

  • In a first step, which can be competitively inhibited by lysine analogues, the free sulfhydryl groups of apo(a) and apoB are brought close together.
  • The binding of apo(a) then occurs near the apoB domain which binds to the LDL receptor, resulting in a reduced affinity of Lp(a) to the LDL-receptor.

Particles that show a reduced affinity to the LDL receptor are not able to form stable compounds with apo(a). Thus the largest part of apo(a) is present as apo(a) bound to LDL. Only a small, quantitatively variable part of apo(a) remains as free apo(a) and probably plays an important role in the metabolism and physiological function of Lp(a).

The Lp(a) plasma concentration in the population is highly skewed and is determined to more than 90% by genetic factors. In healthy subjects the Lp(a) concentration is correlated with its synthesis.

It is assumed that the kidney has a specific function in Lp(a) catabolism, since nephrotic syndrome and terminal kidney failure are associated with an elevation of the Lp(a) plasma concentration. One consequence of the poor knowledge of the metabolic path of Lp(a) is the fact that so far pharmaceutical science has failed to develop drugs that can reduce elevated Lp(a) plasma concentrations to a desirable level.

Plasma concentrations of Lp(a) are affected by different diseases (e.g. diseases of liver and kidney), hormonal factors (e.g. sexual steroids, glucocorticoids, thyroid hormones), individual and environmental factors (e.g. age, cigarette smoking) as well as pharmaceuticals (e.g. derivatives of nicotinic acid) and therapeutic procedures (lipid apheresis). This review describes the physiological regulation of Lp(a) as well as factors influencing its plasma concentration.

Apart from its significance as an important agent in the development of atherosclerosis, Lp(a) has additional physiological functions, e.g. in

  • wound healing
  • angiogenesis
  • hemostasis

However, in the sense of a pleiotropic mechanism, these favorable actions are opposed by pathogenic mechanisms, which underscores the importance of Lp(a) in atherogenesis.

Lp(a) in Atherosclerosis

In transgenic, hyperlipidemic, Lp(a)-expressing Watanabe rabbits, Lp(a) leads to enhanced atherosclerosis. The binding of Lp(a) to glycoproteins such as laminin results, via its apo(a) part, both in

  • an increased invasion of inflammatory cells and in
  • an activation of smooth vascular muscle cells

with subsequent calcifications in the vascular wall.

The inhibition of transforming growth factor-β1 (TGF-β1) activation is another mechanism via which Lp(a) contributes to the development of atherosclerotic vasculopathies. TGF-β1 is subject to proteolytic activation by plasmin and its active form leads to an inhibition of the proliferation and migration of smooth muscle cells, which play a central role in the formation and progression of atherosclerotic vascular diseases.

In man, Lp(a) is an important risk marker which is independent of other risk markers. Its importance, partly also under consideration of the molecular weight and other genetic polymorphisms, could be demonstrated by a high number of epidemiological and clinical studies investigating the formation and progression of atherosclerosis, myocardial infarction, and stroke.

Lp(a) in Hemostasis

Lp(a) is able to competitively inhibit the binding of plasminogen to fibrinogen and fibrin, and to inhibit the fibrin-dependent activation of plasminogen to plasmin via the tissue plasminogen activator, whereby apo(a) isoforms of low molecular weight have a higher affinity to fibrin than apo(a) isoforms of higher molecular weight. Like other compounds containing sulfhydryl groups, homocysteine enhances the binding of Lp(a) to fibrin.

Pleiotropic effect of Lp(a).

Prothrombotic:

  • Binding to fibrin
  • Competitive inhibition of plasminogen
  • Stimulation of plasminogen activator inhibitors I and II (PAI-I, PAI-II)
  • Inactivation of tissue factor pathway inhibitor (TFPI)

Antithrombotic:

  • Inhibition of platelet activating factor acetylhydrolase (PAF-AH)
  • Inhibition of platelet activating factor
  • Inhibition of collagen-dependent platelet aggregation
  • Inhibition of secretion of serotonin and thromboxane

Lp(a) in Angiogenesis

Lp(a) is also important for the process of angiogenesis and the sprouting of new vessels.

  • angiogenesis starts with the remodelling of matrix proteins and
  • activation of matrix metalloproteinases (MMP).

The latter ones are usually synthesised as

  • inactive zymogens and
  • require activation by proteases,

Recall that apo(a) cannot be activated by proteases. Angiogenesis is also supported by plasminogen. Owing to their structural homology with plasminogen but lack of protease activity, Lp(a), apo(a), and apo(a) fragments have an antiangiogenic and metastasis-inhibiting effect.

Siekmeier R, Scharnagl H, Kostner GM, Grammer T, Stojakovic T, März W. Variation of Lp(a) Plasma Concentrations in Health and Disease. The Open Clinical Chemistry Journal 2010; 3: 72-89.

LDL-Apheresis

In 1985, Brown and Goldstein were awarded the Nobel Prize in medicine for their work on the regulation of cholesterol metabolism. On the basis of numerous studies, they were able to demonstrate that circulating low-density lipoprotein (LDL) is taken up into the cell through receptor-mediated endocytosis. The uptake of LDL into the cell is specific and is mediated by an LDL receptor. In patients with familial hypercholesterolemia, this receptor is altered, and the LDL particles can no longer be recognized. Their uptake can thus no longer be mediated, leading to an accumulation of LDL in blood.

Furthermore, an excess supply of cholesterol also suppresses 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase, the enzyme that otherwise controls the rate of cholesterol synthesis. Brown and Goldstein also determined the structure of the LDL receptor. They discovered structural defects in this receptor in many patients with familial hypercholesterolemia. Thus, familial hypercholesterolemia was the first metabolic disease that could be traced back to the mutation of a receptor gene.

Dyslipoproteinemia in combination with diabetes mellitus causes a cumulative insult to the vasculature resulting in more severe disease which occurs at an earlier age in large and small vessels as well as capillaries. The most common clinical conditions resulting from this combination are myocardial infarction and lower extremity vascular disease. Ceriello et al. show an independent and cumulative effect of postprandial hypertriglyceridemia and hyperglycemia on endothelial function, suggesting oxidative stress as common mediator of such effect. The combination produces greater morbidity and mortality than either alone.

As an antiatherogenic factor, HDL cholesterol correlates inversely to the extent of postprandial lipemia. A high concentration of HDL is a sign that triglyceride-rich particles are quickly decomposed in the postprandial phase of lipemia. Conversely, with a low HDL concentration this decomposition is delayed. Thus, excessively high triglyceride concentrations are accompanied by very low HDL counts. This combination has also been associated with an increased risk of pancreatitis.

The importance of lipoprotein (a) (Lp(a)) as an atherogenic substance has also been recognized in recent years. Lp(a) is very similar to LDL. But it also contains Apo(a), which is very similar to plasminogen, enabling Lp(a) to bind to fibrin clots. Binding of plasminogen is prevented and fibrinolysis obstructed. Thrombi are integrated into the walls of the arteries and become plaque components.

Another strong risk factor for accelerated atherogenesis, which must be mentioned here, is the widespread elevation of homocysteine levels found in dialysis patients. This risk factor is independent of classic risk factors such as high cholesterol and LDL levels, smoking, hypertension, and obesity, and is much more predictive of coronary events in dialysis patients than are these better-known factors. Homocysteine is a sulfur-containing amino acid produced in the metabolism of methionine. Under normal conditions, about 50 percent of homocysteine is remethylated to methionine and the remainder is metabolized via the transsulfuration pathway.

Defining hyperhomocysteinemia as levels greater than the 90th percentile of controls and an elevated Lp(a) level as greater than 30 mg/dL, the frequency of the combination increased with declining renal function. Fifty-eight percent of patients with a GFR less than 10 mL/min had both hyperhomocysteinemia and elevated Lp(a) levels, and even in patients with mild renal impairment, 20 percent of patients had both risk factors present.

The prognosis of patients suffering from severe hyperlipidemia, sometimes combined with elevated lipoprotein (a) levels, and coronary heart disease refractory to diet and lipid-lowering drugs is poor. For such patients, regular treatment with low-density lipoprotein (LDL) apheresis is the therapeutic option. Today, five different LDL-apheresis systems are available: cascade filtration (lipid filtration), immunoadsorption, heparin-induced LDL precipitation, dextran sulfate LDL adsorption, and LDL hemoperfusion. The requirement that the original level of cholesterol be reduced by at least 60 percent is fulfilled by all of these systems.

There is a strong correlation between hyperlipidemia and atherosclerosis. Besides the elimination of other risk factors, in severe hyperlipidemia therapeutic strategies should focus on a drastic reduction of serum lipoproteins. Despite maximum conventional therapy with a combination of different kinds of lipid-lowering drugs, sometimes the goal of therapy cannot be reached. Hence, in such patients, treatment with LDL-apheresis is indicated. Technical and clinical aspects of these five LDL-apheresis methods are depicted. No significant differences among the methods were observed with respect to cholesterol or triglyceride reduction.

High plasma levels of Lp(a) are associated with an increased risk for atherosclerotic coronary heart disease (CHD) by a mechanism yet to be determined. Because of its structural properties, Lp(a) can have both atherogenic and thrombogenic potentials. The means for correcting the high plasma levels of Lp(a) are still limited in effectiveness. All drug therapies tried thus far have failed. The most effective therapeutic methods in lowering Lp(a) are the LDL-apheresis methods. Since 1993, special immunoadsorption polyclonal antibody columns (Pocard, Moscow, Russia) containing sepharose-bound anti-Lp(a) have been available for the treatment of patients with elevated Lp(a) serum concentrations.

With respect to elevated lipoprotein (a) levels, however, the immunoadsorption method seems to be most effective. The different published data clearly demonstrate that treatment with LDL-apheresis in patients suffering from severe hyperlipidemia refractory to maximum conservative therapy is effective and safe in long-term application.

LDL-apheresis not only decreases LDL mass but also improves the patient’s life expectancy. LDL-apheresis performed with different techniques decreases the susceptibility of LDL to oxidation. This decrease may be related to a temporary mass imbalance between freshly produced and older LDL particles. Furthermore, the baseline fatty acid pattern influences pretreatment and post-treatment susceptibility to oxidation.

Bambauer R, Bambauer C, Lehmann B, Latza R, Schiel R. LDL-Apheresis: Technical and Clinical Aspects. The Scientific World Journal 2012; Article ID 314283, pp 1-19. doi:10.1100/2012/314283

Summary:  This discussion is a two part sequence that first establishes the known strong relationship between blood flow viscosity, shear stress, and plasma triglycerides (VLDL) as risk factors for hemostatic disorders leading to thromboembolic disease, and the association with atherosclerotic disease affecting the heart, the brain (via carotid blood flow), the peripheral circulation, the kidneys, and retinopathy as well.

The second part discusses the modeling of hemostasis and takes into account the effects of plasma proteins involved in red cell and endothelial interaction, which is related to Part I.  The current laboratory assessment of thrombophilias is taken from a consensus document of the American Society for Clinical Pathology.  That approach is sufficient for the most common problems of coagulation testing and monitoring, but it does not address the large number of patients who are at risk for complications of accelerated vasoconstrictive systemic disease that precede serious hemostatic problems.  Special attention is given to Lp(a) and to homocysteine.  Lp(a) is a protein that has both prothrombotic and antithrombotic characteristics; it is a homologue of plasminogen and is composed of an apo(a) bound to LDL.  Unlike plasminogen, it has no protease activity.   Homocysteine elevation is a known risk factor for downstream myocardial infarction.  Homocysteine is a mirror into sulfur metabolism, so an increase is an independent predictor of risk, not fully discussed here.  Modification of risk through diet is discussed.  For the most serious lipoprotein disorders, often including elevated Lp(a), the long-term use of LDL-apheresis is described.

See the relevant article that appears in the Journal of the American College of Cardiology:

Apolipoprotein(a) Genetic Sequence Variants Associated With Systemic Atherosclerosis and Coronary Atherosclerotic Burden but Not With Venous Thromboembolism

Helgadottir A, Gretarsdottir S, Thorleifsson G, et al

J Am Coll Cardiol. 2012;60:722-729

Study Summary

The LPA gene codes for apolipoprotein(a), which, when linked with low-density lipoprotein particles, forms lipoprotein(a) [Lp(a)] — a well-studied molecule associated with coronary artery disease (CAD). The Lp(a) molecule has both atherogenic and thrombogenic effects in vitro, but the extent to which these translate to differences in how atherothrombotic disease presents is unknown.

LPA contains many single-nucleotide polymorphisms, and 2 have been identified by previous groups as being strongly associated with levels of Lp(a) and, as a consequence, strongly associated with CAD. However, because atherosclerosis is thought to be a systemic disease, it is unclear to what extent Lp(a) leads to atherosclerosis in other arterial beds (eg, carotid, abdominal aorta, and lower extremity), as well as to other thrombotic disorders (eg, ischemic/cardioembolic stroke and venous thromboembolism). Such distinctions are important, because therapies that might lower Lp(a) could potentially reduce forms of atherosclerosis beyond the coronary tree.

To answer this question, Helgadottir and colleagues compiled clinical and genetic data on the LPA gene from thousands of previous participants in genetic research studies from across the world. They did not have access to Lp(a) levels, but by knowing the genotypes for 2 LPA variants, they inferred the levels of Lp(a) on the basis of prior associations between these variants and Lp(a) levels. [1] Their studies included not only individuals of white European descent but also a significant proportion of black persons, in order to widen the generalizability of their results.

Their main findings are that LPA variants (and, by proxy, Lp(a) levels) are associated with CAD,  peripheral arterial disease, abdominal aortic aneurysm, number of CAD vessels, age at onset of CAD diagnosis, and large-artery atherosclerosis-type stroke. They did not find an association with cardioembolic or small-vessel disease-type stroke; intracranial aneurysm; venous thrombosis; carotid intima thickness; or, in a small subset of individuals, myocardial infarction.

Viewpoint

The main conclusion to draw from this work is that Lp(a) is probably a strong causal factor in not only CAD, but also the development of atherosclerosis in other arterial trees. Although there is no evidence from this study that Lp(a) levels contribute to venous thrombosis, the investigators do not exclude a role for Lp(a) in arterial thrombosis.

Large-artery atherosclerosis stroke is thought to involve some element of arterial thrombosis or thromboembolism, [2] and genetic substudies of randomized trials of aspirin demonstrate that individuals with LPA variants predicted to have elevated levels of Lp(a) benefit the most from antiplatelet therapy. [3] Together, these data suggest that Lp(a) probably has clinically relevant effects on the development of atherosclerosis and arterial thrombosis.

Of  note, the investigators found no association between Lp(a) and carotid intima thickness, suggesting that either intima thickness is a poor surrogate for the clinical manifestations of atherosclerosis or that Lp(a) affects a distinct step in the atherosclerotic disease process that is not demonstrable in the carotid arteries.

Although Lp(a) testing is available, these studies do not provide any evidence that testing for Lp(a) is of clinical benefit, or that screening for atherosclerosis should go beyond well-described clinical risk factors, such as low-density lipoprotein cholesterol levels, high-density lipoprotein levels, hypertension, diabetes, smoking, and family history. Until evidence demonstrates that adding information on Lp(a) levels to routine clinical practice improves the ability of physicians to identify those at highest risk for atherosclerosis, Lp(a) testing should remain a research tool. Nevertheless, these findings do suggest that therapies to lower Lp(a) may have benefits that extend to forms of atherothrombosis beyond the coronary tree.

The finding of this study is interesting:

[1] It is consistent with Dr. William LaFramboise’s examination of ApoB-100 (which is part of Lp(a)) among some 14 candidate predictors for a more accurate exclusion of patients who do not need intervention. ApoB-100 was not one of the 5 top candidates.

William LaFramboise • Our study (http://www.ncbi.nlm.nih.gov/pubmed/23216991) comprised discovery research using targeted immunochemical screening of retrospective patient samples using both Luminex and Aushon platforms as opposed to shotgun proteomics. Hence the costs constrained sample numbers. Nevertheless, our ability to predict outcome substantially exceeded available methods:

The Framingham CHD scores were statistically different between groups (P <0.001, unpaired Student’s t test) but they classified only 16% of the subjects without significant CAD (10 of 63) at a 95% sensitivity for patients with CAD. In contrast, our algorithm incorporating serum values for OPN, RES, CRP, MMP7 and IFNγ identified 63% of the subjects without significant CAD (40 of 63) at 95% sensitivity for patients with CAD. Thus, our multiplex serum protein classifier correctly identified four times as many patients as the Framingham index.
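As a quick arithmetic check of the figures quoted above (using the reported denominator of 63 subjects without significant CAD), the specificities achieved at the fixed 95% sensitivity are:

\[
\text{Framingham CHD score: } \frac{10}{63} \approx 16\%, \qquad
\text{five-protein classifier: } \frac{40}{63} \approx 63\%, \qquad
\frac{40/63}{10/63} = 4.0
\]

which is the basis of the statement that the multiplex serum protein classifier correctly ruled out four times as many patients without significant CAD as the Framingham index at the same sensitivity.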

This study is consistent with the concept that CAD, PVD, and atheromatous disease constitute a systemic vascular disease, but the point made here is that it appears to have no relationship to venous thrombosis. The importance for predicting thrombotic events is considered serious. Venous flow does not have the turbulence of large arteries, so the conclusion is no surprise. The flow in capillary beds is a linear cell passage with minimal viscosity or turbulence. The finding of no association with carotid intima thickness is interpreted to mean that the Lp(a) effect might be an earlier finding than carotid intimal thickening. It is reassuring to find a recommendation for antiplatelet therapy for individuals with LPA variants based on genetic substudies of randomized trials of aspirin.

If that is the conclusion from the studies, and based on the strong association between the prothrombotic (pleiotropic) effect and the association with hyperhomocysteinemia, my own impression is that the recommendation is short-sighted.

[2]  Lp(a) is able to competitively inhibit the binding of plasminogen to fibrinogen and fibrin, and to inhibit the fibrin-dependent activation of plasminogen to plasmin via the tissue plasminogen activator, whereby apo(a) isoforms of low molecular weight have a higher affinity to fibrin than apo(a) isoforms of higher molecular weight. Like other compounds containing sulfhydryl groups, homocysteine enhances the binding of Lp(a) to fibrin.

Prothrombotic:

  • Binding to fibrin
  • Competitive inhibition of plasminogen
  • Stimulation of plasminogen activator inhibitors I and II (PAI-I, PAI-II)
  • Inactivation of tissue factor pathway inhibitor (TFPI)

Source for Lp(a)

Artherogenesis: Predictor of CVD – the Smaller and Denser LDL Particles

http://pharmaceuticalintelligence.com/2012/11/15/artherogenesis-predictor-of-cvd-the-smaller-and-denser-ldl-particles/

References on Triglycerides and blood viscosity

Lowe GD, Lee AJ, Rumley A, et al. Blood viscosity and risk of cardiovascular events: the Edinburgh Artery Study. Br J Haematol 1997; 96:168-173.

Sloop GD. A unifying theory of atherogenesis. Med Hypotheses 1996; 47:321-5.

Smith WC, Lowe GD, et al. Rheological determinants of blood pressure in a Scottish adult population. J Hypertens 1992; 10:467-72.

Letcher RL, Chien S, et al. Direct relationship between blood pressure and blood viscosity in normal and hypertensive subjects. Role of fibrinogen and concentration. Am J Med 1981; 70:1195-1202.

Devereux RB, Case DB, Alderman MH, et al. Possible role of increased blood viscosity in the hemodynamics of systemic hypertension. Am J Cardiol 2000; 85:1265-1268.

Levenson J, Simon AC, Cambien FA, Beretti C. Cigarette smoking and hypertension. Factors independently associated with blood hyperviscosity and arterial rigidity. Arteriosclerosis 1987; 7:572-577.

Sloop GD, Garber DW. The effects of low-density lipoprotein and high-density lipoprotein on blood viscosity correlate with their association with risk of atherosclerosis in humans. Clin Sci 1997; 92:473-479.

Lowe GD. Blood viscosity, lipoproteins, and cardiovascular risk. Circulation 1992; 85:2329-2331.

Rosenson RS, Shott S, Tangney CC. Hypertriglyceridemia is associated with an elevated blood viscosity: triglycerides and blood viscosity. Atherosclerosis 2002; 161:433-9.

Stamos TD, Rosenson RS. Low high density lipoprotein levels are associated with an elevated blood viscosity. Atherosclerosis 1999; 146:161-5.

Hoieggen A, Fossum E, Moan A, Enger E, Kjeldsen SE. Whole-blood viscosity and the insulin-resistance syndrome. J Hypertens 1998; 16:203-10.

de Simone G, Devereux RB, Chien S, et al. Relation of blood viscosity to demographic and physiologic variables and to cardiovascular risk factors in apparently normal adults. Circulation 1990; 81:107-17.

Rosenson RS, McCormick A, Uretz EF. Distribution of blood viscosity values and biochemical correlates in healthy adults. Clin Chem 1996; 42:1189-95.

Tamariz LJ, Young JH, Pankow JS, et al. Blood viscosity and hematocrit as risk factors for type 2 diabetes mellitus: The Atherosclerosis Risk in Communities (ARIC) Study. Am J Epidemiol 2008; 168:1153-60.

Jax TW, Peters AJ, Plehn G, Schoebel FC. Hemostatic risk factors in patients with coronary artery disease and type 2 diabetes – a two year follow-up of 243 patients. Cardiovasc Diabetol 2009; 8:48.

Ernst E, Weihmayr T, et al. Cardiovascular risk factors and hemorheology. Physical fitness, stress and obesity. Atherosclerosis 1986; 59:263-9.

Hoieggen A, Fossum E, et al. Whole-blood viscosity and the insulin-resistance syndrome. J Hypertens 1998; 16:203-10.

Carroll S, Cooke CB, Butterly RJ. Plasma viscosity, fibrinogen and the metabolic syndrome: effect of obesity and cardiorespiratory fitness. Blood Coagul Fibrinolysis 2000; 11:71-8.

Ernst E, Koenig W, Matrai A, et al. Blood rheology in healthy cigarette smokers. Results from the MONICA project, Augsburg. Arteriosclerosis 1988; 8:385-8.

Ernst E. Haemorheological consequences of chronic cigarette smoking. J Cardiovasc Risk 1995; 2:435-9.

Lowe GD, Drummond MM, Forbes CD, Barbenel JC. The effects of age and cigarette-smoking on blood and plasma viscosity in men. Scott Med J 1980; 25:13-7.

Kameneva MV, Watach MJ, Borovetz HS. Gender difference in rheologic properties of blood and risk of cardiovascular diseases. Clin Hemorheol Microcirc 1999; 21:357-363.

Fowkes FG, Pell JP, Donnan PT, et al. Sex differences in susceptibility to etiologic factors for peripheral atherosclerosis. Importance of plasma fibrinogen and blood viscosity. Arterioscler Thromb 1994; 14:862-8.

Coppola L, Caserta F, De Lucia D, et al. Blood viscosity and aging. Arch Gerontol Geriatr 2000; 31:35-42.

 


What is the Role of Plasma Viscosity in Hemostasis and Vascular Disease Risk?

Author: Larry H Bernstein, MD

and

Curator: Aviva Lev-Ari, PhD, RN

This is the first of a two part discussion of viscosity, hemostasis, and vascular risk

Part II:  Special Considerations in Blood Lipoproteins, Viscosity, Assessment and Treatment

Thesis Statement: The effects of low-density lipoprotein and high-density lipoprotein on blood viscosity correlate with their association with risk of atherosclerosis in humans.  (Seminal study)

G. D. Sloop, MD.
Department of Pathology, Louisiana State University School of Medicine,
New Orleans, LA 70112, U.S.A.

  •  Increased blood or plasma viscosity has been associated with increased atherogenesis, and the effects of low-density lipoprotein and high-density lipoprotein on blood viscosity correlate with their association with atherosclerosis risk.
  • Low-density lipoprotein-cholesterol was more strongly correlated with blood viscosity than was total cholesterol (r = 0.4149, P = 0.0281, compared with r = 0.2790, P = 0.1505). High-density lipoprotein-cholesterol levels were inversely associated with blood viscosity (r = – 0.4018, P = 0.0341).
  • To confirm these effects, viscometry was performed on erythrocytes, suspended in saline, which had been incubated in plasma of various low-density lipoprotein/high-density lipoprotein ratios. Viscosity correlated directly with low-density lipoprotein/high-density lipoprotein ratio (n = 23, r = 0.8561, P < 0.01).
  • Low-density lipoprotein receptor occupancy data suggests that these effects on viscosity are mediated by erythrocyte aggregation.
  • These results demonstrate that the effects of low-density lipoprotein and high-density lipoprotein on blood viscosity in healthy subjects may play a role in atherogenesis by modulating the dwell or residence time of atherogenic particles in the vicinity of the endothelium.

This discussion is an additional perspective on the series on coagulation, and earlier posts that were on flow dynamics.

Stroke and Bleeding in Atrial Fibrillation with Chronic Kidney Disease

Atrial Fibrillation: The Latest Management Strategies

Outcomes in High Cardiovascular Risk Patients: Prasugrel (Effient) vs. Clopidogrel (Plavix); Aliskiren (Tekturna) added to ACE or added to ARB

Positioning a Therapeutic Concept for Endogenous Augmentation of cEPCs — Therapeutic Indications for Macrovascular Disease: Coronary, Cerebrovascular and Peripheral

New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia

Nitric Oxide Signalling Pathways   Aviralvatsa

Endothelial Dysfunction, Diminished Availability of cEPCs, Increasing CVD Risk for Macrovascular Disease – Therapeutic Potential of cEPCs

Endothelin Receptors in Cardiovascular Diseases: The Role of eNOS Stimulation

Repair damaged blood vessels in heart disease, stroke, diabetes and trauma: Cellular Reprogramming amniotic fluid-derived cells into Endothelial Cells

Septic Shock: Drotrecogin Alfa (Activated) in Septic Shock

Statins’ Nonlipid Effects on Vascular Endothelium through eNOS Activation   LHB

Nitric Oxide Covalent Modifications: A Putative Therapeutic Target?  SJWilliamspa

Vascular Wall Shear Stress

Shear Stress

  1. The basic principles concerning mechanical stress apply to pathophysiological mechanisms in the vascular bed. In physics, stress is the internal distribution of forces within a body that balance and react to the external loads applied to it. Blood flow in the circulation leads to the development of superficial stresses near the vessel walls in either of two categories:

a) circumferential stress due to pulse pressure variation inside the vessel;
b) shear stress due to blood flow.

  1. The direction of the shear stress vector is determined by the blood flow velocity vector adjacent to the vessel wall, and the stress is applied against the wall.
  2. Friction is the opposing force applied by the wall.
  3. Shear stresses are disturbed by turbulent flow, regions of flow recirculation or flow separation.
  4. The notions of shear rate and fluid viscosity are crucial for the assessment of shear stress.

Fluid Flow and Shear Stress

  1. Shear rate is defined as the rate at which adjacent layers of fluid move with respect to each other, usually expressed as reciprocal seconds.
  2. The size of the shear rate gives an indication of the shape of the velocity profile for a given situation.
  3. The determination of shear stresses on a surface is based on the fundamental assumption of fluid mechanics, according to which the velocity of fluid upon the surface is zero (no-slip condition).
  4. Assuming that the blood is an ideal Newtonian fluid with constant viscosity, the flow is steady and laminar and the vessel is straight, cylindrical and inelastic, which is not the case. Under ideal conditions a parabolic velocity profile could be assumed.

The following assumptions have been made:

  1. The blood is considered as a Newtonian fluid.
  2. The vessel cross sectional area is cylindrical.
  3. The vessel is straight with inelastic walls.
  4. The blood flow is steady and laminar.

The Hagen-Poiseuille equation indicates that wall shear stress is directly proportional to blood flow rate and viscosity, and inversely proportional to the third power of the vessel radius.
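For reference, a minimal sketch of the idealized relations behind this statement, valid only under the Hagen-Poiseuille assumptions listed above (Newtonian fluid, steady laminar flow, straight rigid cylindrical vessel):

\[
u(r) = \frac{\Delta P}{4\mu L}\left(R^{2}-r^{2}\right), \qquad
\dot{\gamma}_{w} = \frac{4Q}{\pi R^{3}}, \qquad
\tau_{w} = \mu\,\dot{\gamma}_{w} = \frac{4\mu Q}{\pi R^{3}} = \frac{32\,\mu Q}{\pi d^{3}}
\]

Here u(r) is the parabolic velocity profile, ΔP the pressure drop over vessel length L, μ the (assumed constant) viscosity, Q the volumetric flow rate, γ̇w the wall shear rate, and R = d/2 the vessel radius. The cubic dependence on radius explains why modest changes in lumen caliber produce large changes in wall shear stress; typical arterial values cluster around the mean of about 15 dynes/cm² (roughly 1.5 Pa) cited later in this section.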

  1. Viscosity is a property of a fluid that offers resistance to flow, and it is a measure of the combined effects of adhesion and cohesion.
  2. Viscosity increases as temperature decreases.
  3. Blood viscosity (non-Newtonian fluid) depends on shear rate, which is determined by blood platelets, red cells, etc.
  4. Blood viscosity is slightly affected by shear rate changes at low levels of hematocrit, but as hematocrit increases, the effect of shear rate changes becomes greater.
  5. The dependence of blood viscosity on hematocrit is more pronounced in the microcirculation than in larger vessels, due to hematocrit variations observed in small vessels (lumen diameter <100 μm).

The significant change of hematocrit in relation to vessel diameter is associated with the tendency of red blood cells to travel closer to the centre of the vessels. Thus, the greater the decrease in vessel lumen, the smaller the number of red blood cells that pass through, resulting in a decrease in blood viscosity.

Shear stress and vascular endothelium

  1. Endothelium responds to shear stress depending on the kind and the magnitude of shear stresses.
  2. the exposure of vascular endothelium to shear forces in the normal value range stimulates endothelial cells to release agents with direct or indirect antithrombotic properties, such as
  • prostacyclin,
  • nitric oxide (NO),
  • calcium,
  • thrombomodulin, etc.

Changes in shear stress magnitude activate cellular proliferation mechanisms as well as vascular remodeling processes.

  1. a high grade of shear stress increases wall thickness and expands the vessel’s diameter
  2. low shear stress induces a reduction in vessel diameter.
  3. Shear stresses are maintained at a mean of about 15 dynes/cm2.
  4. The presence of low shear stresses is frequently accompanied by unstable flow conditions
  • turbulence flow,
  • regions of blood recirculation,
  • “stagnant” blood areas.

(Papaioannou TG, Stefanadis C. Vascular Wall Shear Stress: Basic Principles and Methods. Hellenic J Cardiol 2005; 46: 9-15.)

Hemorheology and Microvascular Disorders

Blood flow in large arteries is dominated by inertial forces exhibited at high flow velocities, while viscosity is negligible. When the flow velocity is compromised by deceleration at a bifurcation, endothelial cell dysfunction can occur along the outer wall at the bifurcation.

In sharp contrast, the flow of blood in micro-vessels is dominated by viscous shear forces since the inertial forces are negligible due to low flow velocities. Shear stress is a critical parameter in micro-vascular flow, and a force-balance approach is proposed for determining micro-vascular shear stress. When the attractive forces between erythrocytes are greater than the shear force produced by micro-vascular flow, tissue perfusion itself cannot be sustained.

The yield stress parameter is presented as a diagnostic candidate for future clinical research, specifically as a fluid dynamic biomarker for micro-vascular disorders. The relation between the yield stress and diastolic blood viscosity (DBV) is described using the Casson model for viscosity, from which one may be able to determine thresholds of DBV where the risk of microvascular disorders is high.
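As a point of reference, a minimal sketch of the Casson relation referred to here (standard notation; the source does not give parameter values, so the symbols are generic):

\[
\sqrt{\tau} = \sqrt{\tau_{y}} + \sqrt{\eta_{c}\,\dot{\gamma}}
\quad\Longrightarrow\quad
\eta_{\mathrm{app}}(\dot{\gamma}) = \frac{\tau}{\dot{\gamma}} = \left(\sqrt{\eta_{c}} + \sqrt{\tau_{y}/\dot{\gamma}}\right)^{2}
\]

where τ is the shear stress, τy the yield stress, γ̇ the shear rate, and ηc the high-shear (Casson) viscosity. At the low shear rates of the diastolic phase of the cardiac cycle, the τy/γ̇ term dominates the apparent viscosity, which is why the yield stress can be inferred from diastolic blood viscosity; when the shear stress generated by micro-vascular flow falls below the yield stress, red cell aggregates are not dispersed and tissue perfusion cannot be sustained.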

Cho Y-Il, and Cho DJ. Hemorheology and Microvascular Disorders. Korean Circ J 2011; 41:287-295.
Print ISSN 1738-5520 / On-line ISSN 1738-5555

Blood Rheology in Genesis of Atherothrombosis

Elevated blood viscosity is an integral component of vascular shear stress that contributes to the

  • site specificity of atherogenesis,
  • rapid growth of atherosclerotic lesions, and
  • increases their propensity to rupture.

Ex vivo measurement of whole blood viscosity (WBV) is a predictor of cardiovascular events both in apparently healthy individuals and in cardiovascular disease patients. The association of an elevated WBV with incident cardiovascular events remains significant in multivariate models that adjust for major cardiovascular risk factors.

These prospective data suggest that measurement of WBV may be valuable as part of routine cardiovascular profiling, thereby providing potentially useful data for risk stratification and therapeutic interventions.

The recent development of a high throughput blood viscometer, which is capable of rapidly performing blood viscosity measurements across 10,000 shear rates using a single blood sample, enables the assessment of blood flow characteristics in different regions of the circulatory system and opens new opportunities for detecting and monitoring cardiovascular diseases.

Cowan AQ, Cho DJ, & Rosenson RS. Importance of Blood Rheology in the Pathophysiology of Athero-thrombosis. Cardiovasc Drugs Ther 2012; 26:339–348. DOI 10.1007/s10557-012-6402-4

 

[Image: Shear stress (Photo credit: Wikipedia)]

[Image: Shear rate dependency on fluid type and applied shear stress (Photo credit: Wikipedia)]

Inflammatory, haemostatic, and rheological markers

Markers of inflammation, hemostasis, and blood rheology have been ascertained to be risk factors for coronary heart disease and stroke. Their role in peripheral arterial disease (PAD) is not well established and some of them, including the pro-inflammatory cytokine interleukin-6 (IL-6), have not been examined before in prospective epidemiological studies.

In the Edinburgh Artery Study, we studied the development of PAD in the general population and evaluated 17 potential blood markers as predictors of incident PAD. At baseline (1987), 1519 men and women free of PAD aged 55–74 were recruited. After 17 years, 208 subjects had developed symptomatic PAD. In analysis adjusted for cardiovascular risk factors and baseline cardiovascular disease (CVD), only

  1. C-reactive protein    1.30 (1.08, 1.56)
  2. fibrinogen            1.16 (1.05, 1.17)
  3. lipoprotein (a)       1.22 (1.04, 1.44)
  4. hematocrit            1.22 (1.08, 1.38)

[hazard ratio (95% CI), corresponding to an increase equal to the inter-tertile range]

were significantly (P < 0.01) associated with PAD.

These markers added very little prognostic information for incident PAD beyond that obtained from cardiovascular risk factors and the ankle brachial index. Other markers included:

  • IL-6
  • intracellular adhesion molecule 1 (ICAM-1)
  • D-dimer
  • tissue plasminogen activator antigen
  • plasma and blood viscosities

These had weak associations, which were considerably attenuated after accounting for CVD risk factors.

Tzoulaki I, Murray GD, Lee AJ, Rumley A, et al. Inflammatory, haemostatic, and rheological markers for incident peripheral arterial disease: Edinburgh Artery Study. European Heart Journal (2007) 28, 354–362. doi:10.1093/eurheartj/ehl441

 

Leukocyte and platelet adhesion under flow

Leukocyte adhesion under flow in the microvasculature is mediated by

  • binding between cell surface receptors and
  • complementary ligands expressed on the surface of the endothelium.

Leukocytes adhere to endothelium in a two-step mechanism:

  1. rolling (primarily mediated by selectins) followed by
  2. firm adhesion (primarily mediated by integrins).

These investigators simulated the adhesion of a cell to a surface in flow, and elucidated the relationship between receptor–ligand functional properties and the dynamics of adhesion using a computational method called ‘‘Adhesive Dynamics.’’

Behaviors that are observed in simulations include

  • firm adhesion,
  • transient adhesion (rolling), and
  • no adhesion.

They varied the

  • dissociative properties,
  • association rate,
  • bond elasticity, and
  • shear rate

and found that the

  1. unstressed dissociation rate, kro,
  2. and the bond interaction length, γ,

are the most important molecular properties controlling the dynamics of adhesion.

(Chang KC, Tees DFJ and Hammer DA. The state diagram for cell adhesion under flow: Leukocyte rolling and firm adhesion. PNAS 2000; 97(21):11262-11267.)

  • To analyze the effect of leukocyte adhesion on blood flow in small vessels, treating blood as a homogeneous Newtonian fluid is sufficient to explain resistance changes in the venular microcirculation.
  • The Casson model represents the effect of red blood cell aggregation and is required as the non-Newtonian fluid flow model for resistance changes in small venules.

In this model the blood vessel is considered as a circular cylinder and the leukocyte is considered as a truncated spherical protrusion in the inner side of the blood vessel.

Numerical simulations demonstrated that for a Casson fluid with hematocrit of 0.4 and flow rate Q = 0.072 nl/s, a single leukocyte increases flow resistance by 5% in a vessel 32 μm in diameter and 100 μm long. For a smaller vessel of 18 μm, the flow resistance increases by 15%.

(Das B, Johnson PC, and Popel AS. Computational fluid dynamic studies of leukocyte adhesion effects on non-Newtonian blood flow through microvessels. Biorheology  2000; 37:239–258.)

Adhesive interactions between leukocytes

The mechanics of how blood cells interact with one another and with biological or synthetic surfaces is quite complex: owing to

  • the deformability of cells,
  • the variation in vessel geometry, and
  • the large number of competing chemistries present

(Lipowski et al., 1991, 1996).

Adhesive interactions between white blood cells and the interior surface of the blood vessels they contact are important in

  • inflammation and in
  • the progression of heart disease.

Parallel-plate micro-channels have been used to characterize the strength of these interactions. Recent computational and experimental work by several laboratories are directed at bridging the gap between

  • behavior observed in flow chamber experiments, and
  • cell surface interactions observed in the micro-vessels

What follows is a computational simulation of specific adhesive interactions between cells and surfaces under flow. In the adhesive dynamics formulation, adhesion molecules are modeled as compliant springs. The Bell model is used to describe the kinetics of single biomolecular bond failure, which relates

  1. the rate of dissociation kr to
  2. the magnitude of the force on the bond F.

The rate of formation directly follows from the Boltzmann distribution for affinity. The expression for the binding rate must also incorporate the effect of the relative motion of the two surfaces. Unless firmly adhered to a surface, white blood cells can be effectively modeled as rigid spherical particles. This is consistent with good agreement between bead versus cell in vitro experiments (Chang and Hammer, 2000).
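For concreteness, a minimal sketch of the Bell model named above; the spring expression for the bond force is a common formulation and is an assumption here, since the excerpt states only that the dissociation rate depends on the force F:

\[
k_{r}(F) = k_{r}^{o}\,\exp\!\left(\frac{\gamma F}{k_{B}T}\right), \qquad F = \sigma\,(x-\lambda)
\]

where k_r^o is the unstressed dissociation rate, γ the bond interaction length, k_BT the thermal energy, and the force arises from a compliant spring of stiffness σ stretched from its equilibrium length λ to length x. Increasing either the applied force or γ makes bond rupture exponentially more likely, which is consistent with the simulation finding quoted earlier that k_r^o and γ are the molecular properties that most strongly control whether a cell rolls, firmly adheres, or fails to adhere.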

Various methods have been used to bring clarity to the complex range of transient interactions between

  • cells,
  • neighboring cells, and
  • bounding surfaces under flow.

Knowledge gained from these investigations of flow systems may prove useful in microfluidic applications where the transport of

  • blood cells and
  • solubilized, bioactive molecules is needed, or
  • in miniaturized diagnostic devices

where cell mechanics or binding affinities can be correlated with clinical pathologies.

(King MR. Cell-Surface Adhesive Interactions in Microchannels and Microvessels. First International Conference on Microchannels and Minichannels. 2003, Rochester, NY. Pp 1-6. ICMM2003-1012.)

Monitoring Blood Viscosity to Improve Cognitive Function

Blood viscosity, the metric for the thickness and stickiness of blood, is associated with all major risk factors for cardiovascular disease and with complications of diabetes, and it is highly predictive of stroke and MI, as well as cognitive decline. Beyond the role of elevated blood viscosity in the etiology of atherosclerosis, there is strong evidence for a causal role in the development of dementia. It follows that improving blood viscosity should lead to improvements in cognitive as well as cardiovascular function.

Factors Affecting Blood Viscosity

Five cardinal factors are:

  1. Hematocrit,
  2. erythrocyte deformability,
  3. plasma viscosity,
  4. erythrocyte aggregation, and
  5. temperature

First to consider is hematocrit: the higher the proportion of red cells, the more viscous the blood. Erythrocyte deformability is the ability of red blood cells to elongate and fold themselves for better hemodynamic flow in large vessels as well as for more efficient passage through capillaries.  The more deformable the red blood cells, the less viscous the blood.  Young red blood cells are flexible and tend to stiffen over their 120-day life span.  Erythrocyte deformability is, after hematocrit, the second most important determinant of blood viscosity.

The third factor is plasma viscosity.  An important determinant of plasma viscosity is hydration status, but it is also determined by the presence of high molecular-weight proteins, especially immune globulins and fibrinogen.

Erythrocyte aggregation, the tendency of red blood cells to be attracted to each other and stick together is not well understood, but erythrocyte deformability and plasma proteins play important roles.

Blood, like most other fluids, is less viscous at higher temperatures. It is estimated that a 1°C increase in temperature results in a 2% decrease in blood viscosity.
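Taking the roughly 2% per °C figure above at face value, a linearized approximation (a sketch valid only near normal body temperature, not a clinical formula) is:

\[
\eta(T) \approx \eta(37\,^{\circ}\mathrm{C})\left[1 - 0.02\,(T - 37)\right]
\]

so a febrile patient at 39 °C would be expected to show on the order of 4% lower blood viscosity than at 37 °C, other determinants held constant, while hypothermia would raise viscosity correspondingly.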

Viscous Blood is Abrasive Blood

Efficient blood flow through the vessels is laminar: the blood forms layers, or laminae, that slide easily over each other.

  • Faster flowing blood can be found in the central layers and
  • Slower moving blood in the outer layers near the vessel walls.
  • Hyper-viscous blood doesn’t slide as smoothly as less viscous blood.
  • The turbulence damages the delicate intima of the blood vessel.

One of the most common locations for the development of atherosclerotic plaques is at the bifurcation of the carotid arteries, and the positioning of these plaques can be mapped to the turbulent blood flow patterns of this region.

Blood viscosity is highly correlated with thickening of the carotid intima-media, a prelude to plaque formation.  As the carotid arteries become progressively more occluded, there is decreased blood supply to the brain.

Hyper-viscosity also impacts the brain at the level of micro-perfusion.  Stiffened red blood cells have a decreased ability to bend and fold as they pass through capillaries. This leads to endothelial abrasion.  The capillary walls thicken and diffusion of oxygen and nutrients into the tissues decreases. The effect is most pronounced in those tissues where perfusion is essential for unimpaired function, such as the brain.

Diabetes, Blood Viscosity, and Dementia

Not only do diabetics have elevated blood viscosity; blood viscosity is also a risk factor that predicts progression from metabolic syndrome to diabetes. Red blood cell flexibility is greatly reduced by fluctuations in the osmolality of the blood, which is affected by the blood glucose concentration. Uncontrolled, this leads to small vessel disease.

  • Blindness,
  • kidney insufficiency, and
  • leg ischemia

develop as these organs are dependent on micro-perfusion.

The Rotterdam Study and other research point to decreased cognitive function and increased dementia among diabetics as being further manifestations of the decreased perfusion that accompanies elevated blood viscosity.

 

Blood Viscosity, Cognitive Decline, and Alzheimer’s

Multiple forms of cognitive decline, including dementia and Alzheimer’s disease, are impacted by increased blood viscosity. The Edinburgh Artery Study (2010) showed that blood viscosity predicted cognitive decline over a four-year period in 452 elderly subjects (p<0.05).  Blood viscosity, an important determinant of circulatory flow, was significantly linked with cognitive function.  The associations between cardiovascular risk factors, vascular dementia, and Alzheimer’s disease were presented by de la Torre (2002) (nine points of evidence) in a compelling argument that Alzheimer’s is a vascular disorder characterized by impaired micro-perfusion of the brain.

Testing for Blood Viscosity

The most recent technology uses an automated scanning capillary tube viscometer capable of measuring viscosity over the complete range of physiologic values experienced in a cardiac cycle (10,000 shear rates) with a single continuous measurement. This test provides clinicians with measurements of blood viscosity at both systolic and diastolic pressures.

Blood viscosity testing is indicated for a wide range of patients, as good tissue perfusion is central to good health regardless of what system is being addressed.  Patients with signs of cognitive decline should be high on the list of those appropriate to test, as should patients with a history or family history of heart disease, stroke, hypertension, diabetes, metabolic syndrome, migraines, smoking, alcoholism, or other risk factors associated with the development of Alzheimer’s disease.

Source: Larsen P, Monitoring Blood Viscosity to Improve Cognitive Function

  1. World Health Organization. Dementia: A Public Health Priority. April, 2012.
  2. Sloop GD. A unifying theory of atherogenesis. Med Hypotheses. 1996; 47:321-5.
  3. Kensey KR and Cho, Y. Physical Principles and Circulation: Hemodynamics. In: The Origin of Atherosclerosis: What Really Initiates the Inflammatory Process. 2nd Ed. Summersville, WV: SegMedica; 2007:33-50.
  4. Hofman A., Ott A, et. al. Atherosclerosis, apolipoprotein E, and prevalence of dementia and Alzheimer’s disease in the Rotterdam Study. Lancet, 1997, 349 (9046): 151-154

 

Sleep Apnea and Blood Viscosity

Obstructive sleep apnea (OSA) is an important public health concern, which affects around 2–4% of the population. Left untreated, it causes a decrease not only in quality of life, but also of life expectancy. Despite the fact that knowledge about the mechanisms of development of cardiovascular disease in patients with OSA is still incomplete, observations confirm a relationship between sleep disordered breathing and the rheological properties of blood.

Tażbirek M, Słowińska L, Kawalski M, Pierzchała W.   The rheological properties of blood and the risk of cardiovascular disease in patients with obstructive sleep apnea syndrome (OSAS) Folia Histochemica et Cytobiologica 2011; 49(2):206–210.

Hemostatic and Rheological Risk Factors and the Risk Stratification

Background: Thrombosis is regarded to be a key factor in the development of acute coronary syndromes in patients with coronary artery disease (CAD). We hypothesize that hemostatic and rheological risk factors may be of major relevance for the incidence and the risk stratification of these patients.

Methods: In 243 patients with coronary artery disease and stable angina pectoris, parameters of metabolism, hemostasis, blood rheology and endogenous fibrinolysis were assessed.

Patients were prospectively followed for 2 years in respect to elective revascularizations and acute coronary syndromes.

Results: During follow-up, 88 patients presented with cardiac events; 22 of these were admitted to the hospital because of acute events, and 5 patients were excluded due to non-cardiac death.

Patients with clinical events were found to be more frequently diabetic and presented with more advanced coronary atherosclerosis. Even though patients with diabetes mellitus demonstrated a comparable level of multivessel disease (71% vs. 70%), the rate of elective revascularization was higher (41% vs. 28%, p < 0.05). The results were also unfavorable for the incidence of acute cardiovascular events (18% vs. 8%, p < 0.01).

In comparison to non-diabetic patients, diabetics demonstrated significantly elevated levels of

  • fibrinogen (352 ± 76 vs. 312 ± 64 mg/dl, p < 0.01),
  • plasma viscosity (1.38 ± 0.23 vs. 1.31 ± 0.16 mPa·s, p < 0.01),
  • red blood cell aggregation (13.2 ± 2.5 vs. 12.1 ± 3.1 E, p < 0.05), and
  • plasminogen activator inhibitor (6.11 ± 3.4 vs. 4.7 ± 2.7 U/l, p < 0.05).

Conclusion: Pathological alterations of fibrinogen, blood rheology and plasminogen activator inhibitor, as indicators of a procoagulant state, are of major relevance for the short-term incidence of cardiac events, especially in patients with type 2 diabetes mellitus, and may be used to stratify patients to specific therapies.

Parameters of metabolism, hemostasis, endogenous fibrinolysis and blood rheology for patients with and without diabetes mellitus:

Parameter                   Diabetes mellitus   Non-diabetic patients   p-value
Glucose (mg/dl)             157 ± 67            88 ± 12                 <0.0001
Fibrinogen (mg/dl)          351 ± 76            312 ± 64                <0.01
Plasma viscosity (mPa·s)    1.38 ± 0.23         1.31 ± 0.16             <0.01

Jax TW, Peters AJ, Plehn G, and  Schoebel FC. Hemostatic risk factors in patients with coronary artery disease and type 2 diabetes – a two year follow-up of 243 patients. Cardiovascular Diabetology 2009; 8:48-57.  doi:10.1186/1475-2840-8-48

 

Abnormal Viscosity in Pregnancy

Abnormal hemorheology has been shown to be present in almost all conditions associated with accelerated atherosclerotic cardiovascular disease. The aim of this study was to test the hypothesis that a high concentration of plasma triglyceride (TG) predicts altered hemorheological variables in normal pregnancy.

Sixty pregnant women attending antenatal clinic of the University of Ilorin Teaching Hospital at 14-36 weeks of gestation (aged 21-36 years) were recruited after giving informed consent to participate in the study. They consisted of 28 primigravidae and 32 multigravidae. Twenty-four healthy non-pregnant women of similar age and socioeconomical status were also recruited. The study showed that fasting plasma Triglyceride (TG) increased significantly in primigravidae and multigravidae.

There was a positive correlation between plasma TG level and blood viscosity (r = 0.36, p<0.01). TG also correlated positively with hematocrit (r = 0.48, p<0.001), hemoglobin concentration (r = 0.43, p<0.001) and white blood cell count (r = 0.38, p<0.01) in the pregnant group as a whole. In primigravidae, there was a strong correlation between TG and

  • blood viscosity (r = 0.63, p<0.001),
  • hematocrit (r = 0.88, p<0.001), and
  • hemoglobin concentration (r = 0.85, p<0.001).

However, there was an insignificant correlation between TG and the hemorheological variables in multigravidae.

Plasma TG concentration in primigravidae is strongly associated with blood viscosity, and also with hematocrit and hemoglobin concentration, but the association is lost in multigravidae. Therefore, TG could be considered an important potential indicator of altered blood rheology in primigravidae, but not in multigravidae.

Olatunji LA, Soladoye AO, Fawole AA, Jimoh RO, Olatunji VA. Association between Plasma Triglyceride and Hemorheological Variables in Nigerian Primigravidae and Multigravidae. Research Journal of Medical Sciences 2008; 2(3):116-120. ISSN: 1815-9346.

 

Retinal Vein Occlusion

Retinal vein occlusion (RVO) is an important cause of permanent visual loss. Hyperviscosity, due to alterations of blood cells and plasma components, may play a role in the pathogenesis of RVO. The aim of this case-control study was to evaluate the possible association between hemorheology and RVO. In 180 RVO patients and in 180 healthy subjects comparable for age and gender, we analysed the whole hemorheological profile: [whole blood viscosity (WBV), erythrocyte deformability index (DI), plasma viscosity (PLV), and fibrinogen]. WBV and PLV were measured using a rotational viscosimeter, whereas DI was measured by a microcomputer-assisted filtrometer. WBV at 0.512 sec-1 and 94.5 sec-1 shear rates, as well as DI, but not PLV, were significantly different in patients as compared to healthy subjects.

In univariate logistic analysis, a significant association with RVO was found for

  • the highest tertile of WBV at the 94.5 sec-1 shear rate (OR: 4.91, 95% CI 2.95–8.17; p<0.0001),
  • the highest tertile of WBV at the 0.512 sec-1 shear rate (OR: 2.31, 95% CI 1.42–3.77; p<0.0001), and
  • the lowest tertile of DI (OR: 0.18, 95% CI 0.10–0.32; p<0.0001).

After adjustment for potential confounders,

  • the highest tertiles of WBV at 0.512 sec-1 shear rate (OR: 3.23, 95%CI 1.39–7.48; p=0.006),
  • WBV at 94.5 sec-1 shear rate (OR: 6.74, 95%CI 3.06–14.86; p<0.0001) and
  • the lowest tertile of DI (OR: 0.20, 95% CI 0.09–0.44; p<0.0001)

remained significantly associated with the disease. In conclusion, the data indicate that an alteration of hemorheological parameters may modulate the susceptibility to RVO.

Sofi F, Mannini L, Marcucci R, Bolli P, Sodi A, et al.  Role of hemorheological factors in patients with retinal vein occlusion. In Blood Coagulation, Fibrinolysis and Cellular Haemostasis.  Thromb Haemost 2007; 98:1215–1219.

Summary:  This discussion is a two part sequence that first establishes the known strong relationship between blood flow viscosity, shear stress, and plasma triglycerides (VLDL) as risk factors for hemostatic disorders leading to thromboembolic disease, and the association with atherosclerotic disease affecting the heart, the brain (via carotid blood flow), peripheral circulation, the kidneys, and retinopathy as well.


Reporter: Aviva Lev-Ari, PhD, RN

Aspirin Use, Tumor PIK3CA Mutation, and Colorectal-Cancer Survival

N Engl J Med 2012; 367:1596-1606. October 25, 2012. DOI: 10.1056/NEJMoa1207756

[Word cloud by Danielle Smolyar]

BACKGROUND

Regular use of aspirin after a diagnosis of colon cancer has been associated with a superior clinical outcome. Experimental evidence suggests that inhibition of prostaglandin-endoperoxide synthase 2 (PTGS2) (also known as cyclooxygenase-2) by aspirin down-regulates phosphatidylinositol 3-kinase (PI3K) signaling activity. We hypothesized that the effect of aspirin on survival and prognosis in patients with cancers characterized by mutated PIK3CA (the phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit alpha polypeptide gene) might differ from the effect among those with wild-type PIK3CA cancers.

METHODS

We obtained data on 964 patients with rectal or colon cancer from the Nurses’ Health Study and the Health Professionals Follow-up Study, including data on aspirin use after diagnosis and the presence or absence of PIK3CA mutation. We used a Cox proportional-hazards model to compute the multivariate hazard ratio for death. We examined tumor markers, including PTGS2, phosphorylated AKT, KRAS, BRAF, microsatellite instability, CpG island methylator phenotype, and methylation of long interspersed nucleotide element 1.

RESULTS

Among patients with mutated-PIK3CA colorectal cancers, regular use of aspirin after diagnosis was associated with superior colorectal cancer–specific survival (multivariate hazard ratio for cancer-related death, 0.18; 95% confidence interval [CI], 0.06 to 0.61; P<0.001 by the log-rank test) and overall survival (multivariate hazard ratio for death from any cause, 0.54; 95% CI, 0.31 to 0.94; P=0.01 by the log-rank test). In contrast, among patients with wild-type PIK3CA, regular use of aspirin after diagnosis was not associated with colorectal cancer–specific survival (multivariate hazard ratio, 0.96; 95% CI, 0.69 to 1.32; P=0.76 by the log-rank test; P=0.009 for interaction between aspirin and PIK3CA variables) or overall survival (multivariate hazard ratio, 0.94; 95% CI, 0.75 to 1.17; P=0.96 by the log-rank test; P=0.07 for interaction).

CONCLUSIONS

Regular use of aspirin after diagnosis was associated with longer survival among patients with mutated-PIK3CA colorectal cancer, but not among patients with wild-type PIK3CA cancer. The findings from this molecular pathological epidemiology study suggest that the PIK3CA mutation in colorectal cancer may serve as a predictive molecular biomarker for adjuvant aspirin therapy. (Funded by the National Institutes of Health and others.)

SOURCE:

http://www.nejm.org/doi/pdf/10.1056/NEJMoa1207756
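The abstract above reports multivariate hazard ratios from a Cox proportional-hazards model, with a test for interaction between aspirin use and PIK3CA status. As a hedged sketch of that type of analysis (not the authors' actual code; the data frame and column names below are hypothetical), in Python with the lifelines package:

import pandas as pd
from lifelines import CoxPHFitter

# df: one row per patient, with follow-up time in months, an event indicator
# (1 = death from colorectal cancer), post-diagnosis aspirin use (0/1),
# PIK3CA mutation status (0/1), and covariates used for adjustment.
df["aspirin_x_pik3ca"] = df["aspirin_post_dx"] * df["pik3ca_mut"]

cph = CoxPHFitter()
cph.fit(
    df[["time_months", "cancer_death", "aspirin_post_dx", "pik3ca_mut",
        "aspirin_x_pik3ca", "age", "stage", "grade"]],
    duration_col="time_months",
    event_col="cancer_death",
)

# exp(coef) gives the multivariate hazard ratios; the interaction term tests
# whether the aspirin effect differs between mutated and wild-type PIK3CA.
cph.print_summary()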

Study Shows Aspirin Could Increase Survival in Colorectal Cancer Patients with PIK3CA Mutations

November 28, 2012

By mining epidemiological data from several long-term health studies and combining it with genomic data, a team led by the Dana-Farber Cancer Institute and Harvard Medical School has shown that colorectal cancer patients with PIK3CA mutations may benefit from treatment with aspirin, and that PIK3CA mutation status could serve as a biomarker to predict response to aspirin treatment.

The study, published last month in the New England Journal of Medicine, evaluated data from 964 patients with colon or rectal cancer from the Nurses’ Health Study and the Health Professionals Follow-up Study. It found that patients with PIK3CA-mutated cancers who regularly took aspirin after their diagnosis had significantly longer survival, while those with wild-type cancers showed no benefit from aspirin treatment.

According to the researchers, led by Dana Farber’s Shuji Ogino, the results suggest that aspirin might be worth testing as an adjuvant treatment for the approximately 20 percent of colorectal cancer patients with PIK3CA mutations.

“What we conclude is that this PIK3CA mutation can be a predictive biomarker, and based on molecular testing, doctors could strongly or weakly recommend aspirin,” Ogino told PGx Reporter.

According to the group, numerous observational and other studies have suggested that aspirin might play a protective role in colorectal cancer. Aspirin is currently prescribed to some colorectal cancer patients, Ogino said, but so far there has been no way to predict which patients are likely to actually benefit from it.

Ogino said his team’s previous research found that levels of the enzyme PTGS2 could predict response to aspirin treatment, but the association didn’t reach statistical significance. And because of a lack of good standards for measuring PTGS2 using immunohistochemistry, the group wanted to search for a better, more objective marker.

According to the group, other experiments have suggested that as aspirin inhibits PTGS2 it also down-regulates PI3K signaling, which hinted that PIK3CA mutations could be a potential marker as well.

“Based on previous studies, we hypothesized that PIK3CA mutation may be a good marker for aspirin response,” Ogino said. Testing this hypothesis prospectively, he said, would have taken decades, but by using epidemiological data coupled with molecular data the group was able to find an answer much more quickly.

In the recent NEJM study, Ogino and his colleagues compared the survival of colorectal patients who reported that they regularly used aspirin after their diagnosis with those who didn’t, and further subdivided the group into those with PIK3CA mutations and those without.

The team studied samples from a subset of 964 patients from the two large longitudinal health studies for which the relevant aspirin use data was available, collecting specimens from the registries and using pyrosequencing to establish PIK3CA mutation status for each patient’s tumor. The group also recorded whether samples had BRAF or KRAS mutations.

The researchers found that patients with PIK3CA mutations who reported regular aspirin use had a significantly improved five-year survival rate — 97 percent — over those who didn’t take aspirin — 74 percent.

In contrast, patients without the mutation showed no difference in survival whether they took aspirin regularly or not.

Because the group had previously found that PTGS2 levels were also predictive of response to aspirin use, the researchers evaluated whether a combination of both markers could serve as an even stronger predictor. According to the study authors, the strongest effect of aspirin use was indeed seen in patients with both markers, though this finding did not reach high statistical significance.

Because the study sampled patients treated before 2006, the group assumed that chemotherapy treatment was similar for the PIK3CA-mutated cases and the wild-type cases. According to the researchers, information on patients’ mutation status was not available to treating physicians at the time of the studies.

The team also distinguished between aspirin use before and after diagnosis, finding that pre-diagnosis use did not seem to influence the relationship between PIK3CA and post-diagnosis aspirin.

Ogino said that the group is pursuing avenues to validate the findings. Unfortunately, relatively few trials of aspirin treatment in colorectal cancer have been conducted.

One option, he said, would be to analyze data from a trial of celecoxib (Pfizer’s Celebrex), a drug similar to aspirin, instead, though that is not an ideal solution. If the results reflect what the group found in its aspirin study, they would shore up the aspirin finding; if they do not match up, it would be unclear what that means for the group’s original findings.

Potentially, the researchers could also use mouse models or cell lines, but this route has several downsides. Most important, Ogino said, is the fact that aspirin likely affects inflammation more than cancer cells themselves. “Cancer is not just the cancer cell, it’s a much more complicated system so you can’t assess it in the test tube, basically,” he said.

Molika Ashford is a GenomeWeb contributing editor and covers personalized medicine and molecular diagnostics.


SOURCE:

Read Full Post »

Ethical Differences: US Physicians vs UK Physicians

Reporter: Aviva Lev-Ari, PhD, RN

Article ID #7: Ethical Differences: US Physicians vs UK Physicians. Published on 11/28/2012

WordCloud Image Produced by Adam Tubman

 

Spotted on

Exclusive: How US and UK Physicians’ Ethics Differ

Harris Meyer

Nov 20, 2012

 

Introduction

US and UK physicians receive medical training so similar that they can readily practice in either the United States or the United Kingdom. They share a common history and culture and speak the same language, more or less.

But a new Medscape survey of nearly 25,000 US and UK physicians found that doctors in the 2 nations hold markedly different views on some thorny medical ethics issues.

There were notable contrasts in attitudes toward what doctors regard as

  • futile care,
  • maintaining patient confidentiality in certain situations,
  • alerting patients about poor-quality physicians, and
  • telling patients the truth about terminal conditions.

The biggest difference was seen on the question of whether to defer to the treatment wishes of patients’ families (Table).

Table. Differences in Attitudes Between US and UK Physicians, Medscape 2012 Ethics Report

Question | US Physicians | UK Physicians
Would you ever go against a family’s wishes to end treatment and continue treating a patient whom you felt had a chance to recover? | Yes: 23% | Yes: 57%
Is it ever acceptable to perform “unnecessary” procedures due to malpractice concerns? | Yes: 23% | Yes: 9%
Is it right to provide intensive care to a newborn who either will die soon or survive with an objectively terrible quality of life? | Yes: 34% | Yes: 22%
Would you ever hide information from a patient about a terminal or pre-terminal diagnosis if you believed it would help bolster the patient’s spirit? | Yes: 10% | Yes: 14%
Would you give life-sustaining therapy if you believed it to be futile? | Yes: 35% | Yes: 22%
Should physician-assisted suicide be allowed in some situations? | Yes: 47% | Yes: 37%
Would you inform a patient if he or she were scheduled to have a procedure done by a physician whose skill you knew to be substandard? | Yes: 47% | Yes: 32%
Is it acceptable to breach patient confidentiality if a patient’s health status could harm others? | Yes: 63% | Yes: 74%
Would you ever decide to devote scarce or costly resources to a younger patient rather than to one who was older but not facing imminent death? | Yes: 27% | Yes: 24%

© Medscape 2012

Several factors contribute to the differences: different views toward patient-centeredness; different medical liability climate; the way physicians are paid; national religious attitudes; and the nature of the relationship between physicians, patients, and patients’ families.

The survey was conducted as part of Medscape’s Physician Ethics Report 2012. Survey questionnaires were sent to physicians in a wide range of medical specialties in each country. Completed questionnaires were received from more than 24,000 US physicians and 940 UK physicians. The statistical significance of the differences in responses between US and UK doctors was not calculated.

One obvious difference that could affect attitudes is that most US physicians work either independently or for private hospital and medical groups and receive fee-for-service payment, while most UK physicians work directly or indirectly for the country’s socialized National Health Service (NHS). In Great Britain, most medical specialists work as salaried staff in publicly operated hospitals, while most primary care physicians work independently and receive a mix of fee-for-service payments, per-patient global payments, and salary.

“The big difference is the way the system is funded and the culture of the United Kingdom,” says Brian Jarman, MD, a medical professor at Imperial College in London who serves on the NHS’s advisory committee on resource allocation. “I don’t think our decisions are as affected by financial considerations as in the US.”

Another major distinction: There’s less medical malpractice litigation in the UK. On top of that, UK medical specialists receive liability coverage through their hospital, while general practitioners have their premiums offset by NHS payments. In the US, physicians worry a lot more about malpractice suits, and doctors in independent practice are responsible for paying sizable liability premiums on their own.

The largest percentage difference in the survey — and one of the most provocative findings — was seen on the question of whether the doctor would ever go against a family’s wishes to end treatment and continue treating a patient who the doctor felt had a chance to recover. Most UK physicians in the survey — 57% — said yes, compared with just 23% of US physicians. That finding cut against the view that UK doctors are more likely to ration, and it also highlighted an important cultural gap.

“In most places in the world, doctors think they know the right treatment and do it,” says Lachlan Forrow, MD, a Harvard University medical ethicist and palliative care specialist. “My German friends say patients and families expect doctors to make decisions. In the US we might defer more to the patient and family.”

On top of that, he adds, families in the US probably express their wishes with more vehemence than in the UK and are more likely to file a lawsuit if the doctor goes against their wishes.

Differences Were Surprising

But differing attitudes and responses to survey questions didn’t always fall along lines predictable by economics.

It’s often thought that UK doctors are more cost-conscious and more apt to ration services than US doctors are, given that US doctors are paid more for providing more procedures and services, while UK doctors work in a budgeted, socialized medicine environment. The responses to the survey, however, suggest that this is true in some situations and not true in others.

Even so, the experts found more similarities than differences in the responses, with large percentages of doctors from both countries responding to many of these tough ethical questions by choosing “it depends.” Indeed, the responses of US and UK doctors were comparable on most of the questions, including informing patients about medical errors, reporting impaired colleagues, performing abortions regardless of personal beliefs, and notifying patients about risks of a procedure when obtaining informed consent.

“One of the findings is how remarkably small the differences are,” says Don Berwick, MD, a pediatrics and health policy professor at Harvard University and former head of the Centers for Medicare & Medicaid Services who has done extensive quality-improvement consulting work with the UK’s NHS.

For a majority of issues, US and UK physicians are generally in agreement. For example, on the question of whether it’s right to provide intensive care to a newborn who either will die soon or survive with poor quality of life, US physicians were more likely than UK physicians to say yes — 34% vs 22%. But the largest group in both countries — about 40% — said that it depends.

Dr. Forrow says this finding shows that doctors in both countries properly base decisions on individual circumstances. “What if grandma wants to see the baby before she dies and the baby won’t suffer? So it does depend.”

Candor With Patients

Another intriguing difference came on the question of whether the doctor would hide information from a patient about a terminal or pre-terminal diagnosis if the doctor believed it would help the patient’s spirit. Far more US than UK doctors — 72% vs 54% — said, “No, I am always completely truthful about diagnoses,” while more UK than US doctors — 33% vs 18% — said that it depends.

Dr. Berwick says this difference may result from a stronger sense of customer focus in the US. “Patient-centeredness as a fundamental property is better developed in the US than in the UK,” he says. “US doctors say it’s the patient’s right to know, while British doctors might say, ‘In my judgment it would be better for patients for me to not always be completely truthful.'”

Doctors in the 2 countries also differed on the question of whether they would ever give life-sustaining therapy that they believed to be futile, with 35% of US doctors and just 22% of UK doctors saying yes. About 40% of both groups said that it depends.

“The implication is that there is a financial incentive in the US to maintain the end-of-life patient in the hospital, and that incentive is not there in the UK,” Dr. Jarman says.

Societal and Religious Differences

Similarly, US and UK doctors differed on the question of whether it’s right to provide intensive care to a newborn who either will die soon or survive with poor quality of life, with US physicians more likely to say yes.

Both Dr. Forrow and Dr. Jarman agreed that there likely are societal religious factors influencing these differences over whether to provide what could be called futile care.

“The US is a more religious society,” Dr. Forrow says. “We do all kinds of things that are not medically necessary but the patient thinks they are necessary. When doctors think something is futile, patients and families object more. They say, ‘Give God a chance.'”

In contrast, Dr. Jarman says, “The UK is not a religious country and people don’t go to church as much, so those considerations wouldn’t be there.”

Despite greater religiosity in the US, American doctors were somewhat more likely than UK doctors to say that physician-assisted suicide should be allowed in some situations — 47% to 37%. That could be related to the fact that physician-assisted death for terminally ill patients is legal in 3 US states but remains illegal in the UK.

Protecting Other Physicians?

US doctors also were more likely than UK doctors to say that they would inform a patient if they felt a doctor scheduled to perform a procedure on the patient had substandard skill levels — 47% to 32%. Nearly 40% in both countries said that it depends.

“British doctors are more protective of their colleagues than US doctors are,” Dr. Berwick says. “This implies that US doctors are getting a little more comfortable about transparency on clinical performance.”

Dr. Jarman said that this difference in attitude could be a holdover from his country’s old General Medical Council rule, abolished in the 1980s, under which a doctor who reported a colleague for doing something wrong risked being barred from practice.

Finally, the survey showed a difference in attitude toward patient confidentiality and reporting communicable diseases. UK doctors were more likely than US doctors to say that it’s acceptable to breach patient confidentiality if a patient’s health status could harm others — 74% to 63%.

Dr. Berwick explained this by saying that more UK doctors than US doctors receive public training that encourages reporting of communicable diseases, and that the US has a very strong patient confidentiality and privacy law.

Dr. Jarman noted that the General Medical Council rules encourage physicians to break confidentiality and report patients’ communicable diseases or other conditions posing harm or risk to others. “If someone is causing harm to others, doctors are correct in breaking confidentiality for the good of the state,” he says.

Dr. Berwick says that the results of the Medscape survey are complex, revealing some important differences between US and UK physicians. But overall he feels reassured by their shared ethical values.

“A significant portion in both countries say that they will make decisions based on the details of the case,” he says. “They are willing to consider treatment efficacy. They are sensitive to the social world of the patient and what the families are feeling. They are connecting in the most humane way to the patient’s entire circumstance.”

Dr. Jarman says he found the survey interesting and challenging. “You know the correct answers but you also know that with certain patients you’ve got to be human and not totally follow the rules,” he says. “You have to be a little bit human about it.”

SOURCE:

http://www.medscape.com/viewarticle/774737

 

 

 

Read Full Post »

Author: Tilda Barliya PhD

In response to the previous post:

Paclitaxel vs Abraxane (albumin-bound paclitaxel)

http://pharmaceuticalintelligence.com/2012/11/17/paclitaxel-vs-abraxane-albumin-bound-paclitaxel/

Pharmacogenomic properties are presented below.

Paclitaxel is a mitotic inhibitor used in cancer chemotherapy. It was discovered in a U.S. National Cancer Institute program at the Research Triangle Institute (North Carolina) in 1967, when Monroe E. Wall and Mansukh C. Wani isolated it from the bark of the Pacific yew tree, Taxus brevifolia, and named it taxol. Later it was discovered that endophytic fungi in the bark synthesize paclitaxel.

Paclitaxel is currently indicated for lung, breast, and ovarian cancer, as well as head and neck cancer and advanced forms of Kaposi’s sarcoma.

The administration of paclitaxel (Taxol®) through intravenous infusions was achieved by using Cremophor® EL as a vehicle to entrap the drug in micelles and keep it in solution, which affects the disposition of paclitaxel and is responsible for the nonlinear pharmacokinetics of the drug, especially at higher dose levels. (http://www.futuremedicine.com/doi/pdf/10.2217/pgs.10.32)

Although nonlinear pharmacokinetics (dose-dependent kinetics) may arise in any aspect of pharmacokinetics (absorption, distribution, and/or elimination), the focus here is on metabolism, i.e., the Michaelis–Menten (MM) kinetics of the drug. http://archive.ajpe.org/legacy/pdfs/aj650212.pdf
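For reference, Michaelis–Menten (saturable) elimination is usually written as

\frac{dC}{dt} = -\frac{V_{\max}\, C}{K_m + C}

where C is the plasma concentration, V_max the maximum elimination rate, and K_m the concentration at which elimination runs at half its maximum. When C is well below K_m the rate approximates first-order (dose-proportional) kinetics; as C approaches or exceeds K_m, elimination saturates toward the constant V_max, so Cmax and AUC rise more than proportionally with dose, which is the nonlinearity described for paclitaxel.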

Briefly, it is known that some of these adverse effects, such as hypersensitivity reactions, were diminished with the administration of corticosteroid and H1/H2 antihistamine premedication, and the incidence of grade 3/4 neutropenia was reduced by administering granulocyte colony-stimulating factor (G-CSF) and by shortening the paclitaxel infusion time from 24 to 3 h. However, the neurotoxicity, which was believed to be caused by either paclitaxel or Cremophor EL, could not be controlled and became the dose-limiting toxicity of the drug. It was later found that paclitaxel itself was responsible for the neurotoxic effects (http://annonc.oxfordjournals.org/content/6/7/699.abstract).

Pharmacokinetics and Pharmacodynamics

The selection of pharmacokinetic (PK) parameter end points and basic model types for exposure-toxicity relationships of paclitaxel is usually based on tradition rather than physiological relevance.

Pharmacokinetic (PK)–pharmacodynamic (PD) relationships for paclitaxel are still most commonly described with empirically designed threshold models, which have little or no mechanistic basis and lack usefulness when applied to conditions (eg, schedules, vehicles, or routes of administration) different from those from which they were originally derived (http://jco.ascopubs.org/content/21/14/2803.long). As such, the AUC of unbound paclitaxel is a highly important pharmacokinetic parameter for describing exposure–neutropenia relationships (unbound paclitaxel has not yet been evaluated in this way). (http://clincancerres.aacrjournals.org.rproxy.tau.ac.il/content/1/6/599.full.pdf+html)

The clearance of Cremophor EL in patients was found to be time-dependent, resulting in disproportionate increases in systemic exposure when the infusion was shortened from 3 hours to 1 hour.

One study (http://clincancerres.aacrjournals.org/content/1/6/599) compared the pharmacokinetics (PK) and pharmacodynamics (PD) of paclitaxel between Phase I trials of 3- and 24-h infusions. The study had three main goals:

  • (a) to compare the PK and PD of paclitaxel between Phase I studies of 3- and 24-h infusion,
  • (b) to examine the relationship between PK and PD
  • (c) to determine the most informative pharmacokinetic parameter to describe the PD.

Note: Although this study was conducted in ~1993–1995, it has been cited extensively and paved the way for other clinical trials with similar results.

Twenty-seven patients were treated in a Phase I study of paclitaxel given by 3-h infusion at one of six doses: 105, 135, 180, 210, 240, and 270 mg/m2. Pharmacokinetic data were obtained from all patients; paclitaxel concentrations were measured in plasma and urine using HPLC. Similar eligibility criteria were used for the 24-h infusion study, in which the doses were 49.5, 75, 105, 135, and 180 mg/m2. Plasma and urine samples for pharmacokinetic evaluation of paclitaxel were collected.

Pharmacokinetic Analysis: The pharmacokinetic parameters Cmax, AUC, t1/2, and MRT were obtained by a noncompartmental moment method. Cmax was the observed peak concentration; AUC and MRT were computed by trapezoidal integration with extrapolation to infinite time.
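As a hedged sketch of those noncompartmental calculations (observed Cmax, trapezoidal AUC and AUMC with extrapolation to infinite time, terminal half-life, and MRT), assuming simple arrays of sampling times and plasma concentrations with a log-linear terminal phase, in Python:

import numpy as np

def nca_parameters(t, c, n_terminal=3):
    """Noncompartmental PK parameters from concentration-time data.
    t: sampling times (h); c: plasma concentrations; 1-D arrays of equal length.
    Assumes the last n_terminal points lie on the log-linear terminal phase."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    cmax = c.max()                                      # observed peak concentration
    lam_z = -np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)[0]
    t_half = np.log(2) / lam_z                          # terminal half-life
    auc_last = np.trapz(c, t)                           # linear trapezoidal AUC(0-tlast)
    auc_inf = auc_last + c[-1] / lam_z                  # extrapolated to infinite time
    aumc_last = np.trapz(c * t, t)                      # first-moment (AUMC) to tlast
    aumc_inf = aumc_last + c[-1] * t[-1] / lam_z + c[-1] / lam_z**2
    mrt = aumc_inf / auc_inf                            # mean residence time
    return {"Cmax": cmax, "t1/2": t_half, "AUC": auc_inf, "MRT": mrt}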

Pharmacodynamic Analysis: The pharmacokinetic/pharmacodynamic relationships were modeled with the sigmoid maximum-effect (Emax) model.
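The sigmoid maximum-effect (Emax, or Hill) model relates an exposure measure C (for example AUC, or time above a threshold concentration) to the observed effect E, here the percentage decrease in granulocyte count:

E = \frac{E_{\max}\, C^{\gamma}}{EC_{50}^{\gamma} + C^{\gamma}}

where E_max is the maximal achievable effect, EC_50 is the exposure producing half of that effect, and the Hill coefficient γ sets the steepness of the exposure-response curve.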

Results:

Pharmacokinetic analysis:

The drug plasma concentration increased throughout the 3-h infusion period and began to decrease immediately upon cessation of the infusion, with a t1/2 of 9.9–16.0 h and an MRT of 6.47–10.24 h (Fig. 1). Both Cmax and AUC increased with increasing doses (r = 0.865, P < 0.001 for Cmax; r = 0.870, P < 0.001 for AUC), although the pharmacokinetic behavior appeared to be nonlinear (Fig. 2). The mean Cmax and AUC at a dose of 270 mg/m2 were more than 3-fold greater than those at a dose of 135 mg/m2. CL and V decreased with increasing doses (Table 1). The urinary excretion of paclitaxel over 75 h was less than 15% of the dose administered, indicating that non-renal excretion is the primary route of drug elimination.


Comparison of PD between 3-h and 24-h Infusion Groups

In the 3-h schedule, AUC and the duration of plasma concentration above a threshold (T >) of 0.05–0.1 µM correlated with the percentage decrease in granulocytes, with p values less than 0.05. The best parameter predicting granulocytopenia was T > 0.09 µM, by the minimum of the Akaike Information Criterion. In the 24-h schedule, dose, AUC, and T > 0.04–0.07 µM correlated with the percentage decrease in granulocytes; the best parameter predicting granulocytopenia on that schedule was T > 0.05 µM.
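The predictor T > 0.05 µM is simply the time the plasma concentration remains above a fixed threshold. A minimal sketch of how it could be computed from sampled concentration-time data, assuming linear interpolation between sampling points (formal analyses often use log-linear interpolation on the declining phase), in Python:

import numpy as np

def time_above_threshold(t, c, threshold):
    """Total time (h) the concentration profile exceeds `threshold`,
    using linear interpolation at threshold crossings."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    total = 0.0
    for i in range(len(t) - 1):
        c0, c1, dt = c[i], c[i + 1], t[i + 1] - t[i]
        if c0 >= threshold and c1 >= threshold:          # whole interval above threshold
            total += dt
        elif c0 >= threshold or c1 >= threshold:         # interval crosses the threshold
            total += dt * (max(c0, c1) - threshold) / abs(c1 - c0)
    return total

# Example with a hypothetical post-infusion profile (times in h, concentrations in µM):
# time_above_threshold([0, 3, 6, 12, 24], [0.0, 4.1, 0.9, 0.2, 0.03], 0.05)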

Nonhematological toxicities such as peripheral neuropathy, hypotension, and arthralgia/myalgia, observed mainly in the 3-h infusion group, showed no relationship with Cmax or AUC (both much higher in the 3-h infusion group), although peripheral neuropathy and musculoskeletal toxicity have been suggested to be associated with AUC on a 6-h (12) or 24-h (29) schedule.

Pharmacogenomics:

In the past, the major adverse effects encountered with Taxol were severe hypersensitivity reactions, mainly attributed to Cremophor EL; hematologic toxicity, primarily appearing in the form of severe neutropenia; and neurotoxicity, mainly seen as cumulative sensory peripheral neuropathy. The mechanism for the neurotoxicity has been demonstrated to involve ganglioneuropathy and axonopathy caused by dysfunctional microtubules in dorsal root ganglia, axons and Schwann cells.

Variability in paclitaxel pharmacokinetics has been associated with the adverse effects of the drug. Thus, polymorphisms in genes encoding paclitaxel-metabolizing enzymes, transporters, and therapeutic targets have been suggested to contribute to the interindividual variability in toxicity and response.

Further characterization of genes involved in paclitaxel elimination and drug response was performed, including the identification of their most relevant genetic variants. The organic anion transporting polypeptide (OATP) 1B3 was identified as a key protein for paclitaxel hepatic uptake, and polymorphisms were characterized in the genes encoding paclitaxel-metabolizing enzymes and transporters (CYP2C8, CYP3A4, CYP3A5, P-glycoprotein, and OATP1B3) (http://www.futuremedicine.com/doi/pdf/10.2217/pgs.10.32).

***It is important to note that the allele frequencies for many of these polymorphisms are subject to important ethnicity-specific differences, with some alleles exclusively present in specific populations (e.g., the Caucasian CYP2C8*3).

For the CYP2C8 gene, two alleles common in Caucasians that result in amino acid changes, CYP2C8*3 (R139K; K399R) and CYP2C8*4 (I264M), were described. The former has been shown to possess altered activity, while the latter does not seem to have functional consequences. In addition, two CYP2C8 haplotypes were recently shown to confer an increased and a reduced metabolizing activity, respectively.

CYP3A5 was found to be highly polymorphic owing to CYP3A5*3, CYP3A5*6 and CYP3A5*7, with the latter two being African-specific polymorphisms.

Pharmacogenetic studies comparing the most relevant polymorphisms in these genes and paclitaxel pharmacokinetics have rendered contradictory results, with some studies finding no associations while others reported an effect for ABCB1, CYP3A4 or CYP2C8 polymorphisms on specific pharmacokinetic parameters.
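The analysis behind such reports is typically a regression of a pharmacokinetic parameter on genotype. A hedged sketch (hypothetical data frame and column names, not any specific published study) of testing whether CYP2C8*3 carrier status shifts paclitaxel clearance, in Python with statsmodels:

import numpy as np
import statsmodels.formula.api as smf

# df: one row per patient, with paclitaxel clearance, the number of CYP2C8*3
# alleles carried (0, 1 or 2), and covariates such as age and dose level.
df["log_cl"] = np.log(df["clearance"])

# Additive genetic model: each *3 allele shifts log-clearance by the same amount
model = smf.ols("log_cl ~ cyp2c8_star3_alleles + age + dose_mg_m2", data=df).fit()
print(model.summary())

# The exponentiated coefficient is the fold-change in clearance per *3 allele
print(np.exp(model.params["cyp2c8_star3_alleles"]))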

Again, with respect to paclitaxel neurotoxicity risk, some studies have rendered positive results for ABCB1, CYP2C8 and CYP3A5 polymorphisms, while others found no significant associations.

Note: These differences might be caused by underpowered studies and by differences in the patients under study.

Changes affecting microtubule structure and/or composition have been shown to affect paclitaxel efficacy, probably by reducing drug–target affinity. In particular, resistance to tubulin-binding agents has been associated with overexpression of β-tubulin isotype III, which seems to be caused by deregulation of the microRNA-200 family.

However, the clinical utility of these findings remains to be established; furthermore, the identification of biomarkers that could be used to individualize paclitaxel treatment remains a challenge.

In summary,

  1. Pharmacokinetics: Paclitaxel appears to have non-linear (dose-dependent) pharmacokinetics.
  2. Pharmacokinetics–Pharmacodynamics: Previous clinical trials did NOT take the unbound concentration of paclitaxel into account in the PK analysis; newly designed clinical trials should do so. This is important because the neurotoxicity is attributed to paclitaxel itself and not to its vehicle Cremophor EL (as shown in the PD analysis).
  3. It is difficult to compare the 3-h and 24-h infusion schedules, as most clinical trials did NOT use similar dose regimens, making the comparison very hard.
  4. Pharmacogenetics: Several polymorphisms have been suggested to contribute to the interindividual variability in toxicity and response.
  5. Prospective pharmacogenetic-guided clinical trials will be required in order to accurately establish the utility of the identified markers/strategies for patients and healthcare systems.

 

Read Full Post »

Digital Publishing Promotes Science and Popularizes it by Access to Scientific Discourse

Curator & Reporter: Aviva Lev-Ari, PhD, RN

Article ID #6: Digital Publishing Promotes Science and Popularizes it by Access to Scientific Discourse. Published on 11/27/2012

WordCloud Image Produced by Adam Tubman

 

On the pages of this Open Access Online Scientific Journal we represent the interdisciplinary nature of Health and Disease: the science and the art that must be mastered to enable care delivery and the development of therapies beyond the state of the science in 2012.

While we agree with Dr. Douglas Fields that there are 50 Shades of Grey in Scientific Publication and that the Scientific Publishing Industry as we know it has been irrevocably transformed, we disagree with his conclusion that Digital Publishing Is Harming Science. The transformation of the industry mirrors the millions of readers for whom scientific content has been popularized by social media and enabled technically by the ubiquitous nature of the WWW as content host and the Internet as a conduit of information.

More on the transformation of Scientific publishing

“Open Access Publishing” is becoming the mainstream model: “Academic Publishing” has changed Irrevocably

http://pharmaceuticalintelligence.com/2012/10/25/open-access-publishing-is-becoming-the-mainstream-model-academic-publishing-has-changed-irrevocably/

I will present below a DIALOG between two scientists on the Internet, LinkedIn Group: Pharmacology and Physiology

Dr. Lev-Ari posted on 11/14/2012

Bloomberg News

My DNA Results Spur Alzheimer’s Anxiety at $12,000 Cost

By John Lauerman on November 06, 2012

http://www.businessweek.com/news/2012-11-06/my-dna-results-spur-alzheimer-s-anxiety-at-12-000-cost?goback=%2Egmr_143951%2Egde_143951_member_189087541

Dr. Manjeet Sharma in Punjab, India had the opportunity to read the discussion and posted a Comment

People in my part of the world look into their horoscopes in order to get an idea of what life has in store for them. Going by this tendency, and also by the fact that many diseases have overtaken mankind today, I can understand this anxiety to know more about oneself since technology is now available to tell us about our genetic makeup. However, this technology is expensive and not in everyone’s reach. Fortunately most of us are spared the results of genetic decoding of ourselves! I think genetic decoding should be done in extreme situations, i.e., if we have a family genetic defect or we are suffering from a rare genetic disorder etc. That means it is diagnostic testing under defined circumstances. At least we can avail of the new technology to understand what we are suffering from. Otherwise, the results could be very depressing for most of us. If people don’t know in advance about their susceptibilities, they manage to get through life fairly well. But if their minds are unnecessarily burdened with the load of latent genetic defects they will lose interest in normal life. Death is inevitable and people know it will happen to all of us someday. They have accepted this fate. But prior knowledge of vague genetic defects which may or may not afflict them at some stage of their life can be cruel, to say the least. I think it should be resorted to, like most diagnostic tests, when the need arises and not otherwise. It can rob people of the joy of life.

Moreover, not everyone carries genetic defects. Most people carry epigenetic defects, congenital or induced by environment, and these will not be picked up by these tests. They are more likely to succumb to epigenetic defects acquired during their lifetime.

Dr. Aviva Lev-Ari, PhD, RN replied

Your perspective is valid. As scientists we must be objective and affectively neutral, and thus respect points of view different from our own. Facing uncertainty may bring some to voluntarily embark on a journey to minimize the uncertainty about their future health and longevity, seeking the sequencing of their genome to learn their propensities for health adversity.

Dr. Manjeet Sharma  commented:

Your view is also valid. After all, we create novel methods to benefit humanity. Uncertainty is at the base of this quest to know more about one’s future. But we run the risk of becoming fatalistic. If the gene profiling predicts negative aspects, we will lose the desire to live fully and search for ways and means to overcome the problems. However, these results are merely “predictive” in nature when done at random. But done in the right diagnostic perspective they are justified. Only, I worry about the non-structural genetic defects which can be acquired without any genetic predisposition, merely because we are exposed to a million environmental carcinogens etc.! These agents perhaps do not require a genetic predilection. However, it is good to have methods which can be very useful under the right circumstances.

Dr. Aviva Lev-Ari, PhD, RN  replied

Now, we are on the same page

Dr. Manjeet Sharma  commented:

Coming back to my preoccupation with the “right circumstances”, I don’t know how long your arms are, but this technology would be very useful and pertinent for the farmers of Punjab in India. Punjab is my home state, which was considered to be the food bowl of INDIA because this is where the green revolution came. The people were basically hard working, healthy and great foodies. The green revolution also brought in indiscriminate use of pesticides which have permeated soil and water. Unfortunately, the residues have entered the human system too, so much so that 1 in 10 farmers and their families suffer from cancers now! Even the sperm DNA of males is degenerate! Babies are born with congenital defects and pediatric cancers. There is no facility in our state for cancer treatment, with the result that patients have to travel far for free treatment. Did they deserve this dreadful fate? They did not carry cancer susceptibility in their genes earlier, but now? I wish you would use your influence to send teams from the west who would use this technology on our people. The authorities would learn a lot and perhaps will ban pesticides in future. Can you help?

Dr. Aviva Lev-Ari, PhD, RN replied:

May I suggest that you contact the Huffington Post, report this exchange, and ask them to cover the topic and send a Team of Reporters to Punjab to activate POLICY MAKERS and the World Bank. You would be a whistleblower and make a big contribution to society.

Dr. Manjeet Sharma commented:

Now I can say that we are on the same page at last! Can you tell me how to contact the Huffington Post to report this exchange? I am glad that you are not merely paying lip service. Thanks. manjeet

Dr. Lev-Ari replied

I am looking into it. We will contact you.

On 11/27/2012, Dr. Lev-Ari contacted The Huffington Post to request that a Science Reporter interview Dr. Sharma and write an article on the topic described in the e-mail exchange with Dr. Lev-Ari above.

It is my belief that the ADVANCEMENT OF SCIENCE AND SOCIAL JUSTICE will be promoted by this dialogue, further than by yet another highly technical paper in a paid-subscription scientific journal.
The MOTTO in 2012 for Science and its carriers, the Scientists, is: POPULARIZE SCIENCE, or admit a low contribution to the advancement of the human condition in Health and in Disease.
Below, read the statements that Dr. Lev-Ari agrees with:

50 Shades of Grey in Scientific Publication

and the statements Dr. Lev-Ari opposes:

Digital Publishing Is Harming Science

Dr. Douglas Fields, Neurobiologist and author, ‘The Other Brain’

wrote on 11/19/2012 in the Huffington Post, Science Section

http://www.huffingtonpost.com/dr-douglas-fields/50-shades-of-grey-in-scientific-publication-how-digital-publishing-is-harming-science_b_2155760.html?utm_hp_ref=science

50 Shades of Grey in Scientific Publication: How Digital Publishing Is Harming Science

Posted: 11/19/2012 6:27 pm

Newsweek is dead. But we have Twitter. Harper-Collins just closed its last warehouse of books in the United States. Cambridge University Press, the oldest publisher of scholarly books and journals in the world, printing continuously since 1584, ceased printing operations this year and will outsource printing to another company. The Press survived tumultuous changes since the Middle Ages — the coming and going of plagues, the rise and fall of empires, wars and famine — but it could not sustain itself in the new environment of digital publication and self-publication that the electronic medium feeds. Most people are acutely aware of the devastation of print journalism by the rise of digital media, but most people are oblivious to the consequences that the same upheaval is having on scientific publication. There is no science without scholarly publication, and scholarly publication as we have known it is dying.

As readers witness their daily newspapers thin, wither away and die, citizens worry about the digital tidal wave sweeping away the once-vigorous independent press. Many fear that one of the three vital legs of democracy is buckling under the combined weight of government power, ruthless capitalistic self-interest and an uninformed public. Scientific publication is undergoing a drastic transformation as it passes deeper into government and capitalistic control, while weakened from struggling simultaneously to cope with unprecedented transformations brought about by electronic publication.

The Final Step in the Scientific Method: Publication

A scientific discovery is useless if it is not communicated with authority to the scientific community. For centuries scientists submitted their research findings for publication in scientific journals that were run by the leading scientists with expertise in a specialized field who served as journal editors. The editors evaluated the submission, and if the findings appeared to be important and technically sound, they sought out other scientists around the world with recognized expertise in the area to read the manuscript critically and advise the editor and authors (anonymously) on its suitability for publication.

This process is essential to root out poor science and pseudoscience, and to prevent bogging down the advancement of science by cluttering the literature with contradictory and erroneous findings. The expert peer reviewers evaluated the potential strengths, weaknesses, technical flaws, significance and novelty of the finding, and they suggested the need for further experiments. If the study failed to be accepted for publication by the editor, the authors benefited from the editorial review process, and they revised their work for submission to another journal. Recent government-mandated changes in scientific publishing are undermining this critical process of validation in scientific publication.

The End of Scientific Publication as We Have Known It

Two transformational changes in scientific publishing are undermining the traditional system of scientific publication: mandated open access and electronic publication. The federal government has mandated that scientific research that is funded in part by federal grants be made freely available to anyone over the Internet. As most scientific research receives some public funding, this mandate affects most biomedical science conducted in the United States and, through international collaborations, much of the science conducted in Europe and Asia. The well-intentioned reasoning of the mandate is that if the research is supported by public funds, then the public should have the right to obtain the published results free of charge. The idea sounds great, but nothing is free.

In traditional scientific publication, after a manuscript was accepted by the editor, it was passed to the production department. Here, as at any book, magazine or newspaper publisher, the text was copyedited and typeset, figure layouts were determined, the article was proofread and, often with much back-and-forth communication between author and publisher, the new study was incorporated into an issue with other papers and printed, bound and delivered to subscribers around the world. Individual articles of general importance were publicized through press releases penned by professional science writers and distributed to the popular media. The journal was marketed to scientists and libraries to attract a wide readership. In this way the quality of the journal was validated by its readers. If the journal consistently published important and accurate studies, subscriptions would rise, income would increase and authors would strive to publish in those prestigious journals.

All this requires a highly educated and expensive workforce. Even as scientific journals (like magazines) transition entirely to digital publication, most of these costs and new ones unique to electronic publication must be paid. The government mandate, however, undercuts all the investment involved in validating and publishing the research studies it funds.

In the absence of income derived from subscriptions, scientific journals must now obtain the necessary funds for publication by charging the authors directly to publish their scientific study. The cost to authors ranges from $1,000 to $3,000 or more per article. Scientists must publish several articles a year, so these costs are substantial.

The funding model fueling open-access publication is a modern rendition of the well-known “vanity” model of publication, in which the author pays to have his or her work printed. The same well-appreciated negative consequences result when applied to scientific publication. Because the income is derived from the authors rather than from readers, the incentive for the publisher is to publish as much as possible, rather than being motivated by a primary concern for quality and significance that would increase subscription by readers, libraries and institutions and thus income. In the open-access, “author-pays” financial model, the more articles that are published, the more income the publishers collect.

In place of rigorous peer review and editorial oversight by the leading scientists in the field, these publishers are substituting “innovative” approaches to review submissions, or they apply no authoritative review at all. Some open-access journals ask reviewers to evaluate only whether the techniques used in the study are valid, rather than judging the significance or novelty of the findings. Others replace rigorous, anonymous peer review from the best experts in the field with open review online where the critics must identify themselves. Anonymous reviewers can be more critical without fear of retribution. Many such open-access journals have no focus, publishing anything in any field of science. Working scientists serving as editors are being replaced by staff who, like factory managers, serve to facilitate production. Nearly all this published material is dumped into the government-run PubMed and PubMed Central biomedical indexes. At one time it took years for a new journal to prove itself before PubMed would index the journal, but not now. PubMed, once the authoritative index of biomedical publication, is now apparently competing with Google Scholar.

Thus we have seen an explosion of open-access scientific publishers around the world soliciting articles for rapid publication online for a fee. I receive direct email solicitations to contribute articles to such journals almost daily now. I have never heard of most of these journals. Weekly I receive formal invitations to speak at an “international conference,” the proceedings of which will be published in an open-access journal. The production tasks are now done by the author without traditional support for copyediting, etc. The production is replaced by automated desktop publishing systems that allow the author to put their text and figures into the journal’s template upon submission.

Validation by Consensus

The argument is made that the loss of rigorous scrutiny and validation provided by the traditional subscription-based mechanism of scientific publication will be replaced by the success of an article in the market after it is published — it’s the “cream-will-rise-to-the-top” theory. What if, rather than ceasing printing, Newsweek had adopted this “author-pays” mode of open-access publishing? The ploy would have sustained the magazine financially, generating profitable income from authors of every persuasion, advancing special interests and others eagerly paying to fill the pages of Newsweek with their articles. Readers would have been left to sort out the worthy from the unsound. The same situation is faced by readers of many open-access scientific journals. Now when a scientist writes up new research for publication in a prestigious journal, he or she must deal with all the contradictory findings of questionable rigor and accuracy being published by these vanity-publishing, open-access journals.

Similar changes are eroding literary publication as direct electronic publication by authors on the Internet has led to erotic and reportedly pornographic works like Fifty Shades of Grey and spinoffs sweeping bestsellers lists for months. The issue is not whether erotica or pornography is or should be popular; rather, one wonders what literary work might have filled those slots on the bestsellers lists if traditional mechanisms of editor-evaluated publication had been applied, which consider more than simply the potential popularity of a work in deciding what to publish.

Scientific publishing is fundamentally different. Science has profound consequences for society that go well beyond the entertainment value or popularity of a publication or its business profits. Scientists and the public are rightfully outraged and we all suffer when flawed scientific studies are published. Even with the most rigorous review at the best journals, flawed studies sometimes slip through, such as the “discovery” of cold fusion published in Science, but it is the rarity of this lapse that makes this so sensational when it happens. With the new open-access model of author-financed publication, the “outstanding” is drowned in a flood of trivial or unsound work. Open-access publishing threatens to become scientific publication’s equivalent of blogging. (Nothing wrong with blogging, but it is not the same thing as scientific publication.)

Well-Intentioned but Twisted Logic

The logic for this government mandate is peculiar. Why do this to science? The scientific journals claim no rights to the results of publicly funded scientific research; they only seek financial compensation for the expenses required for editing, reviewing and producing the article to validate and disseminate the findings as effectively as possible. The government can and does make results of government-funded research freely available through its own publication resources, but such publications from the government printing office lack the scrutiny and validation provided by expert scientists and editors at scientific journals who rigorously and independently evaluate the research.

Do we want a government-run system in which the money for research is supplied by the same body that validates and publishes it? Would you feel confident in a government-run study on a new drug from the pharmaceutical industry made freely available from a government Internet site, or would you want that research rigorously and independently evaluated by expert and impartial scientists before it was published in a scientific journal with an established authority in the field? It is the government that now pays the publication costs for the research it funds. The authors must use the taxpayer money obtained from government research grants to pay the publication costs now required by mandated open-access publishing rather than use these precious dollars to pay for research supplies. Now the public must foot the bill for what was previously paid by subscribers of journals.

Why does this twisted logic apply only to science? Newspapers thrive on publishing publicly financed political processes. By the same reasoning, shouldn’t the political results, including outcomes of elections and other publicly funded political activities, be made freely available by newspapers and TV rather than allowing the media to charge for publishing it? If you accept this, what would become of independent and rigorous review of the results of any publicly funded political processes?

The End of an Era

The same thing that is happening to newspaper and magazine publishers is happening to science publishers. A few large publishing corporations with clout are consolidating power. These operations can exploit the new environment and build monopolies, but many scientific journals and scholarly publishers will fail. New journals are often inspired by working scientists seeing a new field of science emerging, which is as yet unknown by others. These new journals may not launch into the present turbulence. A corporate/government financial alliance is replacing scholarly publication once organized and run by scientists and academics.

I appreciate that there are benefits to digital print, open access and self-publication. My intent here is not to provide a balanced argument but to alert readers to dangers that I feel have not received adequate attention. This is not an abstract issue for me, and I openly declare my bias. Neuron Glia Biology was a scientific journal that was launched in 2004 by me and like-minded scientists to advance scientific research on neuron-glia interactions, and it was published by Cambridge University Press until this year. Neuron Glia Biology provided the opportunity for 1,400 authors to introduce their new research on neuron-glia interactions into the scientific literature, and it helped advance a new field of science, but no longer. One wonders how many new advances in science will never have an opportunity to take root now that scientific publication is an increasingly corporate and government business rather than the scholarly academic activity that it was for centuries. Science is advanced by scientific publication. These changes in publishing will affect the future of science profoundly.

SOURCE:

http://www.huffingtonpost.com/dr-douglas-fields/50-shades-of-grey-in-scientific-publication-how-digital-publishing-is-harming-science_b_2155760.html?utm_hp_ref=science

Read Full Post »
