
10 Sensor innovations driving the digital health revolution

Reporter: Aviva Lev-Ari, PhD, RN

 


This year IBM dedicated its Five in Five series (an annual list of five technologies that are likely to advance dramatically) solely to sensors.

 

Digital sensors of the touch, sight, hearing, taste, and smell kind, along with their potential, are all profiled by IBM. Sensor technology is going through a renaissance as companies develop smart and innovative new ways to track data.

 

Sensor innovation is in part driving the Digital Health Revolution as digital health companies find ingenious ways to integrate sensors into apps, devices, and other peripherals. The smartphone will play an increasingly important role in all of this as it goes from having six built-in sensors today to sixteen in the next five years.

 

If these predictions are correct, then the next five years will be half a decade of sensor proliferation, and the Digital Health Ecosystem will grow exponentially. In the meantime, there is already a plethora of digital health sensors in use or in the pipeline that are helping to improve and, in some instances, save lives.

See on bionic.ly


Treatment, Prevention and Cost of Cardiovascular Disease: Current & Predicted Cost of Care and the Potential for Improved Individualized Care Using Clinical Decision Support Systems

Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC

Author and Curator: Larry H Bernstein, MD, FACP

and

Curator: Aviva Lev-Ari, PhD, RN

This article has the following FIVE parts:

1. Forecasting the Impact of Heart Failure in the United States: A Policy Statement From the American Heart Association

2. A Case Study from the GENETIC CONNECTIONS series — In The Family: Heart Disease — Seeking Clues to Heart Disease in DNA of an Unlucky Family

3. Arterial Stiffness and Cardiovascular Events: The Framingham Heart Study

4. Arterial Elasticity in Quest for a Drug Stabilizer: Isolated Systolic Hypertension
caused by Arterial Stiffening Ineffectively Treated by Vasodilatation Antihypertensives

5. Clinical Decision Support Systems: Realtime Clinical Expert Support — Biomarkers of Cardiovascular Disease: Molecular Basis and Practical Considerations

 

1. Forecasting the Impact of Heart Failure in the United States: A Policy Statement From the American Heart Association

PA Heidenreich, NM Albert, LA Allen, DA Bluemke, J Butler, et al. Circulation: Heart Failure 2013;6.
Print ISSN: 1941-3289, Online ISSN: 1941-3297.

Heart failure (HF) poses a major burden on productivity and on national healthcare expenditures:

  • among older Americans, more are hospitalized for HF than for any other medical condition.

As the population ages, the prevalence of HF is expected to increase.

The purpose of this report is to

  • provide an in-depth look at how the changing demographics in the United States will impact the prevalence and cost of care for HF for different US populations.

 Projections of HF Prevalence

Prevalence estimates for HF were determined for each demographic group and used to generate the following projections.

Projections of the US Population With HF From 2012 to 2030 for Different Age Groups

Year    All ages     18–44 y    45–64 y     65–79 y     ≥80 y
2012    5,813,262    396,578    1,907,141   2,192,233   1,317,310
2015    6,190,606    402,926    1,949,669   2,483,853   1,354,158
2020    6,859,623    417,600    1,974,585   3,004,002   1,463,436
2025    7,644,674    434,635    1,969,852   3,526,347   1,713,840
2030    8,489,428    450,275    2,000,896   3,857,729   2,180,528

Future Costs of HF

The future costs of HF were estimated by methods developed by the American Heart Association to

  • project the prevalence and costs of HF from 2012 to 2030, and
  • factor out the costs attributable to comorbid conditions.

The model does this by assuming that

(1) HF prevalence percentages will remain constant by age, sex, and race/ethnicity;

(2) the costs of technological innovation will rise at the current rate.

HF prevalence and costs (direct and indirect) were projected using the following steps:

1. HF prevalence and average cost per person were estimated by age group (18–44, 45–64, 65–79, ≥80 years), gender (male, female), and race/ethnicity (white non-Hispanic, white Hispanic, black, other) [32]. The initial HF cost per person and the rate of increase in cost were determined for each demographic group as a percentage of total healthcare expenditures.

2. Inflation is separately addressed by correcting dollar values from Medical Expenditure Panel Survey (MEPS) to 2010 dollars.

3. An adjustment was made for nursing home spending, so that the estimates project the incremental cost of care attributable to heart failure (HF).

4. Total HF population prevalence and costs were projected by multiplying the US Census–projected population of each demographic group by the percentage prevalence and average cost per person.

5. The total work loss and home productivity loss costs were generated by multiplying per capita work days lost attributable to HF by (1) prevalence of HF, (2) the probability of employment given HF (for work loss costs only), (3) mean per capita daily earnings, and (4) US Census population projection counts.
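The arithmetic of steps 4 and 5 can be sketched in code. The demographic keys, populations, prevalences, and costs below are hypothetical placeholders for illustration, not the AHA model's actual inputs; only the structure of the calculation follows the steps above.

```python
# Hypothetical sketch of projection steps 4 and 5 (illustrative numbers only).
CENSUS_2030 = {("65-79", "F", "white_nh"): 18_500_000}   # projected population
PREVALENCE  = {("65-79", "F", "white_nh"): 0.045}        # HF prevalence fraction
COST_PP     = {("65-79", "F", "white_nh"): 12_000.0}     # avg annual cost per person ($2010)

def project_group(group):
    """Step 4: population x percentage prevalence x average cost per person."""
    n_hf = CENSUS_2030[group] * PREVALENCE[group]
    return n_hf, n_hf * COST_PP[group]

def productivity_loss(group, days_lost=5.0, p_employed=0.2, daily_earnings=180.0):
    """Step 5: per-capita work days lost x prevalence x P(employed | HF)
    x mean daily earnings x census population (folded into n_hf)."""
    n_hf, _ = project_group(group)
    return n_hf * days_lost * p_employed * daily_earnings

g = ("65-79", "F", "white_nh")
n, direct = project_group(g)
print(f"{n:,.0f} projected HF cases, ${direct / 1e9:.2f}B direct cost")
```

The same two functions would be summed over all age × gender × race/ethnicity groups to produce population totals.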

Projections of Indirect Costs

Indirect costs of lost productivity from morbidity and premature mortality were estimated as detailed below.
Morbidity costs represent the value of lost earnings attributable to HF and include loss of work among

  • currently employed individuals and those too sick to work, as well as
  • home productivity loss, which is the value of household services performed by household members who do not receive pay for the services.

Total Costs Attributable to Heart Failure (HF)

Projections of Total Cost of Care ($ Billions) for HF for Different Age Groups of the US Population

Year                    All     18–44   45–64   65–79   ≥80
2012
  Medical               20.9    0.33    3.67    8.46    8.42
  Indirect: Morbidity    5.42   0.52    1.92    2.05    0.93
  Indirect: Mortality    4.35   0.66    2.53    0.98    0.18
  Total                 30.7    1.51    8.12   11.5     9.53
2020
  Medical               31.1    0.43    4.58   14.2    11.8
  Indirect: Morbidity    7.09   0.66    2.20    3.11    1.12
  Indirect: Mortality    5.39   0.79    2.89    1.49    0.22
  Total                 43.6    1.88    9.67   18.8    13.2
2030
  Medical               53.1    0.59    5.86   23.3    23.4
  Indirect: Morbidity    9.80   0.91    2.54    4.48    1.87
  Indirect: Mortality    6.84   0.98    3.32    2.16    0.37
  Total                 69.7    2.48   11.7    29.9    25.6

Excludes HF care costs that have been attributed to comorbid conditions.

Cost of Care

Total medical costs are projected to increase from $20.9 billion in 2012 to $53.1 billion in 2030, a 2.5-fold increase. Assuming continuation of current hospitalization practices, the majority (80%) of these costs stem from hospitalization, and the majority of the increase is in direct costs. Indirect costs are expected to rise as well, but at a lower rate, from $9.8 billion to $16.6 billion, an increase of 69%.

Direct costs (the cost of medical care) are expected to increase at a faster rate than the indirect costs of premature deaths and lost productivity.

The total cost of HF (direct and indirect costs) is expected to increase from the current $30.7 billion to at least $69.8 billion in 2030. This will amount to $244 for every US adult in 2030.
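As a back-of-the-envelope check, the per-adult figure follows from dividing the projected total by the 2030 US adult population. The ~286 million adult count below is an assumption for illustration, not a number taken from the report.

```python
# Back-of-the-envelope check of the $244-per-adult figure.
total_cost_2030 = 69.8e9        # projected total HF cost, dollars
us_adults_2030 = 286e6          # assumed US adult population in 2030 (illustrative)
per_adult = total_cost_2030 / us_adults_2030
print(round(per_adult))         # ~244
```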

Thus the burden of HF for the US healthcare system will grow substantially during the next 18 years if current trends continue.

It is estimated that

  • by 2030, the prevalence of HF in the United States will increase by 25%, to 3.0%.
  • >8 million people in the US (1 in every 33) will have HF by 2030.
  • the projected total direct medical costs of HF between 2012 and 2030 (in 2010 dollars) will increase from $21 billion to $53 billion.
  • Total costs, including indirect costs for HF, are estimated to increase from $31 billion in 2012 to $70 billion in 2030.
  • If one assumes all costs of cardiac care for HF patients are attributable to HF
    (no cost attribution to comorbid conditions), the 2030 projected cost estimates of treating patients with HF will be 3-fold higher ($160 billion in direct costs).

Projections can be lowered if action is taken to reduce the health and economic burden of HF. Strategies, plans, and implementation to prevent HF and improve the efficiency of care are needed.

Causes and Stages of HF

If the projections for accelerating HF costs are to be avoided, attention to the different causes of HF and their risk factors is warranted.
HF is a clinical syndrome that results from a variety of cardiac disorders

  1. idiopathic dilated cardiomyopathy
  2. cardiac valvular disease
  3. pericarditis or pericardial effusion
  4. ischemic heart disease
  5. primary or secondary hypertension
  6. renovascular disease
  7. advanced liver disease with decreased venous return
  8. pulmonary hypertension
  9. prolonged hypoalbuminemia with generalized interstitial edema
  10. diabetic nephropathy
  11. heart muscle infiltration disease such as primary or secondary amyloidosis
  12. myocarditis
  13. rhythm disorders
  14. congenital diseases
  15. accidental trauma (war, chest trauma)
  16. toxicities (methamphetamine, cocaine, heavy metals, chemotherapy)

HF generally causes symptoms:

  • shortness of breath
  • fatigue
  • swelling (edema)
  • inability to lie flat (orthopnea, paroxysmal nocturnal dyspnea)
  • possibly cough, wheezing

In the Western world the predominant causes of HF are:

  • coronary artery disease
  • valvular disease
  • hypertension
  • viral, alcohol, methamphetamine or other drug toxicity cardiomyopathy
  • stress (catechol toxicity, takotsubo “broken heart” cardiomyopathy)
  • atrial fibrillation/rapid heart rates
  • thyroid disease

In 2001, the American College of Cardiology and AHA practice guidelines for chronic HF promoted a classification system that encompasses 4 stages of HF.

  • Stage A: Patients at high risk for developing HF in the future but no functional or structural heart disorder.
  • Stage B: a structural heart disorder but no symptoms.
  • Stage C: previous or current symptoms of heart failure, manageable with medical treatment.
  • Stage D: advanced disease requiring hospital-based support, a heart transplant or palliative care.

Stages A and B are considered precursors to clinical HF and are meant

  1. to alert healthcare providers to known risk factors for HF and
  2. to point to the available therapies aimed at mitigating disease progression.

Stage A patients have risk factors for HF such as hypertension, atherosclerotic heart disease, and/or diabetes mellitus.

Patients with stage B are asymptomatic patients who have developed structural heart disease from a variety of potential insults to the heart muscle, such as myocardial infarction or valvular heart disease.

Stages C and D represent the symptomatic phases of HF, with stage C manageable and stage D failing medical management, resulting in marked symptoms at rest or with minimal activity despite optimal medical therapy.

Therapeutic interventions include:

  • dietary salt restriction and diuretics
  • medications known to prolong survival (beta blockers, ACE inhibitors, aldosterone inhibitors)
  • implantable devices such as pacemakers and defibrillators
  • stoppage of tobacco, toxic drugs, excess alcohol

Classic demographic risk factors for the development of HF include

  • older age, male gender, ethnicity, and low socioeconomic status.
  • comorbid disease states contribute to the development of HF
    • Ischemic heart disease
    • Hypertension

Diabetes mellitus, insulin resistance, and obesity are also linked to HF development,

  • with diabetes mellitus increasing the risk of HF by ≈2-fold in men and up to 5-fold in women.

Smoking remains the single largest preventable cause of disease and premature death in the United States.

Translation of Scientific Evidence into Clinical Practice

In multiple studies, failures to apply evidence-based management strategies are blamed for avoidable hospitalizations and/or deaths from HF.

Improved implementation of guidelines can delay, mitigate or prevent the onset of HF, and improve survival. Performance improvement programs have facilitated the implementation of evidence-based therapies in both hospital and ambulatory care settings.

Care transition programs by hospitals have become more widespread

  • in an effort to reduce avoidable readmissions.

The interventions used by these programs include

  • initiating discharge planning early in the course of hospital care,
  • actively involving patients and families or caregivers in the plan of care,
  • providing new processes and systems that ensure patient understanding of the plan of care before discharge from the hospital, and
  • improving quality of care by continually monitoring adherence to national evidence-based guidelines with appropriate adaptations for individual differences in needs and responses.

In multiple studies, adherence to the HF plan of care was associated with reduced all-cause mortality as well as reduced HF hospitalization.

It is anticipated that care transition programs may increase appropriate admissions while decreasing inappropriate admissions.

This would have a potentially beneficial impact on the 30-day all-cause readmission rate that has become

  • a focus of public reporting in pay for performance.

More than a quarter of Medicare spending occurs in the last year of life, and

  • the costs of care during the last 6 months for a patient with HF have been increasing (11% from 2000 to 2007).

Improving end-of-life care cost effectiveness for patients with stage D HF will require ongoing

  • improved prediction of outcomes
  • integration of multiple aspects of care
  • educated examination of alternatives and priorities
  • improved decision-making
  • unbiased allocation of resources and coverage for this process rather than unbalanced coverage favoring catastrophic care

Palliative care, including formal hospice care, is increasingly advocated for patients with advanced HF.
Offering palliative care to patients with HF may lead to

  • more conservative (and less expensive) treatment
  • consistent with many patients’ goals for care

The use of hospice services is growing among the HF population:

  • HF is now the second most common reason for entering hospice,
  • but hospice declaration may trigger automated restrictions on care that can be an impediment to electing hospice.

A recent study of patients in hospice care found that

  • patients with HF were more likely than patients with cancer to use hospice services longer than 6 months or to be discharged from hospice care alive.

Highlights:

1. Increasing incidence and costs of care for heart failure projected from 2012 to 2030

2. Direct costs rising at greater rate than indirect costs

3. American Heart Association has defined 4 stages of HF, the last 2 of which are advanced

4. Stages C & D are clinically overt and contribute to rehospitalization

5. Stage D accounts for a significant use of end-of-life hospice care

6. There are evidence-based guidelines for the provision of coordinated care that are not widely applied at present

Basic questions raised:

1. If stages A & B are under the radar, then what measures can best trigger the use of evidence-based guidelines for care?
2. Why are evidence-based guidelines commonly not deployed?

  • Flaws in the “evidence” due to bias, design errors, or limited ability to extrapolate to the patients it should address
  • Delays in education, convincing of caretakers, and deployment
  • Inadequate resources
  • Financial or other disincentives

The arguments for introducing coordinated care and for evidence-based guidelines are strong.

Arguments AGAINST slavish imposition of evidence-based medicine include genetic individuality (what is best on average is not necessarily best for each genetically and behaviorally distinct individual). Strict adherence to evidence-based guidelines also stifles innovative exploration. Nonetheless, deviations from evidence-based plans should be cautious, well-documented, and well-informed, not due to misaligned incentives, ignorance, carelessness, or error.

The question of when and how to intervene most cost effectively is unanswered. If some patients are salt-sensitive as a contribution to the prevalence of hypertension and heart failure, should EVERYONE be salt restricted or should there be a more concerted effort to define who is salt sensitive? What if it proved more cost-effective to restrict salt intake for everyone, even though many might be fine with high sodium intake, and some might even benefit from or require high sodium intake? Is it reasonable to impose costs, hurdles, even possible harm on some as a cheaper way to achieve “greater good”?
These issues are highly relevant to the proposed emphasis on holistic solutions.

2. A Case Study from the GENETIC CONNECTIONS series — In The Family: Heart Disease — Seeking Clues to Heart Disease in DNA of an Unlucky Family

By Gina Kolata, May 13, 2013, New York Times

Scientists are studying the genetic makeup of the Del Sontro family for

  • telltale mutations or aberrations in the DNA.

Robin Ashwood, one of Mr. Del Sontro’s sisters, found out she had extensive heart disease even though her electrocardiograms were normal. Six of her seven siblings also have heart disease, despite not having any of the traditional risk factors. After a sister, just 47 years old, found out she had advanced heart disease, Mr. Del Sontro, then 43, went to a cardiologist. An X-ray of his arteries revealed the truth: like his grandfather, his mother, his four brothers, and his two sisters, he had heart disease.

Now he and his extended family have joined an extraordinary federal research project that is using genetic sequencing to find factors that increase the risk of heart disease beyond the usual suspects — high cholesterol, high blood pressure, smoking and diabetes. “We don’t know yet how many pathways there are to heart disease,” said Dr. Leslie Biesecker, who directs the study Mr. Del Sontro joined. “That’s the power of genetics. To try and dissect that.”

“I had bought the dream: if you just do the right things and eat the right things, you will be O.K.,” said Mr. Del Sontro, whose cholesterol and blood pressure are reassuringly low.

3. Arterial Stiffness and Cardiovascular Events: The Framingham Heart Study

GF Mitchell, Shih-Jen Hwang, RS Vasan, MG Larson.

Circulation. 2010;121:505-511.  http://circ.ahajournals.org/content/121/4/505
http://dx.doi.org/10.1161/CIRCULATIONAHA.109.886655

Various measures of arterial stiffness and wave reflection have been proposed as cardiovascular risk markers.
Prior studies have not assessed relations of a comprehensive panel of stiffness measures to prognosis.
First-onset major cardiovascular disease events in relation to

  • pulse wave velocity (PWV)
  • wave reflection (augmentation index, carotid–brachial pressure amplification)
  • central pulse pressure

were analyzed in 2232 participants (mean age, 63 years; 58% women) in the Framingham Heart Study by a proportional hazards model. During a median follow-up of 7.8 (range, 0.2 to 8.9) years,

  • 151 of 2232 participants (6.8%) experienced an event.

In multivariable models adjusted for

  • age
  • sex
  • systolic blood pressure
  • use of antihypertensive therapy
  • total and high-density lipoprotein cholesterol concentrations
  • smoking
  • presence of diabetes mellitus

higher aortic PWV was associated with a 48% increase in cardiovascular disease risk (hazard ratio per SD, 1.48; 95% confidence interval, 1.16 to 1.91; P=0.002).

After PWV was added to a standard risk factor model, integrated discrimination improvement was 0.7% (95% confidence interval, 0.05% to 1.3%; P=0.05).

In contrast,

  • augmentation index,
  • central pulse pressure, and
  • pulse pressure amplification

were not related to cardiovascular disease outcomes in multivariable models.

Higher aortic stiffness assessed by PWV

  • is associated with increased risk for a first cardiovascular event.

Aortic PWV improves risk prediction when added to standard risk factors and may represent

  • a valuable biomarker of cardiovascular disease risk

We shall here visit a recent article by Justin D. Pearlman and Aviva Lev-Ari, PhD, RN, on

Pros and Cons of Drug Stabilizers for Arterial Elasticity as an Alternative or Adjunct to Diuretics and Vasodilators in the Management of Hypertension, titled

4. Hypertension and Vascular Compliance: 2013 Thought Frontier – An Arterial Elasticity Focus

http://pharmaceuticalintelligence.com/2013/05/11/arterial-elasticity-in-quest-for-a-drug-stabilizer-isolated-systolic-hypertension-caused-by-arterial-stiffening-ineffectively-treated-by-vasodilatation-antihypertensives/

Speaking at the 2013 International Conference on Prehypertension and Cardiometabolic Syndrome, meeting cochair Dr Reuven Zimlichman (Tel Aviv University, Israel) argued that for a growing number of patients the conventional methods are inappropriate, namely

  • the definitions of hypertension
  • the risk-factor tables used to guide treatment

Most antihypertensives today work by producing vasodilation or decreasing blood volume, which may be

  • ineffective for patients in whom average arterial diameter and circulating volume are not the causes of hypertension, and which as targets of therapy may promote decompensation

In the future, he predicts, “we will have to start looking for a totally different medication that will aim to

  • improve or at least to stabilize arterial elasticity: medication that might affect factors that determine the stiffness of the arteries, like collagen, like fibroblasts.

Those are not the aim of any group of antihypertensive medications today.”

Zimlichman believes existing databases could be used to develop algorithms that focus on

  • inelasticity as a mechanism of hypertensive disease

He also points out that

  • ambulatory blood-pressure-monitoring devices can measure elasticity

http://www.theheart.org/article/1502067.do

A related article was published on the relationship between arterial stiffening and primary hypertension.

Arterial stiffening provides sufficient explanation for primary hypertension.

KH Pettersen, SM Bugenhagen, J Nauman, DA Beard, SW Omholt.

By use of empirically well-constrained computer models describing the coupled function of the baroreceptor reflex and mechanics of the circulatory system, we demonstrate quantitatively that

  • arterial stiffening seems sufficient to explain age-related emergence of hypertension.

Specifically, the models reproduce

  • the empirically observed chronic changes in pulse pressure with age, and
  • the impaired capacity of hypertensive individuals to regulate short-term changes in blood pressure.

The results suggest that a major target for treating chronic hypertension in the elderly may include

  • the reestablishment of a proper baroreflex response.

http://arxiv.org/abs/1305.0727v2?goback=%2Egde_4346921_member_240018699

5. Clinical Decision Support Systems: Realtime Clinical Expert Support: Biomarkers of Cardiovascular Disease — Molecular Basis and Practical Considerations

RS Vasan.  Circulation. 2006;113:2335-2362

http://dx.doi.org/10.1161/CIRCULATIONAHA.104.482570

http://circ.ahajournals.org/content/113/19/2335

Substantial data indicate that CVD is a life course disease that begins with the evolution of risk factors that contribute to

  • subclinical atherosclerosis.

Subclinical disease culminates in overt CVD. The onset of CVD itself portends an adverse prognosis with greater

  • risks of recurrent adverse cardiovascular events, morbidity, and mortality.

Clinical assessment alone has limitations. Clinicians have used additional tools to aid clinical assessment and to enhance their ability to identify the “vulnerable” patient at risk for CVD, as suggested by a recent National Institutes of Health (NIH) panel.

Biomarkers are one such tool to better identify high-risk individuals and to recognize disease conditions promptly, supporting diagnosis, prognosis, and treatment guidance.

Biological marker (biomarker): A laboratory test value that is objectively measured and evaluated as an indicator of

  1. normal biological processes,
  2. pathogenic processes, or
  3. pharmacological responses to a therapeutic intervention.

Type 0 biomarker: A marker of the natural history of a disease

  • Type 0 correlates longitudinally with known clinical indices/predicts outcomes.

Type I biomarker: A marker that captures the effects of a therapeutic intervention

  • Type I assesses an aspect of treatment mechanism of action.

Type 2 biomarker (surrogate end point): A marker intended to predict outcomes on the basis of

  • epidemiologic
  • therapeutic
  • pathophysiologic or
  • other scientific evidence.

With biomarkers monitoring disease progression or response to therapy, the patient can serve as his or her own control (follow-up values may be compared to baseline values).

Costs may be less important for prognostic markers when testing is largely restricted to people with disease (total cost = cost per person × number to be tested, plus downstream costs). Some biomarkers (e.g., an exercise stress test) may be used for both diagnostic and prognostic purposes.
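The cost identity in parentheses can be made concrete. The function and all numbers below are hypothetical, but they illustrate why restricting a prognostic marker to people with disease shrinks the dominant term:

```python
# Hypothetical sketch of the identity in the text:
#   total cost = cost per person x number to be tested + downstream costs.
# All inputs are invented for illustration.
def total_testing_cost(cost_per_test, n_tested,
                       downstream_per_positive=0.0, positive_rate=0.0):
    """Up-front testing cost plus downstream costs incurred per positive result."""
    downstream = downstream_per_positive * positive_rate * n_tested
    return cost_per_test * n_tested + downstream

# Screening a whole population vs. testing only a diseased subgroup:
screen_all = total_testing_cost(50.0, 1_000_000,
                                downstream_per_positive=500.0, positive_rate=0.02)
diseased   = total_testing_cost(50.0, 20_000,
                                downstream_per_positive=500.0, positive_rate=0.80)
print(screen_all, diseased)
```

Even with a far higher positive rate in the diseased group, the small number tested keeps the total far below population-wide screening in this illustration.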

Generally there are cost differences in establishing a prognostic value versus diagnostic value of a biomarker:

  • prognostic utility typically requires a large sample and a prospective design, whereas
  • diagnostic value often can be determined with a smaller sample in a cross-sectional design

Regardless of the intended use, it is important to remember that biomarkers that do not change disease management

  • cannot affect patient outcome and therefore
  • are unlikely to be cost-effective (judged in terms of quality-adjusted life-years gained).

Typically, for a biomarker to change management, it is important to have evidence that risk reduction strategies should vary with biomarker levels, and/or biomarker-guided management achieves advantages over a management scheme that ignores the biomarker levels.

Typically it means that biomarker levels should be modifiable by therapy.

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that

  • provides empirical medical reference and
  • suggests quantitative diagnostics options.

The current design of the Electronic Medical Record (EMR) is a
linear presentation of portions of the record

  • by services
  • by diagnostic method, and
  • by date

to cite examples.

This allows perusal through a graphical user interface (GUI) that

  • partitions the information or necessary reports at a workstation, entered by keying on icons, and
  • presents decision support

Examples of data partitions include:

  • history
  • medications
  • laboratory reports
  • imaging
  • EKGs

The introduction of a DASHBOARD adds presentation of

  • drug reactions
  • allergies
  • primary and secondary diagnoses, and
  • critical information

about any patient whose record the caregiver needs to access.

A basic issue for such a tool is what information is presented and how it is displayed.

A determinant of the success of this endeavor is whether it

  • facilitates workflow
  • facilitates decision-making process
  • reduces medical error.

Continuing work is in progress to extend the capabilities with model datasets and sufficient data, based on the assumption that computer extraction of data from disparate sources will, in the long run, further improve this process.

For instance, there is synergistic value in finding coincidence of:

  • ST shift on EKG
  • elevated cardiac biomarker (troponin)
  • in the absence of substantially reduced renal function.
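A decision-support rule for this coincidence can be sketched directly. The thresholds below are hypothetical placeholders, not clinical cutoffs; the point is only the conjunction of the three findings:

```python
# Illustrative rule for the synergistic coincidence described above.
# Thresholds are hypothetical placeholders, not validated clinical cutoffs.
def flag_coincidence(st_shift: bool, troponin_ng_l: float, egfr: float,
                     troponin_cutoff: float = 14.0, egfr_floor: float = 30.0) -> bool:
    """Flag when an ST shift and elevated troponin coincide while renal
    function is NOT substantially reduced (reduced eGFR can itself elevate
    troponin, weakening the inference)."""
    return st_shift and troponin_ng_l > troponin_cutoff and egfr >= egfr_floor

print(flag_coincidence(True, 40.0, 75.0))   # all three conditions met
print(flag_coincidence(True, 40.0, 20.0))   # troponin may reflect renal impairment
```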

Similarly, the conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates

  • morphologic review of a peripheral smear
  • descriptive statistics

While the hemogram has undergone progressive modification of the measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the

  • production
  • release
  • or suppression

of the formed elements from the blood-forming organ into the circulation. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of

  • size
  • density, and
  • concentration

resulting in many characteristic features of classification. In the diagnosis of hematological disorders, the key considerations are

  • proliferation of marrow precursors
  • domination of a cell line
  • suppression of hematopoiesis

Other dimensions are created by considering

  • the maturity and size of the circulating cells.

The application of rules-based, automated problem solving should provide a valid approach to

  • the classification and interpretation of the data used to determine a knowledge-based clinical opinion.

The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.

As the complexity of statistical models has increased

  • the dependencies have become less clear to the individual.

Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can substantially interpret the data at hand and

  • convert it in real time to a “knowledge-based opinion”

could improve clinical decision-making by incorporating into the model

  • multiple complex clinical features as well as their onset and duration.

An example of a difficult area for clinical problem solving is the diagnosis of Systemic Inflammatory Response Syndrome (SIRS) and associated sepsis. SIRS is a costly diagnosis in hospitalized patients, and failure to diagnose it in a timely manner increases the financial and safety hazard. The early diagnosis of SIRS/sepsis is made by the clinician's application of defined criteria:

  • temperature
  • heart rate
  • respiratory rate and
  • WBC count

The application of those clinical criteria, however, defines the condition after it has developed, leaving unanswered the hope for

  • a reliable method for earlier diagnosis of SIRS.
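The four criteria above can be encoded as a simple screening function. The thresholds follow the conventional SIRS formulation (≥2 of 4 criteria); any production rule would of course need clinical validation:

```python
# Minimal sketch of the four SIRS criteria (conventional thresholds;
# >=2 of 4 defines SIRS in the standard formulation).
def sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k):
    """Count how many of the four criteria are satisfied."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,   # temperature
        heart_rate > 90,                  # heart rate
        resp_rate > 20,                   # respiratory rate
        wbc_k > 12.0 or wbc_k < 4.0,      # WBC count (x10^3/uL)
    ]
    return sum(criteria)

def meets_sirs(temp_c, heart_rate, resp_rate, wbc_k):
    return sirs_criteria_met(temp_c, heart_rate, resp_rate, wbc_k) >= 2

print(meets_sirs(38.6, 104, 24, 13.5))  # febrile, tachycardic, tachypneic, leukocytosis
print(meets_sirs(37.0, 80, 16, 8.0))    # no criteria met
```

As the text notes, a rule of this kind only labels the condition after it has developed, which is what motivates adding biomarkers to the model.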

The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including

  • transthyretin
  • C-reactive protein
  • procalcitonin
  • mean arterial pressure

Immature granulocyte (IG) measurement has been proposed as a

  • readily available indicator of the presence of granulocyte precursors (left shift).

The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, provides

  • a promising support to early accurate decision making.

Such a system aims to reduce medical error by utilizing

  • the conjoined syndromic features of disparate data elements.

How we frame our expectations is important. It determines

  • the data we collect to examine the process.

In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.

Potential arenas of benefit include:

  • hospital operations
  • nonhospital laboratory studies
  • companies in the diagnostic business
  • planners of health systems

The problem was stated by LL Weed in “Idols of the Mind” (Dec 13, 2006):
“a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require.” Hospital information technology (HIT) use has been focused on information retrieval, leaving

  • the unaided mind burdened with information processing.

We deal with problems in the interpretation of data presented to the physician, and how the situation could be improved through better

  • design of the software that presents data.

The computer architecture that the physician uses to view the results is more often than not presented

  • as the designer would prefer, and not as the end-user would like.

To optimize the interface for the physician, the system could have a “front-to-back” design, in which the call-up for any patient provides:

  • A dashboard design that presents the crucial information that the physician would likely act on in an easily accessible manner
  • Each item used has to be closely related to a corresponding criterion needed for a decision.

Feature Extraction.

Eugene Rypka contributed greatly to clarifying the extraction of features in a series of articles, which

  • set the groundwork for the methods used today in clinical microbiology.

The method he describes is termed S-clustering, and

  • will have a significant bearing on how we can view laboratory data.

He describes S-clustering as extracting features from endogenous data that

  • amplify or maximize structural information to create distinctive classes.

The method classifies by taking the number of features with sufficient variety to generate maps.

The mapping is done by

  • a truth table NxN of messages and choices
  • each variable is scaled to assign values for each message choice.

For example, the message for an antibody titer would be converted from 0 + ++ +++ to 0 1 2 3.

Although there may be a large number of measured values, this compression reduces the variety, though it may also represent less information.
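
Rypka’s scaling step, converting an ordinal message such as an antibody titer 0 / + / ++ / +++ into the integers 0–3 and assembling the scaled values into a message table, can be sketched as follows (the variables and patients are illustrative, not drawn from his papers):

```python
# Ordinal scaling: each symbolic grade of a message maps to an integer code.
TITER_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}


def truth_table(patients, variables, scales):
    """Build the message table: one tuple of scaled values per patient."""
    return [tuple(scales[v][patient[v]] for v in variables)
            for patient in patients]


patients = [
    {"titer": "+++", "crp": "++"},
    {"titer": "0",   "crp": "+"},
]
scales = {"titer": TITER_SCALE, "crp": TITER_SCALE}
table = truth_table(patients, ["titer", "crp"], scales)
print(table)  # [(3, 2), (0, 1)]
```

The compression is visible here: whatever the raw instrument reading was, only four distinguishable grades per variable survive into the table.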

The main issue is

  • how a combination of variables falls into a table to convey meaningful information.

We are concerned with

  • accurate assignment into uniquely variable groups by information in test relationships.

One determines the effectiveness of each variable by its contribution to information gain in the system. The reference or null set is the class having no information.  Uncertainty in assigning to a classification can be countered by providing sufficient information.
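
The “contribution to information gain” of a variable can be made concrete with Shannon entropy: a variable’s gain is the entropy of the class distribution minus the expected entropy remaining after partitioning on that variable. A minimal sketch with invented labels:

```python
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())


def information_gain(labels, feature_values):
    """H(class) minus expected H(class | feature)."""
    n = len(labels)
    groups = {}
    for lab, val in zip(labels, feature_values):
        groups.setdefault(val, []).append(lab)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional


labels = ["sirs", "sirs", "normal", "normal"]
# A perfectly informative variable recovers the full class entropy;
# an uninformative one gains nothing (the reference, or null, set).
print(information_gain(labels, ["high", "high", "low", "low"]))  # 1.0
print(information_gain(labels, ["a", "b", "a", "b"]))            # 0.0
```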

The possibility of realizing a good model for approximating the effects of factors supported by the data used

  • for inference owes much to the discovery of the Kullback-Leibler distance, or “information”, and Akaike
  • found a simple relationship between K-L information and Fisher’s maximized log-likelihood function.

In the last 60 years, entropy concepts comparable to the entropy of physics, developed for information, noise, and signal processing by Shannon, Kullback, and others, have been integrated with modern statistics as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and work by Coifman.

Akaike pioneered the recognition that the choice of model influences results in a measurable manner. In particular, a larger number of variables promotes further explanation of variance, so a model selection criterion that penalizes the number of variables is important when success is measured by explained variance.
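
Akaike’s relationship can be stated compactly: AIC = 2k - 2 ln L, where L is the maximized likelihood and k the number of fitted parameters, estimates expected K-L information loss, so the candidate model with the smallest AIC is preferred. A small numeric illustration (the likelihood values are invented):

```python
import math


def kl_divergence(p, q):
    """Kullback-Leibler information of distribution p relative to model q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def aic(log_likelihood, n_params):
    """Akaike's criterion: rewards fit, penalizes added variables."""
    return 2 * n_params - 2 * log_likelihood


# K-L information is zero only when the model matches the truth.
p = [0.5, 0.3, 0.2]
print(kl_divergence(p, p))                    # 0.0
print(kl_divergence(p, [1/3, 1/3, 1/3]) > 0)  # True

# AIC prefers the model whose extra variables are not worth their cost:
print(aic(log_likelihood=-100.0, n_params=3))  # 206.0
print(aic(log_likelihood=-99.5, n_params=5))   # 209.0
```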

Gil David et al. introduced automated processing of the data available to the ordering physician, from which one

  • can anticipate an enormous impact on diagnosis and treatment of perhaps half of the top 20 most common
  • causes of hospital admission that carry a high cost and morbidity.

For example:

  1. anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome);
  2. pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia;
  3. multiple organ failure and hemodynamic shock;
  4. electrolyte/acid base balance disorders;
  5. acute and chronic liver disease;
  6. acute and chronic renal disease;
  7. diabetes mellitus;
  8. protein-energy malnutrition;
  9. acute respiratory distress of the newborn;
  10. acute coronary syndrome;
  11. congestive heart failure;
  12. hypertension
  13. disordered bone mineral metabolism;
  14. hemostatic disorders;
  15. leukemia and lymphoma;
  16. malabsorption syndromes;
  17. cancers (breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid);
  18. endocrine disorders; and
  19. prenatal and perinatal diseases.

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988; 34:2031-2038.

Bernstein LH (Chairman). Prealbumin in Nutritional Care Consensus Group. Measurement of visceral protein status in assessing protein and energy malnutrition: standard of care. Nutrition 1995; 11:169-171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999; 72:5-13.

Kaplan LA, Chapman JF, Bock JL, Santa Maria E, Clejan S, Huddleston DJ, Reed RG, Bernstein LH, Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project. Clin Chim Acta 2002; 326(8):61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75:183-198.

Coifman RR, Wickerhauser MV. Adapted waveform analysis as a tool for modeling, feature extraction, and denoising. Optical Engineering 1994; 33(7):2170–2174.

Coifman R, Saito N. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris 1994; 319 Série I:191-196.

Realtime Clinical Expert Support and validation System

We have developed a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostic options. The primary purpose is to gather medical information, generate metrics, analyze them in real time and provide a differential diagnosis, meeting the highest standard of accuracy. The system builds its unique characterization of each patient and provides a list of other patients that share this unique profile, therefore

  • utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community.
  • The main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
  • in which anomalous subprofiles are extracted and compared to potentially relevant cases.
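
The profiling and inference mathematics of the system are not disclosed here, but the retrieval idea, characterizing a patient as a vector of scaled test results and returning the prior patients whose profiles lie closest, can be sketched as a nearest-neighbor search over invented records:

```python
import math


def similar_patients(profile, database, k=3):
    """Return the k records whose test vectors are closest (Euclidean
    distance) to the index patient's profile."""
    ranked = sorted(database, key=lambda rec: math.dist(profile, rec["tests"]))
    return ranked[:k]


# Hypothetical records: temperature, heart rate, respiratory rate, WBC (x10^3).
database = [
    {"id": "A", "tests": [38.9, 112, 24, 15.2], "dx": "SIRS"},
    {"id": "B", "tests": [36.8, 72, 14, 6.1], "dx": "normal"},
    {"id": "C", "tests": [39.2, 120, 26, 18.0], "dx": "sepsis"},
]
new_patient = [39.0, 115, 25, 16.0]
matches = similar_patients(new_patient, database, k=2)
print([m["id"] for m in matches])  # ['A', 'C']
```

The aggregated diagnoses of the retrieved neighbors are what supply the “knowledge of the medical community” in such a scheme; the actual system’s anomaly-extraction step is far more sophisticated than a plain distance ranking.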

As the model grows and its knowledge database is extended, the diagnostics and prognostics become more accurate and precise.
We anticipate that the effect of implementing this diagnostic amplifier would result in

  • higher physician productivity at a time of great human resource limitations,
  • safer prescribing practices,
  • rapid identification of unusual patients,
  • better assignment of patients to observation, inpatient beds,
    intensive care, or referral to clinic,
  • shortened length of patients’ ICU stays and bed days.

The main benefit is a

  1. real time assessment as well as
  2. diagnostic options based on comparable cases,
  3. flags for risk and potential problems

as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.

Graphical presentation of patient status

The patient was treated for SIRS and the blood tests were repeated during the following week. The full combined record of our system’s assessment of the patient, as derived from the further hematology tests, is illustrated below. The yellow line shows the diagnosis that corresponds to the first blood test (as also shown in the image above). The red line shows the next diagnosis that was performed a week later.

Progression changes in patient ICU stay with SIRS

The MISSIVE(c) system, by Justin Pearlman, is an alternative approach that includes not only automated data retrieval and reformatting of data for decision support, but also an integrated set of tools to speed up analysis, structured for quality and error reduction, coupled to facilitated report generation, incorporation of just-in-time knowledge and group expertise, standards of care, evidence-based planning, and both physician and patient instruction.

See also in Pharmaceutical Intelligence:

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief. January 2010. www.chrt.org

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.

W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3): 506–515.

S De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels, and G Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4): 1030-1048.

Landauer, T. K., Ross, B. H., & Didner, R. S. (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories.

Lewandowsky, S. (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

Sheila Nirenberg/Cornell and Chethan Pandarinath/Stanford, “Retinal prosthetic strategy with the capacity to restore normal vision,” Proceedings of the National Academy of Sciences.

Other related articles published in this Open Access Online Scientific Journal include the following:

http://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/

http://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

http://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

http://pharmaceuticalintelligence.com/2013/05/04/cardiovascular-diseases-decision-support-systems-for-disease-management-decision-making/?goback=%2Egde_4346921_member_239739196

http://pharmaceuticalintelligence.com/2012/08/13/demonstration-of-a-diagnostic-clinical-laboratory-neural-network-agent-applied-to-three-laboratory-data-conditioning-problems/

http://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/

http://pharmaceuticalintelligence.com/2013/02/13/cracking-the-code-of-human-life-the-birth-of-bioinformatics-and-computational-genomics/

http://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-and-energy-homeostasis/

http://pharmaceuticalintelligence.com/2012/12/10/identification-of-biomarkers-that-are-relatedto-the-actin-cytoskeleton/

http://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/

http://pharmaceuticalintelligence.com/2012/08/02/diagnostic-evaluation-of-sirs-by-immature-granulocytes/

http://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-of-sirs-sepsis-septic-shock/

http://pharmaceuticalintelligence.com/2012/08/12/1815/

http://pharmaceuticalintelligence.com/2012/08/15/1946/

http://pharmaceuticalintelligence.com/2013/05/13/vinod-khosla-20-doctor-included-speculations-musings-of-a-technology-optimist-or-technology-will-replace-80-of-what-doctors-do/

http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN
Aviva Lev-Ari, PhD, RN 2/28/2013
http://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/

FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology
Aviva Lev-Ari, PhD, RN 1/28/2013
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not Platelet Reactivity
Aviva Lev-Ari, PhD, RN 1/10/2013
http://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX
Aviva Lev-Ari, PhD, RN 1/3/2013
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by Serum Protein Profiles
Aviva Lev-Ari, PhD, RN 12/29/2012
http://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles/

New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia
Aviva Lev-Ari, PhD, RN 8/27/2012
http://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-fractional-flow-reserve-ffrct-for-tagging-ischemia/


The Binding of Oligonucleotides in DNA and 3-D Lattice Structures

Curator: Larry H Bernstein, MD, FCAP

 

This article is a renewal of a previous discussion on the role of genomics in discovery of therapeutic targets which focused on:

  •  key drivers of cellular proliferation,
  •  stepwise mutational changes coinciding with cancer progression, and
  •  potential therapeutic targets for reversal of the process.

“The Birth of BioInformatics & Computational Genomics” lays out the manifold multivariate systems-analytical tools that have moved the science forward to a ground that ensures clinical application. There is a web-like connectivity between interconnected scientific discoveries, as significant findings have led to novel hypotheses and driven our understanding of biological and medical processes at an exponential pace, owing to insights into the chemical structure of DNA,

  • the basic building blocks of DNA and proteins,
  • of nucleotide and protein-protein interactions,
  • protein folding, allostericity, genomic structure,
  • DNA replication,
  • nuclear polyribosome interaction, and
  • metabolic control.

In addition, the emergence of methods for

  • copying,
  • removal and insertion, and
  • improvements in structural analysis as well as
  • developments in applied mathematics have transformed the research framework.

Three-Dimensional Folding and Functional Organization Principles of the Drosophila Genome. Sexton T, Yaffe E, Kenigsberg E, Bantignies F, …, Cavalli G. Institut de Génétique Humaine, Montpellier GenomiX, and Weizmann Institute; France and Israel. Cell 2012; 148(3): 458-472. http://dx.doi.org/10.1016/j.cell.2012.01.010   http://www.ncbi.nlm.nih.gov/pubmed/22265598

Chromosomes are the physical realization of genetic information and thus form the basis for its

  •   readout and propagation.

Here we present a high-resolution chromosomal contact map derived from a modified genome-wide chromosome conformation capture approach applied to Drosophila embryonic nuclei. The entire genome is linearly partitioned into well-demarcated physical domains that overlap extensively with

  •   active and repressive epigenetic marks.

Chromosomal contacts are hierarchically organized between domains. Global modeling of contact density and clustering of domains show that

  •   inactive domains are condensed and confined to their chromosomal territories, whereas
  •  active domains reach out of the territory to form remote intra- and interchromosomal contacts.
  • We systematically identify specific long-range intrachromosomal contacts between Polycomb-repressed domains.

Together, these observations allow for quantitative prediction of the Drosophila chromosomal contact map, laying the foundation for detailed studies of

  • chromosome structure and function in a genetically tractable system.

“Mr. President; The Genome is Fractal!” Eric Lander (Science Adviser to the President and Director of the Broad Institute) et al. delivered the message on the cover of Science Magazine (Oct. 9, 2009), and it generated interest at a September meeting of the International HoloGenomics Society. First, it may seem trivial to rectify the statement in “About the cover” of Science Magazine by AAAS. The statement

  • “the Hilbert curve is a one-dimensional fractal trajectory” needs mathematical clarification.

While the paper itself does not make this statement, the new Editorship of the AAAS Magazine might be even more advanced if the previous Editorship did not reject (without review)

  • a Manuscript by 20+ Founders of (formerly) International PostGenetics Society in December, 2006.

Second, it may not be sufficiently clear for the reader that the reasonable requirement for the

  • DNA polymerase to crawl along a “knot-free” (or “low knot”) structure does not need fractals.

A “knot-free” structure could be spooled by an ordinary “knitting globule” (such that the DNA polymerase does not bump into a “knot” when duplicating the strand; just like someone knitting can go through the entire thread without encountering an annoying knot):

  • Just to be “knot-free” you don’t need fractals.

Note, however, that the “strand” can be accessed only at its beginning – it is impossible, for example,

  • to pluck a segment from deep inside the “globulus”.

This is where certain fractals provide a major advantage – that could be the “Eureka” moment. For instance, the mentioned Hilbert-curve is not only “knot-free” but provides easy access to

  • “linearly remote” segments of the strand.

If the Hilbert curve starts from the lower right corner and ends at the lower left corner, for instance,

  • the path shows very easy access to what would be the mid-point if the Hilbert-curve
  • is measured by the Euclidean distance along the zig-zagged path.

Likewise, even the path from the beginning of the Hilbert-curve is about equally easy to access – easier than reaching, from the origin, a point that is about 2/3 down the path. The Hilbert-curve provides easy access between two points within the “spooled thread”; a point that is about 1/5 of the overall length in is also in a “close neighborhood” of one about 3/5 in. This marvellous fractal structure is illustrated by the 3D rendering of the Hilbert-curve. Once you observe such fractal structure,
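
This locality property can be checked directly with the standard iterative index-to-coordinate conversion for the Hilbert curve (a textbook algorithm; the sketch assumes a 2^k x 2^k grid):

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid
    (n a power of two). Standard iterative conversion."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


# Consecutive indices always land on adjacent cells (a "knot-free" walk) ...
assert all(abs(d2xy(8, d)[0] - d2xy(8, d + 1)[0])
           + abs(d2xy(8, d)[1] - d2xy(8, d + 1)[1]) == 1 for d in range(63))

# ... yet the endpoints, 63 steps apart along the thread, sit on the same
# edge of the square: "linearly remote" segments are spatially close.
print(d2xy(8, 0))   # (0, 0)
print(d2xy(8, 63))  # (7, 0)
```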

  • you’ll never again think of a chromosome as a “brillo mess”, would you?

It will dawn on you that the genome is orders of magnitude more finessed than we ever thought. Those embarking on a somewhat complex review of some historical aspects of the power of fractals may wish to consult the oeuvre of Mandelbrot (also, to celebrate his 85th birthday).

For the more sophisticated readers, even the fairly simple Hilbert-curve (a representative of the Peano class) becomes even more stunningly brilliant than just some “see-through density”. Those who are familiar with the classic “Traveling Salesman Problem” know that finding “the shortest path along which every one of n given locations can be visited once, and only once” requires fairly sophisticated algorithms (and a tremendous amount of computation if n > 10, or much more). Some readers will be amazed, therefore, that for n = 9 the underlying Hilbert-curve helps to provide an empirical solution (refer to pellionisz@junkdna.com).

Briefly, the significance of the above realization, that the (recursive) fractal Hilbert curve is intimately connected to the (recursive) solution of the Traveling Salesman Problem, a core concept of artificial neural networks, can be summarized as follows. Accomplished physicist John Hopfield (already a member of the National Academy of Sciences) aroused great excitement in 1982 with his (recursive) design of artificial neural networks and learning algorithms which were able to find solutions to combinatorial problems such as the Traveling Salesman Problem (book review by Clark Jeffries, 1991; see J Anderson, E Rosenfeld, and A Pellionisz (eds.), Neurocomputing 2: Directions for Research, MIT Press, Cambridge, MA, 1990): “Perceptions were modeled chiefly with neural connections in a “forward” direction: A -> B -> C -> D. The analysis of networks with strong backward coupling proved intractable. All our interesting results arise as consequences of the strong back-coupling” (Hopfield, 1982).

The Principle of Recursive Genome Function surpassed obsolete axioms that blocked, for half a century, entry of recursive algorithms to interpretation of the structure and function of the (Holo)Genome. This breakthrough,

  • by uniting the two largely separate fields of Neural Networks and Genome Informatics,

is particularly important for those who focused on Biological (actually occurring) Neural Networks (rather than  abstract algorithms that may not, or because of their core-axioms, simply could not represent neural networks under the governance of DNA information). If biophysicist Andras Pellionisz is correct, genetic science may be on the verge of yielding its third — and by far biggest — surprise. With a doctorate in physics, Pellionisz is the holder of Ph.D.’s in computer sciences and experimental biology from the prestigious Budapest Technical University and the Hungarian National Academy of Sciences. A biophysicist by training, the 59-year-old is a former research associate professor of physiology and biophysics at New York University, author of numerous papers in respected scientific journals and textbooks, a past winner of the prestigious Humboldt Prize for scientific research, a former consultant to NASA and holder of a patent on the world’s first artificial cerebellum, a technology that has already been integrated into research on advanced avionics systems. Because of his background, the Hungarian-born brain researcher might also become one of the first people to successfully launch a new company by

  • using the Internet to gather momentum for a novel scientific idea.

The genes we know about today, Pellionisz says, can be thought of as something similar to machines that make bricks (proteins, in the case of genes), with certain junk-DNA sections providing a blueprint for the different ways those proteins are assembled. The notion that at least certain parts of junk DNA might have a purpose is reflected in the term many researchers

  • now use for them, a far less derogatory one: introns.

In a provisional patent application filed July 31, Pellionisz claims to have

  • unlocked a key to the hidden role junk DNA

plays in growth — and in life itself. His patent application covers all attempts to

  • count,
  • measure and
  • compare

the fractal properties of introns for diagnostic and therapeutic purposes.

The FractoGene Decade, from Inception in 2002 to Proofs of Concept and Impending Clinical Applications by 2012:

  • Junk DNA Revisited (SF Gate, 2002)
  • The Future of Life, 50th Anniversary of DNA (Monterey, 2003)
  • Mandelbrot and Pellionisz (Stanford, 2004)
  • Morphogenesis, Physiology and Biophysics (Simons, Pellionisz 2005)
  • PostGenetics; Genetics beyond Genes (Budapest, 2006)
  • ENCODE-conclusion (Collins, 2007)
  • The Principle of Recursive Genome Function (paper, YouTube, 2008)
  • YouTube Cold Spring Harbor presentation of FractoGene (Cold Spring Harbor, 2009)
  • Mr. President, the Genome is Fractal! (2009)
  • HolGenTech, Inc. Founded (2010)
  • Pellionisz on the Board of Advisers in the USA and India (2011)
  • ENCODE – final admission (2012)
  • Recursive Genome Function is Clogged by Fractal Defects in Hilbert-Curve (2012)
  • Geometric Unification of Neuroscience and Genomics (2012)
  • US Patent Office issues FractoGene 8,280,641 to Pellionisz (2012)

http://www.junkdna.com/the_fractogene_decade.pdf

The Hidden Fractal Language of Intron DNA

To fully understand Pellionisz’ idea, one must first know what a fractal is. Fractals are a way that nature organizes matter. Fractal patterns can be found in anything that has a nonsmooth surface (unlike a billiard ball), such as

  • coastal seashores,
  • the branches of a tree or
  • the contours of a neuron (a nerve cell in the brain).

Some, but not all, fractals are self-similar and stop repeating their patterns at some stage;

  • the branches of a tree, for example, can get only so small.

Because they are geometric, meaning they have a shape, fractals can be described in mathematical terms. It’s similar to the way a circle can be described by using a number to represent its radius (the distance from its center to its outer edge). When that number is known, it’s possible to draw the circle it represents without ever having seen it before. Although the math is much more complicated, the same is true of fractals. If one has the formula for a given fractal, it’s possible to use that formula to construct, or reconstruct, an image of whatever structure it represents, no matter how complicated. The mysteriously repetitive but not identical strands of genetic material are in reality

  • building instructions organized in a special type of pattern known as a fractal.

It’s this pattern of fractal instructions, he says, that tells genes what they must do in order to form living tissue, everything from the wings of a fly to the entire body of a full-grown human.

In a move sure to alienate some scientists, Pellionisz chose the unorthodox route of making his initial disclosures online on his own Web site. He picked that strategy, he says, because it is the fastest way he can document his claims and find scientific collaborators and investors. Most mainstream scientists usually blanch at such approaches, preferring more traditionally credible methods, such as publishing articles in peer-reviewed journals.

Pellionisz’ idea is that a fractal set of building instructions in the DNA plays a role in organizing life itself. Decode the language, and in theory it could be reverse engineered, just as knowing the radius of a circle lets one create that circle. The fractal-based formula

  • would allow us to understand how a heart or disease-fighting antibodies are created.

The idea is to encourage new collaborations across the boundaries that separate the intertwined

  • disciplines of biology, mathematics and computer sciences.

Hal Plotkin, Special to SF Gate. Thursday, November 21, 2002.

http://www.junkdna.com/
http://www.junkdna.com/the_fractogene_decade.pdf
http://www.sciencentral.com/articles/view.php3?article_id=218392305
http://www.news-medical.net/health/Junk-DNA-What-is-Junk-DNA.aspx
http://www.kurzweilai.net/junk-dna-plays-active-role-in-cancer-progression-researchers-find
http://marginalrevolution.com/marginalrevolution/2013/05/the-battle-over-junk-dna
http://profiles.nlm.nih.gov/SC/B/B/F/T/_/scbbft.pdf

Human Genome is Multifractal

The human genome: a multifractal analysis. Moreno PA, Vélez PE, Martínez E, et al. BMC Genomics 2011; 12:506. http://www.biomedcentral.com/1471-2164/12/506

Several studies have shown that genomes can be studied via a multifractal formalism. These researchers used a multifractal approach to study the genetic information content of the Caenorhabditis elegans genome. They investigated the possibility that the human genome shows a similar behavior to that observed in the nematode. They report

  • multifractality in the human genome sequence.

This behavior correlates strongly with the presence of Alu elements and, to a lesser extent, with CpG islands and (G+C) content.

  1. Gene function,
  2. cluster of orthologous genes,
  3. metabolic pathways, and
  4. exons
  • tended to increase their frequencies with ranges of multifractality, and
  • large gene families were located in genomic regions with varied multifractality.
  • A multifractal map and classification for human chromosomes are proposed.

They propose a descriptive non-linear model for the structure of the human genome. This model reveals a multifractal regionalization in which many regions coexist that are

  • far from equilibrium, and this non-linear organization has significant
  • molecular and medical genetic implications for understanding the role of Alu elements in genome stability and the structure of the human genome.
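
The flavor of a multifractal analysis can be conveyed with a toy computation of the generalized (Rényi) dimensions D_q, treating per-box (G+C) content as a measure; this single-scale sketch simplifies the paper’s methodology considerably, and the sequence is synthetic. For any measure, D_q is non-increasing in q, and multifractality shows up as D_q actually varying with q:

```python
import math


def renyi_dimension(seq, box_len, q):
    """Estimate D_q at one scale, treating per-box (G+C) counts as a measure."""
    masses = [sum(base in "GC" for base in seq[i:i + box_len])
              for i in range(0, len(seq) - box_len + 1, box_len)]
    total = sum(masses)
    probs = [m / total for m in masses if m > 0]
    eps = box_len / len(seq)
    if abs(q - 1.0) < 1e-12:  # q = 1 is the information dimension
        return sum(p * math.log(p) for p in probs) / math.log(eps)
    return (math.log(sum(p ** q for p in probs)) / (q - 1)) / math.log(eps)


# A sequence with two regimes of (G+C) density is heterogeneous enough
# for the spectrum to spread: D_0 > D_1 > D_2.
seq = "GCGCGCGCATATATAT" * 64 + "GGGGCCCC" * 32
dims = [renyi_dimension(seq, 16, q) for q in (0, 1, 2)]
print(dims[0] > dims[1] > dims[2])  # True
```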

Given the role of Alu sequences in

  • gene regulation
  • genetic diseases
  • human genetic diversity
  • adaptation and phylogenetic analyses

these quantifications are especially useful.

MiIP: The Monomer Identification and Isolation Program

Bun C, Ziccardi W, Doering J and Putonti C. Evolutionary Bioinformatics 2012; 8:293-300. http://dx.doi.org/10.4137/EBO.S9248

Repetitive elements within genomic DNA are both functionally and evolutionarily informative. Discovering these sequences ab initio is computationally challenging, compounded by the fact that sequence identity between repetitive elements can vary significantly. These investigators present a new application, the Monomer Identification and Isolation Program (MiIP),

  • which provides functionality both to search for a particular repeat and to
  • discover repetitive elements within a larger genomic sequence.

To compare MiIP’s performance with other repeat detection tools, analysis was conducted for synthetic sequences as well as several a21-II clones and HC21 BAC sequences. The main benefit of MiIP is

  • that it is a single tool capable of both searching for known monomeric sequences and
  • discovering the occurrence of repeats ab initio.
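MiIP's actual algorithm is not reproduced here; the following naive sketch illustrates only the first of its two modes, searching for a known monomer while tolerating the sequence variation between repeat copies noted above (a fixed substitution budget stands in for MiIP's identity threshold):

```python
def find_monomers(genome, monomer, max_mismatch=1):
    """Return start positions where `monomer` occurs with at most
    `max_mismatch` substitutions (a naive stand-in for a repeat search)."""
    m = len(monomer)
    hits = []
    for i in range(len(genome) - m + 1):
        window = genome[i:i + m]
        mismatches = sum(a != b for a, b in zip(window, monomer))
        if mismatches <= max_mismatch:
            hits.append(i)
    return hits
```

For example, scanning a telomere-like sequence for the TTAGGG monomer with one allowed mismatch also recovers a degenerate copy that an exact search misses. The ab initio discovery mode would additionally have to propose candidate monomers, which this sketch does not attempt.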

Triplex DNA: A third strand for DNA

The DNA double helix can under certain conditions accommodate

  • a third strand in its major groove.

Researchers in the UK presented a complete set of four variant nucleotides that makes it

  • possible to use this phenomenon in gene regulation and mutagenesis.

Natural DNA only forms a triplex if the targeted strand is rich in purines – guanine (G) and adenine (A) – which, in addition to the bonds of Watson-Crick base pairing,

  • can form two further hydrogen bonds;
  • the ‘third strand’ oligonucleotide has the matching sequence of pyrimidines – cytosine (C) and thymine (T).

Any Cs or Ts in the target strand of the duplex will only bind very weakly, as

  • they contribute just one hydrogen bond.

Moreover, the recognition of G requires the C in the probe strand to be protonated, so

  • triplex formation will only work at low pH.
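The natural pairing rules described above can be captured in a few lines. This sketch (our own illustration, with hypothetical helper names) checks that the duplex target strand is all-purine and derives the matching pyrimidine third strand:

```python
PURINES = set("AG")
# Natural Hoogsteen partners for a pyrimidine-motif third strand:
# protonated C+ recognises G (hence the low-pH requirement), T recognises A.
HOOGSTEEN = {"G": "C", "A": "T"}

def design_third_strand(target):
    """Given the purine-rich strand of a duplex target, return the
    pyrimidine third strand, or None if any base lacks a Hoogsteen partner."""
    if not all(b in PURINES for b in target):
        return None  # Cs or Ts in the target bind only weakly (one H-bond)
    return "".join(HOOGSTEEN[b] for b in target)
```

The modified nucleotides from the Southampton groups are precisely what relaxes the two constraints this sketch enforces: the all-purine target requirement and the need for C protonation.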

To overcome all these problems, the groups of Tom Brown and Keith Fox at the University of Southampton have developed modified building blocks, and have now completed

  • a set of four new nucleotides, each of which will bind to one DNA nucleotide from the major groove of the double helix.

They tested the binding of a 19-mer of these designer nucleotides to a double helix target sequence in comparison with the corresponding triplex-forming oligonucleotide made from natural DNA bases. Using fluorescence-monitored thermal melting and DNase I footprinting, the researchers showed that

  • their construct forms stable triplex even at neutral pH. 

Tests with mutated versions of the target sequence showed that

  • three of the novel nucleotides are highly selective for their target base pair,
  • while the ‘S’ nucleotide, designed to bind to T, also tolerates C.

References

DA Rusling et al. Nucleic Acids Res. 2005; 33:3025. http://nucleicacidsres.com/Rusling_DA
KM Vasquez et al. Science 2000; 290:530. http://Science.org/2000/290.530/Vazquez_KM/
Frank-Kamenetskii MD, Mirkin SM. Annual Rev Biochem 1995; 64:69-95. http://www.annualreviews.org/aronline/1995/Frank-Kamenetski_MD/64.69/

Since the pioneering work of Felsenfeld, Davies, and Rich, double-stranded polynucleotides containing purines in one strand and pyrimidines in the other strand [such as poly(A)/poly(U), poly(dA)/poly(dT), or poly(dAG)/poly(dCT)] have been known to be able to undergo a stoichiometric transition forming a triple-stranded structure containing one polypurine and two polypyrimidine strands. Early on, it was assumed that the third strand was located in the major groove and associated with the duplex via non-Watson-Crick interactions now

  • known as Hoogsteen pairing.

Triple helices consisting of one pyrimidine and two purine strands were also proposed. However, notwithstanding the fact that single-base triads in tRNA structures were well-documented, triple-helical DNA escaped wide attention before the mid-1980s. The interest in DNA triplexes arose due to two partially independent developments.

  1. Homopurine-homopyrimidine stretches in supercoiled plasmids were found to adopt an unusual DNA structure, called H-DNA, which includes a triplex.
  2. Several groups demonstrated that homopyrimidine and some purine-rich oligonucleotides
  • can form stable and sequence-specific complexes with
  • corresponding homopurine-homopyrimidine sites on duplex DNA.

These complexes were shown to be triplex structures rather than D-loops, where

  • the oligonucleotide invades the double helix and displaces one strand.

A characteristic feature of all these triplexes is that the two

  • chemically homologous strands (both pyrimidine or both purine) are antiparallel.

These findings led to explosive growth in triplex studies. One can easily imagine numerous “geometrical” ways to form a triplex, and many of them have been studied experimentally. The canonical intermolecular triplex consists of either

  • three independent oligonucleotide chains, or
  • a long DNA duplex carrying a homopurine-homopyrimidine insert and the corresponding oligonucleotide.

Triplex formation strongly depends on the oligonucleotide(s) concentration. A single DNA

  • chain may also fold into a triplex connected by two loops.

To comply with the sequence and polarity requirements for triplex formation, such a DNA strand must have a peculiar sequence: It contains a mirror repeat

  1. (homopyrimidine for YR*Y triplexes and homopurine for YR*R triplexes)
  2. flanked by a sequence complementary to one half of this repeat.

Such DNA sequences fold into a triplex configuration much more readily than do the corresponding intermolecular triplexes, because all triplex-forming segments are brought together within the same molecule. It has become clear that both

  • sequence requirements and chain polarity rules for triplex formation

can be met by DNA target sequences built of clusters of purines and pyrimidines. The third strand consists of adjacent homopurine and homopyrimidine blocks forming Hoogsteen hydrogen bonds with purines on alternate strands of the target duplex, and

  • this strand switch preserves the proper chain polarity.

These structures, called alternate-strand triplexes, have been experimentally observed as both intra- and inter-molecular triplexes. These results increase the number of potential targets for triplex formation in natural DNAs somewhat by adding sequences composed of purine and pyrimidine clusters, although arbitrary sequences are still not targetable because

  • strand switching is energetically unfavorable.
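The mirror-repeat requirement for intramolecular (H-DNA-type) triplexes described above lends itself to a simple scan. A naive O(n^2) sketch, purely illustrative:

```python
def is_mirror_repeat(s):
    """A mirror repeat reads the same 5'->3' from both ends on the SAME
    strand (a plain-text palindrome), unlike an inverted repeat, which
    involves the reverse complement of the other strand."""
    return s == s[::-1]

def longest_mirror_repeat(seq, min_len=6):
    """Naive scan for the longest mirror-repeat substring."""
    best = ""
    n = len(seq)
    for i in range(n):
        for j in range(i + max(min_len, len(best) + 1), n + 1):
            if j - i > len(best) and is_mirror_repeat(seq[i:j]):
                best = seq[i:j]
    return best
```

A real H-DNA predictor would additionally check that the repeat is homopurine or homopyrimidine, per the sequence requirements listed above; this sketch checks only the mirror symmetry itself.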

References:

Lyamichev VI, Mirkin SM, Frank-Kamenetskii MD. J. Biomol. Struct. Dyn. 1986; 3:667-69. http://JbiomolStractDyn.com/1986/Lyamichev_VI/3.667/
Filippov SA, Frank-Kamenetskii MD. Nature 1987; 330:495-97. http://Nature.com/1987/Fillipov_SA/330.495/
Demidov V, Frank-Kamenetskii MD, Egholm M, Buchardt O, Nielsen PE. Nucleic Acids Res. 1993; 21:2103-7. http://NucleicAcidsResearch.com/1993/Demidov_V/21.2103/
Mirkin SM, Frank-Kamenetskii MD. Annu. Rev. Biophys. Biomol. Struct. 1994; 23:541-76. http://AnnRevBiophysBiomolecStructure.com/1994/Mirkin_SM/23.541/
Hoogsteen K. Acta Crystallogr. 1963; 16:907-16. http://ActaCrystallogr.com/1963/Hoogsteen_K/16.907/
Malkov VA, Voloshin ON, Veselkov AG, Rostapshov VM, Jansen I, et al. Nucleic Acids Res. 1993; 21:105-11. http://NucleicAcidsResearch.com/1993/Malkov_VA/21.105
Malkov VA, Voloshin ON, Soyfer VN, Frank-Kamenetskii MD. Nucleic Acids Res. 1993; 21:585-91. http://NucleicAcidsRes.com/1993/Malkov_VA/21.585/
Cherny DY, Belotserkovskii BP, Frank-Kamenetskii MD, Egholm M, Buchardt O, et al. Proc. Natl. Acad. Sci. USA 1993; 90:1667-70. http://PNAS.org/1993/Chemy_DY/90.1667/

Triplex forming oligonucleotides

Triplex forming oligonucleotides: sequence-specific tools for genetic targeting. Knauert MP, Glazer PM. Human Molec Genetics 2001; 10(20):2243-2251. http://HumanMolecGenetics.com/2001/Knauert_MP/10.2243/

Triplex forming oligonucleotides (TFOs) bind in the major groove of duplex DNA with a

  • high specificity and affinity.

Because of these characteristics, TFOs have been proposed as

  • homing devices for genetic manipulation in vivo.

These investigators review work demonstrating the ability of TFOs and related molecules

  • to alter gene expression and mediate gene modification in mammalian cells.

TFOs can mediate targeted gene knock out in mice, providing a foundation for potential

  • application of these molecules in human gene therapy.

The Triplex Genetic Code

Novagon DNA. John Allen Berger, founder of Novagon DNA and The Triplex Genetic Code.

Over the past 12+ years, Novagon DNA has amassed a vast array of empirical findings which

  • challenge the “validity” of the “central dogma theory”, especially the current five-nucleotide
  • Watson-Crick DNA and RNA genetic codes: DNA = A1T1G1C1, RNA = A2U1G2C2.

We propose that our new Novagon DNA 6 nucleotide Triplex Genetic Code has more validity than

  • the existing 5 nucleotide (A1T1U1G1C1) Watson-Crick genetic codes.

Our goal is to conduct a “world class” validation study to replicate and extend our findings.

Methods for Examining Genomic and Proteomic Interactions.

An Integrated Statistical Approach to Compare Transcriptomics Data Across Experiments: A Case Study on the Identification of Candidate Target Genes of the Transcription Factor PPARα

Ullah MO, Müller M and Hooiveld GJEJ. Bioinformatics and Biology Insights 2012; 6:145–154. http://dx.doi.org/10.4137/BBI.S9529 http://www.ncbi.nlm.nih.gov/pubmed/22783064 Corresponding author email: guido.hooiveld@wur.nl http://edepot.wur.nl/213859

An effective strategy to elucidate the signal transduction cascades activated by a transcription factor

  • is to compare the transcriptional profiles of wild type and transcription factor knockout models.

Many statistical tests have been proposed for analyzing gene expression data, but

  • most tests are based on pair-wise comparisons.

Since the analysis of microarrays involves the testing of multiple hypotheses within one study,

  • it is generally accepted to control for false positives by the false discovery rate (FDR).

However, this may be an inappropriate metric for

    • comparing data across different experiments.
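For reference, FDR control within a single study is commonly done with the Benjamini-Hochberg step-up procedure; a minimal sketch:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha
    (classic Benjamini-Hochberg step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= i*alpha/m
        reject[order[: k + 1]] = True
    return reject
```

The authors' point is that a per-experiment FDR threshold does not transfer cleanly to cross-experiment comparisons, which motivates the joint contrasts below rather than separate BH-corrected lists.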

Here we propose the simultaneous testing and integration of

  • the three hypotheses (contrasts) using the cell means ANOVA model.

These three contrasts test for the effect of a treatment in

  1. wild type,
  2. gene knockout, and
  3. globally over all experimental groups.
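A toy version of such contrast testing under a cell-means model (our own simplified illustration, not the authors' code; the cell labels are hypothetical) might look like:

```python
import numpy as np
from scipy import stats

def contrast_pvalue(groups, weights):
    """F-test (1 df) of a linear contrast of cell means with pooled variance.
    `groups`: list of 1-D arrays, one per cell; `weights`: contrast coefficients."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    df_err = int(ns.sum()) - len(groups)
    ss_err = sum(((g - g.mean()) ** 2).sum() for g in groups)
    mse = ss_err / df_err                      # pooled error variance
    w = np.asarray(weights, dtype=float)
    estimate = w @ means                       # contrast estimate
    se2 = mse * np.sum(w ** 2 / ns)            # its squared standard error
    f = estimate ** 2 / se2
    return stats.f.sf(f, 1, df_err)

# Cells: WT-control, WT-treated, KO-control, KO-treated (hypothetical labels).
# The three contrasts: treatment effect in wild type, in knockout, and globally.
CONTRASTS = {
    "wt":     [-1, 1, 0, 0],
    "ko":     [0, 0, -1, 1],
    "global": [-0.5, 0.5, -0.5, 0.5],
}
```

Genes responding to treatment only in the wild type (i.e., candidate transcription factor targets) show a small p-value for the "wt" contrast but not the "ko" one.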

We compare differential expression of genes across experiments while

  • controlling for multiple hypothesis testing.

Managing biological complexity across orthologs with a visual knowledgebase of documented biomolecular interactions

Vincent Van Buren & Hailin Chen. Scientific Reports 2012; 2: Article number 1011. http://dx.doi.org/10.1038/srep01011

The complexity of biomolecular interactions and influences is a major obstacle

  • to their comprehension and elucidation.

Visualizing knowledge of biomolecular interactions increases

  • comprehension and facilitates the development of new hypotheses.

The rapidly changing landscape of high-content experimental results also presents a challenge

  • for the maintenance of comprehensive knowledgebases.

Distributing the responsibility for maintenance of a knowledgebase to a community of

  • experts is an effective strategy for large, complex and rapidly changing knowledgebases.

Cognoscente serves these needs

  • by building visualizations for queries of biomolecular interactions on demand,
  • by managing the complexity of those visualizations, and
  • by crowdsourcing to promote the incorporation of current knowledge from the literature.

Imputing functional associations

  • between biomolecules and imputing directionality of regulation
  • for those predictions each require a corpus of existing knowledge as a framework.

Comprehension of the complexity of this corpus of knowledge will be facilitated by effective

  • visualizations of the corresponding biomolecular interaction networks.

Cognoscente (http://vanburenlab.medicine.tamhsc.edu/cognoscente.html) was designed and implemented to serve these roles as a knowledgebase and as

  • an effective visualization tool for systems biology research and education.

Cognoscente currently contains over 413,000 documented interactions, with coverage across multiple species. Perl, HTML, GraphViz, and a MySQL database were used in the development of Cognoscente. Cognoscente was motivated by the need to update the knowledgebase of

  • biomolecular interactions at the user level, and
  • flexibly visualize multi-molecule query results for
    • heterogeneous interaction types across different orthologs.

Satisfying these needs provides a strong foundation for developing new hypotheses about

  • regulatory and metabolic pathway topologies.
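As a flavor of how an interaction query can be turned into an on-demand visualization with GraphViz (one of the tools listed above), here is a sketch that emits DOT text for a list of interaction triples; the edge-style mapping and example molecule names are our own convention, not necessarily Cognoscente's:

```python
def interactions_to_dot(interactions):
    """Render (source, interaction_type, target) triples as a GraphViz DOT
    digraph, one styled edge per documented interaction."""
    styles = {"activates": "solid", "inhibits": "dashed", "binds": "dotted"}
    lines = ["digraph interactions {"]
    for src, kind, dst in interactions:
        style = styles.get(kind, "solid")  # default style for unknown types
        lines.append(f'  "{src}" -> "{dst}" [label="{kind}", style={style}];')
    lines.append("}")
    return "\n".join(lines)
```

The resulting text can be piped to the `dot` command-line tool to produce an image, which is essentially the build-on-query pattern described above.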

Several existing tools provide functions that are similar to Cognoscente.

[Images (photo credits: Wikipedia): Hilbert 3D curve, iteration 3; 3-dimensional Hilbert cube; 1st and 2nd iterations of the Hilbert curve in 3D; the first 8 steps of the building of the Hilbert curve (animated GIF).]

Reporter: Aviva Lev-Ari, PhD, RN

 

About 13th Annual Biotech in Europe Investor Forum

The 13th Annual Biotech in Europe Investor Forum will be held on 30th September – 1st October 2013 in the Hilton Zurich Airport Hotel, Switzerland. The forum is recognised as the leading international stage for those interested in investing in the biotech and life science industry and is highly transactional, drawing together an exciting cross-section of early-stage/pre-IPO, late-stage and public companies with leading investors, analysts, money managers and pharmas. Supported and designed by leading figures within Europe’s bio industry, this event will once again be covered by our regular media partners. We expect around 350 delegates and 80 presenting companies.

The two-day conference programme will give you access to some of the leading players in the industry and will provide you with truly excellent networking opportunities as it boasts an online partnering system with 500 meeting slots available for all attendees to book before the event.


The program will comprise plenary panels including:

  • Public Markets and M&A
  • Investment
  • Platform Technologies and Novel Therapeutics
  • Oncology
  • Vaccines
  • Neuroscience
  • Partnering
  • Medtech & Device
  • Diagnostics
  • Emerging Companies

Speakers & Chairs confirmed for the event include:

  • Anthony Rosenberg, Head, Partnering & Emerging Businesses, Novartis Pharma AG
  • Bernd Goergen, Senior Investment Manager, High-Tech Gründerfonds Management
  • Carole Nuechterlein, Head, Roche Venture Fund
  • Deborah Harland, General Partner, SR One
  • Douglas Famborough, CEO, Dicerna Pharmaceuticals
  • Esteban Pombo-Villar, COO, Oxford BioTherapeutics
  • Fintan Walton, Founder and CEO, PharmaVentures Ltd.
  • Florian Schoedel, CEO, Phillimune LLC
  • Frank Grams, Head Alliance Management & Contracting, Sanofi R&D
  • Genghis Lloyd-Harris, Partner, Abingworth Management
  • Graeme Martin, President and CEO, Takeda Research Investment, Inc.
  • Hans Christinger, Director Licensing and Acquisitions, Abbot Laboratories
  • Heinz Schwer, Senior Director, MorphoSys AG
  • Judith Hills, Vice President Corporate Business Development, Ipsen Biopharm Ltd
  • Katya Smirnyagina, Venture Partner, Capricorn Venture Partners
  • Klaus Langner, Executive Vice President and Chief Operating Officer, Grunenthal Innovations, Grunenthal
  • Margarita Chavez, Director, Abbott Biotech Ventures
  • Markus Hosang, General Partner, BioMedPartners
  • Martine Kaufmann, Managing Director, Martina Kaufmann Strategic Consulting
  • Naveed Siddiqi, Partner, Phase4 Ventures Limited
  • Oliver Middendorp, Chief Business Officer, Numab AG
  • Philippe Calais, CEO, Antisense Pharma GmbH
  • Rafaèle Tordjman, Partner, Sofinnova Partners
  • Rainer Strohmenger, Partner, Life Sciences, Wellington Partners
  • Sarah Holland, Global Head, Strategic Partnering, Roche Partnering
  • Thomas Kronbach, CEO, Biocrea GmbH
  • Vincent Charlon, CEO, Anergis
  • Wilder Fulford, Principal, Torreya Partners
  • Willem van Weperen, CEO, To-BBB Technologies BV
  • And more TBA…

NEW FOR 2013:
Personalised Medicine and Companion Diagnostics:
from development to market.

A series of lectures and panels covering:

  • Regulation
  • Clinical trials design
  • Pharma interaction
  • Market access & Reimbursement

Confirmed speakers include:

  • Eddie Blair, CEO, Integrated Magnetic Systems Ltd.
  • Lorenza Castellon, Health and Biotech Equity Analyst, Equity Development Ltd.
  • Martina Kaufmann, Managing Director, Martina Kaufmann Strategic Consulting

Presenting Opportunities

Presenting at the forum offers excellent opportunities to showcase your company to some of the leading global investors and corporates. It will offer you the opportunity to communicate your projected capital raising plans or simply help you in finding the right partner for your business.

Presenting companies from Europe and the US will benefit from specially designed panels and keynote addresses from leading industry figures as well as access to some of the leading analysts and investors from Europe and beyond. This year, we aim to expand the audience and provide once again, opportunities for executive-level networking, deal-making and strategic partnering.


Sponsorship

Sachs Associates has developed an extensive knowledge of the key individuals operating within the European and global biotech industry. This together with a growing reputation for excellence puts Sachs Associates at the forefront of the industry and provides a powerful tool by which to increase the position of your company in this market.

Raise your company’s profile directly with your potential clients. All of our sponsorship packages are tailor made to each client, allowing your organisation to gain the most out of attending our industry driven events.

To learn more about presenting, exhibition or sponsorship opportunities, please contact
Zoe Harris + 44 (0)203 463 4890 or by email: Zoe Harris.

SOURCE:

http://www.sachsforum.com/zurich13/zurich13-speakers.html

Finding the Genetic Links in Common Disease:  Caveats of Whole Genome Sequencing Studies

Writer and Reporter: Stephen J. Williams, Ph.D.

In the November 23, 2012 issue of Science, Jocelyn Kaiser reports (Genetic Influences On Disease Remain Hidden in News and Analysis)[1] on the difficulties that many genomic studies are encountering correlating genetic variants to high risk of type 2 diabetes and heart disease.  At the recent American Society of Human Genetics annual 2012 meeting, results of several DNA sequencing studies reported difficulties in finding genetic variants and links to high risk type 2 diabetes and heart disease.  These studies were a part of an international effort to determine the multiple genetic events contributing to complex, common diseases like diabetes.  Unlike Mendelian inherited diseases (like ataxia telangiectasia) which are characterized by defects mainly in one gene, finding genetic links to more complex diseases may pose a problem as outlined in the article:

  • Variants may be so rare that massive numbers of patient genomes would need to be analyzed
  • For most diseases, individual SNPs (single nucleotide polymorphisms) raise risk modestly
  • It is hard to find isolated families (as in hemophilia) or isolated populations (e.g., Ashkenazi Jews)
  • Disease-influencing genes have not been weeded out by natural selection since the human population explosion (~5,000 years ago) resulted in numerous gene variants
  • It is unclear what percentage of variants accounts for disease heritability (studies have shown this is as low as 26% for diabetes, with the remaining risk determined by environment)

Although many genome-wide association studies have found SNPs causally linked to increased risk of diseases such as cancer, diabetes, and heart disease, most individual SNPs for common diseases raise risk by only about 20-40% and would be useless for predicting an individual’s chance of developing disease or their candidacy for a personalized therapy approach.  Therefore, for common diseases, investigators are relying on direct exome sequencing and whole-genome sequencing to detect these medium-rare risk variants, rather than relying on genome-wide association studies (which are usually fine for detecting the higher-frequency variants associated with common diseases).

Three of the many projects (one for heart risk and two for diabetes risk) are highlighted in the article:

1.  National Heart, Lung and Blood Institute Exome Sequencing Project (ESP)[2]: heart, lung, blood

  • Sequenced 6,700 exomes of European or African descent
  • Majority of variants linked to disease too rare (as low as one variant)
  • Groups of variants in the same gene confirmed link between APOC3 and higher risk for early-onset heart attack
  • No other significant gene variants linked with heart disease

2.  T2D-GENES Consortium: diabetes

  • Sequenced 5,300 exomes of type 2 diabetes patients and controls from five ancestry groups
  • SNP in PAX4 gene associated with disease in East Asians
  • No low-frequency variant with large effect, though

3.  GoT2D: diabetes

  • After sequencing 2,700 patients’ exomes and whole genomes, no new rare variants above 1.5% frequency with a strong effect on diabetes risk were found

A nice article by Dr. Sowmiya Moorthie, entitled Involvement of rare variants in common disease, found on the PHG Foundation site (http://www.phgfoundation.org/news/5164/), further discusses this conundrum and is summarized below:

“Although GWAs have identified many SNPs associated with common disease, they have as yet had little success in identifying the causative genetic variants. Those that have been identified have only a weak effect on disease risk, and therefore only explain a small proportion of the heritable, genetic component of susceptibility to that disease. This has led to the common disease-common variant hypothesis, which predicts that common disease-causing genetic variants exist in all human populations, but each individual variant will necessarily only have a small effect on disease susceptibility (i.e. a low associated relative risk).

An alternative hypothesis is the common disease, many rare variants hypothesis, which postulates that disease is caused by multiple strong-effect variants, each of which is only found in a few individuals. Dickson et al. in a paper in PLoS Biology postulate that these rare variants can be indirectly associated with common variants; they call these synthetic associations and demonstrate how further investigation could help explain findings from GWA studies [Dickson et al. (2010) PLoS Biol. 8(1):e1000294][3].  In simulation experiments, 30% of synthetic associations were caused by the presence of rare causative variants and furthermore, the strength of the association with common variants also increased if the number of rare causative variants increased. “

Figure from Dr. Moorthie’s article showing the problem of “finding one in many”.

Indeed, other examples of such issues concerning gene variant association studies occur with other common diseases such as neurologic diseases and obesity, where it has been difficult to clearly and definitively associate any variant with prediction of risk.

For example, Nuytemans et al.[4] used exome sequencing to find variants in the vacuolar protein sorting 35 (VPS35) and eukaryotic translation initiation factor 4 gamma 1 (EIF4G1) genes, two genes causally linked to Parkinson’s Disease (PD).  Although they identified novel VPS35 variants, none of these variants could be correlated to higher risk of PD.   One EIF4G1 variant seemed to be a strong Parkinson’s Disease risk factor; however, there was “no evidence for an overall contribution of genetic variability in VPS35 or EIF4G1 to PD development”.

These negative results may have relevance, as companies such as 23andme (www.23andme.com) claim to be able to test for Parkinson’s predisposition.  To see a description of the LRRK2 mutational analysis which they use to determine risk for the disease, please see the following link: https://www.23andme.com/health/Parkinsons-Disease/. This company and others like it have been the subjects of posts on this site (Personalized Medicine: Clinical Aspiration of Microarrays)

However, there seems to be more luck with strategies focused on analyzing intronic sequence rather than exome sequence. Jocelyn Kaiser’s Science article notes this in a brief interview with Harry Dietz of Johns Hopkins University, who suspects that “much of the missing heritability lies in gene-gene interactions”.  Oliver Harismendy, Kelly Frazer and colleagues’ recent publication in Genome Biology (http://genomebiology.com/content/11/11/R118) supports this notion[5].  The authors used targeted resequencing of two endocannabinoid metabolic enzyme genes (fatty-acid-amide hydrolase, FAAH, and monoglyceride lipase, MGLL) in 147 normal-weight and 142 extremely obese patients.

These patients were enrolled in the CRESCENDO trial, and the patients analyzed were of European descent. However, instead of just exome sequencing, the group resequenced exome AND intronic sequence, especially focusing on promoter regions.  They identified 1,448 single nucleotide variants, but using a statistical filter (called RareCover, which is referred to as a collapsing method) they found 4 variants in the promoters and intronic areas of the FAAH and MGLL genes which correlated to body mass index.  It should be noted that anandamide, a substrate for FAAH, is elevated in obese patients. The authors did note some issues, though, mentioning that “some other loci, more weakly or inconsistently associated in the original GWASs, were not replicated in our samples, which is not too surprising given the sample size of our cohort is inadequate to replicate modest associations”.
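The collapsing idea behind filters like RareCover can be illustrated with a basic burden test (a simplified stand-in: RareCover additionally optimizes over subsets of variants, which this sketch does not):

```python
from scipy import stats

def collapsed_burden_test(case_genotypes, control_genotypes):
    """Collapse rare variants in a locus: a subject is a 'carrier' if it has
    >= 1 rare allele at any site, then Fisher-test carrier counts in cases
    versus controls (the core idea of collapsing methods)."""
    def carriers(genos):  # genos: per-subject lists of rare-allele counts
        return sum(any(g > 0 for g in subject) for subject in genos)
    a = carriers(case_genotypes)
    b = len(case_genotypes) - a
    c = carriers(control_genotypes)
    d = len(control_genotypes) - c
    odds, p = stats.fisher_exact([[a, b], [c, d]])
    return odds, p
```

Collapsing gains power precisely in the regime this section describes: many distinct rare variants in one gene, each too infrequent to test on its own.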

PLEASE WATCH VIDEO on the National Heart, Lung and Blood Institute Exome Sequencing Project

https://www.youtube.com/watch?v=-Qr5ahk1HEI

REFERENCES

http://www.phgfoundation.org/news/5164/  PHG Foundation

1.            Kaiser J: Human genetics. Genetic influences on disease remain hidden. Science 2012, 338(6110):1016-1017.

2.            Tennessen JA, Bigham AW, O’Connor TD, Fu W, Kenny EE, Gravel S, McGee S, Do R, Liu X, Jun G et al: Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science 2012, 337(6090):64-69.

3.            Dickson SP, Wang K, Krantz I, Hakonarson H, Goldstein DB: Rare variants create synthetic genome-wide associations. PLoS biology 2010, 8(1):e1000294.

4.            Nuytemans K, Bademci G, Inchausti V, Dressen A, Kinnamon DD, Mehta A, Wang L, Zuchner S, Beecham GW, Martin ER et al: Whole exome sequencing of rare variants in EIF4G1 and VPS35 in Parkinson disease. Neurology 2013, 80(11):982-989.

5.            Harismendy O, Bansal V, Bhatia G, Nakano M, Scott M, Wang X, Dib C, Turlotte E, Sipe JC, Murray SS et al: Population sequencing of two endocannabinoid metabolic genes identifies rare and common regulatory variants associated with extreme obesity and metabolite level. Genome biology 2010, 11(11):R118.

Other posts on this site related to Genomics include:

Cancer Biology and Genomics for Disease Diagnosis

Diagnosis of Cardiovascular Disease, Treatment and Prevention: Current & Predicted Cost of Care and the Promise of Individualized Medicine Using Clinical Decision Support Systems

Ethical Concerns in Personalized Medicine: BRCA1/2 Testing in Minors and Communication of Breast Cancer Risk

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013

Genomics-based cure for diabetes on-the-way

Personalized Medicine: Clinical Aspiration of Microarrays

Late Onset of Alzheimer’s Disease and One-carbon Metabolism

Genetics of Disease: More Complex is How to Creating New Drugs

Genetics of Conduction Disease: Atrioventricular (AV) Conduction Disease (block): Gene Mutations – Transcription, Excitability, and Energy Homeostasis

Centers of Excellence in Genomic Sciences (CEGS): NHGRI to Fund New CEGS on the Brain: Mental Disorders and the Nervous System

Cancer Genomic Precision Therapy: Digitized Tumor’s Genome (WGSA) Compared with Genome-native Germ Line: Flash-frozen specimen and Formalin-fixed paraffin-embedded Specimen Needed

Mitochondrial Metabolism and Cardiac Function

Pancreatic Cancer: Genetics, Genomics and Immunotherapy

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing

Quantum Biology And Computational Medicine

Personalized Cardiovascular Genetic Medicine at Partners HealthCare and Harvard Medical School


LEADERS in Genome Sequencing of Genetic Mutations for Therapeutic Drug Selection in Cancer Personalized Treatment: Part 2

Consumer Market for Personal DNA Sequencing: Part 4

Personalized Medicine: An Institute Profile – Coriell Institute for Medical Research: Part 3

Whole-Genome Sequencing Data will be Stored in Coriell’s Spin off For-Profit Entity

 

DNA Nanotechnology

Author: Tilda Barliya PhD

The fields of DNA and RNA nanotechnology are considered among the most dynamic research areas in drug delivery and molecular medicine. Both DNA and RNA have a wide range of medical applications, including drug delivery, genetic immunization, metabolite and nucleic acid detection, gene regulation, siRNA delivery for cancer treatment (I), and even analytical and therapeutic applications.

Seeman (6,7) pioneered the concept 30 years ago of using DNA as a material for creating nanostructures; this has led to an explosion of knowledge in the now well-established field of DNA nanotechnology. The unique properties (in terms of free energy, folding, noncanonical base-pairing, base-stacking, and in vivo transcription and processing) that distinguish RNA from DNA provide sufficient rationale to regard RNA nanotechnology as its own technological discipline. Herein, we will discuss the advantages of DNA nanotechnology and its use in medicine.

So what is the rationale for using DNA nanotechnology (3)?

  • Genetic studies – its application in various biological fields like biomedicine, cancer research, medical devices and genetic engineering.
  • Its unique properties of structural stability, programmability of sequences, and predictable self-assembly.

DNA origami

Structures made from DNA using the DNA-origami method (Rothemund, 2006)

Structural DNA nanotechnology rests on three pillars: [1] Hybridization; [2] Stably branched DNA; and [3] Convenient synthesis of designed sequences.

Hybridization

The self-association (self-assembly) of complementary nucleic acid molecules, or parts of molecules, is implicit in all aspects of structural DNA nanotechnology. Individual motifs are formed by the hybridization of strands designed to produce particular topological species. A key aspect of hybridization is the use of sticky-ended cohesion to combine pieces of linear duplex DNA; this has been a fundamental component of genetic engineering for over 35 years (7). Not only is hybridization critical to the formation of structure, but it is deeply involved in almost all the sequence-dependent nanomechanical devices that have been constructed, and it is central to many attempts to build structural motifs in a sequential fashion (7,8).
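Sticky-ended cohesion reduces to reverse-complementarity of single-stranded overhangs, which a short sketch makes concrete:

```python
# Watson-Crick complement table for DNA bases.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(s):
    """Complement each base, then reverse to restore 5'->3' orientation."""
    return s.translate(COMPLEMENT)[::-1]

def sticky_ends_cohere(overhang_a, overhang_b):
    """Two single-stranded overhangs cohere when one is the reverse
    complement of the other -- the basis of sticky-ended assembly."""
    return overhang_a == reverse_complement(overhang_b)
```

For example, the EcoRI overhang AATT is its own reverse complement, which is why EcoRI-cut fragments religate with each other so readily.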

Stably Branched DNA

Branched DNA molecules are central to DNA nanotechnology. It is the combination of in vitro hybridization and synthetic branched DNA that makes DNA usable as a construction material. Branched DNA species are thought to be intermediates in genetic recombination (e.g., Holliday junctions).

Convenient Synthesis of Designed Sequences

Biologically derived branched DNA molecules, such as Holliday junctions, are inherently unstable because they exhibit sequence symmetry; i.e., the four strands actually consist of two pairs of strands with the same sequence. This symmetry enables an isomerization known as branch migration, which allows the branch point to relocate. Early DNA nanotechnology therefore entailed sequence design that minimized sequence symmetry in every way possible.
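The symmetry-minimization idea can be sketched as a simple design check: reject any candidate sequence in which a subsequence of some critical length occurs more than once. The window length `k` here is an assumed design parameter, not a value from the literature.

```python
# Sequence-symmetry check: a repeated length-k window is a symmetry element
# that could permit branch migration or unwanted pairings.

def has_repeated_subsequence(seq: str, k: int) -> bool:
    """True if any length-k window occurs more than once in seq."""
    seen = set()
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in seen:
            return True
        seen.add(window)
    return False

print(has_repeated_subsequence("ACGTTGCA", 4))  # False: all 4-mers are unique
print(has_repeated_subsequence("ACGTACGT", 4))  # True: "ACGT" occurs twice
```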

One of the most remarkable innovations in structural DNA nanotechnology in recent years is DNA origami, invented in 2006 by Paul Rothemund (1) (see the figure above). DNA origami uses the genome of a virus together with a large number of shorter DNA strands to create numerous DNA-based structures. The shorter DNA strands force the long viral DNA to fold into a pattern defined by the interactions between the long and short strands (1,2).

Rothemund believes that one application of patterned DNA origami would be the creation of a ‘nanobreadboard’ to which diverse components could be added. The attachment of proteins, for example, might allow novel biological experiments aimed at modelling complex protein assemblies and examining the effects of spatial organization, whereas molecular electronic or plasmonic circuits might be created by attaching nanowires, carbon nanotubes or gold nanoparticles (1).

DNA nanotechnology and Biological Application

The physical and chemical properties of nanomaterials such as polymers, semiconductors, and metals present diverse advantages for various in vivo applications (3,9 ). For example:

  • Therapeutics – In cancer for example, nanosystems that are designed from biological materials such as DNA and RNA are ‘programmed’ to be able to evade most, if not all, drug-resistance mechanisms. Based on these properties, most nanosystems are able to deliver high concentrations of drugs to cancer cells while curtailing damage to surrounding healthy cells (2b, 3, 9, 11, 15).
  • Biosensors – capable of picking up very specific biological signals and converting them into electrical outputs that can be analyzed for identification. Biosensors are efficient as they have a high ratio of surface area to volume as well as adjustable electronic, magnetic, optical, and biological properties (3, 12, 13, 14).
  • Amin and colleagues have developed a biotinylated-DNA thin-film-coated fiber optic reflectance biosensor for the detection of streptavidin aerosols. DNA thin films were prepared by dropping DNA samples onto a polymer optical fiber, which responded quickly to the specific biomolecules in the atmosphere. This approach of coating optical fibers with DNA nanostructures could be very useful in the future for detecting atmospheric bio-aerosols with high sensitivity and specificity (3, 14).
  • Computing – another aspect uses the programmability of DNA to create devices capable of computing. Here the structure of the assembled DNA is not of primary interest; instead, control of the DNA sequence is used to implement computational algorithms, such as artificial neural networks. Qian et al., for example, built on the richness of DNA computing and strand-displacement circuitry to show how molecular systems can exhibit autonomous brain-like behaviours. Using a simple DNA gate architecture that allows experimental scale-up of multilayer digital circuits, they systematically transformed arbitrary linear threshold circuits (an artificial neural network model) into DNA strand-displacement cascades that function as small neural networks (3, 10).
  • Additional applications – third-generation DNA sequencers (II), biomimetic systems, energy transfer and photonics, etc.
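The abstract computation that Qian et al. compile into strand-displacement chemistry is the linear threshold gate. A minimal software sketch, with illustrative weights rather than values from the paper, looks like this:

```python
# A linear threshold gate: the neuron-like primitive that Qian et al.
# implement with DNA strand-displacement cascades.

def threshold_gate(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A 3-input majority gate: fires when at least two inputs are on.
print(threshold_gate([1, 1, 0], [1, 1, 1], 2))  # 1
print(threshold_gate([1, 0, 0], [1, 1, 1], 2))  # 0
```

Networks of such gates, wired in layers, give the "small neural networks" that the strand-displacement cascades realize chemically.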

Summary:

DNA nanotechnology is an evolving field that affects medicine, computation, materials science, and physics. DNA nanostructures offer unprecedented control over shape, size, mechanical flexibility and anisotropic surface modification. Proper control over these aspects can increase circulation times by orders of magnitude, as seen for long-circulating particles such as erythrocytes and the various pathogenic particles that have evolved to overcome this issue. The use of DNA in DNA/protein-based matrices makes these structures inherently amenable to structural tunability. More research in this direction will certainly follow, making DNA a promising biomaterial in tissue engineering. Future development of novel ways in which DNA can take a much more comprehensive role in biological computation and data storage is also envisaged.

REFERENCES

1. Paul W. K. Rothemund. Folding DNA to create nanoscale shapes and patterns. NATURE 2006 (March 16)|Vol 440: 297-302. http://www.nature.com/nature/journal/v440/n7082/full/nature04586.html

http://www.dna.caltech.edu/Papers/DNAorigami-nature.pdf

2. Andre V. Pinheiro, Dongran Han, William M. Shih and Hao Yan. Challenges and opportunities for structural DNA nanotechnology. Nature Nanotechnology 2011 Dec | VOL 6: 763-772.  http://www.nature.com/nnano/journal/v6/n12/pdf/nnano.2011.187.pdf

2b. Thi Huyen La, Thi Thu Thuy Nguyen, Van Phuc Pham, Thi Minh Huyen Nguyen and Quang Huan Le. Using DNA nanotechnology to produce a drug delivery system. Adv. Nat. Sci.: Nanosci. Nanotechnol. 4 (2013) 015002 (7pp). http://iopscience.iop.org/2043-6262/4/1/015002  http://iopscience.iop.org/2043-6262/4/1/015002/pdf/2043-6262_4_1_015002.pdf

3. Muniza Zahid, Byeonghoon Kim, Rafaqat Hussain, Rashid Amin and Sung H Park. DNA nanotechnology: a future perspective. Nanoscale Research Letters 2013, 8:119. http://www.nanoscalereslett.com/content/8/1/119

4. Cientifica Ltd 2007. The Nanotech Revolution in Drug Delivery. http://www.cientifica.com/WhitePapers/054_Drug%20Delivery%20White%20Paper.pdf

5. Gemma Campbell. Nanotechnology and its implications for the health of the E.U citizen: Diagnostics, drug discovery and drug delivery. Institute of Nanotechnology and Nanoforum. http://www.nano.org.uk/nanomednet/images/stories/Reports/diagnostics,%20drug%20discovery%20and%20drug%20delivery.pdf

6. Peixuan Guo, Haque F., Brent Hallahan, Randall Reif and Hui Li. Uniqueness, Advantages, Challenges, Solutions, and Perspectives in Therapeutics Applying RNA Nanotechnology. Nucleic Acid Ther. 2012 August; 22(4): 226–245. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3426230/

7. Seeman NC. Nanomaterials based on DNA. Annu. Rev. Biochem. 2010;79:65–87. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3454582/

8. Yin P, Choi HMT, Calvert CR, Pierce NA. Programming biomolecular self-assembly pathways. Nature.2008;451:318–323.  http://www.ncbi.nlm.nih.gov/pubmed/18202654

9. Yan Lee P, Wong KY: Nanomedicine: a new frontier in cancer therapeutics. Curr Drug Deliv 2011, 8(3):245-253. http://www.eurekaselect.com/73728/article

10. Qian, L.L., Winfree, E., and Bruck, J. Neural Network Computation with DNA Strand Displacement Cascades. Nature 2011 475, 368-372.  http://www.nature.com/nature/journal/v475/n7356/full/nature10262.html

11. Acharya S, Dilnawaz F, Sahoo SK: Targeted epidermal growth factor receptor nanoparticle bioconjugates for breast cancer therapy. Biomaterials 2009, 30(29):5737-5750. http://www.sciencedirect.com/science/article/pii/S0142961209006929

12. Bohunicky B, Mousa SA: Biosensors: the new wave in cancer diagnosis. Nanotechnology, Science and Applications 2011, 4:1-10. http://www.dovepress.com/biosensors-the-new-wave-in-cancer-diagnosis-peer-reviewed-article-NSA-recommendation1

13. Sanvicens N, Mannelli I, Salvador J, Valera E, Marco M: Biosensors for pharmaceuticals based on novel technology. Trends Anal Chem 2011, 30:541-553. http://www.sciencedirect.com/science/article/pii/S016599361100015X

14. Amin R, Kulkarni A, Kim T, Park SH: DNA thin film coated optical fiber biosensor. Curr Appl Phys 2011, 12(3):841-845. http://www.sciencedirect.com/science/article/pii/S1567173911005888

15. Choi, Y.; Baker, J. R. Targeting Cancer Cells with DNA Assembled Dendrimers: A Mix and Match Strategy for Cancer. Cell Cycle 2005, 4, 669–671. http://www.ncbi.nlm.nih.gov/pubmed/15846063  http://www.landesbioscience.com/journals/cc/article/1684/

Other related articles on this Open Access Online Scientific Journal include the following

I. By: Ziv Raviv PhD. The Development of siRNA-Based Therapies for Cancer. http://pharmaceuticalintelligence.com/2013/05/09/the-development-of-sirna-based-therapies-for-cancer/

II. By: Tilda Barliya PhD. Nanotechnology, personalized medicine and DNA sequencing. http://pharmaceuticalintelligence.com/2013/01/09/nanotechnology-personalized-medicine-and-dna-sequencing/

III. By: Larry Bernstein MD FACP. DNA Sequencing Technology. http://pharmaceuticalintelligence.com/2013/03/03/dna-sequencing-technology/

IV. By: Venkat S Karra PhD. Measuring glucose without needle pricks: nano-sized biosensors made the test easy. http://pharmaceuticalintelligence.com/2012/09/04/measuring-glucose-without-needle-pricks-nano-sized-biosensors-made-the-test-easy/

Diagnostics and Biomarkers: Novel Genomics Industry Trends vs Present Market Conditions and Historical Scientific Leaders Memoirs

Larry H Bernstein, MD, FCAP, Author and Curator

This article has two parts:

  • Part 1: Novel Genomics Industry Trends in Diagnostics and Biomarkers vs Present Market Transient Conditions

and

  • Part 2: Historical Scientific Leaders Memoirs

 

Part 1: Novel Genomics Industry Trends in Diagnostics and Biomarkers vs Present Market Transient Conditions

 

Based on “Forging a path from companion diagnostics to holistic decision support”, L.E.K. Executive Insights, 2013;14(12). http://www.LEK.com

A companion diagnostic and its companion therapy are defined here as a method for identifying

  • LIKELY responders to therapies that are specific for patients with a specific molecular profile.

The consequence of this definition is that the diagnostic, applied to specific patient types, gives access to

  • novel therapies that may otherwise not be approved or reimbursed in other, perhaps “similar,” patients
  • who lack a matching identification of the key identifier(s) needed to permit that therapy,
  • and who thus have a poor expected response.

The concept is new because:

(1) The diagnoses may be closely related by classical criteria, but at the same time they are
not alike with respect to efficacy of treatment with a standard therapy.
(2) The companion diagnostic is restricted to a targeted, drug-specific question
without regard to other clinical issues.
(3) The efficacy issue it clarifies relies on deep molecular/metabolic insight that is available only through
emergent genomic/proteomic analysis, which has become practical at rapidly declining cost.

The limiting example given is HER2 testing for the use of Herceptin, and therapy for non-candidates (HER2-negative patients).
The problem is that the current format is a “one test/one drug” match, whereas decision support may require a combination of

  • validated biomarkers obtained on a small biopsy sample (technically manageable), sometimes with confusing results.

While HER2-negative patients are more likely to be pre-menopausal with a more aggressive tumor than postmenopausal patients,

  • the HER2-negative designation does not preclude treatment with Herceptin.

So Herceptin would be given in combination, but with what other drug in a non-candidate?

The point that L.E.K. makes is that, beyond providing highly validated biomarkers linked to approved therapies, it is necessary to pursue more holistic decision support tests that interrogate multiple biomarkers (panels of companion diagnostic markers) and discover signatures for treatments, used together with a broad range of information, such as:

  • traditional tests,
  • imaging,
  • clinical trials,
  • outcomes data,
  • EMR data,
  • reimbursement and coverage data.

A comprehensive solution of this nature appears to be some distance from realization. However, is this the direction that will lead to tomorrow’s treatment decision support approaches?
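The contrast between a “one test/one drug” companion diagnostic and a holistic multi-marker panel can be sketched as follows. The marker names and decision rules are purely illustrative, not clinical guidance.

```python
# Illustrative contrast: a single companion-diagnostic rule vs. a panel
# that aggregates several markers into a list of candidate therapy classes.

def herceptin_candidate(her2_positive: bool) -> bool:
    """One test/one drug: HER2 overexpression gates Herceptin selection."""
    return her2_positive

def panel_decision(markers: dict) -> list:
    """Holistic support: interrogate a marker panel (hypothetical rules)."""
    options = []
    if markers.get("HER2"):
        options.append("HER2-targeted therapy")
    if markers.get("ER") or markers.get("PR"):
        options.append("endocrine therapy")
    if not options:
        options.append("chemotherapy regimens (consider trial enrollment)")
    return options

print(panel_decision({"HER2": False, "ER": True, "PR": False}))
# → ['endocrine therapy']
```

The panel version returns a set of options rather than a yes/no answer, which is the structural difference the text is pointing at; a real system would also weigh imaging, outcomes, and coverage data.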

 Surveying the Decision Support Testing Landscape

As a starting point, L.E.K. characterized the landscape of available tests in the U.S. that inform treatment decisions, compiled from ~50 leading diagnostics companies operating in the U.S. between 2004 and 2011. L.E.K. identified more than 200 decision support tests, classified by test purpose and, more specifically, by whether the tests inform treatment decisions for a single drug/class (e.g., companion diagnostics) or more holistic treatment decisions across multiple drugs/classes (i.e., multi-agent response tests).

Treatment Decision Support Tests

Companion Diagnostics (single drug/class) – predict response/safety or guide dosing of a single drug or class:

  • HercepTest (Dako) – determines HER2 protein overexpression for Herceptin treatment selection
  • Vysis ALK Break Apart FISH (Abbott Labs) – predicts NSCLC patient response to Xalkori

Other Decision Support (multiple drugs/classes) – provide prognostic and predictive information on the benefit of treatment:

  • Oncotype Dx (Genomic Health, Inc.) – predicts both recurrence of breast cancer and potential patient benefit from chemotherapy regimens
  • PML-RARα (Clarient, Inc.) – predicts response to all-trans retinoic acid (ATRA) and other chemotherapy agents
  • TRUGENE (Siemens) – measures resistance to multiple HIV-1 anti-retroviral agents

Multi-agent Response – inform targeted therapy class selection by interrogating a panel of biomarkers:

  • Target Now (Caris Life Sciences) – examines a tumor’s molecular profile to tailor treatment options
  • ResponseDX: Lung (Response Genetics, Inc.) – examines multiple biomarkers to guide therapeutic treatment decisions for NSCLC patients

Source: L.E.K. Analysis

Includes IVD and LDT tests from

  1. top-15 IVD test suppliers,
  2. top-four large reference labs,
  3. top-five AP labs, and
  4. top-20 specialty reference labs.

For descriptive purposes only, may not map to exact regulatory labeling

Most tests are companion diagnostics and other decision support tests that provide guidance on

  • single drug/class therapy decisions.

However, holistic decision support tests (e.g., multi-agent response) are growing fastest, at a 56% CAGR.
The emergence of multi-agent response tests suggests diagnostics companies already see the need to aggregate individual tests (e.g., companion diagnostics) into panels of appropriate markers addressing a given clinical decision need. L.E.K. believes this trend is likely to continue as

  • increasing numbers of biomarkers become validated for diseases, and
  • multiplexing tools enabling the aggregation of multiple biomarker interrogations into a single test

are deployed in the clinic.
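The 56% CAGR quoted above follows the standard compound-annual-growth formula; the values below are illustrative, not from the L.E.K. dataset.

```python
# Compound annual growth rate: the geometric mean growth per year.

def cagr(begin_value: float, end_value: float, years: float) -> float:
    """CAGR as a fraction (0.56 == 56%)."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

# A category growing at 56% CAGR expands roughly 5.9x over four years:
print(round((1 + 0.56) ** 4, 1))        # 5.9
# And recovering the rate from endpoints:
print(round(cagr(10, 59.2, 4), 2))      # 0.56
```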

Personalized Medicine Partnerships

L.E.K. also completed an assessment of publicly available personalized medicine partnership activity from 2009-2011 for ~150 leading organizations operating in the U.S. to look at broader decision support trends and emergence of more holistic solutions beyond diagnostic tests.

A survey of partnership deals was conducted for:

  • top-10 academic medical centers research institutions,
  • top-25 biopharma,
  • top-four healthcare IT companies,
  • top-three healthcare imaging companies,
  • top-20 IVD manufacturers,
  • top-20 laboratories,
  • top-10 payers/PBMs,
  • top-15 personalized healthcare companies,
  • top-10 regulatory/guideline entities, and
  • top-20 tools vendors for the period of 01/01/2009 – 12/31/2011.
    Source: Company websites, GenomeWeb, L.E.K. analysis

Across the sample we identified 189 publicly announced partnerships of which ~65% focused on more traditional areas (biomarker discovery, companion diagnostics and targeted therapies). However, a significant portion (~30%) included elements geared towards creating more holistic decision support models.

Partnerships categorized as holistic decision support by L.E.K. were focused on

  • mining large patient datasets (e.g., from payers or providers),
  • molecular profiling (e.g., deploying next-generation sequencing),
  • creating information technology (IT) infrastructure needed to enable holistic decision support models and
  • integrating various datasets to create richer decision support solutions.

Interestingly, holistic decision support partnerships often included stakeholders outside of biopharma and diagnostics such as

  • research tools,
  • payers/PBMs,
  • healthcare IT companies as well as
  • emerging personalized healthcare (PHC) companies (e.g., Knome, Foundation Medicine and 23andMe).

This finding suggests that these new stakeholders will be increasingly important in influencing care decisions going forward.

Holistic Treatment Decision Support

Each entry below pairs the technology provider partner with the stakeholder deploying the solution, followed by the holistic decision support activity.

Molecular Profiling

  • Life Technologies + TGen/US Oncology – sequencing of triple-negative breast cancer patients to identify potential treatment strategies
  • Foundation Medicine + Novartis – deployment of a cancer genomics analysis platform to support Novartis clinical research efforts in predictive genomics
  • Clarient, Inc. (GE Healthcare) + Acorn Research – biomarker profiling of patients within Acorn’s network of providers to support clinical research efforts
  • GenomeQuest + Beth Israel Deaconess Medical Center – whole genome analysis to guide patient management

Outcomes Data Mining

  • AstraZeneca + WellPoint – evaluate comparative effectiveness of selected marketed therapies
  • 23andMe + NIH – leverage information linking drug response and CYP2C9/CYP2C19 variation
  • Pfizer + Medco – leverage patient genotype, phenotype and outcome data for treatment decisions and targeted therapeutics

Healthcare IT Infrastructure

  • IBM + WellPoint – deploy IBM’s Watson-based solution for evidence-based healthcare decision-making support
  • Oracle + Moffitt Cancer Center – deploy Oracle’s informatics platform to store and manage patient medical information

Data Integration

  • Siemens Diagnostics + Susquehanna Health – integration of imaging and laboratory diagnostics
  • Cernostics + Geisinger Health – integration of advanced tissue diagnostics, digital pathology, an annotated biorepository and EMR data to create next-generation treatment decision support solutions
  • CardioDx + GE Healthcare – integration of genomics with imaging data in cardiovascular disease

Implications

L.E.K. believes the key debate won’t center on which models and companies will prevail; the industry is already moving along the continuum toward a truly holistic capability.
The mainstay of personalized medicine today will become integrated with, and enhanced by, other data.

The companies that succeed will be those able to capture vast amounts of information

  • and synthesize it for personalized care.

Holistic models will be powered by increasingly larger datasets and sophisticated decision-making algorithms.
This will require the participation of an increasingly broad range of participants to provide the

  • science, technologies, infrastructure and tools necessary for deployment.

There are a number of questions posed by this study, but only some are of interest to this discussion:

Group A.    Pharmaceuticals and Devices

  •  How will holistic decision support impact the landscape ?
    (e.g., treatment /testing algorithms, decision making, clinical trials)

Group B.     Diagnostics and   Decision Support

  •   What components will be required to build out holistic solutions?

– Testing technologies

– Information (e.g., associations, outcomes, trial databases, records)

– IT infrastructure for data integration and management, simulation and reporting

  •  How can various components be brought together to build seamless holistic  decision support solutions?

Group C.      Providers and Payers

  •  In which areas should models be deployed over time?
  • Where are clinical and economic arguments  most compelling?

Part 2: Historical Scientific Leaders Memoirs – Realtime Clinical Expert Support

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman
of the Yale University Applied Mathematics Program,

a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that

  • provides empirical medical reference and
  • suggests quantitative diagnostics options.

The current design of the Electronic Medical Record (EMR) is a linear presentation of portions of the record

  • by services
  • by diagnostic method, and
  • by date, to cite examples.

This allows perusal through a graphical user interface (GUI) that partitions the information or necessary reports

  • in a workstation, accessed by keying on icons.

This requires that the medical practitioner finds the

  • history,
  • medications,
  • laboratory reports,
  • cardiac imaging and
  • EKGs, and
  • radiology in different workspaces.

The introduction of a DASHBOARD has allowed a presentation of

  • drug reactions
  • allergies
  • primary and secondary diagnoses, and
  • critical information

about any patient to the care giver needing access to the record.

The advantage of this innovation is obvious.  The startup problem is what information is presented and

  • how it is displayed, which is a source of variability and a key to its success.

We are proposing an innovation that supersedes the main design elements of a DASHBOARD and utilizes

  • the conjoined syndromic features of the disparate data elements.

So the important determinant of the success of this endeavor is that

  • it facilitates both the workflow and the decision-making process with a reduction of medical error.

Continuing work is in progress to extend these capabilities with model datasets and sufficient data, because

  • the extraction of data from disparate sources will, in the long run, further improve this process.

Consider, for instance, the finding of ST depression on EKG coincident with an elevated cardiac biomarker (troponin), particularly in the absence of substantially reduced renal function. The conversion of hematology-based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.
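The conjoined-syndromic example above can be expressed as a toy rule that flags a possible acute coronary syndrome only when ECG and biomarker evidence coincide and reduced renal function is not confounding the troponin. The thresholds are placeholders, not validated cutoffs.

```python
# Toy conjoined-syndromic rule: combine ECG, biomarker, and renal context.
# Cutoff values are illustrative placeholders only.

def flag_possible_acs(st_depression: bool,
                      troponin_ng_ml: float,
                      egfr_ml_min: float) -> bool:
    TROPONIN_CUTOFF = 0.04   # placeholder assay cutoff
    EGFR_FLOOR = 30.0        # below this, renal failure may elevate troponin
    return (st_depression
            and troponin_ng_ml > TROPONIN_CUTOFF
            and egfr_ml_min >= EGFR_FLOOR)

print(flag_possible_acs(True, 0.10, 75.0))   # True
print(flag_possible_acs(True, 0.10, 20.0))   # False: renal confounder
print(flag_possible_acs(False, 0.10, 75.0))  # False: no ECG finding
```

The point of the sketch is the conjunction: no single data element triggers the flag, which is what distinguishes a syndromic rule from a simple threshold alarm.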

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates

  • the review of a peripheral smear.

While the hemogram has undergone progressive modification of the measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the

  • production
  • release
  • or suppression

of the formed elements from the blood-forming organ into the circulation. In the hemogram one can view

  • data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of

  • size
  • density, and
  • concentration,

resulting in many characteristic features of classification. In the diagnosis of hematological disorders

  • proliferation of marrow precursors, the
  • domination of a cell line, and features of
  • suppression of hematopoiesis

provide a two dimensional model.  Other dimensions are created by considering

  • the maturity of the circulating cells.

The application of rules-based, automated problem solving should provide a valid approach to

  • the classification and interpretation of the data used to determine a knowledge-based clinical opinion.

The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been a part of traditional clinical problem solving.

As the complexity of statistical models has increased

  • the dependencies have become less clear to the individual.

Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can substantially interpret the data at hand and

  • convert it in real time to a “knowledge-based opinion”

could improve clinical decision-making by incorporating

  • multiple complex clinical features as well as duration of onset into the model.

An example of a difficult area for clinical problem solving is the diagnosis of SIRS and associated sepsis. SIRS (and associated sepsis) is a costly diagnosis in hospitalized patients. Failure to diagnose sepsis in a timely manner creates a potential financial and safety hazard. The early diagnosis of SIRS/sepsis is made by the clinician’s application of defined criteria:

  • temperature
  • heart rate
  • respiratory rate and
  • WBC count

The application of those clinical criteria, however, defines the condition after it has developed and

  • has not provided a reliable method for the early diagnosis of SIRS.
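The defined criteria referenced above correspond to the conventional "two or more of four" SIRS rule (temperature, heart rate, respiratory rate, and white-cell count with band forms), which codes naturally as:

```python
# Conventional SIRS rule: two or more of four criteria met.

def sirs_criteria_met(temp_c, hr_bpm, rr_per_min, wbc_k_per_ul, bands_pct=0.0):
    points = 0
    if temp_c > 38.0 or temp_c < 36.0:        # fever or hypothermia
        points += 1
    if hr_bpm > 90:                            # tachycardia
        points += 1
    if rr_per_min > 20:                        # tachypnea
        points += 1
    if wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0 or bands_pct > 10.0:
        points += 1                            # leukocytosis/leukopenia/left shift
    return points >= 2

print(sirs_criteria_met(39.1, 110, 24, 14.5))  # True
print(sirs_criteria_met(37.0, 80, 16, 8.0))    # False
```

As the text notes, this rule identifies the condition only after it has developed; the proteomic markers listed next are candidates for moving the detection earlier.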

The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including

  • transthyretin
  • C-reactive protein
  • procalcitonin
  • mean arterial pressure

Immature granulocyte (IG) measurement has been proposed as a

  • readily available indicator of the presence of granulocyte precursors (left shift).

The use of such markers, obtained by automated systems

  • in conjunction with innovative statistical modeling, provides
  • a promising approach to enhance workflow and decision making.

Such a system utilizes the conjoined syndromic features of

  • disparate data elements with an anticipated reduction of medical error.

How we frame our expectations is so important that it determines

  • the data we collect to examine the process.

In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.
This has meaning for

  • hospital operations,
  • for nonhospital laboratory operations,
  • for companies in the diagnostic business, and
  • for planning of health systems.

The problem was stated by L.L. Weed in “Idols of the Mind” (Dec 13, 2006): “a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require.”  HIT use has been

  • focused on information retrieval, leaving
  • the unaided mind burdened with information processing.

We deal with problems in the interpretation of data presented to the physician, and with how, through better

  • design of the software that presents these data, the situation could be improved.

The computer architecture that the physician uses to view the results is more often than not presented

  • as the designer would prefer, and not as the end-user would like.

In order to optimize the interface for the physician, the system would have a “front-to-back” design, with
the call-up for any patient ideally consisting of a dashboard design that presents the crucial information

  • that the physician would likely act on in an easily accessible manner.

The key point is that each item used has to be closely related to a corresponding criterion needed for a decision.

Feature Extraction.

This further breakdown in the modern era is determined by genetically characteristic gene sequences
that are transcribed into what we measure.  Eugene Rypka contributed greatly to clarifying the extraction
of features in a series of articles, which

  • set the groundwork for the methods used today in clinical microbiology.

The method he describes is termed S-clustering, and

  • will have a significant bearing on how we can view laboratory data.

He describes S-clustering as extracting features from endogenous data that

  • amplify or maximize structural information to create distinctive classes.

The method classifies by taking the number of features

  • with sufficient variety to map into a theoretic standard.

The mapping is done by

  • a truth table, and each variable is scaled to assign values for each message choice.

The number of messages and the number of choices forms an N-by-N table.  He points out that the message

  • choice in an antibody titer would be converted from 0, +, ++, +++ to 0, 1, 2, 3.

Even though there may be a large number of measured values, the variety is reduced

  • by this compression, even though there is risk of loss of information.
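Rypka's message-choice compression, as described above, can be sketched by rescaling each measured variable onto a small ordinal alphabet; the antibody-titer example from the text looks like this:

```python
# Message-choice compression: map raw titer readings (0, +, ++, +++)
# onto a small ordinal scale (0, 1, 2, 3) before classification.

TITER_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}

def encode_profile(raw_values):
    """Map a sequence of raw titer readings onto the ordinal scale."""
    return [TITER_SCALE[v] for v in raw_values]

print(encode_profile(["0", "++", "+++", "+"]))   # [0, 2, 3, 1]
```

The compression trades some information for a drastic reduction in variety, which is exactly the risk-versus-tractability trade noted in the text.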

Yet the real issue is how a combination of variables falls into a table with meaningful information. We are concerned with accurate assignment into uniquely variable groups by information in test relationships. One determines the effectiveness of each variable by

  • its contribution to information gain in the system.

The reference or null set is the class having no information.  Uncertainty in assigning to a classification is

  • only relieved by providing sufficient information.

The possibility of realizing a good model for approximating the effects of factors supported by data used

  • for inference owes much to the discovery of the Kullback-Leibler distance, or “information”, and Akaike
  • found a simple relationship between K-L information and Fisher’s maximized log-likelihood function.

In the last 60 years the application of entropy comparable to

  • the entropy of physics, information, noise, and signal processing,
  • has been fully developed by Shannon, Kullback, and others, and has been integrated with modern statistics,
  • as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and work by Coifman.
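The Kullback-Leibler "information" mentioned above can be computed directly for two discrete distributions; it is zero only when the distributions agree, which is what makes it usable as a measure of information gain over a reference class.

```python
# Kullback-Leibler divergence D(P || Q) for discrete distributions, in nats.

import math

def kl_divergence(p, q):
    """sum p_i * log(p_i / q_i); assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

same = kl_divergence([0.5, 0.5], [0.5, 0.5])
diff = kl_divergence([0.9, 0.1], [0.5, 0.5])
print(round(same, 6), round(diff, 4))   # 0.0 0.3681
```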

Gil David et al. introduced AUTOMATED processing of the data available to the ordering physician, which

  • can be anticipated to have an enormous impact on the diagnosis and treatment of perhaps half of the top 20 most common
  • causes of hospital admission that carry a high cost and morbidity.

For example: anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome); pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia; multiple organ failure and hemodynamic shock; electrolyte/acid base balance disorders; acute and chronic liver disease; acute and chronic renal disease; diabetes mellitus; protein-energy malnutrition; acute respiratory distress of the newborn; acute coronary syndrome; congestive heart failure; disordered bone mineral metabolism; hemostatic disorders; leukemia and lymphoma; malabsorption syndromes; and cancer(s)[breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid].

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH (Chairman). Prealbumin in Nutritional Care Consensus Group.

Measurement of visceral protein status in assessing protein and energy malnutrition: standard of care. Nutrition 1995; 11:169-171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999; 72: 5-13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.; Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75:183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering 1994; 33(7):2170-2174.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

Realtime Clinical Expert Support and Validation System

We have developed a software system, the equivalent of an intelligent Electronic Health Records dashboard, that provides empirical medical references and suggests quantitative diagnostic options.

The primary purpose is to

  1. gather medical information,
  2. generate metrics,
  3. analyze them in realtime and
  4. provide a differential diagnosis, and
  5. meet the highest standard of accuracy.

The system builds a unique characterization of each patient and provides a list of other patients who share this profile, thereby drawing on the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community. The

  • main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
  • in which anomalous subprofiles are extracted and compared to potentially relevant cases.
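To make the profiling idea concrete, here is a minimal sketch of anomalous-subprofile matching. Everything in it is invented for illustration — the lab names, reference intervals, and thresholds are not from the system described above. Each patient is reduced to a vector of standardized lab values; components that deviate sharply form the anomalous subprofile; and stored cases are ranked by similarity on just those components.

```python
import math

# Hypothetical reference intervals (mean, sd) for a few common labs.
# These numbers are illustrative only, not clinical guidance.
REFERENCE = {"wbc": (7.0, 2.0), "lactate": (1.0, 0.5), "creatinine": (1.0, 0.3)}

def profile(labs):
    """Standardize raw lab values into a z-score vector (REFERENCE order)."""
    return [(labs[k] - m) / s for k, (m, s) in REFERENCE.items()]

def anomalous_indices(z, threshold=2.0):
    """Components deviating more than `threshold` standard deviations."""
    return [i for i, v in enumerate(z) if abs(v) > threshold]

def cosine(u, v):
    """Cosine similarity of two equal-length vectors; 0.0 if either is null."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def similar_cases(z, database, top_k=3):
    """Rank stored (case_id, z-vector) pairs by similarity restricted to
    the query's anomalous subprofile."""
    idx = anomalous_indices(z)
    if not idx:
        return []  # nothing anomalous to match on
    q = [z[i] for i in idx]
    scored = [(cid, cosine(q, [zc[i] for i in idx])) for cid, zc in database]
    return sorted(scored, key=lambda t: -t[1])[:top_k]
```

Running `similar_cases` on a query whose white-cell count and lactate are both far out of range surfaces stored cases with the same pattern first; a profile with no anomalous components returns no matches rather than a spurious ranking.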

As the model grows and its knowledge database is extended, the diagnoses and prognoses become more accurate and precise. We anticipate that implementing this diagnostic amplifier would result in

  • higher physician productivity at a time of great human resource limitations,
  • safer prescribing practices,
  • rapid identification of unusual patients,
  • better assignment of patients to observation, inpatient beds,
    intensive care, or referral to clinic, and
  • shortened ICU stays and hospital bed-days.

The main benefit is a real-time assessment, together with diagnostic options based on

  • comparable cases,
  • flags for risk and potential problems

as illustrated in the following case, acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.
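The grade of 0.61 can be read as a probability-like severity score rather than a binary call. As an illustration only — the features, weights, and bias below are invented, not the system's actual model — such a grade could come from a logistic combination of standardized clinical inputs:

```python
import math

def sirs_grade(features, weights, bias):
    """Map weighted clinical features through a logistic link to a 0-1
    severity grade. All coefficients here are illustrative, not clinical."""
    score = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# Invented example: z-scored vital signs and labs with made-up weights.
weights = {"temp_z": 0.8, "hr_z": 0.6, "wbc_z": 0.7, "resp_z": 0.5}
bias = -1.5
patient = {"temp_z": 1.2, "hr_z": 0.9, "wbc_z": 1.4, "resp_z": 0.4}
grade = sirs_grade(patient, weights, bias)  # a value strictly between 0 and 1
```

Because the logistic link maps any weighted sum into (0, 1), grades from successive blood draws are directly comparable, which is what makes week-over-week tracking of a patient's status meaningful.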

Graphical presentation of patient status

The patient was treated for SIRS, and the blood tests were repeated during the following week. The full combined record of our system's assessment of the patient, as derived from the further hematology tests, is illustrated below. The yellow line shows the diagnosis corresponding to the first blood test (also shown in the image above); the red line shows the next diagnosis, performed a week later.

Progression changes in patient ICU stay with SIRS

The chemistry of Herceptin (trastuzumab) is explained, with images, at

http://www.chm.bris.ac.uk/motm/herceptin/index_files/Page450.htm

 

REFERENCES

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief, January 2010. www.chrt.org

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.

Rudolph RA, Bernstein LH, Babb J: Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999; 72: 5-13.

Kaplan L.A.; Chapman J.F.; Bock J.L.; Santa Maria E.; Clejan S.; Huddleston D.J.; Reed R.G.; Bernstein L.H.; Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project.  Clin Chim Acta 2002; 326(8): 61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999; 72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002; 75:183-198.

Ronald Raphael Coifman and Mladen Victor Wickerhauser. Adapted Waveform Analysis as a Tool for Modeling, Feature Extraction, and Denoising. Optical Engineering 1994; 33(7):2170–2174.

R. Coifman and N. Saito. Constructions of local orthonormal bases for classification and regression. C. R. Acad. Sci. Paris, 319 Série I:191-196, 1994.

W Ruts, S De Deyne, E Ameel, W Vanpaemel, T Verbeemen, and G Storms. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004; 36(3):506-515.

S De Deyne, S Verheyen, E Ameel, W Vanpaemel, MJ Dry, W Voorspoels, and G Storms. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008; 40(4):1030-1048.

Landauer TK, Ross BH, Didner RS (1979). Processing visually presented single words: A reaction time analysis [Technical memorandum]. Murray Hill, NJ: Bell Laboratories.

Lewandowsky S (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993:2(5):433.

Nirenberg S, Pandarinath C. Retinal prosthetic strategy with the capacity to restore normal vision. Proc Natl Acad Sci USA 2012; 109(37):15012.

http://www.pnas.org/content/109/37/15012

 

Other related articles published in http://pharmaceuticalintelligence.com include the following:

 

  • The Automated Second Opinion Generator

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/

 

  • The electronic health record: How far we have travelled and where is journeys end

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

 

  • The potential contribution of informatics to healthcare is more than currently estimated.

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

 

  • Clinical Decision Support Systems for Management Decision Making of Cardiovascular Diseases

Justin Pearlman, MD, PhD, FACC and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/05/04/cardiovascular-diseases-decision-support-systems-for-disease-management-decision-making/

 

  • Demonstration of a diagnostic clinical laboratory neural network applied to three laboratory data conditioning problems

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/13/demonstration-of-a-diagnostic-clinical-laboratory-neural-network-agent-applied-to-three-laboratory-data-conditioning-problems/

 

  • CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2014/08/30/cracking-the-code-of-human-life-the-birth-of-bioinformatics-computational-genomics/

 

  • Genetics of conduction disease atrioventricular AV conduction disease block gene mutations transcription excitability and energy homeostasis

Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-and-energy-homeostasis/

 

  • Identification of biomarkers that are related to the actin cytoskeleton

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/12/10/identification-of-biomarkers-that-are-related-to-the-actin-cytoskeleton/

 

  • Regression: A richly textured method for comparison of predictor variables

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/

 

  • Diagnostic evaluation of SIRS by immature granulocytes

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/02/diagnostic-evaluation-of-sirs-by-immature-granulocytes/

 

  • Big data in genomic medicine

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/

 

  • Automated inferential diagnosis of SIRS, sepsis, septic shock

Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-of-sirs-sepsis-septic-shock/

 

  • A Software Agent for Diagnosis of ACUTE MYOCARDIAL INFARCTION

Isaac E. Mayzlin, Ph.D., David Mayzlin and Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2012/08/12/1815/

 

  • Artificial Vision: Cornell and Stanford Researchers crack Retinal Code

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2012/08/15/1946/

 

  • Vinod Khosla: 20 doctor included speculations, musings of a technology optimist or technology will replace 80 percent of what doctors do

Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/05/13/vinod-khosla-20-doctor-included-speculations-musings-of-a-technology-optimist-or-technology-will-replace-80-of-what-doctors-do/

 

  • Biomaterials Technology: Models of Tissue Engineering for Reperfusion and Implantable Devices for Revascularization

Larry H Bernstein, MD, FACP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

 

  • The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN

Aviva Lev-Ari, PhD, RN 2/28/2013

https://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/

 

  • FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology

Aviva Lev-Ari, PhD, RN 1/28/2013

https://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

 

  • PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not Platelet Reactivity

Aviva Lev-Ari, PhD, RN 1/10/2013

https://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

 

  • The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX

Aviva Lev-Ari, PhD, RN 1/3/2013

https://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

 

  • Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by Serum Protein Profiles

Aviva Lev-Ari, PhD, RN 12/29/2012

https://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles

 

  • New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia

Aviva Lev-Ari, PhD, RN 8/27/2012

https://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-fractional-flow-reserve-ffrct-for-tagging-ischemia/
 

Additional Related Articles

  • Hospital EHRs Inadequate for Big Data; Need for Specialized -Omics Systems (labsoftnews.typepad.com)
  • Apple Inc. (AAPL), QUALCOMM, Inc. (QCOM): Disruptions Needed (insidermonkey.com)
  • Netsmart Names Dr. Ian Chuang Senior Vice President, Healthcare Informatics and Chief Medical Officer (prweb.com)
  • Strategic partnership signals new age of stratified medicine (prweb.com)
  • Personalized breast cancer therapeutic with companion diagnostic poised for clinical trials in H2 (medcitynews.com)