
Archive for the ‘Frontiers in Cardiology and Cardiovascular Disorders’ Category


Risks from Dual Antiplatelet Therapy (DAPT) may be reduced by Genotyping Guidance of Cardiac Patients

Reporter: Aviva Lev-Ari, PhD, RN

 

Genotyping Cardiac Patients May Reduce Risks From DAPT

STEMI patient study reaches noninferiority mark for adverse cardiac events

In the investigational arm, all 1,242 patients were tested for the CYP2C19 loss-of-function alleles *2 or *3. Carriers received ticagrelor or prasugrel, while noncarriers received clopidogrel, which is considered a less potent agent.

No genetic testing was performed in the standard treatment arm (n=1,246), in which patients largely went on to receive ticagrelor or prasugrel. Nearly all patients in both cohorts received dual antiplatelet therapy (DAPT) with aspirin.

Following primary PCI, patients went on to the P2Y12 inhibitor for at least 12 months, with drug adherence similar between the genotype-guided (84.5%) and standard groups (82.0%).

In the genotype-guided arm, 38% of patients (the loss-of-function allele carriers) received ticagrelor and 1% received prasugrel; the remaining 61% (the noncarriers) received clopidogrel. In the control arm, 91% were treated with ticagrelor, 2% with prasugrel, and 7% with clopidogrel, according to local protocol.
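For readers who think in code, the allocation rule described above can be written as a short function. This is an illustrative sketch only, not the trial's actual software; the function name and allele set below are written from the description in this post.

```python
# Illustrative sketch (not the trial's software): the genotype-guided
# P2Y12 inhibitor allocation rule described above.

LOSS_OF_FUNCTION_ALLELES = {"*2", "*3"}  # CYP2C19 alleles tested in the trial

def select_p2y12_inhibitor(cyp2c19_alleles, prefer_ticagrelor=True):
    """Return the P2Y12 inhibitor suggested by the genotype-guided strategy.

    cyp2c19_alleles: iterable of star alleles for one patient, e.g. {"*1", "*2"}.
    Carriers of a loss-of-function allele receive a more potent agent
    (ticagrelor or prasugrel); noncarriers receive clopidogrel.
    """
    is_carrier = any(a in LOSS_OF_FUNCTION_ALLELES for a in cyp2c19_alleles)
    if is_carrier:
        return "ticagrelor" if prefer_ticagrelor else "prasugrel"
    return "clopidogrel"

if __name__ == "__main__":
    print(select_p2y12_inhibitor({"*1", "*2"}))   # carrier -> ticagrelor
    print(select_p2y12_inhibitor({"*1", "*1"}))   # noncarrier -> clopidogrel
```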

Ten Berg said that prasugrel is not typically used in the Netherlands, where eight of the centers in the trial were located, but that this might change given that the drug lowered rates of ischemic events versus ticagrelor in the head-to-head ISAR REACT 5 trial, which was also presented at ESC.

Reviewed by Robert Jasmer, MD, Associate Clinical Professor of Medicine, University of California, San Francisco

Read Full Post »


Injectable inclisiran (siRNA) as 3rd anti-PCSK9 behind mAbs Repatha and Praluent

 

Reporter: Aviva Lev-Ari, PhD, RN

Next stop, filing for approval. The Medicines Company has said it plans to submit inclisiran for FDA review by the end of 2019 and EMA review in the first quarter of 2020. If the drug is approved, it will be the third anti-PCSK9 agent, behind the mAbs Repatha and Praluent, and could try to compete on price to gain market share.

The company’s been very careful not to disclose its pricing plans for inclisiran, said ORION-10 principal investigator Dr. Scott Wright, professor and cardiologist at the Mayo Clinic. But, Wright said, The Medicines Co. and other companies he advises on clinical trial design “have learned the lesson from the sponsors of the monoclonal antibodies [against PCSK9], they’re not going to come in and price a drug that’s out of proportion to what the market will bear.” 

Because the anti-PCSK9 mAbs were initially priced beyond what patients and insurers were willing to pay, “now most of the physicians that I meet have a resistance to using them just because they’re fearful about the pre-approval process” with insurers, said Wright. “They’re much easier to get approved and paid for today than they’ve ever been, but that message is not out in the medical community yet.”

SOURCE

From: “STAT: AHA in 30 Seconds” <newsletter@statnews.com>

Date: Monday, November 18, 2019

Subject: Interim look at Amarin data, an inclisiran update, & Philly’s giant heart

Read Full Post »


Transthyretin amyloid cardiomyopathy (ATTR-CM): U.S. FDA APPROVES VYNDAQEL® AND VYNDAMAX™ for this Rare and Fatal Disease

 

Reporter: Aviva Lev-Ari, PhD, RN

UPDATED on 11/22/2019

Trialists Attack $225K Heart Drug Price Tag

Cardiologists who helped run the pivotal study of Pfizer’s heart drug tafamidis (Vyndaqel/Vyndamax) are criticizing the drug’s $225,000 annual price tag, Bloomberg reports.

Mathew Maurer, MD, of Columbia University, and three other doctors involved in the trial started speaking out after seeing patients’ financial struggles after the drug’s market launch earlier this year.

For example: John Rufenacht, a 73-year-old interior designer in Kansas City, Missouri, has Medicare but his out-of-pocket cost was $6,000 for a 90-day supply of the drug, which treats cardiac transthyretin amyloidosis. Rufenacht doesn’t qualify for Pfizer’s patient assistance programs, most of which direct patients to charities to help them pay.

Maurer aired his complaints in front of colleagues at the Heart Failure Society of America meeting in September, and at the American Heart Association meeting earlier this week, where he and colleagues reported a cost-effectiveness study on the drug, showing it’s only cost-effective with a more than 90% price reduction — a cost of $16,563 a year.
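The arithmetic behind the “more than 90% price reduction” figure can be checked directly from the two dollar amounts quoted above; the snippet below simply restates that calculation.

```python
# Quick arithmetic behind the figures quoted above (illustrative only).
list_price = 225_000           # reported annual list price, USD
cost_effective_price = 16_563  # price at which the study found the drug cost-effective

reduction = 1 - cost_effective_price / list_price
print(f"Required price reduction: {reduction:.1%}")  # ~92.6%, i.e. "more than 90%"
```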

Pfizer says its price is appropriate, given the small number of patients in the U.S. with the condition who will receive it — some 100,000 to 150,000, the company estimates. But Maurer and critics say that’s likely an underestimate. Diagnosis requires an invasive heart biopsy; there was little incentive to do that when no approved treatment was available.

The company promised to cut the price if more patients start taking the drug.

SOURCE

https://www.medpagetoday.com/publichealthpolicy/ethics/83459

 


 

U.S. FDA APPROVES VYNDAQEL® AND VYNDAMAX™ FOR USE IN PATIENTS WITH TRANSTHYRETIN AMYLOID CARDIOMYOPATHY, A RARE AND FATAL DISEASE

— First and only medicines approved for patients with either wild-type or hereditary transthyretin amyloid cardiomyopathy —

Monday, May 6, 2019 – 6:45 am EDT

NEW YORK–(BUSINESS WIRE)–Pfizer Inc. (NYSE:PFE) announced today that the U.S. Food and Drug Administration (FDA) has approved both VYNDAQEL® (tafamidis meglumine) and VYNDAMAX (tafamidis) for the treatment of the cardiomyopathy of wild-type or hereditary transthyretin-mediated amyloidosis (ATTR-CM) in adults to reduce cardiovascular mortality and cardiovascular-related hospitalization. VYNDAQEL and VYNDAMAX are two oral formulations of the first-in-class transthyretin stabilizer tafamidis, and the first and only medicines approved by the FDA to treat ATTR-CM.

Transthyretin amyloid cardiomyopathy is a rare, life-threatening disease characterized by the buildup of abnormal deposits of misfolded protein called amyloid in the heart and is defined by restrictive cardiomyopathy and progressive heart failure. Previously, there were no medicines approved to treat ATTR-CM; the only available options included symptom management, and, in rare cases, heart (or heart and liver) transplant. It is estimated that the prevalence of ATTR-CM is approximately 100,000 people in the U.S. and only one to two percent of those patients are diagnosed today.

“The approvals of VYNDAQEL and VYNDAMAX are a testament to the significant research and development investment in our innovative cardiovascular outcomes trial, ATTR-ACT. We are proud to bring these medicines to ATTR-CM patients who are in dire need of treatment,” said Brenda Cooperstone, MD, Senior Vice President and Chief Development Officer, Rare Disease, Pfizer Global Product Development. “VYNDAQEL and VYNDAMAX reduce cardiovascular mortality and the frequency of cardiovascular-related hospital stays in patients with wild-type or hereditary forms of this rare disease, giving them a chance for more time with their loved ones.”

“Pfizer’s purpose is to deliver breakthrough medicines that change patients’ lives. The approvals of VYNDAQEL and VYNDAMAX deliver on this promise for patients with ATTR-CM,” said Paul Levesque, Global President, Rare Disease. “This milestone is a gamechanger for patients, who until today had no approved medicines for this rare, debilitating and fatal disease. We will continue to focus efforts on working with the physician community to increase awareness and ultimately detection and diagnosis of this disease.”

The recommended dosage is either VYNDAQEL 80 mg orally once-daily, taken as four 20 mg capsules, or VYNDAMAX 61 mg orally once-daily, taken as a single capsule. VYNDAMAX was developed for patient convenience; VYNDAQEL and VYNDAMAX are not substitutable on a per milligram basis.

“ATTR-CM is not only fatal, but also significantly underdiagnosed, with some patients cycling through multiple doctors and a myriad of tests over a period of years while the disease progresses,” said Isabelle Lousada, Founder and CEO, Amyloidosis Research Consortium. “ATTR-CM is a rare disease for which more education and awareness is needed. The approval of these medicines represents an important advance for patients; however, it is equally important that we work as a community to recognize the critical importance of early diagnosis.”

The FDA approval was based on data from the pivotal Phase 3 Transthyretin Amyloidosis Cardiomyopathy Clinical Trial (ATTR-ACT), the first global, double-blind, randomized, placebo-controlled clinical study to investigate a pharmacological therapy for the treatment of this disease. In ATTR-ACT, VYNDAQEL significantly reduced the hierarchical combination of all-cause mortality and frequency of cardiovascular-related hospitalizations compared to placebo over a 30-month period (p=0.0006). Additionally, individual components of the primary analysis demonstrated a relative reduction in the risk of all-cause mortality and frequency of cardiovascular-related hospitalization of 30% (p=0.026) and 32% (p<0.0001), respectively, with VYNDAQEL versus placebo. Approximately 80% of total deaths were cardiovascular-related in both treatment groups. VYNDAQEL also had significant and consistent treatment effects compared to placebo on functional capacity and health status first observed at six months and continuing through 30 months. Specifically, VYNDAQEL reduced the decline in performance on the six-minute walk test (p<0.0001) and reduced the decline in health status as measured by the Kansas City Cardiomyopathy Questionnaire – Overall Summary score (p<0.0001). VYNDAQEL was well tolerated in this study, with an observed safety profile comparable to placebo. The frequency of adverse events in patients treated with VYNDAQEL was similar to placebo, and similar proportions of VYNDAQEL-treated patients and placebo-treated patients discontinued the study drug because of an adverse event.
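The primary endpoint above is a hierarchical composite, in which mortality is compared first and hospitalization frequency only breaks ties. The sketch below illustrates how such a Finkelstein-Schoenfeld-style pairwise comparison can be scored; the patient records and the simplified tie-breaking rule are hypothetical, and this is not the trial's analysis code.

```python
# Minimal sketch of a hierarchical (Finkelstein-Schoenfeld-style) pairwise
# comparison for an endpoint that combines mortality with hospitalization
# frequency. Patient records below are hypothetical.

def compare(a, b):
    """Return +1 if patient a fares better than b, -1 if worse, 0 if tied.

    Each patient is a dict with 'died' (bool) and 'cv_hosp' (hospitalization count).
    Mortality is compared first; hospitalizations break ties among survivors.
    """
    if a["died"] != b["died"]:
        return 1 if not a["died"] else -1
    if a["cv_hosp"] != b["cv_hosp"]:
        return 1 if a["cv_hosp"] < b["cv_hosp"] else -1
    return 0

def win_ratio(treated, placebo):
    # Every treated patient is compared with every placebo patient.
    wins = sum(compare(a, b) == 1 for a in treated for b in placebo)
    losses = sum(compare(a, b) == -1 for a in treated for b in placebo)
    return wins / losses

treated = [{"died": False, "cv_hosp": 1}, {"died": False, "cv_hosp": 0}]
placebo = [{"died": True, "cv_hosp": 2}, {"died": False, "cv_hosp": 0}]
print(win_ratio(treated, placebo))  # > 1 favors treatment
```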

Pfizer is committed to helping eligible ATTR-CM patients who have been prescribed VYNDAQEL or VYNDAMAX gain appropriate access. Pfizer supports patients by helping them understand their insurance coverage requirements and can connect eligible patients with financial assistance resources which may be available including the Pfizer Patient Assistance Program.*

About ATTR-CM
Transthyretin amyloid cardiomyopathy (ATTR-CM) is a rare and fatal condition caused by destabilization of a transport protein called transthyretin, which is composed of four identical subunits (a tetramer). When unstable transthyretin tetramers dissociate, the resulting misfolded proteins aggregate into amyloid fibrils and deposit in the heart, causing the heart muscle to become stiff and eventually resulting in heart failure. There are two sub-types of ATTR-CM: hereditary (also known as variant), which is caused by a mutation in the transthyretin gene and can appear in people as early as their 50s and 60s; and wild-type, which involves no mutation, is associated with aging, is thought to be more common, and usually affects men after age 60. Often ATTR-CM is diagnosed only after symptoms have become severe. Once diagnosed, the median life expectancy of patients with ATTR-CM, depending on sub-type, is approximately 2 to 3.5 years.

About VYNDAQEL (tafamidis meglumine) and VYNDAMAX (tafamidis)
VYNDAQEL (tafamidis meglumine) and VYNDAMAX (tafamidis) are oral transthyretin stabilizers that selectively bind to transthyretin, stabilizing the tetramer of the transthyretin transport protein and slowing the formation of amyloid that causes ATTR-CM.

VYNDAMAX 61 mg is a once-daily oral capsule developed for patient convenience. VYNDAQEL and VYNDAMAX are not substitutable on a per milligram basis.

VYNDAQEL was granted Orphan Drug Designation for ATTR-CM in both the EU and U.S. in 2012 and in Japan in 2018. In June 2017 and May 2018, respectively, the FDA granted VYNDAQEL Fast Track and Breakthrough Therapy designations for ATTR-CM. In November 2018, the FDA granted Priority Review for the new drug application (NDA) for VYNDAQEL.

In March 2019, the Ministry of Health, Labour and Welfare in Japan approved VYNDAQEL, under SAKIGAKE designation, for patients with wild-type and variant forms of ATTR-CM. Regulatory submissions for the use of VYNDAQEL in patients with ATTR-CM have been submitted to the European Medicines Agency (EMA) and are under review.

VYNDAQEL was first approved in 2011 in the EU for the treatment of transthyretin amyloid polyneuropathy (ATTR-PN) in adult patients with early-stage symptomatic polyneuropathy, to delay peripheral neurologic impairment. ATTR-PN is a neurodegenerative form of amyloidosis that leads to sensory loss, pain and weakness in the lower limbs and impairment of the autonomic nervous system. Currently, VYNDAQEL is approved for ATTR-PN in 40 countries, including Japan, countries in Europe, Brazil, Mexico, Argentina, Israel, Russia, and South Korea. VYNDAQEL and VYNDAMAX are not approved for the treatment of ATTR-PN in the U.S.

SOURCE

https://www.pfizer.com/news/press-release/press-release-detail/u_s_fda_approves_vyndaqel_and_vyndamax_for_use_in_patients_with_transthyretin_amyloid_cardiomyopathy_a_rare_and_fatal_disease

Read Full Post »


Post TAVR: Management of conduction disturbances and number of valve recapture and/or repositioning attempts – Optimizing self-expanding transcatheter aortic valve replacement (TAVR) positioning reduced the need for permanent pacemaker (PPM) implants down the road

Reporter: Aviva Lev-Ari, PhD, RN

  • The PPM rate dropped from 9.7% to 3.0% (P=0.035), according to a team led by Hasan Jilaihawi, MD, of NYU Langone Health in New York City.
  • For comparison, the PARTNER 3 and CoreValve Low Risk trials in patients at low surgical risk showed PPM implant rates of 17.4% with the Evolut line, 6.6% with the balloon-expandable Sapien 3, and 4.1%-6.1% with surgery.

 

  • “The His bundle passes through the membranous septum, a few millimeters beneath the non-coronary/right coronary cusps. It is therefore not surprising that a deeper valve implantation increases the likelihood of mechanical damage of the His bundle leading to a transient or persistent conduction disturbance,” according to Rodés-Cabau.

To capture factors that contributed to the need for PPM implantation, Jilaihawi and colleagues performed a detailed retrospective analysis of 248 consecutive Evolut recipients at Langone treated with the standard TAVR approach — aiming for 3-4 mm implant depth (in relation to the non-coronary cusp) and recapturing and repositioning when the device landed considerably lower. Patients with prior PPM implantation were excluded. Devices used were Medtronic’s Evolut R, Evolut Pro, and Evolut 34XL.

This analysis revealed that use of the large Evolut 34XL (OR 4.96, 95% CI 1.68-14.63) and implant depth exceeding membranous septum length (OR 8.04, 95% CI 2.58-25.04) were independent predictors of later PPM implantation.
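As a reminder of what those odds ratios summarize, the snippet below shows how an odds ratio and a Wald 95% confidence interval are computed from a 2x2 table. The counts are hypothetical; the study's reported ORs come from multivariable modeling, not from this simple calculation.

```python
import math

# Illustrative only: odds ratio and Wald 95% CI from a 2x2 table.
# The counts used in the example call are hypothetical, not the study's data.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = events/non-events in the exposed group; c, d = in the unexposed group."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# e.g. PPM implants among patients whose implant depth exceeded vs did not exceed
# the membranous septum length (hypothetical counts):
print(odds_ratio_ci(a=15, b=60, c=9, d=164))
```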

From there, operators came up with the MIDAS (minimizing depth according to the membranous septum) technique and applied it prospectively to another 100 consecutive patients.

Besides bringing the PPM implant rate down to 3.0%, the modified approach yielded no cases of valve embolization, dislocation, or need for a second valve.

The standard and MIDAS groups shared similar membranous septum lengths but diverged in average actual device depth, such that the standard group tended to have Evolut devices positioned deeper (3.3 mm vs 2.3 mm, P<0.001).

SOURCE

https://www.medpagetoday.com/cardiology/pci/81849

 

Read Full Post »


Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Cardiology is a vast field that focuses on a large number of diseases specifically dealing with the heart, the circulatory system, and its functions. As such, similar symptomatologies and diagnostic features may be present in an individual, making it difficult for a doctor to easily isolate the actual heart-related problem. Consequently, the use of artificial intelligence aims to relieve doctors of this hurdle and deliver better quality of care to patients. It has long been proposed that the results of screening tests such as echocardiograms, MRIs, or CT scans be analyzed with more advanced computational techniques. As such, while artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

 

The continuous development of the technological sector has enabled the industry to merge with medicine in order to create new integrated, reliable, and efficient methods of providing quality health care. One of the ongoing trends in cardiology at present is the proposed use of artificial intelligence (AI) to augment and extend the effectiveness of the cardiologist. This is because AI, or machine learning, would allow for an accurate measure of patient functioning and diagnosis from the beginning to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology aims to focus on research and development, clinical practice, and population health. Intended as an all-in-one mechanism in cardiac healthcare, AI technologies incorporate complex algorithms to determine the steps needed for a successful diagnosis and treatment. The role of artificial intelligence extends to the identification of novel drug therapies, disease stratification and statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectiveness and efficiency.

 

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions. This is done by feeding the computer information from hundreds or thousands of patients, plus instructions (an algorithm) on how to use that information. This information includes heart scans, genetic and other test results, and how long each patient survived. These scans are in exquisite detail, and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer then works out which factors affected the patients’ outlook, so it can make predictions about other patients.
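A minimal sketch of this “train on past patients, predict for new ones” idea is shown below, using synthetic data and scikit-learn; it is not any specific published model, and the feature names and numbers are invented for illustration.

```python
# Minimal sketch of the idea described above: train a model on past patients'
# features and outcomes, then predict outlook for new patients. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                  # e.g. imaging-derived and lab features
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, risk)                    # outcome, e.g. event during follow-up

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("AUC on held-out patients:",
      roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```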

 

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These are based on a series of variables such as weight, age, and lifestyle, but they do not always achieve the desired levels of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients published in 2017. The researchers used cardiac MRI-based algorithms coupled with a 3D systolic cardiac motion pattern to accurately predict the health outcomes of patients with pulmonary hypertension. The experiment proved successful, with the technology able to pick up 30,000 points of heart activity in 250 patients. With the success of the aforementioned study, as well as the promise of other research on artificial intelligence, cardiology is seemingly moving toward a more technological practice.

 

One study was conducted in Finland where researchers enrolled 950 patients complaining of chest pain, who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following their initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records including sex, age, smoking status and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables, and was able to learn how the imaging data interacted and identify the patterns which preceded death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was modest, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. The predictive performance increased significantly when CCTA scan data was added to clinical and PET data, with accuracy of 95.4%.
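The study design described here, adding nested blocks of variables and re-evaluating predictive performance, can be mimicked on synthetic data as in the sketch below. A gradient-boosting classifier stands in for LogitBoost, and the variable counts (10 clinical, 17 PET, 58 CCTA) follow the description above; everything else is invented.

```python
# Sketch of the nested-feature-set comparison described above, with gradient
# boosting standing in for LogitBoost. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 950
X_clin = rng.normal(size=(n, 10))   # clinical variables
X_pet = rng.normal(size=(n, 17))    # PET-derived variables
X_ccta = rng.normal(size=(n, 58))   # CCTA-derived variables
signal = X_clin[:, 0] + X_pet[:, 0] + X_ccta[:, 0]
y = rng.binomial(1, 1 / (1 + np.exp(-signal)))   # death or MI during follow-up

for name, X in [("clinical", X_clin),
                ("clinical + PET", np.hstack([X_clin, X_pet])),
                ("clinical + PET + CCTA", np.hstack([X_clin, X_pet, X_ccta]))]:
    auc = cross_val_score(GradientBoostingClassifier(random_state=0), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```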

 

Findings from another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk for its development in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by the presence of a weak heart pump with a risk of overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity. However, it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded values for the area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, in patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.
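The quoted sensitivity, specificity, and accuracy are all derived from a confusion matrix at a chosen probability threshold. The small helper below shows that derivation on hypothetical screening results; it is unrelated to the actual AI-ECG model.

```python
# Illustrative only: sensitivity, specificity, and accuracy from a confusion
# matrix at a chosen probability threshold.
def screening_metrics(y_true, y_score, threshold=0.5):
    y_pred = [int(s >= threshold) for s in y_score]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }

# Hypothetical screen: 1 = ALVD present, scores = model probabilities.
print(screening_metrics([1, 1, 0, 0, 0, 1], [0.9, 0.4, 0.2, 0.6, 0.1, 0.8]))
```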

 

In recent years, the analysis of large databases combined with deep learning has gradually come to play an important role in biomedical technology. Relevant research on the development and application of artificial intelligence can be found across medical record analysis, image analysis, single nucleotide polymorphism analysis, and more. Clinically, patients may undergo a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound and optical coherence tomography, and electrophysiology studies. By applying deep learning to these data, investigators hope not only to improve diagnostic rates but also to predict patient recovery more accurately and to improve the quality of care in the near future.

 

The primary issue with using artificial intelligence in cardiology, or in any field of medicine for that matter, is the ethical questions it raises. Physicians and healthcare professionals, prior to their practice, swear the Hippocratic Oath—a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are technically left under the care of machines rather than of doctors. Furthermore, as machines may also malfunction, the safety of patients is potentially on the line at all times. As such, while medical practitioners see the promise of artificial intelligence, they remain cautious about its use, safety, and appropriateness in medical practice.

 

Issues and challenges faced by technological innovation in cardiology are being addressed by current research aiming to make artificial intelligence easily accessible and available for all. With that in mind, various projects are currently under study. For example, wearable AI technology aims to provide a mechanism by which patients and doctors can easily access and monitor cardiac activity remotely. An ideal instrument for monitoring, wearable AI technology offers real-time updates, monitoring, and evaluation. Another direction for AI in cardiology is the use of technology to record and validate empirical data in order to further analyze symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology aim to simplify and expand the scope of knowledge in the field for better patient care and treatment outcomes.

 

References:

 

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx

 

https://www.bhf.org.uk/informationsupport/heart-matters-magazine/research/artificial-intelligence

 

https://www.medicaldevice-network.com/news/heart-attack-artificial-intelligence/

 

https://www.nature.com/articles/s41569-019-0158-5

 

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711980/

 

www.j-pcs.org/article.asp

http://www.onlinejacc.org/content/71/23/2668

http://www.scielo.br/pdf/ijcs/v30n3/2359-4802-ijcs-30-03-0187.pdf

 

https://www.escardio.org/The-ESC/Press-Office/Press-releases/How-artificial-intelligence-is-tackling-heart-disease-Find-out-at-ICNC-2019

 

https://clinicaltrials.gov/ct2/show/NCT03877614

 

https://www.europeanpharmaceuticalreview.com/news/82870/artificial-intelligence-ai-heart-disease/

 

https://www.frontiersin.org/research-topics/10067/current-and-future-role-of-artificial-intelligence-in-cardiac-imaging

 


 

https://www.sciencedaily.com/releases/2019/05/190513104505.htm

 

Read Full Post »


Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary article entitled “Medicine contends with how to use artificial intelligence,” Jennifer Couzin-Frankel discusses the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting. In summary, these barriers stem from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images and magnify small changes, such as in cardiac images, is built within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators recently issued guidance describing how to convert research-based AI into improved medical imaging, published in the Journal of the American College of Radiology. The group suggested greater collaboration among relevant parties in the development of AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on lung scans indicative of pneumonia; when tested on a group of new patients, the algorithm detected pneumonia with 93% accuracy.

However, when the group from Sinai tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicating bias within the training set: in other words, there was something unique about the way Mount Sinai acquires its scans relative to other hospitals. Indeed, many of the patients Mount Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than stand-alone scanners.
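The failure mode described here, a model keying on site-specific acquisition artifacts rather than on disease, can be reproduced in a toy simulation. In the hedged sketch below, an artifact feature is correlated with the label at the training hospital but not at an external one, so internal performance looks strong while external performance drops; the data and model are entirely synthetic and unrelated to the Mount Sinai work.

```python
# Sketch of internal-vs-external validation. At the training site, a
# site-specific artifact (e.g. a portable-scanner marker) is correlated with
# the label; at the external site it is not, so performance degrades there.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_site(n, artifact_correlated_with_label):
    disease = rng.normal(size=(n, 1))                    # true pathology signal
    y = (disease[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    if artifact_correlated_with_label:
        artifact = y[:, None] + rng.normal(size=(n, 1))  # acquisition artifact tracks label
    else:
        artifact = rng.normal(size=(n, 1))               # pure noise elsewhere
    return np.hstack([disease, artifact]), y

X_a, y_a = make_site(2000, artifact_correlated_with_label=True)    # training hospital
X_b, y_b = make_site(2000, artifact_correlated_with_label=False)   # external hospital

model = LogisticRegression().fit(X_a[:1000], y_a[:1000])
print("internal AUC:", roc_auc_score(y_a[1000:], model.predict_proba(X_a[1000:])[:, 1]))
print("external AUC:", roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]))
```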

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683


 

 

Surprisingly, not many researchers have begun to use data obtained from different hospitals. The FDA has issued some guidance on the matter, but its existing framework treats “locked” (unchanging) AI software as a medical device. However, the agency recently announced development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software that can be used in multiple clinical settings. Otherwise, each hospital will need to develop its own software for use only on its own system, which would create a regulatory headache for the FDA.

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 

Read Full Post »


scPopCorn: A New Computational Method for Subpopulation Detection and their Comparative Analysis Across Single-Cell Experiments

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Present day technological advances have facilitated unprecedented opportunities for studying biological systems at single-cell level resolution. For example, single-cell RNA sequencing (scRNA-seq) enables the measurement of transcriptomic information of thousands of individual cells in one experiment. Analyses of such data provide information that was not accessible using bulk sequencing, which can only assess average properties of cell populations. Single-cell measurements, however, can capture the heterogeneity of a population of cells. In particular, single-cell studies allow for the identification of novel cell types, states, and dynamics.

 

One of the most prominent uses of the scRNA-seq technology is the identification of subpopulations of cells present in a sample and comparing such subpopulations across samples. Such information is crucial for understanding the heterogeneity of cells in a sample and for comparative analysis of samples from different conditions, tissues, and species. A frequently used approach is to cluster every dataset separately, inspect marker genes for each cluster, and compare these clusters in an attempt to determine which cell types were shared between samples. This approach, however, relies on the existence of predefined or clearly identifiable marker genes and their consistent measurement across subpopulations.
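As an illustration of that frequently used baseline (and not of scPopCorn itself), the sketch below clusters two synthetic expression matrices separately and then matches clusters across experiments by the correlation of their mean expression profiles; all data and parameters are made up.

```python
# Sketch of the baseline approach described above: cluster each experiment
# separately, then match clusters across experiments by the similarity of
# their mean expression profiles. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_genes, k = 50, 3
cell_type_means = rng.normal(size=(k, n_genes))          # shared "cell types"

def simulate_experiment(n_cells, batch_shift):
    labels = rng.integers(0, k, size=n_cells)
    noise = rng.normal(scale=0.5, size=(n_cells, n_genes))
    return cell_type_means[labels] + batch_shift + noise

expr1 = simulate_experiment(300, batch_shift=0.0)
expr2 = simulate_experiment(300, batch_shift=0.3)        # e.g. another sample or species

clusters1 = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr1)
clusters2 = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(expr2)

profiles1 = np.array([expr1[clusters1 == c].mean(axis=0) for c in range(k)])
profiles2 = np.array([expr2[clusters2 == c].mean(axis=0) for c in range(k)])

# Map each cluster in experiment 1 to its best-correlated cluster in experiment 2.
for c in range(k):
    correlations = [np.corrcoef(profiles1[c], profiles2[d])[0, 1] for d in range(k)]
    print(f"experiment-1 cluster {c} -> experiment-2 cluster {int(np.argmax(correlations))}")
```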

 

An alternative strategy is to first align the datasets globally and then cluster the aligned data to reveal subpopulations and their correspondence. However, solving the subpopulation-mapping problem by performing global alignment first and clustering second overlooks the original information about subpopulations existing in each experiment. In contrast, an approach addressing this problem directly might represent a more suitable solution. With this in mind, the researchers developed a computational method, single-cell subpopulations comparison (scPopCorn), that allows for comparative analysis of two or more single-cell populations.

 

The performance of scPopCorn was tested in three distinct settings. First, its potential was demonstrated in identifying and aligning subpopulations in human and mouse pancreatic single-cell data. Next, scPopCorn was applied to the task of aligning biological replicates of mouse kidney single-cell data, where it achieved the best performance among previously published tools. Finally, it was applied to compare populations of cells from cancerous and healthy brain tissues, revealing the relation of neoplastic cells to neural cells and astrocytes. Consequently, as a result of this integrative approach, scPopCorn provides a powerful tool for comparative analysis of single-cell populations.

 

scPopCorn is, in essence, a computational method for identifying subpopulations of cells within individual single-cell experiments and mapping these subpopulations across experiments. Unlike other approaches, scPopCorn performs the tasks of population identification and mapping simultaneously by optimizing a function that combines both objectives. When applied to complex biological data, scPopCorn outperforms previous methods. However, it should be kept in mind that scPopCorn assumes the input single-cell data consist of separable subpopulations; it is not designed to perform comparative analysis of single-cell trajectory datasets, which do not fulfill this constraint.

 

Several innovations developed in this work contributed to the performance of scPopCorn. First, unifying the above-mentioned tasks into a single problem statement allowed the signal from different experiments to be integrated while subpopulations were identified within each experiment. This integration helps reduce biological and experimental noise. The researchers believe that the ideas introduced in scPopCorn not only enabled the design of a highly accurate subpopulation identification and mapping approach, but can also provide a stepping stone for other tools to interrogate the relationships between single-cell experiments.

 

References:

 

https://www.sciencedirect.com/science/article/pii/S2405471219301887

 

https://www.tandfonline.com/doi/abs/10.1080/23307706.2017.1397554

 

https://ieeexplore.ieee.org/abstract/document/4031383

 

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0927-y

 

https://www.sciencedirect.com/science/article/pii/S2405471216302666

 

 

Read Full Post »

Older Posts »