
Posts Tagged ‘machine learning’


Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science: 21 Jun 2019, Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary entitled "Medicine contends with how to use artificial intelligence," Jennifer Couzin-Frankel discusses the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting. In summary, these barriers stem from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images and magnify small changes, such as in cardiac images, is built within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators recently issued guidance describing how to convert research-based AI into improved medical imaging, published in the Journal of the American College of Radiology. The group suggested greater collaboration among relevant parties in the development of AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on a lung scan indicative of pneumonia; when tested on a group of new patients, the algorithm detected pneumonia with 93% accuracy.

However, when the Sinai group tested their algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicative of bias within the training set: in other words, there was something unique about the way Mount Sinai acquires its scans relative to other hospitals. Indeed, many of the patients Mount Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than standalone scanners.
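
To make the internal-versus-external contrast concrete, here is a minimal sketch of cross-site validation: train on one hospital's data, then compare the internal test AUC with the AUC on scans from an outside hospital. The load_site_images helper, the features, and the site labels are hypothetical placeholders standing in for a real imaging pipeline, not the study's actual code.

```python
# Hypothetical sketch: internal vs. external validation of a site-trained model.
# load_site_images() is a placeholder for real data loading; it should return a
# feature matrix X (n_samples x n_features) and binary labels y for one site.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def load_site_images(site):
    """Placeholder: return (X, y) for one hospital system (synthetic here)."""
    rng = np.random.default_rng(abs(hash(site)) % 2**32)
    X = rng.normal(size=(500, 64))
    y = rng.integers(0, 2, size=500)
    return X, y

X_msh, y_msh = load_site_images("MSH")   # training site
X_ext, y_ext = load_site_images("NIH")   # external site

X_tr, X_int, y_tr, y_int = train_test_split(X_msh, y_msh, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# On real data, a large gap between these two AUCs signals site-specific confounding.
print("internal AUC:", roc_auc_score(y_int, model.predict_proba(X_int)[:, 1]))
print("external AUC:", roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]))
```

A large drop from the internal to the external AUC, as in the 93% versus 73-80% result above, is the signature of site-specific confounding rather than true generalization.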

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683

Surprisingly, not many researchers have begun to use data obtained from multiple hospitals. The FDA has issued some guidance on the matter, but it treats only "locked" (unchanging) AI software as a medical device. However, the agency just announced development of a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software that can be used in multiple clinical settings. Otherwise, each hospital will need to develop its own software, usable only on its own system, which would create a regulatory headache for the FDA.

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 


Read Full Post »


The Health Care Benefits of Combining Wearables and AI

Reporter: Gail S. Thornton, M.A.

 

 

This article is excerpted from the Harvard Business Review, May 28, 2019

By Moni Miyashita, Michael Brady

In southeast England, patients discharged from a group of hospitals serving 500,000 people are being fitted with a Wi-Fi-enabled armband that remotely monitors vital signs such as respiratory rate, oxygen levels, pulse, blood pressure, and body temperature.

Under a National Health Service pilot program that now incorporates artificial intelligence to analyze all that patient data in real time, hospital readmission rates are down, and emergency room visits have been reduced. What's more, the need for costly home visits has dropped by 22%. Longer term, adherence to treatment plans has increased to 96%, compared to the industry average of 50%.

The AI pilot is targeting what Harvard Business School Professor and Innosight co-founder Clay Christensen calls “non-consumption.”  These are opportunity areas where consumers have a job to be done that isn’t currently addressed by an affordable or convenient solution.

Before the U.K. pilot at the Dartford and Gravesham hospitals, for instance, home monitoring had involved dispatching hospital staffers to drive up to 90 minutes round-trip to check in with patients in their homes about once per week. But with algorithms now constantly searching for warning signs in the data and alerting both patients and professionals instantly, a new capability is born: providing healthcare before you even know you need it.

The biggest promise of artificial intelligence — accurate predictions at near-zero marginal cost — has rightly generated substantial interest in applying AI to nearly every area of healthcare. But not every application of AI in healthcare is equally well-suited to benefit. Moreover, very few applications serve as an appropriate strategic response to the largest problems facing nearly every health system: decentralization and margin pressure.

Take, for example, medical imaging AI tools, an area in which hospitals are projected to spend $2 billion annually within four years. Accurately diagnosing diseases from cancers to cataracts is a complex task, with difficult-to-quantify but typically major consequences. However, the task is currently part of larger workflows performed by extensively trained, highly specialized physicians who are among the world's best minds. These doctors might need help at the margins, but this is a job already being done. Such factors make disease diagnosis an extraordinarily difficult area for AI to create transformative change. And so the application of AI in such settings, even if beneficial to patient outcomes, is unlikely to fundamentally improve the way healthcare is delivered or to substantially lower costs in the near term.

However, leading organizations seeking to decentralize care can deploy AI to do things that have never been done before. For example, there is a wide array of non-acute health decisions that consumers make daily. These decisions do not warrant the attention of a skilled clinician but ultimately play a large role in determining a patient's health, and ultimately the cost of healthcare.

According to the World Health Organization, 60% of the factors related to individual health and quality of life correlate with lifestyle choices, including taking prescriptions such as blood-pressure medications correctly, getting exercise, and reducing stress. Aided by AI-driven models, it is now possible to provide patients with interventions and reminders throughout this day-to-day process based on changes in the patient's vital signs.
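
As a toy illustration of such AI-driven reminders, the sketch below flags heart-rate readings that drift far from a patient's recent baseline using a rolling z-score. The window size, threshold, and synthetic readings are arbitrary assumptions for illustration, not parameters from the article.

```python
# Hypothetical sketch: flag vital-sign readings that deviate from a patient's
# recent baseline, the kind of rule an AI-driven monitoring loop might start from.
from collections import deque
import statistics

def alert_stream(readings, window=20, z_threshold=3.0):
    """Yield (index, value) for readings far outside the rolling baseline."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mean = statistics.fmean(recent)
            sd = statistics.stdev(recent) or 1e-9   # guard against zero spread
            if abs(value - mean) / sd > z_threshold:
                yield i, value
        recent.append(value)

heart_rate = [72, 71, 74, 73, 70] * 8 + [130]       # synthetic data with one spike
for i, v in alert_stream(heart_rate):
    print(f"reading {i}: {v} bpm deviates from baseline -> notify care team")
```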

Home health monitoring itself isn't new. Active programs and pilot studies are underway at leading institutions ranging from Partners HealthCare and UnitedHealthcare to the Johns Hopkins School of Medicine, with positive results. But those efforts have yet to harness AI to make better judgments and recommendations in real time. Because of the massive volumes of data involved, machine learning algorithms are particularly well suited to scaling that task for large populations. After all, large sets of data are what power AI by making those algorithms smarter.

By deploying AI, for instance, the NHS program is able to scale up not only in the U.K. but also internationally. Current Health, the venture-capital-backed maker of the patient monitoring devices used in the program, recently received FDA clearance to pilot the system in the U.S. and is now testing it with New York's Mount Sinai Hospital. It's part of an effort to reduce patient readmissions, which cost U.S. hospitals about $40 billion annually.

The early success of such efforts drives home three lessons in using AI to address non-consumption in the new world of patient-centric healthcare:

1) Focus on impacting critical metrics – for example, reducing costly hospital readmission rates.

Start small to home in on the goal of making an impact on a key metric tied to both patient outcomes and financial sustainability. As in the U.K. pilot, this can be done through a program with select hospitals or provider locations. In another case, Grady Hospital, the largest public hospital in Atlanta, points to $4 million in savings from a 31% reduction in readmission rates over two years, thanks to the adoption of an AI tool that identifies 'at-risk' patients. The system alerts clinical teams to initiate special patient touch points and interventions.

2) Reduce risk by relying on new kinds of partners.

Don't try to do everything alone. Instead, form alliances with partners that are aiming to tackle similar problems. Consider the Synaptic Healthcare Alliance, a collaborative pilot program between Aetna, Ascension, Humana, Optum, and others. The alliance is using blockchain to create a giant dataset across various health care providers, with AI trials on the data getting underway. The aim is to streamline health care provider data management, reducing the cost of processing claims while also improving access to care. Going it alone can be risky, if only because of data incompatibility issues. For instance, the M.D. Anderson Cancer Center had to write off millions in costs for a failed AI project due in part to incompatibility with its electronic health records system. By joining forces, Synaptic's dataset will be in a standard format that makes records and results transportable.

3) Use AI to collaborate, not compete, with highly-trained professionals.

Clinicians are often looking to augment their knowledge and reasoning, and AI can help. Many medical AI applications do actually compete with doctors. In radiology, for instance, some algorithms have performed image-based diagnosis as well as or better than human experts. Yet it's unclear whether patients and medical institutions will trust AI to automate that job entirely. A University of California at San Diego pilot in which AI diagnosed childhood diseases more accurately than junior-level pediatricians still required senior doctors to personally review and sign off on the diagnosis. The real aim is always going to be to use AI to collaborate with clinicians seeking higher precision, not to try to replace them.

MIT and MGH have developed a deep learning model which identifies patients likely to develop breast cancer in the future. Learning from data on 60,000 prior patients, the AI system allows physicians to personalize their approach to breast cancer screening, essentially creating a detailed risk profile for each patient.

Taken together, these three lessons, paired with solutions targeted at non-consumption, have the potential to provide a clear path to effectively harnessing a technology that has been subject to rampant over-promising. Longer term, we believe one of the transformative benefits of AI will be deepening relationships between health providers and patients. The U.K. pilot, for instance, is resulting in more frequent proactive check-ins that never would have happened before. That's good both for improving health and for customer loyalty in the emerging consumer-centric healthcare marketplace.

Source:

https://hbr.org/2019/05/the-health-care-benefits-of-combining-wearables-and-ai

 

Read Full Post »


These twelve artificial intelligence innovations are expected to start impacting clinical care by the end of the decade.

Reporter: Gail S. Thornton, M.A.

 

This article is excerpted from Health IT Analytics, April 11, 2019.

 By Jennifer Bresnick

April 11, 2019 – There’s no question that artificial intelligence is moving quickly in the healthcare industry.  Even just a few months ago, AI was still a dream for the next generation: something that would start to enter regular care delivery in a couple of decades – maybe ten or fifteen years for the most advanced health systems.

Even Partners HealthCare, the Boston-based giant on the very cutting edge of research and reform, set a ten-year timeframe for artificial intelligence during its 2018 World Medical Innovation Forum, identifying a dozen AI technologies that had the potential to revolutionize patient care within the decade.

But over the past twelve months, research has progressed so rapidly that Partners has blown up that timeline. 

Instead of viewing AI as something still lingering on the distant horizon, this year’s Disruptive Dozen panel was tasked with assessing which AI innovations will be ready to fundamentally alter the delivery of care by 2020 – now less than a year away.

Sixty members of the Partners faculty participated in nominating and narrowing down the tools they think will have an almost immediate benefit for patients and providers, explained Erica Shenoy, MD, PhD, an infectious disease specialist at Massachusetts General Hospital (MGH).

“These are innovations that have a strong potential to make significant advancement in the field, and they are also technologies that are pretty close to making it to market,” she said.

The results include everything from mental healthcare and clinical decision support to coding and communication, offering patients and their providers a more efficient, effective, and cost-conscious ecosystem for improving long-term outcomes.

In order from least to greatest potential impact, here are the twelve artificial intelligence innovations poised to become integral components of the next decade’s data-driven care delivery system.

NARROWING THE GAPS IN MENTAL HEALTHCARE

Nearly twenty percent of US patients struggle with a mental health disorder, yet treatment is often difficult to access and expensive to use regularly.  Reducing barriers to access for mental and behavioral healthcare, especially during the opioid abuse crisis, requires a new approach to connecting patients with services.

AI-driven applications and therapy programs will be a significant part of the answer.

“The promise and potential for digital behavioral solutions and apps is enormous to address the gaps in mental healthcare in the US and across the world,” said David Ahern, PhD, a clinical psychologist at Brigham & Women’s Hospital (BWH). 

Smartphone-based cognitive behavioral therapy and integrated group therapy are showing promise for treating conditions such as depression, eating disorders, and substance abuse.

While patients and providers need to be wary of commercially available applications that have not been rigorously validated and tested, more and more researchers are developing AI-based tools that have the backing of randomized clinical trials and are showing good results.

A panel of experts from Partners HealthCare presents the Disruptive Dozen at WMIF19.

Source: Partners HealthCare

STREAMLINING WORKFLOWS WITH VOICE-FIRST TECHNOLOGY

Natural language processing is already a routine part of many behind-the-scenes clinical workflows, but voice-first tools are expected to make their way into the patient-provider encounter in a new way. 

Smart speakers in the clinic are prepping to relieve clinicians of their EHR burdens, capturing free-form conversations and translating the content into structured documentation.  Physicians and nurses will be able to collect and retrieve information more quickly while spending more time looking patients in the eye.

Patients may benefit from similar technologies at home as the consumer market for virtual assistants continues to grow.  With companies like Amazon achieving HIPAA compliance for their consumer-facing products, individuals may soon have more robust options for voice-first chronic disease management and patient engagement.

IDENTIFYING INDIVIDUALS AT HIGH RISK OF DOMESTIC VIOLENCE

Underreporting makes it difficult to know just how many people suffer from intimate partner violence (IPV), says Bharti Khurana, MD, an emergency radiologist at BWH.  But the symptoms are often hiding in plain sight for radiologists.

Using artificial intelligence to flag worrisome injury patterns or mismatches between patient-reported histories and the types of fractures present on x-rays can alert providers to when an exploratory conversation is called for.

“As a radiologist, I’m very excited because this will enable me to provide even more value to the patient instead of simply evaluating their injuries.  It’s a powerful tool for clinicians and social workers that will allow them to approach patients with confidence and with less worry about offending the patient or the spouse,” said Khurana.

REVOLUTIONIZING ACUTE STROKE CARE

Every second counts when a patient experiences a stroke.  In far-flung regions of the United States and in the developing world, access to skilled stroke care can take hours, drastically increasing the likelihood of significant long-term disability or death.

Artificial intelligence has the potential to close the gaps in access to high-quality imaging studies that can identify the type of stroke and the location of the clot or bleed.  Research teams are currently working on AI-driven tools that can automate the detection of stroke and support decision-making around the appropriate treatment for the individual’s needs.  

In rural or low-resource care settings, these algorithms can compensate for the lack of a specialist on-site and ensure that every stroke patient has the best possible chance of treatment and recovery.


REDUCING ADMINISTRATIVE BURDENS FOR PROVIDERS

The costs of healthcare administration are off the charts.  Recent data from the Center for American Progress show that providers spend about $282 billion per year on insurance and medical billing, and the burdens are only going to keep getting bigger.

Medical coding and billing is a perfect use case for natural language processing and machine learning.  NLP is well-suited to translating free-text notes into standardized codes, which can move the task off the plates of physicians and reduce the time and effort spent on complying with convoluted regulations.

“The ultimate goal is to help reduce the complexity of the coding and billing process through automation, thereby reducing the number of mistakes – and, in turn, minimizing the need for such intense regulatory oversight,” Partners says.

NLP is already in relatively wide use for this task, and healthcare organizations are expected to continue adopting this strategy as a way to control costs and speed up their billing cycles.
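
As a heavily simplified sketch of the idea, the snippet below maps free-text note phrases to billing codes with a bag-of-words classifier. The four notes and the ICD-10-style labels are invented for illustration; production coding systems use far richer models and the full ICD-10/CPT vocabularies.

```python
# Hypothetical sketch: free-text note -> billing code via a bag-of-words classifier.
# The notes and ICD-10-style labels below are invented training examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "productive cough and fever, infiltrate on chest x-ray",
    "elevated blood pressure on three visits, started lisinopril",
    "wheezing relieved by albuterol, history of asthma",
    "crushing chest pain radiating to left arm, troponin elevated",
]
codes = ["J18.9", "I10", "J45.909", "I21.9"]  # pneumonia, hypertension, asthma, MI

coder = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
coder.fit(notes, codes)

# With real training data, the predicted code would be routed to a human coder
# for review rather than billed automatically.
print(coder.predict(["fever with cough and infiltrate seen on imaging"]))
```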

UNLEASHING HEALTH DATA THROUGH INFORMATION EXCHANGE

AI will combine with another game-changing technology, known as FHIR, to unlock siloes of health data and support broader access to health information.

Patients, providers, and researchers will all benefit from a more fluid health information exchange environment, especially since artificial intelligence models are extremely data-hungry.

Stakeholders will need to pay close attention to maintaining the privacy and security of data as it moves across disparate systems, but the benefits have the potential to outweigh the risks.

“It completely depends on how everyone in the medical community advocates for, builds, and demands open interfaces and open business models,” said Samuel Aronson, Executive Director of IT at Partners Personalized Medicine.

“If we all row in the same direction, there’s a real possibility that we will see fundamental improvements to the healthcare system in 3 to 5 years.”
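
For concreteness, FHIR exposes clinical data as JSON resources over a plain REST API; below is a minimal sketch of pulling one patient's heart-rate observations as model inputs. The base URL and patient ID are hypothetical placeholders, and real deployments additionally require OAuth scopes and consent handling.

```python
# Hypothetical sketch: fetch FHIR Observation resources as model inputs.
# BASE and PATIENT_ID are placeholders; real servers require authentication.
import requests

BASE = "https://fhir.example.org/R4"
PATIENT_ID = "12345"

resp = requests.get(
    f"{BASE}/Observation",
    params={"patient": PATIENT_ID, "code": "8867-4", "_sort": "-date"},  # 8867-4 = heart rate (LOINC)
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()

for entry in bundle.get("entry", []):
    obs = entry["resource"]
    value = obs.get("valueQuantity", {})
    print(obs.get("effectiveDateTime"), value.get("value"), value.get("unit"))
```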

OFFERING NEW APPROACHES FOR EYE HEALTH AND DISEASE

Image-heavy disciplines have started to see early benefits from artificial intelligence since computers are particularly adept at analyzing patterns in pixels.  Ophthalmology is one area that could see major changes as AI algorithms become more accurate and more robust.

From glaucoma to diabetic retinopathy, millions of patients experience diseases that can lead to irreversible vision loss every year.  Employing AI for clinical decision support can extend access to eye health services in low-resource areas while giving human providers more accurate tools for catching diseases sooner.

REAL-TIME MONITORING OF BRAIN HEALTH

The brain is still the body’s most mysterious organ, but scientists and clinicians are making swift progress unlocking the secrets of cognitive function and neurological disease.  Artificial intelligence is accelerating discovery by helping providers interpret the incredibly complex data that the brain produces.

From predicting seizures by reading EEG tests to identifying the beginnings of dementia earlier than any human, artificial intelligence is allowing providers to access more detailed, continuous measurements – and helping patients improve their quality of life.

Seizures can happen in patients with other serious illnesses, such as kidney or liver failure, explained Brandon Westover, MD, PhD, executive director of the Clinical Data Animation Center at MGH, but many providers simply don't know about it.

“Right now, we mostly ignore the brain unless there’s a special need for suspicion,” he said.  “In a year’s time, we’ll be catching a lot more seizures and we’ll be doing it with algorithms that can monitor patients continuously and identify more ambiguous patterns of dysfunction that can damage the brain in a similar manner to seizures.”

AUTOMATING MALARIA DETECTION IN DEVELOPING REGIONS

Malaria is a daily threat for approximately half the world’s population.  Nearly half a million people died from the mosquito-borne disease in 2017, according to the World Health Organization, and the majority of the victims are children under the age of five.

Deep learning tools can automate the process of quantifying malaria parasites in blood samples, a challenging task for providers working without pathologist partners.  One such tool achieved 90 percent accuracy and specificity, putting it on par with pathology experts.

This type of software can be run on a smartphone hooked up to a camera on a microscope, dramatically expanding access to expert-level diagnosis and monitoring.
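
A bare-bones sketch of the kind of image classifier such tools build on, here a small convolutional network in Keras over blood-smear patches. The input size, architecture, and load_smear_patches helper are illustrative assumptions, not the published tool; the random fake data exists only so the snippet runs end to end.

```python
# Hypothetical sketch: a small CNN for parasitized-vs-uninfected smear patches.
# load_smear_patches() is a placeholder for real data loading (e.g., a public
# malaria cell-image dataset); here we fabricate data just so the code runs.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def load_smear_patches(n=256, size=64):
    """Placeholder: return (images, labels) of blood-smear crops."""
    rng = np.random.default_rng(0)
    return rng.random((n, size, size, 3), dtype=np.float32), rng.integers(0, 2, n)

X, y = load_smear_patches()

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # probability the patch is parasitized
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, validation_split=0.2, verbose=0)
```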


AUGMENTING DIAGNOSTICS AND DECISION-MAKING

Artificial intelligence has made especially swift progress in diagnostic specialties, including pathology. AI will continue to speed down the road to maturity in this area, predicts Annette Kim, MD, PhD, associate professor of pathology at BWH and Harvard Medical School.

“Pathology is at the center of diagnosis, and diagnosis underpins a huge percentage of all patient care.  We’re integrating a huge amount of data that funnels through us to come to a diagnosis.  As the number of data points increases, it negatively impacts the time we have to synthesize the information,” she said.

AI can help automate routine, high-volume tasks, prioritize and triage cases to ensure patients are getting speedy access to the right care, and make sure that pathologists don’t miss key information hidden in the enormous volumes of clinical and test data they must comb through every day.

“This is where AI can have a huge impact on practice by allowing us to use our limited time in the most meaningful manner,” Kim stressed.

PREDICTING THE RISK OF SUICIDE AND SELF-HARM

Suicide is the tenth leading cause of death in the United States, claiming 45,000 lives in 2016.  Suicide rates are on the rise due to a number of complex socioeconomic and mental health factors, and identifying patients at the highest risk of self-harm is a difficult and imprecise science.

Natural language processing and other AI methodologies may help providers identify high-risk patients earlier and more reliably.  AI can comb through social media posts, electronic health record notes, and other free-text documents to flag words or concepts associated with the risk of harm.

Researchers also hope to develop AI-driven apps to provide support and therapy to individuals likely to harm themselves, especially teenagers who commit suicide at higher rates than other age groups.

Connecting patients with mental health resources before they reach a time of crisis could save thousands of lives every year.

REIMAGINING THE WORLD OF MEDICAL IMAGING

Radiology is already one of AI’s early beneficiaries, but providers are just at the beginning of what they will be able to accomplish in the next few years as machine learning explodes into the imaging realm.

AI is predicted to bring earlier detection, more accurate assessment of complex images, and less expensive testing for patients across a huge number of clinical areas.

But as leaders in the AI revolution, radiologists also have a significant responsibility to develop and deploy best practices in terms of trustworthiness, workflow, and data protection.

“We certainly feel the onus on the radiology community to make sure we do deliver and translate this into improved care,” said Alexandra Golby, MD, a neurosurgeon and radiologist at BWH and Harvard Medical School.

“Can radiology live up to the expectations?  There are certainly some challenges, including trust and understanding of what the algorithms are delivering.  But we desperately need it, and we want to equalize care across the world.”

Radiologists have been among the first to overcome their trepidation about the role of AI in a changing clinical world, and are eagerly embracing the possibilities of this transformative approach to augmenting human skills.

"All of the imaging societies have opened their doors to the AI adventure," Golby said.  "The community is very anxious to learn, co-develop, and work with all of the industry partners to turn this technology into truly valuable tools. We're very optimistic and very excited, and we look forward to learning more about how AI can improve care."

Source:

https://healthitanalytics.com/news/top-12-artificial-intelligence-innovations-disrupting-healthcare-by-2020

 

Read Full Post »


Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence: Realizing Precision Medicine One Patient at a Time

Reporter: Stephen J Williams, PhD @StephenJWillia2

The impact of Machine Learning (ML) and Artificial Intelligence (AI) during the last decade has been tremendous. With the rise of infobesity, ML/AI is evolving into an essential capability to help mine the sheer volume of patient genomics, omics, sensor/wearables and real-world data, and unravel the knot of healthcare's most complex questions.

Despite the advancements in technology, organizations struggle to prioritize and implement ML/AI to achieve the anticipated value, whilst managing the disruption that comes with it. In this session, panelists will discuss ML/AI implementation and adoption strategies that work. Panelists will draw upon their experiences as they share their success stories, discuss how to implement digital diagnostics, track disease progression and treatment, and increase commercial value and ROI compared against traditional approaches.

  • Most trials are still at the stage of training AI/ML algorithms on training data sets. The best results so far have been about 80% accuracy on training sets, which needs to improve.
  • All data sets can be biased. For example, a professor measuring heart rate with an IR detector on a wearable found that different skin types generate different signals, so training sets may carry population biases (data collected from one group); see the subgroup-evaluation sketch after this list.
  • Clinical-grade equipment often hasn't been trained on data sets as large as those behind commercial wearables; commercial-grade devices are tested on larger study populations. This can affect the AI/ML algorithms.
  • Regulations: which regulatory body is responsible is still up for debate. Whether the FDA or the FTC oversees AI/ML in healthcare and health IT is not yet decided, and we don't yet have guidance for these new technologies.
  • Some rules: never use your own encryption; always use industry standards, especially when collecting personal data from wearables. One hospital corrupted its system because its computers were not up to date and could not protect against a virus transmitted by a wearable.
  • Pharma companies understand they need to increase the value of their products, so they are very interested in how AI/ML can be used.
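
A minimal sketch of checking for exactly this kind of population bias: report accuracy per subgroup rather than only overall. The predictions and demographic labels below are synthetic stand-ins that simulate a model performing worse on an under-represented group.

```python
# Hypothetical sketch: overall accuracy can hide subgroup bias, so report both.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
group = rng.choice(["light_skin", "dark_skin"], 1000, p=[0.8, 0.2])

# Simulate a model that is systematically worse on the under-represented group.
correct = np.where(group == "light_skin",
                   rng.random(1000) < 0.90,
                   rng.random(1000) < 0.70)
y_pred = np.where(correct, y_true, 1 - y_true)

print("overall accuracy:", (y_pred == y_true).mean())
for g in np.unique(group):
    mask = group == g
    print(f"{g}: accuracy {(y_pred[mask] == y_true[mask]).mean():.2f} (n={mask.sum()})")
```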

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

Read Full Post »


Real Time Coverage @BIOConvention #BIO2019: Precision Medicine Beyond Oncology June 5 Philadelphia PA

Reporter: Stephen J Williams PhD @StephenJWillia2

Precision Medicine has helped transform cancer care from one-size-fits-all chemotherapy to a new era, where patients’ tumors can be analyzed and therapy selected based on their genetic makeup. Until now, however, precision medicine’s impact has been far less in other therapeutic areas, many of which are ripe for transformation. Efforts are underway to bring the successes of precision medicine to neurology, immunology, ophthalmology, and other areas. This move raises key questions of how the lessons learned in oncology can be used to advance precision medicine in other fields, what types of data and tools will be important to personalizing treatment in these areas, and what sorts of partnerships and payer initiatives will be needed to support these approaches and their ultimate commercialization and use. The panel will also provide an in depth look at precision medicine approaches aimed at better understanding and improving patient care in highly complex disease areas like neurology.
Speaker panel: the big issue now with precision medicine is that there is so much data, and it is hard to put experimental design and controls around randomly collected data.
  • The frontier is how to CURATE randomly collected data to make some sense of it.
  • One speaker was at a cancer meeting where the oncologists had no idea what to make of the genomic reports they were given. The result is a lack of action, or worse, a misdiagnosis.
  • So, for example, with artificial intelligence algorithms analyzing image data you can see things you can't see with the naked eye, but if the data quality is not good the algorithms are useless; if the data are not curated properly, the data are wasted.
Data needs to be organized and curated.
If relying on AI for big data analysis, the big question still is: what are the rates of false negatives and false positives? We have to make sure, so there is no misdiagnosis.
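
As a concrete reminder of what those rates mean, here is a small sketch computing false-positive and false-negative rates from a confusion matrix; the label and prediction arrays are synthetic examples.

```python
# Hypothetical sketch: false-positive and false-negative rates from predictions.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 1])  # synthetic model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("false positive rate:", fp / (fp + tn))  # healthy patients flagged as diseased
print("false negative rate:", fn / (fn + tp))  # diseased patients missed
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```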

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

Read Full Post »


A Nonlinear Methodology to Explain Complexity of the Genome and Bioinformatic Information

Reporter: Stephen J. Williams, Ph.D.

Multifractal bioinformatics: A proposal to the nonlinear interpretation of genome

The following is an open access article by Pedro Moreno on a methodology to analyze genetic information across species, and in particular the evolutionary trends of complex genomes, by a nonlinear analytic approach utilizing fractal geometry, coined "nonlinear bioinformatics".  This fractal approach stems from the complex nature of higher eukaryotic genomes, including mosaicism and multiple interspersed genomic elements such as intronic regions, noncoding regions, and mobile elements like transposable elements.  Although seemingly random, these elements have a repetitive nature. Such complexity of DNA regulation, structure and genomic variation is arguably best understood by developing algorithms based on fractal analysis, which can model the regionalized and repetitive variability and structure within complex genomes by elucidating the individual components that contribute to an overall complex structure, rather than using a "linear" or "reductionist" approach that looks only at individual coding regions and does not take into consideration the aforementioned factors leading to genetic complexity and diversity.

Indeed, many other attempts to describe the complexities of DNA as a fractal geometric pattern have been made.  In a paper by Carlo Cattani, "Fractals and Hidden Symmetries in DNA", Cattani uses fractal analysis to construct a simple geometric pattern of the influenza A virus by modeling the primary sequence of this viral genome, namely the bases A, G, C, and T. The main conclusions were as follows:

fractal shapes and symmetries in DNA sequences and DNA walks have been shown and compared with random and deterministic complex series. DNA sequences are structured in such a way that there exists some fractal behavior which can be observed both on the correlation matrix and on the DNA walks. Wavelet analysis confirms by a symmetrical clustering of wavelet coefficients the existence of scale symmetries.

This suggested that, at least, the viral influenza genome structure can be analyzed into its basic components by fractal geometry.

This approach has also been used to model the complex nature of cancer, as discussed in a 2011 Seminars in Oncology paper:
Abstract: Cancer is a highly complex disease due to the disruption of tissue architecture. Thus, tissues, and not individual cells, are the proper level of observation for the study of carcinogenesis. This paradigm shift from a reductionist approach to a systems biology approach is long overdue. Indeed, cell phenotypes are emergent modes arising through collective non-linear interactions among different cellular and microenvironmental components, generally described by "phase space diagrams", where stable states (attractors) are embedded into a landscape model. Within this framework, cell states and cell transitions are generally conceived as mainly specified by gene-regulatory networks. However, the system's dynamics is not reducible to the integrated functioning of the genome-proteome network alone; the epithelia-stroma interacting system must be taken into consideration in order to give a more comprehensive picture. Given that cell shape represents the spatial geometric configuration acquired as a result of the integrated set of cellular and environmental cues, we posit that fractal-shape parameters represent "omics" descriptors of the epithelium-stroma system. Within this framework, function appears to follow form, and not the other way around.

As the authors conclude:

"Transitions from one phenotype to another are reminiscent of phase transitions observed in physical systems. The description of such transitions could be obtained by a set of morphological, quantitative parameters, like fractal measures. These parameters provide reliable information about system complexity."

Gene expression also displays a fractal nature. In a Frontiers in Physiology paper by Mahboobeh Ghorbani, Edmond A. Jonckheere and Paul Bogdan, "Gene Expression Is Not Random: Scaling, Long-Range Cross-Dependence, and Fractal Characteristics of Gene Regulatory Networks",

the authors describe how gene expression time series display fractal and long-range dependence characteristics.

Abstract: Gene expression is a vital process through which cells react to the environment and express functional behavior. Understanding the dynamics of gene expression could prove crucial in unraveling the physical complexities involved in this process. Specifically, understanding the coherent complex structure of transcriptional dynamics is the goal of numerous computational studies aiming to study and finally control cellular processes. Here, we report the scaling properties of gene expression time series in Escherichia coli and Saccharomyces cerevisiae. Unlike previous studies, which report the fractal and long-range dependency of DNA structure, we investigate the individual gene expression dynamics as well as the cross-dependency between them in the context of gene regulatory network. Our results demonstrate that the gene expression time series display fractal and long-range dependence characteristics. In addition, the dynamics between genes and linked transcription factors in gene regulatory networks are also fractal and long-range cross-correlated. The cross-correlation exponents in gene regulatory networks are not unique. The distribution of the cross-correlation exponents of gene regulatory networks for several types of cells can be interpreted as a measure of the complexity of their functional behavior.

 

Given that a multitude of complex biomolecular networks and biomolecules can be described by fractal patterns, the development of bioinformatic algorithms would enhance our understanding of the interdependence and cross-functionality of these multiple biological networks, particularly in disease and drug resistance.  The article below by Pedro Moreno describes the development of such bioinformatic algorithms.

Pedro A. Moreno
Escuela de Ingeniería de Sistemas y Computación, Facultad de Ingeniería, Universidad del Valle, Cali, Colombia
E-mail: pedro.moreno@correounivalle.edu.co

Thematic area: System engineering
Received: September 19, 2012
Accepted: December 16, 2013

Abstract

The first draft of the human genome (HG) sequence was published in 2001 by two competing consortia. Since then, several structural and functional characteristics of the HG organization have been revealed. Today, more than 2,000 HGs have been sequenced, and these findings are impacting strongly on academia and public health. Despite all this, a major bottleneck, called genome interpretation, persists: the lack of a theory that explains the complex puzzles of coding and non-coding features that compose the HG as a whole. Ten years after the HG was sequenced, two recent studies, discussed under the multifractal formalism, allow proposing a nonlinear theory that helps interpret the structural and functional variation of the genetic information of genomes. The present review article discusses this new approach, called "multifractal bioinformatics".

Keywords: Omics sciences, bioinformatics, human genome, multifractal analysis.


1. Introduction

Omic Sciences and Bioinformatics

In order to study genomes, their life properties and the pathological consequences of their impairment, the Human Genome Project (HGP) was created in 1990. Since then, about 500 Gbp (EMBL) represented in thousands of prokaryotic genomes and tens of different eukaryotic genomes have been sequenced (NCBI, 1000 Genomes, ENCODE). Today, genomics is defined as the set of sciences and technologies dedicated to the comprehensive study of the structure, function and origin of genomes. Several types of genomics have arisen as a result of the expansion and application of genomics to the study of the Central Dogma of Molecular Biology (CDMB), Figure 1 (above). The catalog of different types of genomics uses the Latin suffix "-omic", meaning "set of", to name the new massive approaches of the omics sciences (Moreno et al, 2009). Given the large amount of genomic information available in the databases and the urgency of its actual interpretation, the balance has begun to lean heavily toward the requirements of bioinformatics infrastructure in research laboratories, Figure 1 (below).

Bioinformatics, or computational biology, is defined as the application of computer and information technology to the analysis of biological data (Mount, 2004). It is an interdisciplinary science that requires the use of computing, applied mathematics, statistics, computer science, artificial intelligence, biophysics, biochemistry, genetics, and molecular biology. Bioinformatics was born from the need to understand the sequences of nucleotide or amino acid symbols that make up DNA and proteins, respectively. These analyses are made possible by the development of powerful algorithms that predict and reveal an infinity of structural and functional features in genomic sequences, such as gene location, discovery of homologies between macromolecules in databases (BLAST), algorithms for phylogenetic analysis, for regulatory analysis, or for the prediction of protein folding, among others. This great development has created a multiplicity of approaches giving rise to new types of bioinformatics, such as the Multifractal Bioinformatics (MFB) proposed here.

1.1 Multifractal Bioinformatics and Theoretical Background

MFB is a proposal to analyze the information content of genomes and their life properties in a non-linear way. It is part of a specialized sub-discipline called "nonlinear bioinformatics", which uses a number of related techniques for the study of nonlinearity (fractal geometry, Hurst exponents, power laws, wavelets, among others) applied to biological problems (https://pharmaceuticalintelligence.com/tag/fractal-geometry/). Its application requires detailed knowledge of the structure of the genome to be analyzed and an appropriate command of multifractal analysis.

1.2 From the Worm Genome toward Human Genome

To explore a complex genome such as the HG, it is useful first to implement multifractal analysis (MFA) in a simpler genome in order to show its practical utility. For example, the genome of the small nematode Caenorhabditis elegans is an excellent model from which to extrapolate many lessons to complex organisms. Thus, if the MFA explains some of the structural properties of that genome, it is expected that this same analysis will reveal similar properties in the HG.

The C. elegans nuclear genome is composed of about 100 Mbp, with six chromosomes distributed into five autosomes and one sex chromosome. The molecular structure of the genome is particularly homogeneous along the chromosome sequences, due to the presence of several regular features, including large contents of genes and introns of similar sizes. The C. elegans genome also has a regional organization of the chromosomes, mainly because the majority of the repeated sequences are located in the chromosome arms, Figure 2 (left) (C. elegans Sequencing Consortium, 1998). Given these regular and irregular features, the MFA could be an appropriate approach to analyze such distributions.

Meanwhile, the HG sequencing revealed a surprising mosaicism in coding (genes) and noncoding (repetitive DNA) sequences, Figure 2 (right) (Venter et al., 2001). This structure of 6 Gbp is divided into 23 pairs of chromosomes (diploid cells), and these highly regionalized sequences introduce complex patterns of regularity and irregularity for understanding the gene structure, the composition of repetitive DNA sequences, and their role in the study and application of the life sciences. The coding regions of the genome are estimated at ~25,000 genes, which constitute 1.4% of the HG. These genes are embedded in a giant sea of various types of non-coding sequences, which compose 98.6% of the HG (popularly misnamed "junk DNA"). The non-coding regions are characterized by many types of repeated DNA sequences, of which 10.6% consists of Alu sequences, a type of SINE (short interspersed repeated element) preferentially located towards the genes. LINEs, MIR, MER, LTRs, DNA transposons and introns are other types of non-coding sequences, which together form about 86% of the genome. Some of these sequences overlap with one another, as with CpG islands, which complicates the analysis of the genomic landscape. This standard genomic landscape was recently clarified: the latest studies show that 80.4% of the HG is functional, owing to the discovery of more than five million "switches" that operate and regulate gene activity, re-evaluating the concept of "junk DNA" (The ENCODE Project Consortium, 2012).

Given that all these genomic variations, both in worm and human, produce regionalized genomic landscapes, it is proposed that fractal geometry (FG) would allow measuring how the genetic information content is fragmented. In this paper, the methodology and the nonlinear descriptive models for each of these genomes are reviewed.

1.3 The MFA and its Application to Genome Studies

Most problems in physics are implicitly non-linear in nature, generating phenomena such as those studied by chaos theory, a science that deals with certain (non-linear) dynamical systems that are very sensitive to initial conditions yet nonetheless deterministic in rigor; that is, their behavior can be completely determined by knowing the initial conditions (Peitgen et al, 1992). In turn, the FG is an appropriate tool for studying chaotic dynamical systems (CDS). In other words, the FG and chaos are closely related because the space region toward which a chaotic orbit tends asymptotically has a fractal structure (strange attractors). Therefore, the FG allows studying the framework on which CDS are defined (Moon, 1992). And this is how the genome structure and function are expected to be organized.

The MFA is an extension of the FG and is related to (Shannon) information theory, disciplines that have been very useful for studying the information content of a sequence of symbols. Mandelbrot established the FG in the 1980s as a geometry capable of measuring the irregularity of nature by calculating the fractal dimension (D), an exponent derived from a power law (Mandelbrot, 1982). The value of D gives a measure of the level of fragmentation, or the information content, of a complex phenomenon, because D measures the degree of scaling of the system's fragmented self-similarity. Thus, the FG looks for self-similar properties in structures and processes at different scales of resolution, and these self-similarities are organized following scaling or power laws.

Sometimes one exponent is not sufficient to characterize a complex phenomenon, so more exponents are required. The multifractal formalism allows this; it applies when many subgroups of fractals with different scaling properties coexist simultaneously, described by a large number of exponents or fractal dimensions. As a result, when a multifractal singularity spectrum is generated, the scaling behavior of the frequency of symbols of a sequence can be quantified (Vélez et al, 2010).

The MFA has been implemented to study the spatial heterogeneity of theoretical and experimental fractal patterns in different disciplines. In post-genomic times, the MFA has been used to study multiple biological problems (Vélez et al, 2010). Nonetheless, very little attention has been given to the use of MFA to characterize the structural genetic information content of genomes obtained from the images of the Chaos Game Representation (CGR). The first studies at this level were made recently in the analysis of the C. elegans genome (Vélez et al, 2010) and human genomes (Moreno et al, 2011). The MFA methodology applied for the study of these genomes is developed below.

2. Methodology

The Multifractal Formalism from the CGR

2.1 Data Acquisition and Molecular Parameters

Databases for the C. elegans genome and the 36.2 Hs_refseq HG version were downloaded from the NCBI FTP server. Then, several strategies were designed to fragment the genomic DNA sequences into different length ranges. For example, the C. elegans genome was divided into 18 fragments, Figure 2 (left), and the human genome into 9,379 fragments. According to their annotation systems, the contents of molecular parameters of coding sequences (genes, exons and introns), noncoding sequences (repetitive DNA, Alu, LINEs, MIR, MER, LTRs, promoters, etc.) and coding/non-coding DNA (TTAGGC, AAAAT, AAATT, TTTTC, TTTTT, CpG islands, etc.) were counted for each sequence.

2.2 Construction of the CGR

2.3 Fractal Measurement by the Box-Counting Method

Subsequently, the CGR, a recursive algorithm (Jeffrey, 1990; Restrepo et al, 2009), is applied to each selected DNA sequence, Figure 3 (above, left), from which an image is obtained and then quantified by the box-counting algorithm. For example, in Figure 3 (above, left) a CGR image for a human DNA sequence of 80,000 bp in length is shown. Here, dark regions represent sub-quadrants with a high number of points (or nucleotides), and clear regions sections with a low number of points. The calculation of the D for the Koch curve by the box-counting method is illustrated by a progression of changes in the grid size, and its Cartesian graph, Table 1.
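
A minimal sketch of the CGR construction for a DNA sequence, following Jeffrey's recursive scheme: each base pulls the current point halfway toward its assigned corner of the unit square, and point density is accumulated on a grid. The corner assignment and resolution are conventional choices, and the random example sequence is synthetic.

```python
# Minimal Chaos Game Representation (CGR): each nucleotide moves the current
# point halfway toward its corner; point density is accumulated on a 2^k grid.
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_image(seq, k=6):
    """Return a 2^k x 2^k histogram of CGR points for a DNA sequence."""
    n = 2 ** k
    img = np.zeros((n, n))
    x, y = 0.5, 0.5                        # start at the center of the unit square
    for base in seq.upper():
        if base not in CORNERS:
            continue                       # skip ambiguous bases such as N
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2  # move halfway toward the base's corner
        img[min(int(y * n), n - 1), min(int(x * n), n - 1)] += 1
    return img

rng = np.random.default_rng(0)
seq = "".join(rng.choice(list("ACGT"), 100_000))   # synthetic example sequence
img = cgr_image(seq, k=6)
print(img.shape, img.sum())                # (64, 64) and the number of bases plotted
```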

The CGR image for a given DNA sequence is quantified by a standard fractal analysis. A fractal is a fragmented geometric figure whose parts are an approximate copy of the figure at full scale; that is, the figure has self-similarity. The D is basically a scaling rule that the figure obeys. Generally, a power law is given by the following expression:

$$N(E) = E^{D} \qquad (1)$$

where N(E) is the number of parts required for covering the figure when a scaling factor E is applied. The power law permits calculating the fractal dimension as:

$$D = \frac{\ln N(E)}{\ln E} \qquad (2)$$

The box-counting algorithm obtains D by covering the figure with disjoint boxes of size ɛ = 1/E and counting the number of boxes required. Figure 4 (above, left) shows the multifractal measure at momentum q = 1.
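
A short sketch of the box-counting estimate implied by equations (1) and (2): count occupied boxes at several scales and fit the slope of ln N(E) against ln E. The grid sizes are illustrative, and the snippet reuses the img array from the CGR sketch above.

```python
# Hypothetical sketch: estimate the box-counting dimension D of a CGR image by
# counting occupied boxes at several scales and fitting ln N(E) vs ln E.
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    binary = img > 0
    n = binary.shape[0]
    counts, scale = [], []
    for s in sizes:
        # Partition the image into s x s pixel boxes; count boxes containing points.
        blocks = binary.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
        scale.append(n / s)               # E = 1/eps, the scaling factor
    slope, _ = np.polyfit(np.log(scale), np.log(counts), 1)
    return slope

# For a dense random-sequence CGR image, D approaches 2 (it fills the plane).
print(box_counting_dimension(img))        # uses `img` from the CGR sketch above
```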

2.4 Multifractal Measurement

When generalizing the box-counting algorithm to the multifractal case according to the method of moments q, we obtain equation (3) (Gutiérrez et al, 1998; Yu et al, 2001):

$$Z_q(\varepsilon) = \sum_i \left( \frac{M_i}{M} \right)^{q} \qquad (3)$$

where M_i is the number of points falling in the i-th grid box, M is the total number of points, and ɛ is the box size. Thus, the MFA is used when multiple scaling rules are applied. Figure 4 (above, right) shows the calculation of the multifractal measures at different momenta q (the partition function). Here, linear regressions must have a coefficient of determination equal or close to 1. From each linear regression a Dq is obtained, and together these generate a spectrum of generalized fractal dimensions for all integer q, Figure 4 (below, left). So, the multifractal spectrum is obtained as the limit:

$$D_q = \frac{1}{q-1} \lim_{\varepsilon \to 0} \frac{\ln Z_q(\varepsilon)}{\ln \varepsilon} \qquad (4)$$

The variation of the integer q allows emphasizing different regions and discriminating their fractal behavior: positive q values emphasize the dense regions, where a high Dq is synonymous with richness of structure and properties, while negative q values emphasize the scarce regions, where a high Dq likewise indicates a lot of structure and properties. In real-world applications, the limit Dq is readily approximated from the data using a linear fit: the transformation of equation (3) yields

$$\ln Z_q(\varepsilon) = (q-1)\, D_q \ln \varepsilon \qquad (5)$$

which shows that ln Zq(ɛ) for fixed q is a linear function of ln(ɛ); Dq can therefore be evaluated as the slope of the fitted relationship between ln Zq(ɛ) and (q-1) ln(ɛ). The methodologies and approaches for the box-counting method and the MFA are detailed in Moreno et al, 2000; Yu et al, 2001; Moreno, 2005. For a rigorous mathematical development of MFA from images, consult the Multifractal system article on Wikipedia.
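
A compact sketch of this moment-based estimate: for each q, regress the log partition sum on (q-1)·ln ɛ and take the slope as Dq, from which the degree of multifractality ΔDq (introduced in the next section) falls out as the spread of the spectrum. It reuses the img array from the CGR sketch above; q = 1 is skipped because equation (5) degenerates there and the information dimension requires a separate limit.

```python
# Hypothetical sketch: generalized dimensions D_q from a CGR image by the
# method of moments; q = 1 is excluded (it requires a separate limit).
import numpy as np

def dq_spectrum(img, qs, sizes=(2, 4, 8, 16)):
    n = img.shape[0]
    total = img.sum()
    dqs = []
    for q in qs:
        xs, ys = [], []
        for s in sizes:
            boxes = img.reshape(n // s, s, n // s, s).sum(axis=(1, 3))
            p = boxes[boxes > 0] / total          # box probabilities M_i / M
            xs.append((q - 1) * np.log(s / n))    # (q - 1) * ln(eps)
            ys.append(np.log((p ** q).sum()))     # ln of the partition sum Z_q
        slope, _ = np.polyfit(xs, ys, 1)          # slope = D_q, per equation (5)
        dqs.append(slope)
    return np.array(dqs)

qs = [q for q in range(-20, 21) if q != 1]
dq = dq_spectrum(img, qs)                          # `img` from the CGR sketch
print("degree of multifractality:", dq.max() - dq.min())
```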

2.5 Measurement of Information Content

Subsequently, from the spectrum of generalized dimensions Dq, the degree of multifractality ΔDq (MD) is calculated as the difference between the maximum and minimum values of Dq: ΔDq = Dq(max) − Dq(min) (Ivanov et al, 1999). When ΔDq is high, the multifractal spectrum is rich in information and highly aperiodic; when ΔDq is small, the resulting dimension spectrum is poor in information and highly periodic. It is expected, then, that aperiodicity in the genome would be related to highly polymorphic aperiodic genomic structures, and periodic regions to highly repetitive, not very polymorphic genomic structures. The correlation exponent τ(q) = (q − 1)Dq, Figure 4 (below, right), can also be obtained from the multifractal dimension Dq. The generalized dimension also provides significant specific information: D(q = 0) is equal to the capacity dimension, which in this analysis is the box-counting dimension; D(q = 1) is equal to the information dimension; and D(q = 2) to the correlation dimension. Based on these multifractal parameters, many of the structural genomic properties can be quantified, related, and interpreted.

2.6 Multifractal Parameters and Statistical and Discrimination Analyses

Once the multifractal parameters are calculated (Dq for q = −20 to 20, ΔDq, τ(q), etc.), correlations with the molecular parameters are sought. These relations are established by plotting the number of genome molecular parameters versus the MD, by discriminant analysis with Cartesian graphs in 2-D, Figure 5 (below, left), and in 3-D, combining multifractal and molecular parameters. Finally, simple linear regression analysis, multivariate analysis, and analyses by ranges and clusterings are performed to establish statistical significance.

3 Results and Discussion

3.1 Non-linear Descriptive Model for the C. elegans Genome

When analyzing the C. elegans genome with the multifractal formalism, the analysis confirmed what the symmetry and asymmetry of the genome's nucleotide composition had suggested. The multifractal scaling of the C. elegans genome is of interest because it indicates that the molecular structure of the chromosome may be organized as a system operating far from equilibrium following nonlinear laws (Ivanov et al, 1999; Burgos and Moreno-Tovar, 1996). This can be discussed from two points of view:

1) When comparing C. elegans chromosomes with each other, the X chromosome showed the lowest multifractality, Figure 5 (above). This means that the X chromosome operates close to equilibrium, which results in increased genetic instability. This instability of the X could selectively contribute to the molecular mechanism that determines sex (XX or X0) during meiosis; the X chromosome would thus operate closer to equilibrium in order to maintain this particular sexual dimorphism.

2) When comparing different chromosome regions of the C. elegans genome, changes in multifractality were found in relation to the regional organization (at the center and arms) exhibited by the chromosomes, Figure 5 (below, left). These behaviors are associated with changes in the content of repetitive DNA, Figure 5 (below, right). The results indicated that the chromosome arms are even more complex than previously anticipated. Thus, TTAGGC telomere sequences would be operating far from equilibrium to protect the genetic information encoded by the entire chromosome.

All these biological arguments may explain why the C. elegans genome is organized in a nonlinear way. These findings provide insight into quantifying and understanding the organization of the non-linear structure of the C. elegans genome, and the approach may be extended to other genomes, including the HG (Vélez et al, 2010).

3.2 Nonlinear Descriptive Model for the Human Genome

Once the multifractal approach was validated in the C. elegans genome, the HG was analyzed exhaustively. This allowed us to propose a nonlinear model for the HG structure, discussed here from three points of view.

1) It was found that the high multifractality of the HG depends strongly on the content of Alu sequences and, to a lesser extent, on the content of CpG islands. These contents are located primarily in highly aperiodic regions, taking the chromosome far from equilibrium and giving it greater genetic stability and protection by attracting mutations, Figure 6 (A-C). Thus, hundreds of regions in the HG may have high genetic stability, and the most important genetic information of the HG, the genes, would be safeguarded from environmental fluctuations. Other repeated elements (LINEs, MIRs, MERs, LTRs) showed no significant relationship, Figure 6 (D). Consequently, the human multifractal map developed in Moreno et al, 2011 constitutes a good tool to identify regions rich in genetic information and genomic stability.

2) The multifractal context seems to be a significant requirement for the structural and functional organization of thousands of genes and gene families. Thus, a highly multifractal (aperiodic) context appears to be a "genomic attractor" for many genes (KOGs, KEGGs), Figure 6 (E), and some gene families, Figure 6 (F), involved in genetic and deterministic processes, maintaining a deterministic regulatory control in the genome, although most HG sequences may be subject to a complex epigenetic control.

3) The classification of human chromosomes and the analysis of chromosome regions may have medical implications (Moreno et al, 2002; Moreno et al, 2009). The structure of low nonlinearity exhibited by some chromosomes (or chromosome regions) implies an environmental predisposition: these are potential targets for structural or numerical chromosomal alterations, Figure 6 (G). Additionally, sex chromosomes would have low multifractality to maintain sexual dimorphism and, probably, X chromosome inactivation.

All these fractal and biological arguments could explain why Alu elements are shaping the HG in a nonlinear manner (Moreno et al, 2011). Finally, the multifractal modeling of the HG serves as a theoretical framework for examining new discoveries made by the ENCODE project and new approaches to human epigenomes. That is, the non-linear organization of the HG might help explain why most of the HG is expected to be functional.

4. Conclusions

All these results show that the multifractal formalism is appropriate for quantifying and evaluating the genetic information content of genomes and for relating it to the known molecular anatomy of the genome and some of its expected properties. Thus, the MFB allows a logical interpretation of the structural nature and variation of the genome.

The MFB helps explain why a number of chromosomal diseases are likely to occur in the genome, thus opening a new perspective toward personalized medicine for studying and interpreting the HG and its diseases.

The entire genome contains nonlinear information that organizes it and supposedly makes it function; from this it is concluded that virtually 100% of the HG is functional. Bioinformatics in general is enriched with a novel approach (MFB) that makes it possible to quantify the genetic information content of any DNA sequence, with practical applications in different disciplines of biology, medicine, and agriculture. This breakthrough in computational genomic analysis and disease research contributes to defining Biology as a "hard" science.

MFB opens a door to a research program toward the establishment of an integrative discipline that contributes to "breaking" the code of human life. (http://pharmaceuticalintelligence.com/page/3/).

5. Acknowledgements

Thanks to the directives of the EISC, the Universidad del Valle, and the School of Engineering for offering an academic, scientific, and administrative space for conducting this research. Likewise, thanks to the co-authors (professors and students) who participated in the implementation of excerpts from some of the works cited here. Finally, thanks to Colciencias for the biotechnology project grant # 1103-12-16765.


6. References

Blanco, S., & Moreno, P.A. (2007). Representación del juego del caos para el análisis de secuencias de ADN y proteínas mediante el análisis multifractal (método "box-counting"). In The Second International Seminar on Genomics and Proteomics, Bioinformatics and Systems Biology (pp. 17-25). Popayán, Colombia.

Burgos, J.D., & Moreno-Tovar, P. (1996). Zipf scaling behavior in the immune system. BioSystems, 39, 227-232.

C. elegans Sequencing Consortium. (1998). Genome sequence of the nematode C. elegans: a platform for investigating biology. Science, 282, 2012-2018.

Gutiérrez, J.M., Iglesias, A., Rodríguez, M.A., Burgos, J.D., & Moreno, P.A. (1998). Analyzing the multifractal structure of DNA nucleotide sequences. In M. Barbi & S. Chillemi (Eds.), Chaos and Noise in Biology and Medicine (chap. 4). Hackensack (NJ): World Scientific Publishing Co.

Ivanov, P.Ch., Nunes, L.A., Goldberger, A.L., Havlin, S., Rosenblum, M.G., Struzik, Z.R., & Stanley, H.E. (1999). Multifractality in human heartbeat dynamics. Nature, 399, 461-465.

Jeffrey, H.J. (1990). Chaos game representation of gene structure. Nucleic Acids Research, 18, 2163-2175.

Mandelbrot, B. (1982). La geometría fractal de la naturaleza. Barcelona, España: Tusquets Editores.

Moon, F.C. (1992). Chaotic and fractal dynamics. New York: John Wiley.

Moreno, P.A. (2005). Large scale and small scale bioinformatics studies on the Caenorhabditis elegans genome. Doctoral thesis, Department of Biology and Biochemistry, University of Houston, Houston, USA.

Moreno, P.A., Burgos, J.D., Vélez, P.E., Gutiérrez, J.M., et al. (2000). Multifractal analysis of complete genomes. In Proceedings of the 12th International Genome Sequencing and Analysis Conference (pp. 80-81). Miami Beach (FL).

Moreno, P.A., Rodríguez, J.G., Vélez, P.E., Cubillos, J.R., & Del Portillo, P. (2002). La genómica aplicada en salud humana. Colombia Ciencia y Tecnología, Colciencias, 20, 14-21.

Moreno, P.A., Vélez, P.E., & Burgos, J.D. (2009). Biología molecular, genómica y post-genómica. Pioneros, principios y tecnologías. Popayán, Colombia: Editorial Universidad del Cauca.

Moreno, P.A., Vélez, P.E., Martínez, E., Garreta, L., Díaz, D., Amador, S., Gutiérrez, J.M., et al. (2011). The human genome: a multifractal analysis. BMC Genomics, 12, 506.

Mount, D.W. (2004). Bioinformatics: Sequence and genome analysis. New York: Cold Spring Harbor Laboratory Press.

Peitgen, H.O., Jürgens, H., & Saupe, D. (1992). Chaos and Fractals: New Frontiers of Science. New York: Springer-Verlag.

Restrepo, S., Pinzón, A., Rodríguez, L.M., Sierra, R., Grajales, A., Bernal, A., Barreto, E., et al. (2009). Computational biology in Colombia. PLoS Computational Biology, 5(10), e1000535.

The ENCODE Project Consortium. (2012). An integrated encyclopedia of DNA elements in the human genome. Nature, 489, 57-74.

Vélez, P.E., Garreta, L.E., Martínez, E., Díaz, N., Amador, S., Gutiérrez, J.M., Tischer, I., & Moreno, P.A. (2010). The Caenorhabditis elegans genome: a multifractal analysis. Genetics and Molecular Research, 9, 949-965.

Venter, J.C., Adams, M.D., Myers, E.W., Li, P.W., et al. (2001). The sequence of the human genome. Science, 291, 1304-1351.

Yu, Z.G., Anh, V., & Lau, K.S. (2001). Measure representation and multifractal analysis of complete genomes. Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 64, 031903.

 

Other articles on Bioinformatics on this Open Access Journal include:

Bioinformatics Tool Review: Genome Variant Analysis Tools

2017 Agenda – BioInformatics: Track 6: BioIT World Conference & Expo ’17, May 23-25, 2017, Seaport World Trade Center, Boston, MA

Better bioinformatics

Broad Institute, Google Genomics combine bioinformatics and computing expertise

Autophagy-Modulating Proteins and Small Molecules Candidate Targets for Cancer Therapy: Commentary of Bioinformatics Approaches

CRACKING THE CODE OF HUMAN LIFE: The Birth of BioInformatics & Computational Genomics



Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care?

Reporter: Stephen J. Williams, Ph.D.

Boston Healthcare recently sponsored a webinar entitled "Role of Informatics in Precision Medicine: Implications for Innovators". The webinar focused on the different informatics needs along the oncology care value chain, from drug discovery through clinicians, C-suite executives, and payers. The presentation, by Joseph Ferrara and Mark Girardi, discussed the specific informatics needs and deficiencies experienced by all players in oncology care and how innovators in this space could create value. The final part of the webinar discussed artificial intelligence and its role in cancer informatics.

Below is the mp4 video and audio for this webinar.  Notes on each of the slides with a few representative slides are also given below:

Please click below for the mp4 of the webinar:


  • worldwide oncology-related care to increase by 40% in 2020
  • big movement to participatory care: moving decision-making to the patient. Need for information
  • cost components focused on clinical action
  • using informatics before the clinical stage might add value to the cost chain

Key unmet needs from perspectives of different players in oncology care where informatics may help in decision making

1. Needs of Clinicians

– informatics needs for clinical enrollment

– informatics needs for obtaining drug access/newer therapies

2. Needs of C-suite/health system executives

– informatics needs to help focus on quality of care

– informatics needs to determine health outcomes/metrics

3. Needs of Payers

– informatics needs to determine quality metrics and manage costs

– informatics needs to form guidelines

– informatics needs to determine if biomarkers are used consistently and properly

– population-level data analytics

What kinds of value innovation do tech entrepreneurs need to create in this space? Two areas/problems need to be solved:

  • innovations in data depth and breadth
  • need to aggregate information to inform intervention

Different players in value chains have different data needs

Data Depth: Cumulative Understanding of Disease

Data Breadth: Cumulative Number of Oncology Transactions

  • technology innovators rely on LEGACY businesses (those that already have technology), and these LEGACY businesses have either data breadth or data depth, BUT NOT BOTH (IS THIS WHERE THE GREATEST VALUE CAN BE INNOVATED?)
  • NEED to provide ACTIONABLE as well as PHENOTYPIC/GENOTYPIC DATA
  • data depth is more important in the clinical setting, as it drives solutions and cost-effective interventions. For example, Foundation Medicine, which supplies genotypic/phenotypic data for patient samples, provides high data depth
  • technologies are moving to data support
  • evidence will need to be tied to umbrella value propositions
  • informatics solutions will have to prove outcome benefit

How will Machine Learning be involved in the healthcare value chain?

  • increased emphasis on real-time datasets – CONSTANT UPDATES NEED TO OCCUR. THIS IS NOT HAPPENING, BUT IS VALUED BY MANY PLAYERS IN THIS SPACE
  • Interoperability of DATABASES is Important! Many players in this space don’t understand the complexities of integrating these datasets

Other articles on healthcare informatics, value-based oncology, and healthcare IT on this Open Access Journal include:

Centers for Medicare & Medicaid Services announced that the federal healthcare program will cover the costs of cancer gene tests that have been approved by the Food and Drug Administration

Broad Institute launches Merkin Institute for Transformative Technologies in Healthcare

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Paradoxical Findings in HealthCare Delivery and Outcomes: Economics in MEDICINE – Original Research by Anupam “Bapu” Jena, the Ruth L. Newhouse Associate Professor of Health Care Policy at HMS

Google & Digital Healthcare Technology

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

The Future of Precision Cancer Medicine, Inaugural Symposium, MIT Center for Precision Cancer Medicine, December 13, 2018, 8AM-6PM, 50 Memorial Drive, Cambridge, MA

Live Conference Coverage @Medcity Converge 2018 Philadelphia: Oncology Value Based Care and Patient Management

2016 BioIT World: Track 5 – April 5 – 7, 2016 Bioinformatics Computational Resources and Tools to Turn Big Data into Smart Data

The Need for an Informatics Solution in Translational Medicine

