Reporter: Aviva Lev-Ari, PhD, RN

Engineers work to help biologists cope with big data

Tue, 01/08/2013 – 10:15am

Liang Dong is developing an instrument that will allow plant scientists to simultaneously study thousands of plants grown in precisely controlled conditions. Photo: Bob Elbert

Liang Dong held up a clear plastic cube, an inch or so across, just big enough to hold 10 to 20 tiny seeds.

Using sophisticated sensors and software, researchers can precisely control the light, temperature, humidity, and carbon dioxide inside that cube.

Dong—an Iowa State University assistant professor of electrical and computer engineering and of chemical and biological engineering—calls it a “microsystem instrument.” Put hundreds of those cubes together and researchers can simultaneously grow thousands of seeds and seedlings in different conditions and see what happens. How, for example, do the plants react when it is hot and dry? When carbon dioxide levels change? When light intensity is adjusted very slightly?

The instrument designed and built by Dong’s research group will keep track of all that by using a robotic arm to run a camera over the cubes and take thousands of images of the growing seeds and seedlings.
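To make the idea concrete, here is a minimal Python sketch of how per-cube environment settings might be represented in the instrument’s software. Every name, range, and value below is a hypothetical illustration; the article does not describe the actual control software built by Dong’s group.

```python
from dataclasses import dataclass

@dataclass
class CubeSetpoint:
    """Target environment for one growth cube (hypothetical data model)."""
    cube_id: int
    light_umol: float    # light intensity, micromoles/m^2/s
    temp_c: float        # air temperature, degrees Celsius
    humidity_pct: float  # relative humidity, percent
    co2_ppm: float       # carbon dioxide concentration, ppm

def build_experiment(n_cubes: int = 300) -> list:
    """Sweep temperature and CO2 across cubes so each cube tests a
    slightly different combination of growing conditions."""
    cubes = []
    for i in range(n_cubes):
        cubes.append(CubeSetpoint(
            cube_id=i,
            light_umol=250.0,                  # held constant in this sweep
            temp_c=18.0 + (i % 15),            # 18-32 C in 1-degree steps
            humidity_pct=60.0,                 # held constant in this sweep
            co2_ppm=400.0 + 50.0 * (i // 15),  # rises 50 ppm per 15 cubes
        ))
    return cubes

if __name__ == "__main__":
    for cube in build_experiment(n_cubes=3):
        print(cube)
```

Sweeping one or two variables per cube while holding the rest fixed is what lets an array of hundreds of cubes test thousands of condition combinations in a single run.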

Plant scientists will use the images to analyze the plants’ observable characteristics—the leaf color, the root development, the shoot size. All those observations are considered a plant’s phenotype. And while plant scientists understand plant genetics very well, Dong says they don’t have a lot of data about how genetics and environment combine to influence phenotype.
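As a simple illustration of the kind of phenotype feature that can be computed from those images, the sketch below scores the “greenness” of a plant image from raw pixel values. It uses the widely available NumPy and Pillow libraries; the function name and the threshold are hypothetical stand-ins, not the analysis pipeline actually used by the researchers.

```python
import numpy as np
from PIL import Image  # pip install pillow numpy

def greenness_index(image_path: str) -> float:
    """Fraction of pixels whose green channel clearly dominates red and
    blue -- a crude stand-in for a leaf-color phenotype score."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green = (g > r + 10) & (g > b + 10)  # margin of 10 suppresses gray pixels
    return float(green.mean())

# Usage (hypothetical file names):
# scores = {path: greenness_index(path) for path in ["cube_001.png", "cube_002.png"]}
```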

Dong’s instrument will provide researchers with lots of data—too much for scientists to easily sort and analyze. That’s a problem known as big data. And it’s increasingly common in the biological sciences.

“We’re seeing a proliferation of new instruments in the biological sciences,” says Srinivas Aluru, the Ross Martin Mehl and Marylyne Munas Mehl Professor of Computer Engineering at Iowa State. “And the rate of data collection is increasing. So we have to have a solution to analyze all this data.”

Aluru is leading a College of Engineering initiative to build research teams capable of solving big data problems in next-generation DNA sequencing, systems biology, and phenomics. The researchers are developing computing solutions that take advantage of emerging technologies such as cloud computing and high-performance computers. They’re also building partnerships with technology companies such as IBM, Micron, NVIDIA, Illumina Inc., Life Technologies Corp., Monsanto Co., and Roche.
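To give a sense of the computations involved, consider k-mer counting, a basic building block of next-generation DNA sequence analysis that must scale to billions of short reads. The sketch below is a generic, single-machine illustration of the operation that such toolboxes parallelize across high-performance and cloud clusters; it is not code from the initiative itself.

```python
from collections import Counter

def count_kmers(reads, k=8):
    """Count every length-k substring across a set of sequencing reads.
    Real datasets hold billions of reads, which is why this computation
    is distributed across clusters rather than run on one machine."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

# Tiny toy input; real inputs come from FASTQ files of sequencing reads.
reads = ["ACGTACGTACGT", "GGTACGTACGTT"]
print(count_kmers(reads, k=4).most_common(3))
```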

The project is one of the three Dean’s Research Initiatives launched by Jonathan Wickert, former dean of the College of Engineering and currently Iowa State’s senior vice president and provost. The initiatives in high-throughput computational biology, wind energy, and a carbon-negative economy were launched in March 2011 with $500,000 each over three years. The money is intended to build interdisciplinary, public-private research teams ready to compete for multi-million dollar grants and projects.

Patrick Schnable, Iowa State’s Baker Professor of Agronomy and director of the centers for Plant Genomics and Carbon Capturing Crops, remembers when biologists had no interest in working with computer specialists. That was before they tried to work with billions of data points to, say, accurately predict harvests based on plant genotype, soil type, and weather conditions.

“Now we’re getting huge, absolutely huge, data sets,” Schnable says. “There is no way to analyze these data sets without extraordinary computer resources. There’s no way we could do this without the collaboration of engineers.”

To date, the computational biology initiative has attracted $5.5 million for four major research projects. One of the latest grants is a three-year, $2 million award from the BIGDATA program of the National Science Foundation and the National Institutes of Health. The grant will allow Aluru and researchers from Iowa State, Stanford University, Virginia Tech, and the University of Michigan to work together to develop a computing toolbox that helps scientists manage all the data from today’s DNA sequencing instruments.

Aluru says the research initiative helped prepare Iowa State researchers to go after that grant.

“When the BIGDATA call came in, we had the credibility to compete,” he says. “We were already working on leading edge problems and had established relationships with companies.”

The initiative, the grants, and the industry partnerships are helping Iowa State faculty and students move to the front of the developing field.

“One computing company wanted to set up a life science research group and it came here for advice,” Aluru says. “Iowa State is known as a big data leader in the biosciences.”

Source: Iowa State University

SOURCE:

http://www.rdmag.com/news/2013/01/engineers-work-help-biologists-cope-big-data

 




The Relationship Between “Big Data” and Health Care – Value or Rubbish?

Author: Alan Fleischman, MBA      E-mail: a.fleischman@verizon.net

A blog post on pathcareblog.com entitled “Why Big Data for Healthcare is Rubbish” (http://pathcareblog.com/why-big-data-for-healthcare-is-rubbish/) takes direct aim at a recent report by the McKinsey Global Institute, Big Data: The Next Frontier for Innovation, Competition, and Productivity (http://www.mckinsey.com/insights/mgi/research/technology_and_innovation/big_data_the_next_frontier_for_innovation), which projects substantial quantitative and qualitative benefits from implementing Big Data initiatives in health care. Pathcare essentially argues that McKinsey and Big Data ignore the two major stakeholders in healthcare – doctors and patients: “The study does not cite a single interview with a primary care physician or even a CEO of a healthcare organization that might support or validate their theories about big data value for healthcare. This is shoddy research, no matter how well packaged.”

An article in Businessweek (“The Health-Care Industry Turns to Big Data” by Jordan Robertson, May 17, 2012; http://www.businessweek.com/articles/2012-05-17/the-health-care-industry-turns-to-big-data) cites benefits New York-Presbyterian Hospital has seen from several data initiatives, including reducing “the rate of potentially fatal blood clots by about a third,” according to surgeon Nicholas Morrissey. Morrissey is also working to develop a big-data-driven system to assess risk factors for new patients in the emergency room and the admission wards. Along with hospitals, the NSF and NIH have launched a Big Data initiative to accelerate progress in biomedical research.
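As a rough illustration of what such a data-driven risk model involves, the sketch below fits a toy logistic regression using scikit-learn. Every feature, value, and label here is fabricated for illustration; the article does not describe New York-Presbyterian’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # pip install scikit-learn

# Fabricated toy data: columns are age, heart rate, and a lab value;
# labels mark whether an adverse outcome occurred. Real systems train
# on far richer records drawn from the hospital's clinical data.
X = np.array([[72, 110, 2.1],
              [45,  80, 0.9],
              [66,  95, 1.8],
              [30,  70, 0.7],
              [81, 120, 2.5],
              [52,  85, 1.1]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
new_patient = np.array([[68, 105, 1.9]])
print("estimated risk:", model.predict_proba(new_patient)[0, 1])
```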

This article will not attempt to defend the research methodology utilized by McKinsey or the magnitude of the benefits projected, but it will defend the premise that medicine must improve its processes and procedures. Information systems are essential to this improvement, and large amounts of data will need to be exchanged, integrated, and analyzed as a result. Evidence-based medicine, effectiveness research, and performance assessments all require the analysis of large amounts of data. Like it or not, medicine is an industry with massive amounts of data, whether clinical, administrative, performance, or business. Medicine can no longer function as a guild where senior craftsmen dispense tricks of the trade to apprentices and society grins and bears the results in terms of lives impacted and national treasure dispensed.

What is truly alarming to this author is the fact that healthcare has been so slow to adopt methods that have been proven effective in other industries, even low-tech methods. This may explain the positive reception given to the simple checklists advocated by the Institute for Healthcare Improvement and by Atul Gawande in his book The Checklist Manifesto (http://gawande.com/the-checklist-manifesto). Checklists have been used in the airline industry since its inception. Other industries, including finance, transportation, manufacturing, and retail, have already demonstrated the benefits of Big Data over a substantial time frame. To be sure, I do not believe that Big Data is a cure-all for what ails medicine, nor do I believe that McKinsey advocated that viewpoint in its study. However, it is one component on the road to improving a chaotic system.

The eye-opening report by the Institute of Medicine on medical errors (To Err is Human: Building a Safer Health System, November 1999; http://www.iom.edu/~/media/Files/Report%20Files/1999/To-Err-is-Human/To%20Err%20is%20Human%201999%20%20report%20brief.pdf) estimated that as many as 98,000 people die in hospitals each year as a result of preventable medical errors. The costs, in addition to the loss of life, are estimated to range from $17 billion to $29 billion each year. One of the major conclusions of the Institute’s study was that faulty systems, processes, or conditions lead people to make mistakes or fail to prevent them. The report clearly stated a need to address medicine from a systems perspective to decrease the alarming rate of medical errors. A number of prominent physicians and healthcare organizations have advocated other approaches to improving the provision of healthcare, including changes to the basic organization of how primary care is dispensed, such as accountable care organizations (ACOs) and patient-centered medical homes (PCMHs) (http://www.pcpcc.net/guide/better_to_best), changes to how hospitals fit into the provision of care, and changes to how information systems can be utilized to improve both safety/quality and productivity/effectiveness.

Due to the impact of healthcare costs on our society and the slow rate of change in the industry, government policy makers have also been forced to take a more active role. Thomas Lee and James Mongan of Partners HealthCare System, in their book Chaos and Organization in Health Care (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11875), strongly advocate for this role and for the importance of improving the healthcare information infrastructure. In 2009 Congress passed the HITECH Act (http://www.pwwemslaw.com/content.aspx?id=540), providing nearly $30 billion to address barriers to health IT adoption, $14.6 billion of which went to encourage the adoption of electronic medical records. Other funds were focused on developing Health Information Exchanges (HIEs) (http://searchhealthit.techtarget.com/definition/Health-information-exchange-HIE) toward the goal of making patient information available across all care delivery settings. Bitton, Flier, and Jha (“Health Information Technology in the Era of Care Delivery Reform: To What End?” JAMA, June 27, 2012, Vol. 307, No. 24, p. 2593; http://jama.jamanetwork.com/article.aspx?articleid=1199162) argue that the debate over whether health information technology will save money and improve care is anachronistic. They state flatly that information technology will be used in health care: “Health IT is inevitable. The question now is how best to do it.”
