2013 – Personal Perspectives on Revolutionizing Medicine and Top Stories in Cardiology

Reporter: Aviva Lev-Ari, PhD, RN

Topol Reviews 2013: A Year of Revolutionizing Medicine


Director, Scripps Translational Science Institute; Chief Academic Officer, Scripps Health; Professor of Genomics, The Scripps Research Institute, La Jolla, California; Editor-in-Chief, Medscape

Disclosure: Eric J. Topol, MD, has disclosed the following relevant financial relationships:
Serve(d) as a director, officer, partner, employee, advisor, consultant, or trustee for: AltheaDX; Biological Dynamics; Cypher Genomics (Co-founder); Dexcom; Genapsys; Gilead Sciences, Inc.; Portola Pharmaceuticals; Quest Diagnostics; Sotera Wireless; Volcano. Received research grant from: National Institutes of Health; Qualcomm Foundation

December 11, 2013

Practice Changers: Lab Innovations and Genetic Testing

It was almost a year ago that I signed on as Editor-in-Chief of Medscape, and I’m extremely grateful for the opportunity and for the extensive input so many of you have provided. In my last monthly newsletter for the year, I would like to expound on why I think this is the most exciting time ever in the history of medicine and how it will be imminently practice-changing.

Looking at the Laboratory

Let me first turn to laboratories, a big part of how we practice. We send our patients to the clinic or hospital lab, or a central facility, to get their blood drawn. Typically, multiple tubes of blood are obtained; the costs are not transparent; and perhaps even worse, the results are not easily or routinely accessible for most patients. Last month, I highlighted a new entity on the scene — Theranos — and interviewed Elizabeth Holmes, the young CEO.

Theranos will be in all Walgreens stores before long, leveraging microfluidic technology to do hundreds of assays with a droplet of blood, with a fully transparent cost list, and ultimately with results directly going to both the patient and doctor. After 60 years of unchanged laboratory medicine practice, this new, innovative model will help drive disruption — just the kind of shake-up that we have needed.

Second, while touching on labs, there has recently been a big flap between 23andMe, a direct-to-consumer genomics company, and the US Food and Drug Administration (FDA).[1] This was probably attributable to a prolonged lapse of communication on the part of the company, concurrent with aggressive marketing of the product. 23andMe has temporarily stopped providing health-related genetic testing, such as disease susceptibility, carrier state, and pharmacogenomics. Their intention is to work things out with the FDA and get their full $99 panel back up in the months ahead. Why is this an important issue? A Nature editorial[2] posited, “so even if regulators or doctors want to, they will not be able to stand between ordinary people and their DNA for very long.”

Genetic Tests and the “Angelina Effect”

In May of this year, Angelina Jolie published her “My Medical Choice” op-ed,[3] signaling her decision not only to have her BRCA1 and BRCA2 genes sequenced but also to undergo bilateral mastectomy. The impact of the so-called “Angelina effect” has been felt worldwide, with a large spike in consumer-driven BRCA testing and challenges to prevailing cultural norms in places such as Israel, where there is a very high rate of pathogenic BRCA mutations but close to the lowest rate of preventive surgery.[4]

The issue at hand is the availability to patients of genetic tests that didn’t exist before. Pregnant women can now, in their first trimester, have a single tube of blood drawn to screen for multiple chromosomal aberrations. Amniocentesis is quickly becoming a bygone procedure.[5] The power of genetic testing in practice is just starting to be felt and will be increasingly transformative in the years ahead.

Restructuring and Digitizing Medicine

The Out-of-Hospital Experience

Third, the structural icons of medicine are undergoing reassessment. What do we do with hospitals and clinics in a digital medicine world? George Halvorson, the outgoing CEO of Kaiser Permanente, has weighed in on this by saying that we should start delivering healthcare “farther and farther” from the hospital setting and “even out of doctors’ offices.”[6]

Cisco conducted a large consumer survey and found that 74% of consumers are open to a virtual rather than an in-person doctor visit.[7] A tweet that I put out in response to a Fast Company article[8] — that “the idea of going down to your doctor’s office is going to feel as foreign as going to the video store” — attracted considerable attention.

Just this week, a large Intel poll of 12,000 consumers found that most believe that hospitals as we know them today will be “obsolete in the near future.”[9] The fact that we are even now questioning what to do with our hospitals and clinics is telling in itself and reflects the profound forthcoming changes in medicine.

Tracking the Human Body 

What Made Medical News in 2013?

Fourth and finally, the explosion of sensors is especially worth noting. This year, the FDA approval of smartphone ECGs and digitized pills heralded the beginning of many more novel digital ways that we will be tracking patients in the future. A watch that passively and continuously captures blood pressure from every heartbeat is just around the corner. We don’t even know what “normal” blood pressure is when it can be assessed 24/7, throughout the night, and during any time of stress, and this is representative of what the era of wireless sensor tracking will bring.

I hope that I have convinced you, with just a few examples, that this is an extraordinary time in medicine. We are all lucky to be a part of it, to see it go through its major reconfiguration and refinement. I will continue to post the links to anything that I think is particularly interesting on a daily basis via Twitter, and you are welcome to follow me @EricTopol.

Wishing you and your family all the best in the New Year. Despite some counterforces, let’s hope that 2014 takes medicine to new heights, with ever more palpable changes and improvements in the way that we render healthcare for our patients.

Eric J. Topol, MD

Editor-in-Chief, Medscape

References

  1. The FDA and thee. The Wall Street Journal. November 25, 2013. http://online.wsj.com/news/articles/SB10001424052702304465604579220003539640102 Accessed December 10, 2013.
  2. The FDA and me. Nature. December 3, 2013. http://www.nature.com/news/the-fda-and-me-1.14289 Accessed December 10, 2013.
  3. Jolie A. My medical choice. The New York Times. May 14, 2013. http://www.nytimes.com/2013/05/14/opinion/my-medical-choice.html Accessed December 10, 2013.
  4. Rabin RC. In Israel, a push to screen for cancer gene leaves many conflicted. The New York Times. November 26, 2013. http://www.nytimes.com/2013/11/27/health/in-israel-a-push-to-screen-for-cancer-gene-leaves-many-conflicted.html Accessed December 10, 2013.
  5. Topol EJ. Topol predicts genomic screening will replace amniocentesis. Medscape. November 11, 2013. http://www.medscape.com/viewarticle/814052 Accessed December 10, 2013.
  6. Friedman B. The future of healthcare: virtual physician visits & bedless hospitals. Lab Soft News. April 1, 2013. http://labsoftnews.typepad.com/lab_soft_news/2013/04/the-future-of-healthcare-less-emphasis-on-hospital-visits.html Accessed December 10, 2013.
  7. Cisco. Cisco study reveals 74 percent of consumers open to virtual doctor visit. March 4, 2013. http://newsroom.cisco.com/release/1148539/Cisco-Study-Reveals-74-Percent-of-Consumers-Open-to-Virtual-Doctor-Visit Accessed December 11, 2013.
  8. Could ePatient networks become the superdoctors of the future? Fast Company. http://www.fastcoexist.com/1680617/could-epatient-networks-become-the-superdoctors-of-the-future
  9. Fisher N. Global study finds majority believe traditional hospitals will be obsolete in the near future. Forbes. December 9, 2013. http://www.forbes.com/sites/theapothecary/2013/12/09/global-study-finds-majority-believe-traditional-hospitals-will-be-obsolete-in-the-near-future/ Accessed December 10, 2013.

SOURCE

http://www.medscape.com/viewarticle/817648_1

John Mandrola’s Top 11 Cardiology Stories of 2013

by John M. Mandrola, MD

Clinical Electrophysiologist, Baptist Medical Associates, Louisville, Kentucky

Disclosure: John M. Mandrola, MD, has disclosed the following relevant financial relationships:
Served as a speaker or member of a speakers bureau for: Biosense/Webster

In Medscape Cardiology, December 20, 2013

 

1. Obamacare/Affordable Care Act

The reforms that sweep in with the tidal waves of Obamacare will transform the landscape of cardiology. Things already look different, but even more change is coming. Optimism is healthier than pessimism, so my assessment is this: Obamacare will be associated with better heart disease outcomes.

Here’s why: What single factor limits improvement of outcomes in heart disease? It’s surely not a lack of access to echocardiograms, new antiplatelet drugs, or LAA occlusion devices. Rather, it’s the lack of patient adherence to healthy lifestyle choices. Cardiologists have reached a therapeutic threshold. Gains in the treatment of heart disease have become, and will likely stay, incremental. The next big jump in heart disease outcomes will require patients’ actions — not doctors’.

The chief strength of Obamacare is that it ushers in the era of cost-shifting to patients. People will pay more for care. This, I believe, will favor the adoption of healthy lifestyles. Skin in the game will, on the whole, do great things for heart health. The car analogy: We get our car’s oil changed because preventive maintenance is cost-effective. If you never had to pay for a new car, there’d be little incentive not to trash your current one.

I can hear the naysayers: Placing more of the costs on patients will keep them from getting care. Yes, in isolated cases — which will surely be amplified — this might be true. But overall, 3 arguments refute this thinking: First, in the past decade, both deaths from heart disease and the number of cardiology procedures have declined. Patients are doing better while we do less. Second, countries that do far fewer procedures boast better CV outcomes. Third, you don’t really believe that doctors control outcomes, do you?

2. The George Bush Stent Case

More than 2 decades ago, a mentor at Indiana taught me that squishing a high-grade coronary lesion did not reduce the risk for heart attack or death. I still remember where I was when I heard that. It was that counterintuitive. The notion that the vulnerable plaque is not the one that looks like a baddie on an angiogram has been proven time and time again. What’s truly remarkable is the resistance of the cardiology community to accept it. Perchance, our visceral reactions to angiograms have clouded our interpretation of science.

Cynics would believe that the overuse of stents — in the face of contrary clinical evidence — is due to financial incentives. They point to examples of outrageous behavior on the part of a tiny few outliers behaving very badly. I can’t deny that incentives play a role, but I think this story has more to do with the cognitive bias stemming from the success of acute primary angioplasty. It’s tempting to extend the stunning benefits of intervening in an acute MI to nonacute situations.

The George Bush story is big because the media attention forced us to look again at the science of the COURAGE trial.[1] What’s more, this story gave strength to those who question the entrenched paradigm of ischemia-guided revascularization. Imagine the implications for cardiology if there were little reason to look for asymptomatic ischemia.

3. Cholesterol Guidelines: Who Decides the “Need” for a Statin?

The cholesterol guidelines[2] had some obvious practice-changing revelations: (1) the end of nonstatin cholesterol-lowering drugs; (2) the cessation of treating to numbers; (3) the notion of using statins as cardiovascular risk reducers rather than cholesterol-lowering drugs; and (4) the fight over what level of CV risk warrants statin intervention.

These are big issues, but I don’t see them as the biggest part of the 2013 cholesterol guideline story. I think what makes this a tipping point in clinical cardiology is the notion that the ultimate decision to take a statin rests with the patient.

Writing to patients in Forbes, Dr. Harlan Krumholz says:

It is your decision. Your doctors can guide you, but you deserve to be informed about the decision and make the choice that feels most comfortable to you. You do not know if you will be the person who avoids a heart attack or will suffer a side effect. You should have the information about what you are likely to gain by taking the medication — and what risks you are incurring. The decision to take the drug should mean that you believe that you are more likely to benefit from the drug than to be harmed by it. And even if a drug has a benefit for you, you have a right to decide whether it is right for you.

This is huge because it brings patient-centered, shared decision-making to the mainstream. Before the cholesterol guidelines, shared decision-making was something you read about in academic journals. But now, across doctors’ offices throughout the United States, low-risk patients will have to decide whether their 1-in-100 chance of preventing a heart attack is worth the 1-in-100 chance of developing diabetes or other statin side effects. Getting patients to see tradeoffs and NNTs, and aligning care with their goals, isn’t just a story of 2013; it’s a story of the decade.

JNC 8, Obesity and AF, and NOACs

4. High Blood Pressure Guidelines

I often tell this story to patients: When I was a younger doctor, I would take my 94-year-old grandfather around to see the best doctors in town. We both held to the fantasy that doctors could “fix” him. Mostly he had age-related problems. He did, however, own one shining beacon of good health: He had perfect blood pressure, without medication. My message to patients is that my grandfather lived to 94 because of those BP readings.

What I learned from my grandfather’s case, which has now been borne out in the new JNC 8 guidelines,[3] is that it matters how one achieves good blood pressure. The new guidelines, chaired by a family medicine professor (how cool is that?), continue to disrupt the concept that more drug treatment leads to better outcomes.

It is indeed striking what can be found when one looks carefully and systematically at absolute benefits of treatments from randomized clinical trials. Truthfully, did you know that there was essentially no evidence that treating mild high blood pressure in patients younger than 60 improves outcomes? I didn’t.

Here the affect heuristic looms large. I find great pleasure in the idea that the medical establishment is now poised to embrace common sense. Namely, that modifying a single risk factor with a chemical that surely has multiple system-wide effects does not necessarily improve outcomes.

5. In Electrophysiology, Treat the Underlying Cause of AF

There are a few landmark studies I keep around the exam room for show-and-tell. 2013 brought another keeper. Dr. Prashanthan Sanders and colleagues (from Adelaide, Australia) are authors of the most impactful study in all of cardiology in 2013.[4]

Here is the story: Atrial fibrillation is increasing exponentially. Electrophysiologists see patients at the end of the disease spectrum. Rate control, rhythm control, and anticoagulation are each important treatment strategies, but they don’t address the root cause of AF. In previous work in animal models, this group of researchers showed that obesity increases the susceptibility to AF.

The hypothesis was that weight loss (and aggressive attention to other cardiometabolic risk factors) would reduce AF burden. They randomly assigned patients on their waiting list for AF ablation to 2 groups: (1) a physician-led aggressive program that targeted primarily weight loss, but also hypertension, sleep apnea, glucose control, and alcohol reduction; or (2) standard care with lifestyle counseling.

The findings were striking. Compared with the group receiving standard care, patients in the physician-directed program lost weight, reported fewer AF symptoms, and had fewer recorded AF episodes. Most impressive were the structural effects noted on echocardiograms: patients in the intervention group had regression of left ventricular hypertrophy and a reduction in left atrial size.

Though this is a small trial, it is practice-changing for cardiology. It shows that treating modifiable risk factors remodels the heart and in so doing reduces the burden of AF. In an interview in JAMA, Dr. Sanders says aggressive risk factor treatment should be a standard of care. I agree. Right now, AF ablation is too often thought of in terms of a supraventricular tachycardia ablation — a fix for a fluke of nature. It’s not that way. In the majority of AF cases, the same excesses that cause atherosclerosis also cause AF. Rather than make 50 burns in the atria, it makes much more sense to address the root cause.

NOACs

6. Novel Anticoagulants Face Value-Based Headwinds

Tell me you haven’t been in this situation: You are making rounds on a patient with newly diagnosed AF, admitted the night before. She has multiple risk factors for stroke. Her heart rate has been controlled and her symptoms improved. There are now 2 choices for anticoagulation: (1) Start warfarin, and while waiting for an adequate INR, cover with IV-heparin (days in hospital) or low-molecular-weight heparin (teaching- and dollar-intensive); or (2) Begin a novel oral anticoagulant (NOAC) and discharge the patient that day. It’s so much easier to use NOAC drugs.

But then what happens when the “starter” kits run out and the patient faces a massive bill at the pharmacy, or her third-party payer denies payment? Now our patient has a problem. She is in AF and has risk factors for stroke. A gap in anticoagulation is not desirable.

At the heart of this issue is the value and superiority of NOAC drugs compared with warfarin. At the 2013 American Heart Association Sessions, the ENGAGE-AF trial showed that the newest NOAC drug, edoxaban, compared favorably to warfarin.[5] All 4 clinical trials of NOAC drugs vs warfarin looked strikingly similar — namely, that in absolute benefits (stroke reduction) and harm (bleeding), NOAC drugs and warfarin performed similarly, within 1% of each other. In the cost-conscious, evidence-based climate of 2013, NOAC drugs are increasingly recognized as overvalued. Warfarin, with all its imperfections, remains steady.

Transparency, End-of-Life Care, and TACT

7. The Sunshine Act

Cardiology is a drug- and device-intensive field. Collaboration with industry is necessary. Skillful use of stents, ICDs, ablation, and pharmaceutical agents has enhanced and saved the lives of millions of patients. Yet, there is clear evidence of overuse and misuse of expensive technology. Look no further than studies that show huge geographic practice variation,[6] which I wrote about here.

The 2013 Sunshine Act has changed the landscape of cardiology education and influence. The upside of transparency is that knowing the financial relationships of investigators is an important part of judging science. Perhaps more important, though, is the possibility that the Sunshine Act will help remove those with financial relationships from guideline writing. Given the influence of guidelines, it’s important that writers be free of conflicts.

The potential downsides of too much Sunshine are noteworthy. After being interviewed in the Wall Street Journal this August,[7] I wrote the following on my blog:

Doctors are a conservative lot. Concern over perception will surely decrease physicians’ interactions with industry, both the useful and the not-so-useful ones. Physician education might suffer. Though the Ben Goldacres of the world rightly emphasize bias when industry entwines itself with medical education, I can attest to having learned a lot from industry-sponsored programs. And this too: one thing that happens when industry sponsors a learning session is that doctors come to it. They talk; they share cases; they come together face-to-face. Such interactions are critical. Will the disappearance of sponsored sessions decrease the amount of face-to-face learning?

We shall soon learn whether all this sunshine enhances health or causes burns.

8. Compassionate Care of the Elderly

Cardiologists are programmed to see death as the enemy. This is a very good thing when treating diseases like STEMI. But a side effect of improving life-prolonging interventions is that patients live long enough to develop other problems. Cardiologists are increasingly asked to treat the elderly and the frail. And this is a challenge because in these patients, treating death as if it’s avoidable is perilous. Delaying death is not the same as prolonging life. Treating a disease is not the same as treating a person.

It’s possible that 2013 will be the year in which things changed for the better in the care of the elderly. And if it is, we will have Katy Butler, an author and investigative journalist, to thank. Ms. Butler’s 2013 book, Knocking on Heaven’s Door, poignantly chronicles the difficulties that both her parents struggled with as they approached the end of life.[8] In both cases, suffering occurred because of a disconnect with cardiologists who behaved as if death were optional.

Writing in the Wall Street Journal this September, Ms. Butler describes her mother’s decision to forgo aggressive intervention for valvular heart disease.[9] Despite being cared for in one of the nation’s elite heart hospitals, Ms. Butler’s mother was forced to fight hard for her right to self-determination. Perhaps she mustered the strength to fight for a good death because of the lessons she learned as a caregiver for her chronically ill husband, whose death was tragically prolonged at the hands of paternalistic cardiologists. In Ms. Butler’s father’s case, which she describes in this award-winning New York Times Magazine essay, cardiologists implanted an unnecessary pacemaker and then refused to deactivate it, against the family’s wishes.[10]

As the American College of Cardiology begins an awareness campaign for aortic stenosis, and transcatheter approaches to valvular disease begin their long road to clinical utility, no topic could be timelier than compassionate, patient-centered care for the elderly. 2013 is the year that the oath of Maimonides — “Oh, God, Thou hast appointed me to watch over the life and death of Thy creatures” — becomes even more relevant to cardiologists, the guardians of technology.

9. Chelation Therapy

Nothing has become more virtuous in the practice of medicine than clinical evidence. We have set out the rules: The scientific method will determine the best treatments for our patients. One group gets treatment A and the other treatment B. Then we measure outcomes — the simpler the better. These are the rules of the game; they can’t be changed when we don’t like how the game turns.

The TACT investigators have followed the rules. They compared 322 diabetic patients with coronary heart disease who were treated with chelation vs 311 similarly matched patients treated with placebo infusions.[11] The primary endpoint, a composite of death, MI, stroke, revascularization, and hospitalization for angina, occurred in 80 of 322 patients (25%) treated with chelation and 117 of 311 patients (38%) on placebo. That’s an absolute — not relative — reduction of 13%, and an astounding NNT of 7. For comparison, statin drugs for primary prevention, or NOAC drugs vs warfarin in patients with AF, have NNTs greater than 100.
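The arithmetic behind those numbers is worth making explicit. Here is a minimal sketch, using only the event counts quoted above, of how the absolute risk reduction (ARR) and number needed to treat (NNT) fall out; the variable names are mine, not from the trial report:

```python
# Event counts from the TACT diabetic subgroup as quoted above
chelation_events, chelation_n = 80, 322
placebo_events, placebo_n = 117, 311

chelation_rate = chelation_events / chelation_n  # ~0.248, i.e., 25%
placebo_rate = placebo_events / placebo_n        # ~0.376, i.e., 38%

# Absolute risk reduction: the difference in event rates (not their ratio)
arr = placebo_rate - chelation_rate              # ~0.128, i.e., ~13 percentage points

# Number needed to treat: the reciprocal of the ARR
nnt = 1 / arr                                    # ~7.8, reported as 7 in the article

print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")
```

By the same reciprocal relationship, an NNT greater than 100 (as quoted for statins in primary prevention) corresponds to an absolute risk reduction of less than 1%.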

What makes chelation in diabetics a top story of the year is more than just the data. By the authors’ own account, these findings need to be replicated. What’s really big here is the ferocity of opposition from the establishment. I re-read what I said in my opinion piece from November. I’m sticking to it: “It would be a huge mistake to dismiss this science because chelation does not conform to preconceived notions or because it is practiced outside the mainstream of medicine. Let’s not forget about the patients with this terrible disease. It’s not as if we have good treatments for them.”

EMRs and the Blogosphere

10. EMR and the Danger It Poses to the Patient-Doctor Connection

Among Mr. Obama’s broken promises (if you like your insurance plan…) was that the efficiency inherent in electronic medical records (EMRs) would rein in the growing cost of healthcare.

In 2013, nearly every doctor is being forced to adopt an EMR. Medicine is replete with examples of good ideas gone awry, and there is no better example than EMR systems. The list is long: EMRs interface poorly with their users (doctors). Completing a medical record on an encounter for a common heart rhythm ailment requires me to click more than 25 times. (Fact: EMRs either decrease the number of patients one can see, or worse, they cause a doctor to spend less face time with each patient.) EMRs don’t talk to each other — and in their current form, never will. There is not a shred of evidence that they improve real outcomes. EMRs function more as a billing invoice than as a useful medical record.

Doctors are the end-users but not the customers of EMR companies, so our feedback carries little weight. EMR companies effectively answer to no one. And talk about conflict of interest: Anointed EMR companies have become immensely profitable. Even the New York Times took notice.[12]

None of this is the worst part. The worst aspect of EMR systems (in their current form) is that they threaten to remove the humanity from something that at its heart should be human: the patient-doctor connection. In 2013, EMR is one of the many forces that threaten the patient-doctor relationship. If this situation improves in 2014, I’ll report it; but I’m not optimistic. (Full disclosure: I love computers.)

11. Social Media

The American College of Cardiology, the Heart Rhythm Society, the BMJ, and the New England Journal of Medicine are all actively engaged in social media and blogging. I gave a talk at an Indiana University medical student leadership conference this year; nearly every medical student was on Twitter. So are the presidents of the ACC and SCAI, as are millions of patients.

The democracy of information on social media enhances patient involvement in medical decision-making. When patients have information, decisions improve. AF patient Mellanie True Hills has made her website, StopAfib, a go-to resource for patients — a place where influential academic leaders in electrophysiology have taken the time to be interviewed. Social media empowers patient advocates.

Social media is also transforming influence. In the past, the only influencers in cardiology were academic leaders — those who have access to medical journals. That is changing. Look at me: I am a nobody in the academic world, yet Dr. Rich Fogel, the former president of the Heart Rhythm Society (HRS), put me on the same stage with Dr. Douglas Zipes, Dr. Brian Olshansky, and Dr. Anne Curtis at the 2013 HRS sessions to speak about ICDs.

Finally, this is speculative, but I believe that social media has the power to transform medical education. The biggest electrophysiology story from the 2013 European Society of Cardiology Congress was the Echo-CRT trial.[13] This was a practice changer because it put a stop to implanting CRT devices in patients destined to be nonresponders. Dr. Jay Schloss (Christ Hospital, Cincinnati, Ohio), writing on his personal blog, provided clear and useful coverage for free, without the need for registration. Another example: I think IV diltiazem is overused and misused. In the academic literature, you cannot find a contemporary piece to support this view. But you can on social media.

This is my top 11 for 2013. I invite you to use the comments section to share your top cardiology picks.

 SOURCE

http://www.medscape.com/viewarticle/818115_1

 

 

 



24 New MacArthur Fellows: 13 men and 11 women — Now so-called “Geniuses”

Reporter: Aviva Lev-Ari, PhD, RN

 

Meet the 2013 MacArthur Fellows

http://www.macfound.org/fellows/class/2013/

  • Profile portrait of Kyle Abraham

    Kyle Abraham

    Kyle Abraham/Abraham.in.Motion

    New York, NY

    Age: 36

    Choreographer and Dancerexploring the relationship between identity and personal history through a unique hybrid of traditional and informal dance styles.

    More 

  • Profile portrait of Donald Antrim

    Donald Antrim

    Columbia University

    New York, NY

    Age: 55

    Writercomposing works of fiction and nonfiction that are characterized by tightly controlled prose juxtaposed with absurd and surreal events.

    More 

  • Profile portrait of Phil Baran

    Phil Baran

    Scripps Research Institute

    La Jolla, CA

    Age: 36

    Organic Chemistinventing efficient, scalable, and environmentally sound methods for recreating in the laboratory natural products with potential pharmaceutical applications.

    More 

  • Profile portrait of C. Kevin Boyce

    C. Kevin Boyce

    Stanford University

    Stanford, CA

    Age: 39

    Paleobotanistestablishing links between ancient plant remains and present-day ecosystems through a pioneering and integrative approach to evolutionary plant biology.

    More 

  • Profile portrait of Jeffrey Brenner

    Jeffrey Brenner

    Camden Coalition of Healthcare Providers

    Camden, NJ

    Age: 44

    Primary Care Physiciancreating a comprehensive health care delivery model that addresses the medical and social service needs of high-risk patients in impoverished communities.

    More 

  • Profile portrait of Colin Camerer

    Colin Camerer

    California Institute of Technology

    Pasadena, CA

    Age: 53

    Behavioral Economistexpanding our knowledge of individual behavior and challenging models of human interaction that form the basis of classic economic theory.

    More 

  • Profile portrait of Jeremy Denk

    Jeremy Denk

    New York, NY

    Age: 43

    Pianist and Writerengaging listeners and readers in a deeper appreciation of classical music through unmatched musical ability paired with an unusual eloquence with words.

    More 

  • Profile portrait of Angela Duckworth

    Angela Duckworth

    University of Pennsylvania

    Philadelphia, PA

    Age: 43

    Research Psychologisttransforming our understanding of the roles that grit and self-control play in educational achievement.

    More 

  • Profile portrait of Craig Fennie

    Craig Fennie

    Cornell University

    Ithaca, NY

    Age: 40

    Materials Scientistdesigning new materials, atom by atom, that have electrical, magnetic, and optical properties desirable for electronics and improved communication technology.

    More 

  • Profile portrait of Robin Fleming

    Robin Fleming

    Boston College

    Chestnut Hill, MA

    Age: 57

    Medieval Historiandrawing on archaeological and textual sources to provide fresh insight into the social, economic, and cultural lives of inhabitants of late Roman and medieval Britain.

    More 

  • Profile portrait of Carl Haber

    Carl Haber

    Lawrence Berkeley National Laboratory

    Berkeley, CA

    Age: 54

    Audio Preservationistdeveloping new technologies for the preservation of rare, damaged, and deteriorating sound recordings of immense value to our cultural heritage.

    More 

  • Vijay Iyer

    New York, NY

    Age: 41

    Jazz Pianist and Composer: forging a new conception of the practice of American music in compositions for his ensembles, cross-disciplinary collaborations, and scholarly work.

  • Dina Katabi

    Massachusetts Institute of Technology

    Cambridge, MA

    Age: 42

    Computer Scientist: working at the interface of computer science and electrical engineering to improve the speed, reliability, and security of data exchange, particularly in wireless networks.

  • Julie Livingston

    Rutgers University

    New Brunswick, NJ

    Age: 46

    Public Health Historian and Anthropologist: combining archival research and ethnographic observation to illuminate largely ignored crises of care in both the developing and developed world.

  • David Lobell

    Stanford University

    Stanford, CA

    Age: 34

    Agricultural Ecologist: unearthing richly informative, but often underutilized, sources of data to investigate the impact of climate change on crop production and global food security.

  • Tarell McCraney

    Steppenwolf Theatre Company

    Chicago, IL

    Age: 32

    Playwright: evoking a sense of our shared humanity in works that explore the diversity of the African American experience and imbue the lives of ordinary people with epic significance.

  • Susan Murphy

    University of Michigan

    Ann Arbor, MI

    Age: 55

    Statistician: translating statistical theory into powerful tools for evaluating and customizing treatment regimens for individuals coping with chronic or relapsing disorders.

  • Sheila Nirenberg

    Weill Cornell Medical College

    New York, NY

    Neuroscientist: investigating fundamental questions about the nervous system and developing new kinds of prosthetic devices and robots.

  • Alexei Ratmansky

    American Ballet Theatre

    New York, NY

    Age: 45

    Choreographer: revitalizing classical ballet with a distinctive style that honors the past while infusing a modern sensibility to interpretations of the standard repertoire as well as his own works.

  • Ana Maria Rey

    University of Colorado

    Boulder, CO

    Age: 36

    Atomic Physicist: advancing our ability to simulate, manipulate, and control novel states of matter through fundamental conceptual research on ultra-cold atoms.

  • Karen Russell

    New York, NY

    Age: 32

    Fiction Writer: blending fantastical elements with psychological realism to construct wildly imaginative settings and characters in tales of transformation and redemption.

  • Sara Seager

    Massachusetts Institute of Technology

    Cambridge, MA

    Age: 42

    Astrophysicist: adapting fundamental maxims of existing planetary science to create a comprehensive theoretical framework for determining the characteristics of planets beyond our solar system.

  • Margaret Stock

    Cascadia Cross Border Law

    Anchorage, AK

    Age: 51

    Immigration Lawyer: finding solutions to complex immigration issues faced by military personnel and contributing to policy debates about the role of a healthy immigration system in ensuring national security.

  • Carrie Mae Weems

    Syracuse, NY

    Age: 60

    Photographer and Video Artist: examining the legacy of African American identity, class, and culture in cinéma vérité–style works that incorporate elements of folklore, multimedia collage, and experimental printing methods.

SEPTEMBER 24, 2013 – PRESS RELEASE

24 Extraordinarily Creative People Who Inspire Us All: Meet the 2013 MacArthur Fellows

“This year’s class of MacArthur Fellows is an extraordinary group of individuals who collectively reflect the breadth and depth of American creativity.” — Cecilia Conrad, Vice President, MacArthur Fellows Program


Related Video

SEPTEMBER 25, 2013 – VIDEO

Paleobotanist C. Kevin Boyce, 2013 MacArthur Fellow

Paleobotanist C. Kevin Boyce was named a MacArthur Fellow in 2013. The Fellowship is a $625,000, no-strings-attached grant for individuals who have shown exceptional creativity in their work and the promise to do more.

– See more at: http://www.macfound.org/fellows/class/2013/

 

The MacArthur Foundation gives out fellowship awards each year to winners who work in science, music and the arts, and journalism, among other fields. The fellowship comes with a five-year, “no-strings-attached” $500,000 stipend, which the foundation now says it is increasing to $625,000, the Associated Press reports.

“We looked at many benchmarks and decided it was time to make an adjustment,” Cecilia Conrad, vice president of the MacArthur Fellows Program, told the AP. She adds that the grant needed to account for inflation. The amount of the grant was last changed in 2000.

The Foundation also asked past fellows how receiving the award affected their work and lives: 93 percent reported that it made them more financially stable, and 88 percent said it gave them more opportunities to be creative. A small portion, about 8 percent, said that they had less personal or family time after receiving the award, and some asked for advice on coping with newfound visibility after winning the genius grant.

This year’s crop of MacArthur fellows includes a paleobotanist, a statistician, and a neuroscientist, as well as other scientists, writers, and artists, according to the MacArthur Foundation, which gives out the fellowships. These 13 men and 11 women — now so-called “Geniuses” — will receive a stipend of $625,000 over five years, an award that was recently increased to keep pace with inflation.

“It was amazing to me,” new fellow and dancer-choreographer Kyle Abraham tells the New York Times about receiving his phone call from the foundation. “It was a shock. I was laughing about it; I was crying about it, it was so overwhelming. I’ve been trying to figure out how to pay off my student loans to this day.”

Among the two dozen recipients are Kevin Boyce, a paleobotanist at Stanford University, who studies ancient and modern plants in an effort to understand how climate change may affect ecosystems; statistician Susan Murphy at the University of Michigan, who is developing a method to evaluate adaptive treatments for people with chronic or relapsing disorders like addiction or depression; and Sheila Nirenberg, a neuroscientist at Weill Cornell Medical College, who is studying how the brain processes visual information and ways to restore sight.

Review Affirms Impact and Inspiration of MacArthur Fellows Program – See more at: http://www.macfound.org/press/publications/macarthur-fellows-program-review-summary/

 

Read Full Post »


Reporter: Aviva Lev-Ari, PhD, RN

The Consumer Genetics Conference (CGC) is a one-of-a-kind event that draws together a dynamic community of scientists, clinicians, technology innovators, and patients to discuss the burning issues around the analysis and delivery of genomics results directly to patients and consumers. Over three days, attendees will hear about disruptive diagnostic technologies, cognitive barriers to patients (and medical professionals), ethical/regulatory/privacy issues, the thorny issue of reimbursement, and the challenges of building relationships to realize the potential of personal genomics and individualized medicine. CGC provides an opportunity for all stakeholders to come together at one venue, share viewpoints and engage in an honest dialogue, and together learn how to move the elephant of change. Program topics will include:
  • Whole Genome Debates
  • Translational Genomics
  • Clinical & Third-Generation Sequencing
  • Personal Genome Analysis & Interpretation
  • Empowering Patients: Companies & Technologies
  • Molecular Diagnostics & Point-of-Care
  • Investment & Funding Opportunities
  • Reimbursement Models
  • Five-Year Plan for Consumer Genomics
  • Data Analysis & Management
  • Ethics, Privacy & Regulation
  • Digital Health Tracking Apps

SPEAKERS

2013 Distinguished Faculty

Bonnie Ancone, Vice President, Molecular Diagnostics, XIFIN, Inc.
Bonnie Ancone has 25 years’ experience in the medical industry, of which 15 years have been directly related to medical billing and collections. Prior to coming to XIFIN, Ms. Ancone worked in anatomic pathology reference laboratory settings for 10 years. She held multiple positions, including Billing Supervisor, Billing Manager, and Director of Billing & Collections. She was also a partner in a medical billing company. Ms. Ancone’s prior experience includes 10 years in outpatient substance abuse clinics as a nurse and Assistant Director. In this capacity, she interacted with regulatory bodies such as the DEA, the FDA, and multiple state behavioral health agencies dealing with licensing, auditing, and regulations. She participated in the development of state methadone regulations and the state methadone coalition for Arizona and Nevada. She started her career on the operations side of banking.

Nazneen Aziz, Ph.D., Director, Molecular Medicine, Transformation Program Office, College of American Pathologists
Nazneen Aziz is the Director of Molecular Medicine at the College of American Pathologists. In this role, Dr. Aziz is guiding strategies and leading projects related to genomic medicine at CAP. Currently, she leads a committee that focuses on critical issues surrounding next-generation sequencing. She is a member of the Association for Molecular Pathology Workgroup for Whole Genome Analysis, the Centers for Disease Control Nex-StoCT-II Workgroup on next-generation sequencing bioinformatics, and the Interpretation of Sequence Variant Work Group at the American College of Medical Genetics. In her prior positions, Dr. Aziz was Vice President of Research and Development at Interleukin Genetics, Vice President of External Research at Point Therapeutics, and Director of Translational Research at the Novartis Institute of Biomedical Research. In her industry career, she has focused on personalized medicine, biomarkers, genetic tests, and development of drugs in cancer and diabetes. Prior to joining the biotechnology industry, Dr. Aziz was an Assistant Professor at Harvard Medical School and Children’s Hospital in Boston, where she discovered and characterized the function of novel genes involved in recessive polycystic kidney disease. Nazneen received her Ph.D. in molecular genetics and master’s degree in biochemistry at the Massachusetts Institute of Technology and her bachelor’s degree from Wellesley College.

Pam Baker, Senior Director, Market Access, CardioDx
Ms. Pam Baker is Senior Director of Market Access & Policy with CardioDx. She is a life sciences professional with 17 years of experience in pharma, biotech, and diagnostics in a series of commercial roles across marketing, new product commercialization, reimbursement, pipeline, and sales management. She started her healthcare career 17 years ago with Johnson & Johnson (Janssen, Ortho, and McNeil), followed by Genentech. Ms. Baker started out in sales, then moved into sales training, sales leadership, and multiple marketing roles, from product launch to in-line marketing. She then moved into the reimbursement arena, leading the Program Strategy & Management team for Genentech Access Solutions, and recently joined CardioDx, a molecular diagnostics company in Palo Alto, CA. Ms. Baker received a Bachelor of Arts in Political Science and Asian Studies from Northwestern University and a Master of International Management from the Thunderbird School of Global Management. She is a mom of 5-year-old twin girls.

Shawn C. Baker, Ph.D., CSO, BlueSEQ
Dr. Shawn C. Baker is the Chief Science Officer and co-founder of BlueSEQ, an independent guide for researchers outsourcing their DNA sequencing. Having received his Ph.D. at the University of California – Davis, he started his career as a Research Scientist at Illumina when it was a 15-person startup. After spending several years at the bench developing gene expression array products, he transitioned to Product Marketing where he led a team in charge of Illumina’s Expression and Regulation sequencing portfolio. Dr. Baker started working with BlueSEQ in 2011, helping to establish an online marketplace for life science researchers to gain access to the best sequencing technology for their projects. In addition, BlueSEQ has created the Knowledge Bank, a neutral source of information on the various sequencing technologies, platforms and applications.

Cinnamon S. Bloss, Ph.D., Director, Social Sciences & Bioethics, Assistant Professor, Scripps Translational Science Institute

Dr. Bloss is an Assistant Professor, as well as Director of Social Sciences and Bioethics at the Scripps Translational Science Institute. Her research is funded by the National Institutes of Health and is focused on investigating individuals’ behavioral and psychological responses to disclosure of personal genomic information. She is the lead researcher on STSI’s Scripps Genomic Health Initiative, and her work on this project was recently published in the New England Journal of Medicine and has been highlighted at a number of national and international scientific meetings. She has also presented invited testimony on consumer genomics before the Food and Drug Administration Advisory Panel. Dr. Bloss’ other research interests include developing ways of combining genomics with traditional disease risk factors to make predictions about disease development, progression and response to treatment, as well as designing effective health interventions that leverage genomic information. She also conducts genetic association studies and has several collaborations to investigate the genetic underpinnings of neurological, behavioral, and other health-related phenotypes. Dr. Bloss received her B.A. in Psychology from Smith College, her Ph.D. in Clinical Psychology from the University of California, San Diego, and completed a predoctoral internship in clinical neuropsychology at the University of Florida. Dr. Bloss completed a post-doctoral fellowship in statistical genetics and genomic medicine at The Scripps Research Institute. At STSI, Dr. Bloss directs the Summer Undergraduate Research Internship and is an instructor in the TSRI Graduate Program. She is also a California-licensed clinical psychologist and has worked with adults and children with a wide range of neurological and psychiatric conditions.
John Boyce, President and CEO, GnuBIO
John Boyce is President, CEO, and Co-Founder of GnuBIO. Prior to starting GnuBIO, John co-founded Delphi Bio, LLC, a strategic consulting company that serves startup and Fortune 500 companies within the life sciences market. Using his proven ability to drive companies to commercial success, John served as the Business Development head for a number of clients, including Affomix. Over a two-year period, John developed the business plan for Affomix, oversaw all commercial activities, and initiated and drove the sale of the company to a multi-billion-dollar sequencing corporation in July 2010. Prior to Delphi and Affomix, John served as Head of Business Development for Helicos BioSciences (HLCS), where he was responsible for identifying new market opportunities. Prior to Helicos, John was the Senior Director of Commercial Development for Parallele Biosciences, Inc., where he played an integral role in building the company, leading to its acquisition by Affymetrix (AFFY). He was the Senior Director of Business Development for Genomics Collaborative, where he was responsible for putting in place and building the sales, marketing, and business development infrastructure. John executed several key deals and played a key role in the acquisition by SeraCare Life Sciences, Inc. Prior to Genomics Collaborative, John led the successful expansion of Sequenom’s MassARRAY system as Director, United States Sales at Sequenom Inc. (SQNM), from 2000 to 2003.

Catherine Brownstein, Ph.D., Project Manager, The Gene Partnership, Boston Children’s Hospital; Instructor, Pediatrics, Harvard Medical School
Catherine Brownstein, PhD, MPH is the Project Manager for The Gene Partnership at Boston Children’s Hospital and an Instructor in Pediatrics at Harvard Medical School. For the last two years, Catherine has worked to establish and develop new sequencing and pharmacogenomics programs at the hospital. Before coming to BCH and HMS, Catherine was a toxicologist at the Massachusetts Department of Public Health, and spent four years in the world of Health 2.0, creating online patient communities for individuals with chronic and terminal diseases. Catherine’s interests and expertise lie with the intersection of genotype and phenotype, and the integration of patient-reported outcomes with genomics and medicine.

Kenneth Chahine, Ph.D., J.D., Senior Vice President and General Manager, DNA, ancestry.com
Ken Chahine has served as Senior Vice President and General Manager for Ancestry DNA, LLC since 2011. Prior to joining AncestryDNA, he held several positions, including Chief Executive Officer of Avigen, a biotechnology company, as well as roles in the Department of Human Genetics at the University of Utah and at Parke-Davis Pharmaceuticals (now Pfizer). Mr. Chahine also teaches a course focused on new venture development, intellectual property, and licensing at the University of Utah’s College of Law. He earned a Ph.D. in Biochemistry from the University of Michigan, a J.D. from the University of Utah College of Law, and a B.A. in Chemistry from Florida State University.

Mick Correll, COO, Genospace
Mick Correll is the Co-Founder and Chief Operating Officer of GenoSpace, a Cambridge, Massachusetts-based company that is pioneering a bold and innovative software platform for advancing 21st-century genomic medicine. Prior to launching GenoSpace, Mick was the Associate Director of the Center for Cancer Computational Biology (CCCB) at the Dana-Farber Cancer Institute, overseeing the Center’s next-generation sequencing facility, bioinformatics consulting service, and software development efforts. Mick started his career as a Bioinformatician at Lion Bioscience Research Inc., where he was the principal architect of a globally distributed gene annotation and analysis platform, and subsequently served as Head of Professional Services for Lion Bioscience Inc. in North America and Director of Healthcare Product Management at InforSense LLC.

Steven Dickman, President & Owner, CBT Advisors
Steven Dickman is President & Owner of CBT Advisors, a boutique life sciences consulting firm in Cambridge, Massachusetts. CBT Advisors works with over 20 clients a year on product positioning and corporate strategy; communications and fund-raising materials; and market analysis based on research and expert interviews. Clients include public and private biotech companies and life science venture funds. Before founding CBT Advisors in 2003, Mr. Dickman spent four years in venture capital with TVM Capital. There, Mr. Dickman’s deals included Sirna Therapeutics, sold to Merck in 2006 for $1.1 billion. Earlier, he was a Knight Science Journalism Fellow at MIT, a freelance contributor to The Economist, Discover, Science, GEO and Die Zeit and the founding bureau chief for Nature in Munich, Germany. Fluent in German, Mr. Dickman received his biochemistry degree cum laude from Princeton University.
Lynn Doucette-Stamm, Ph.D., Vice President, Development and Clinical Operations, Interleukin Genetics, Inc.
Lynn Doucette-Stamm has served as Vice President of Development and Clinical Operations at Interleukin Genetics since 2011. Prior to joining Interleukin she has worked in numerous capacities in Life Sciences for greater than 25 years. Key positions she has held prior to Interleukin include Vice President of Business Development at Beckman Coulter Genomics and Agencourt Bioscience, and Vice President and General Manager of the GenomeVisionTM Services Business Unit at Genome Therapeutics. She earned a Ph.D. in Cell Biology and Genetics from Cornell University Graduate School of Medical Sciences and a B.S. in Biology from McMaster University.
Yaniv Erlich, Ph.D., Principal Investigator and Whitehead Fellow, Whitehead Institute for Biomedical Research 
Dr. Yaniv Erlich is the Andria and Paul Heafy Family Fellow and a Principal Investigator at the Whitehead Institute for Biomedical Research at the Massachusetts Institute of Technology. He received a bachelor’s degree from Tel-Aviv University in Israel and his Ph.D. from the Watson School of Biological Sciences at Cold Spring Harbor Laboratory. Dr. Erlich’s research interests are in computational human genetics. He has extensive experience in developing new algorithms for high-throughput sequencing and for detecting disease genes. In two of his studies, he identified the genetic basis of devastating genetic disorders. His lab works on a wide range of topics, including developing compressed sensing approaches to identify rare genetic variations, devising new algorithms for personal genomics, and using Web 2.0 information for genetic studies. Dr. Erlich is the recipient of the Harold M. Weintraub Award, the IEEE/ACM-CS HPC Award, the Goldberg-Lindsay Fellowship, the Wolf Foundation scholarship for excellence in exact science, and the Emmanuel Ax scholarship, and he was selected as one of Genome Technology’s 2010 Tomorrow’s PIs.

Kyle Fetter, Associate Vice President, Molecular Diagnostics, XIFIN, Inc.
Kyle Fetter has overseen the commercialization, billing, and reimbursement processes for more than 10 molecular diagnostic companies releasing new high complexity laboratory testing services into the healthcare market. He currently manages billing processes for more than 10 companies at various stages of commercialization and third party payer contracting. In addition to overseeing a large molecular diagnostic billing department, Mr. Fetter consults with molecular diagnostic companies on projecting cash flow for non-covered services, implementing successful appeals strategies, and the relationship between sales and reimbursement for new medical technology. He came to the healthcare industry with a background in private equity and technology commercialization. Mr. Fetter has a B.A. in History and Journalism from the University of Southern California and an M.B.A from the University of Utah.
Birgit Funke, Ph.D., FACMG, Assistant Molecular Pathologist and Director of Clinical Research and Development, Laboratory for Molecular Medicine, Massachusetts General Hospital; Assistant Professor in Pathology, Harvard Medical School
Birgit Funke, Ph.D., FACMG is an Associate Laboratory Director of the Laboratory for Molecular Medicine (LMM) at PCPGM and is an Instructor in Pathology at Harvard Medical School. She currently oversees genetic testing and test development in the area of cardiovascular disease at the LMM. She has authored and co-authored many publications focusing on a wide array of topics, most recently incentive learning and memory in mice. Currently, Dr. Funke focuses on genetic testing with emphasis on genetically heterogeneous cardiovascular diseases, with the goal of defining the genetic basis for these disorders and developing comprehensive tests using new emerging molecular technologies. In addition, she is interested in developing genetic tests for common, complex disorders, working to understand the genetic variants that have been linked with psychotic and affective disorders.

Amanda Gammon, MS, CGC, Licensed Genetic Counselor, Huntsman Cancer Institute, University of Utah 
Amanda Gammon is a board-certified genetic counselor with a master’s degree in genetic counseling from University of Colorado at Denver Health Sciences Center. She received her bachelor’s degree from the University of Colorado at Boulder in molecular, cellular, and developmental biology and English literature. While completing her education, Amanda worked at Rocky Mountain Cancer Centers. She began working at Huntsman Cancer Institute in July 2007. She provides genetic counseling to patients in the Family Cancer Assessment Clinic and the research-oriented High Risk Breast Cancer Clinic. She also provides counseling for two National Institutes of Health-funded studies. For one study, she discusses familial colorectal cancer risk with individuals by telephone in rural Utah and Idaho to assess effectiveness of telephone intervention versus written risk information in encouraging individuals to pursue colonoscopy. In the other, she provides hereditary breast and ovarian cancer counseling to women in rural Utah both by phone and in-person to assess equivalency. Her main research interests include hereditary breast cancer and provision of genetic counseling through alternative modes for individuals with limited access to genetic counseling centers.

Manuel L. Gonzalez-Garay, Ph.D., Assistant Professor, The University of Texas Health Science Center at Houston
Dr. Gonzalez-Garay obtained his B.S. from the University of Nuevo Leon, Mexico, in 1988. He wrote a bachelor’s research dissertation, “Papillomavirus and cervical cancer in Mexican population,” under the supervision of Dr. Barrera-Saldana and Dr. Gariglio. After a pre-doctoral fellowship at the University of Texas, he joined the doctoral program in 1990. In 1996, Dr. Gonzalez-Garay completed his Ph.D. at the University of Texas, writing a dissertation about the regulation of the stoichiometry of tubulin. After a two-year post-doctoral fellowship in the lab of Dr. Fernando Cabral, he joined Lexicon Genetics as a Bioinformatician and was subsequently promoted to manager of the Bioinformatics Group. During his stay at Lexicon Genetics, Dr. Gonzalez-Garay developed a large number of proprietary software tools and databases to support the gene knockout and drug discovery pipelines. In 2002, Dr. Gonzalez-Garay moved to the Baylor College of Medicine Human Genome Sequencing Center (HGSC), where he worked as a Senior Scientific Programmer and team leader. During his stay at the HGSC he developed the Genboree discovery system and participated as a bioinformatician in a large number of sequencing projects, including the sequencing of human chromosomes 3 and 12 and the complete genomes of the rat and sea urchin. Dr. Gonzalez-Garay was instrumental in the development of pipelines for the re-sequencing of candidate genes at the HGSC. From 2007 to 2009 he actively participated in the Tumor Sequencing Project (TSP) and The Cancer Genome Atlas (TCGA) project. In January 2010, the IMM recruited Dr. Gonzalez-Garay as Research Assistant Professor for The Brown Foundation Institute of Molecular Medicine for the Prevention of Human Diseases. Dr. Gonzalez-Garay is currently developing pipelines to analyze whole genome and exome sequences and is participating in three main projects: the identification of the causal mutations for tuberous sclerosis, cardiomyopathy, and schizophrenia.

Robert Green, M.D., M.P.H., Associate Professor of Medicine, Division of Genetics, Department of Medicine, Brigham and Women’s Hospital and Harvard Medical School 
Robert C. Green, MD, MPH is a medical geneticist and a clinical researcher who directs the G2P research program (genomes2people.org) in translational genomics and health outcomes in the Division of Genetics at Brigham and Women’s Hospital and Harvard Medical School. Dr. Green is principal investigator of the NIH-funded REVEAL Study, in which a cross-disciplinary team has conducted 4 separate multi-center randomized clinical trials collectively enrolling 1100 individuals to disclose a genetic risk factor for Alzheimer’s disease in order to explore emerging themes in translational genomics. Dr. Green also co-directs the NIH-funded PGen Study, the first prospective study of direct-to-consumer genetic testing services and leads the MedSeq Project, the first NIH-funded research study to explore the use of whole genome sequencing in preventive medicine. Dr. Green is currently Associate Director for Research of the Partners Center for Personalized Genetic Medicine, a Board Member of the Council for Responsible Genetics and a member of the Informed Cohort Oversight Boards for both the Children’s Hospital Boston Gene Partnership Program and the Coriell Personalized Medicine Collaborative. He co-chairs the ACMG working group that is currently developing recommendations for management of incidental findings in clinical sequencing.

Steve Gullans, Managing Director, Excel Venture Management
Dr. Gullans is an experienced investor, entrepreneur, and scientist. At Excel, he focuses on life science technology companies with a particular interest in disruptive platforms that can impact multiple industries. Steve is currently a Director at Tetraphase Pharmaceuticals, PathoGenetix, nanoMR, Cleveland HeartLab, and Catch.com. He was previously a board member of Activate Networks, as well as BioTrove, which was acquired by Life Technologies (LIFE) in 2009, and Biocius Life Sciences, which was acquired by Agilent Technologies (A) in 2011. Prior to Excel, Steve co-founded RxGen, Inc., a pharma services company where he served as CEO from 2004 to 2008. In 2002, Steve stepped in as a senior executive at U.S. Genomics to direct operations, recruit a new CEO, and assist with fundraising. In the 1990s, he co-developed the technology that launched CellAct Pharma GmbH, a drug development company. Steve’s experience with venture investing began in the late 1980s when he became an active advisor to small biotechs and venture investors, including being a Senior Advisor to CB Health Ventures for 10 years. Dr. Gullans is an expert in advanced life science technologies and was a faculty member at Harvard Medical School and Brigham and Women’s Hospital for nearly 20 years. He has published more than 130 scientific papers in many leading journals, lectured internationally, and co-invented numerous patents. He recently co-authored with Juan Enriquez an eBook entitled Homo evolutis: A Short Tour of Our New Species, and a comment in Nature entitled “Genetically Enhanced Olympics Are Coming,” which describe a world where humans increasingly shape their environment, themselves, and other species. Steve received his B.S. at Union College, Ph.D. at Duke University, and postdoctoral training at the Yale School of Medicine. He is a Fellow of the AAAS and the AHA.

Tina Hambuch, Senior Scientist, Illumina, Inc.
Tina Hambuch earned her Bachelor’s degree from UC Riverside and her doctorate from UC Berkeley, focusing on genetic analyses of genes that control the immune system. She continued her studies of genetic variation as a post-doctoral fellow at the Centers for Disease Control and an assistant professor at the Ludwig Maximillians University in Munich. After her academic career, Tina used her understanding of genetics and genetic variation to help identify and design diagnostic sequencing tests for clinical application at Ambry Genetics. Tina joined Illumina in 2008 where she combined her experience in genetics, genomics, and clinical diagnostics to contribute to the development of the CLIA-certified, CAP-accredited Illumina Clinical Services Laboratory (ICSL). In 2010, she launched a California-certified Clinical Genetic Molecular Biologist Scientist training program in which she serves as the Education Coordinator and Director. Tina is currently active in the development and validation of genetic testing, as well as clinical tools for doctor support and education. Tina is a member of the American College of Medical Genetics and the American Society of Human Genetics.

Michael Hawley, Chief Design Officer, Mad*Pow

As leader of the Mad*Pow Experience Design team, Michael leverages expertise in usability and user experience to help clients achieve their goals through design. Michael holds an MS in Human Factors in Information Design from Bentley College McCallum Graduate School of Business and a BA in Cellular and Molecular Biology from the University of Michigan. He is an active member of the professional design community, serving as an officer in the User Experience Professionals Association and contributing ideas as a speaker and author, exploring trends within the UX discipline as a published columnist in publications such as UXMatters, iMedia, TMCNet and CPWire.

Caleb J Kennedy, Ph.D., Lead Scientist, Good Start Genetics, Inc.
Caleb currently leads an amazing group of scientist-engineers developing high-performance analytical tools for next-generation advances in genetic testing and research. He holds a Ph.D. in genetics from Harvard University, as well as M.S. and B.S. degrees in molecular and cellular biology from Texas A&M University. Caleb has two beautiful boys, one with Down syndrome.

Ayub Khattak, CEO, ruubix
Ayub Khattak, CEO of ruubix inc., uses his background in biochemistry, programming and electronics in the development of the ruubix digital diagnostic platform. He earned his degree in Mathematics from UCLA and developed an NSF-funded project in the genetic engineering of RNAi systems before founding ruubix.

Wendy Kohlmann, MS, CGC, Licensed Genetic Counselor, Huntsman Cancer Institute, University of Utah 
Wendy Kohlmann is a board-certified genetic counselor with a master’s degree in genetic counseling from the University of Cincinnati and a bachelor’s degree in zoology from the University of Wisconsin. She has worked as a genetic counselor at the University of Texas-M.D. Anderson Cancer Center in Houston and the University of Michigan Comprehensive Cancer Center in Ann Arbor. She began working at Huntsman Cancer Institute as a research associate in 2006. Wendy Kohlmann’s research interests include the inherited basis of melanoma and pancreatic cancer, psychosocial and behavioral outcomes of genetic counseling, and issues for children and adolescents with hereditary cancer syndromes.

Antoinette F. Konski, J.D., Partner, Foley & Lardner LLP
Antoinette F. Konski is a partner with Foley & Lardner LLP where her practice focuses on intellectual property. She works with life science clients, creating and optimizing value in intellectual property portfolios encompassing technologies that include personalized medicine, regenerative and stem cell biology, antibodies, immunology, gene therapy, nanotechnology, diagnostics, small molecules and drug delivery. She represents public and private companies and universities. Ms. Konski currently serves as the firm’s Silicon Valley IP office chairperson and co-chair of the Life Sciences Industry Team.

Gary J. Kurtzman, MD, Managing Director, Healthcare, Safeguard
Gary has 25+ years of experience in operations and investments, leveraging his medical expertise to enable businesses to enhance their products and grow their services, as well as to discover new partnering potential in developing entrepreneurial companies. Gary joined Safeguard in 2006, where he is responsible for identifying, deploying capital in and supporting emerging healthcare companies in molecular and point-of-care diagnostics, medical devices and healthcare IT. He targets companies with solutions that address the high cost of medical care, and safer and more effective treatments. Gary is a board member of Safeguard partner companies Alverix, Crescendo Bioscience, Good Start Genetics, Medivo, and PixelOptics. Gary has realized value for companies through a series of successful IPOs, M&A and turnaround transactions—most recently Shire’s acquisition of Safeguard’s partner company Advanced BioHealing for $750 million, in cash, representing a 13x cash-on-cash return for Safeguard; and Eli Lilly’s acquisition of Safeguard’s partner company Avid Radiopharmaceuticals for $300 million, up front, with an additional $500 million payout dependent upon the achievement of future regulatory and commercial milestones, representing an initial 3x cash-on-cash return for Safeguard with the potential to realize up to 8x. Gary joined Safeguard from BioAdvance, a state initiative committed to funding early-stage life sciences companies, where he served as Managing Director and Chief Operating Officer. Previously, he was Chief Executive Officer at Pluvita Corporation, a company developing biological and bioinformatic solutions for drug and diagnostic development. Gary also previously served as Chief Operating Officer at Genovo, Inc., a gene therapy start-up company. He was also employed as head of research & development by Avigen, Inc., an early-stage gene therapy company located in San Francisco. 
Gary began his career with Gilead Sciences, Inc.—at the time, a pre-IPO biotechnology company—as virology group leader. A board-certified internist from Barnes Hospital in St. Louis, MO, with a hematology sub-specialty, Gary has authored more than 40 research articles, book chapters and reviews, and is credited as inventor on twelve issued United States patents. Presently, Gary serves on various academic and biomedical committees and boards along with the editorial board of Biotechnology Healthcare. Presently, Gary is a lecturer in the Health Care Systems Department at the Wharton School at the University of Pennsylvania where he teaches entrepreneurship in life sciences.

Gholson Lyon, M.D., Ph.D., Assistant Professor of Human Genetics, Cold Spring Harbor Laboratory; Research Scientist, Utah Foundation for Biomedical Research
Gholson Lyon is an assistant professor in human genetics at Cold Spring Harbor Laboratory and a research scientist at the Utah Foundation for Biomedical Research. He is also a board-certified child, adolescent and adult psychiatrist. He earned an M.Phil. in Genetics at the University of Cambridge, England, then received a Ph.D. and M.D. through the combined Cornell/Sloan-Kettering/Rockefeller University training program. He started his independent research career in 2009, after finishing clinical residencies in child, adolescent and adult psychiatry. In addition to his research on the genetics of neuropsychiatric illnesses, Dr. Lyon is focusing on the genetic basis of rare Mendelian diseases.

Daniel MacArthur, Ph.D., Assistant Professor, Massachusetts General Hospital; Co-founder, Genomes Unzipped 
Daniel MacArthur is a group leader at the Analytic and Translational Genetics Unit at Massachusetts General Hospital, an assistant professor at Harvard Medical School, and a research affiliate at the Broad Institute of Harvard and MIT. His research focuses on understanding the functional impact of genetic variation using genome sequencing data. His writing on personal genomics is archived at Wired Science, and his research is described on his lab page at http://www.macarthurlab.org/.

Ellen T. Matloff, M.S., Research Scientist, Department of Genetics and Director, Cancer Genetic Counseling, Yale Cancer Center
Ellen T. Matloff, M.S., C.G.C., received her Bachelor’s degree in Biology from Union College, her Master’s degree in Genetic Counseling from Northwestern University, and her board certification from the American Board of Genetic Counseling. She specializes in hereditary breast and ovarian cancer syndrome (BRCA1, BRCA2), hereditary colon cancer syndromes (HNPCC, FAP), and rare cancer syndromes. Her interests include patient and provider issues in genetic counseling, sexuality and cancer patients, and the impact of patents on clinical practice.

Martin Mendiola, M.D., MPH, Director, Clinical Program Development, Happtique
Martin Mendiola is responsible for clinical needs assessments of mHealth technology for the purposes of enhancing the provision of care and patient engagement and satisfaction. He is involved in the clinical implementation of Happtique’s solutions within client health systems while serving as a liaison to its healthcare providers. He has also created the medical, health, and wellness library intellectual property offered to Happtique’s members. Prior to joining Happtique, Martin worked in the direct delivery of care within several hospital systems and through international humanitarian relief efforts, and has conducted extensive clinical research. He earned his MD from the Ponce School of Medicine and MPH in Health Policy from Columbia University Mailman School of Public Health.

Peter S. Miller, COO, Genomic Healthcare Strategies
Peter Miller is Chief Operating Officer of Genomic Healthcare Strategies, a company focused on the changes in healthcare resulting from advances in molecular medicine. Peter spent his career building companies which have operated in expanding markets driven by new technology. He has a track record of spotting trends and successful implementation. He did his undergraduate work at MIT. While working on his MBA at MIT’s Sloan School, he was a founding member of Abt Associates Inc, and over a period of 17 years worked as COO and Board member as the company grew from 3 people to 800. Peter has been a key advisor to firms facing a variety of transitional events (external or internal), entering new markets, and facing choices around mergers/acquisitions/going public. He has helped build successful companies in software and professional services, three of which were sold to public companies. He has served on a number of boards of innovative technology companies, helping build their success, both organizationally and in their markets. He has a long term interest in health care. He established the original health care research group at Abt Associates. He has helped teach a course at Harvard School of Public Health, working with Dr. John Bryant, later Dean of Columbia’s School of Public Health. He has worked on physician education with the American Association of Medical Colleges and has been a board member of several health care services firms. He has extensive experience with entrepreneurial companies, having successfully worked with firms raising money seven times, both as an employee and as a business plan quarterback. He is involved in M&A activities on both the buy and sell sides. In addition he has been a licensed (NASD) broker/dealer. Peter is a frequent invited speaker on the changing healthcare landscape, writing and speaking on Personalized Medicine for many years as a thought leader. 
He has been invited to speak at the Molecular Medicine Tri-Conference, LabCompete, the University of California at Santa Barbara’s Technology Management Program, among others. Peter is co-author with Keith Batchelder of GHS of an invited Nature Biotechnology commentary: “A Change in the Market – Investing in Diagnostics.” He is active with his alma mater, having been Board Chairman of the Global MIT Enterprise Forum, a past board member of the MIT Alumni Association, and currently helps fledgling startups as Co-Director of the MIT Venture Mentoring Service.

Georgia Mitsi, MSc, Ph.D., MBA, Founder and CEO, Apptomics LLC 
Georgia Mitsi, MSc, PhD, MBA, is the Founder and CEO of Apptomics LLC, a health technology firm specializing in the design and validation of quality medical mobile applications for selected conditions with high unmet need, focusing primarily on CNS. Georgia received her PhD in Health Sciences and MSc in Applied Medical Sciences from the University of Patras, Greece, and her MBA from the University of Miami. She has extensive experience in the pharmaceutical industry and healthcare consulting, where she has held positions of increasing responsibility in areas such as clinical research, health outcomes and health economics. She often played an instrumental role in uncovering and fostering new business opportunities and developing a strategic roadmap for a product’s value proposition. Georgia also worked at the Health Services Research Center (HSRC), a joint venture between Humana and the University of Miami, where among other responsibilities she led the scientific effort for the Games for Health initiative. She has successfully completed many research projects of high complexity and has collaborated with pharmaceutical companies as well as academic institutions. She has co-authored several scientific publications and presented at conferences such as ISPOR and DIA. Georgia is also a published novelist in her native language, Greek.

David Mittelman, Ph.D., Associate Professor, Virginia Bioinformatics Institute, Virginia Tech Department of Biological Sciences, and VTC School of Medicine
Dr. Mittelman is an Associate Professor at the Virginia Bioinformatics Institute, the Virginia Tech Department of Biological Sciences, and the VTC School of Medicine. David Mittelman holds a PhD in Molecular Biophysics through the Department of Biochemistry at Baylor College of Medicine (BCM). Dr. Mittelman completed his postdoctoral training in the Department of Molecular and Human Genetics at BCM. In 2009, Dr. Mittelman was awarded the Ruth L. Kirschstein National Research Service Award, and began an independent research program in population-scale genomics at BCM’s Human Genome Sequencing Center (HGSC). Currently, Dr. Mittelman leads the Genetics and Genomic Medicine Laboratory at Virginia Tech, combining experimental and computational approaches to characterizing personal genomes.

Anne Morriss, Founder and CEO, Genepeeks
Anne is the founder and CEO of Genepeeks, a genetic information company that helps families to protect their future children. She has helped to launch and grow multiple technology companies, and is the best-selling co-author of Uncommon Service: How to Win By Putting Customers at the Core of Your Business (Harvard Business Review Press). Anne received her B.A. in American Studies from Brown University and an M.B.A. from Harvard Business School.

Julia Oh, Chief Science Officer, 1eq

Heidi L. Rehm, Ph.D., FACMG, Chief Laboratory Director, Molecular Medicine, Partners HealthCare Center for Personalized Genetic Medicine (PCPGM); Assistant Professor of Pathology, Harvard Medical School
Heidi Rehm, Ph.D. was recruited in 2001 to build the Laboratory for Molecular Medicine at PCPGM and serves as its Laboratory Director. She is a board-certified clinical molecular geneticist and Assistant Professor of Pathology at Harvard Medical School with appointments at BWH, MGH and Children’s Hospital Boston. Her undergraduate degree is from Middlebury College, her graduate degree in Genetics is from Harvard University and her postdoctoral and fellowship training was at HMS. Heidi has served as the Director of the ABMG Clinical Molecular Genetics Training Program at HMS since 2006. In addition to running the LMM and the molecular training program, she also conducts research in hearing loss, Usher syndrome, cardiomyopathy and the use of IT in enabling personalized medicine.

Jessica Richman, CEO and Co-Founder, uBiome

Gabe Rudy, Vice President, Product Development, Golden Helix and Author “A Hitchhikers Guide to Next Generation Sequencing”
Gabe Rudy has been GHI’s Vice President of Product Development and team member since 2002. Gabe thrives in the dynamic and fast-changing field of bioinformatics and genetic analysis. Leading a killer team of Computer Scientists and Statisticians in building powerful products and providing world-class support, Gabe puts his passion into enabling Golden Helix’s customers to accelerate their research. When not reading or blogging, Gabe enjoys the outdoor Montana lifestyle. But most importantly, Gabe truly loves spending time with his sons and wife.

Meredith Salisbury, Senior Consultant, Bioscribe
Prior to becoming a consultant for Bioscribe, Meredith was CEO and Editor-in-Chief of GenomeWeb, the leading news and information service for scientists in the systems biology field. During her 11 years with the company, Meredith honed her knowledge of the genomics market, with a particular focus on next-gen DNA sequencing. She is the co-founder of the Consumer Genetics Conference held annually in Boston. Before joining GenomeWeb, Meredith had an extended internship in the busy newsroom at Newsweek in New York City. Meredith brings her industry knowledge and connections to oversee editorial strategy for Bioscribe clients. Meredith enjoys hot-air ballooning and is based in the NYC metropolitan area.

Anish Sebastian, Co-Founder and CEO, 1eq

Juhan Sonin, Creative Director, Involution Studios, MIT
Juhan Sonin is an emeritus of some of the finest software organizations in the world: Apple, the National Center for Supercomputing Applications (NCSA) and the Massachusetts Institute of Technology (MIT). He has been a creative director for almost two decades with his work being featured in the New York Times, Newsweek, BBC International, Billboard Magazine and National Public Radio (NPR). He is also a lecturer on design and rapid prototyping at the Massachusetts Institute of Technology (MIT).

Vasisht Tadigotla, Ph.D., Senior Bioinformatics Scientist, Courtagen Life Sciences, Inc.
Vasisht is currently working as a Senior Bioinformatics Scientist at Courtagen Life Sciences. Previously, he has worked as a Staff Scientist at Life Technologies helping develop the SOLiD and Ion Torrent sequencing technologies and at the Department of Physics at Boston University. Vasisht earned a Ph.D. in Biophysics and Computational Biology from Rutgers University and a B.Tech. in Biochemical Engineering from Indian Institute of Technology, New Delhi.

Spencer Wells, Ph.D., Explorer-in-Residence and Director, The Genographic Project, National Geographic Society
Spencer Wells is a leading population geneticist and director of the Genographic Project from National Geographic and IBM. His fascination with the past has led the scientist, author, and documentary filmmaker to the farthest reaches of the globe in search of human populations who hold the history of humankind in their DNA. By studying humankind’s family tree he hopes to close the gaps in our knowledge of human migration. A National Geographic explorer-in-residence, Wells is spearheading the Genographic Project, calling it “a dream come true.” His hope is that the project, which builds on Wells’s earlier work (featured in his book and television program, The Journey of Man) and is being conducted in collaboration with other scientists around the world, will capture an invaluable genetic snapshot of humanity before modern-day influences erase it forever. Wells’s own journey of discovery began as a child whose zeal for history and biology led him to the University of Texas, where he enrolled at age 16, majored in biology, and graduated Phi Beta Kappa three years later. He then pursued his Ph.D. at Harvard University under the tutelage of distinguished evolutionary geneticist Richard Lewontin. Beginning in 1994, Wells conducted postdoctoral training at Stanford University’s School of Medicine with famed geneticist Luca Cavalli-Sforza, considered the “father of anthropological genetics.” It was there that Wells became committed to studying genetic diversity in indigenous populations and unraveling age-old mysteries about early human migration. Wells’s field studies began in earnest in 1996 with his survey of Central Asia. In 1998 Wells and his colleagues expanded their study to include some 25,000 miles of Asia and the former Soviet republics. His landmark research findings led to advances in the understanding of the male Y chromosome and its ability to trace ancestral human migration. 
Wells then returned to academia where, at Oxford University, he served as director of the Population Genetics Research Group of the Wellcome Trust Centre for Human Genetics at Oxford. Following a stint as head of research for a Massachusetts-based biotechnology company, Wells made the decision in 2001 to focus on communicating scientific discovery through books and documentary films. From that was born The Journey of Man: A Genetic Odyssey, an award-winning book and documentary that aired on PBS in the U.S. and National Geographic Channel internationally. Written and presented by Wells, the film chronicled his globe-circling, DNA-gathering expeditions in 2001-02 and laid the groundwork for the Genographic Project. Since the Genographic Project began, Wells’s work has taken him to over three dozen countries, including Chad, Tajikistan, Morocco, Papua New Guinea, and French Polynesia, and he recently published his second book, Deep Ancestry: Inside the Genographic Project. He lives with his wife, a documentary filmmaker, in Washington, D.C.

Eric P. Williams, Ph.D., Senior Bioinformatics Scientist, National Marrow Donor Program
Dr. Eric Williams is Senior Bioinformatics Scientist at the National Marrow Donor Program (NMDP), which is entrusted to operate the C.W. Bill Young Cell Transplantation Program, including the Be The Match Registry. Eric has 9 years of experience working in research related to aspects of biology, histocompatibility and population genetics associated with finding matching donors for patients needing stem cell therapies. His interests include the utilization of genetic information to further medicine, infer ancestry, and aid in family history research. He has led development of systems utilized by worldwide transplant centers to access population HLA frequency and ancestry information critical to the process of finding matching, unrelated donors for patients. Other activities have included utilizing Geographic Information Systems to map global frequencies of HLA haplotypes and a US market area capacity analysis that resulted in increased funding to develop facilities at medical institutions supporting stem cell therapy programs. Prior to his work with the NMDP, Eric gained 18 years of experience supporting marker-assisted plant breeding programs at Pioneer Hi-Bred, Mycogen Seeds and Syngenta Seeds. Dr. Williams received a Ph.D. in Plant Breeding and Genetics and an MS in Plant Physiology from the University of Nebraska-Lincoln and a BA in Agronomy from Brigham Young University.

Rina Wolf, Vice President, Commercialization Strategies, XIFIN, Inc.
Rina Wolf is a nationally recognized expert in the field of laboratory commercialization and reimbursement, with over 20 years of experience in the diagnostic laboratory industry, specializing in molecular diagnostic laboratories. She lectures extensively on these topics and has consulted for major laboratories and laboratory associations throughout the U.S. She is a former President and board member of the California Clinical Laboratory Association and is an active participant with the ACLA (American Clinical Laboratory Association) and the Personalized Medicine Coalition. Ms. Wolf also advises and presents to investor audiences; recent speaking engagements include Piper Jaffray, Cowen Group and Bloomberg’s G2 Intelligence Lab Investment Forum. Most recently, Ms. Wolf held the position of Vice President of Reimbursement and Regulatory Affairs at Axial Biotech, Inc., where she was responsible for creating and implementing their successful reimbursement strategies. Prior to joining Axial Biotech, Inc., Ms. Wolf held executive positions in the area of commercialization and reimbursement at RedPath Integrated Pathology, Inc., Genomic Health, Inc., and Esoterix (now LabCorp). Ms. Wolf has a Bachelor of Arts degree from UCLA and a Master of Health Care Administration.

http://www.consumergeneticsconference.com/cgc_content.aspx?id=116061



Reporter: Aviva Lev-Ari, PhD, RN


Scripps Research Institute Scientists Discover Key Signaling Pathway that Makes Young Neurons Connect

LA JOLLA, CA – June 20, 2013 – Neuroscientists at The Scripps Research Institute (TSRI) have filled in a significant gap in the scientific understanding of how neurons mature, pointing to a better understanding of some developmental brain disorders.

In the new study, the researchers identified a molecular program that controls an essential step in the fast-growing brains of young mammals. The researchers found that this signaling pathway spurs the growth of neuronal output connections by a mechanism called “mitochondrial capture,” which has never been described before.

“Mutations that may affect this signaling pathway already have been found in some autism cases,” said TSRI Professor Franck Polleux, who led the research, published June 20, 2013 in the journal Cell.

Branching Out

Polleux’s laboratory is focused on identifying the signaling pathways that drive neural development, with special attention to the neocortex—a recently evolved structure that handles the “higher” cognitive functions in the mammalian brain and is highly developed in humans.

In a widely cited study published in 2007, Polleux’s team identified a trigger of an early step in the development of the most important class of neocortical neurons. As these neurons develop following asymmetric division of neural stem cells, they migrate to their proper place in the developing brain. Meanwhile they start to sprout a root-like mesh of input branches called dendrites from one end, and, from the other end, a long output stalk called an axon. Polleux and his colleagues found that the kinase LKB1 provides a key signal for the initiation of axon growth in these immature cortical neurons.

In the new study, Polleux’s team followed up this discovery and found that LKB1 also is crucially important for a later stage of these neurons’ development: the branching of the end of the axon onto the dendrites of other neurons.

“In experiments with mice, we knocked the LKB1 gene out of immature cortical neurons that had already begun growing an axon, and the most striking effect was a drastic reduction in terminal branching,” said Julien Courchet, a research associate in the Polleux laboratory who was a lead co-author of the study. “We saw this also in lab dish experiments, and when we overexpressed the LKB1 gene, the result was a dramatic increase in axon branching.”

Further experiments by Courchet showed that LKB1 drives axonal branching by activating another kinase, NUAK1. The next step was to try to understand how this newly identified LKB1-NUAK1 signaling pathway induced the growth of new axon branches.

Stopping the Train in Its Tracks

Following a thin trail of clues, the researchers decided to look at the dynamics of microtubules. These tiny railway-like tracks are laid down within axons for the efficient transport of molecular cargoes and are altered and extended during axonal branching. Although they could find no major change in microtubule dynamics within immature axons lacking LKB1 or NUAK1, the team did discover one striking abnormality in the transport of cargoes along these microtubules. Tiny oxygen-reactors called mitochondria, which are the principal sources of chemical energy in cells, were transported along axons much more actively in neurons lacking LKB1 or NUAK1 and, by contrast, became almost immobile when the two kinases were overexpressed.

But the LKB1-NUAK1 signals weren’t just immobilizing mitochondria randomly. They were effectively inducing their capture at points on the axons where axons form synaptic connections with other neurons. “When we removed LKB1 or NUAK1 in cortical neurons, the mitochondria were no longer captured at these points,” said Tommy Lewis, Jr., a research associate in the Polleux Laboratory who was co-lead author of the study.

“We argue that there must be an active ‘homing factor’ that specifies where these mitochondria stop moving,” said Polleux. “And we think that this is essentially what the LKB1-NUAK1 signaling pathway does here.”

Looking Ahead

Precisely how the capture of mitochondria at nascent synapses promotes axonal branching is the object of a further line of investigation in the Polleux laboratory. “We think that we have uncovered something very interesting about mitochondrial function at synapses,” Polleux said.

In addition to its basic scientific importance, the work is likely to be highly relevant medically. Developmentally related brain disorders such as epilepsy, autism and schizophrenia typically involve abnormalities in neuronal connectivity. Recent genetic surveys have found NUAK1-related gene mutations in some children with autism, for example. “Our study is the first one to identify that NUAK1 plays a crucial role during the establishment of cortical connectivity and therefore suggests why this gene might play a role in autistic disorder,” Polleux says.

He notes, too, that declines in normal mitochondrial transport within axons have been observed in neurodegenerative disorders such as Alzheimer’s and Parkinson’s diseases. “In the light of our findings, we wonder if the decreased mitochondrial mobility observed in these cases might be due not to a transport defect, but instead to a defect in mitochondrial capture in aging neurons,” he said. “We’re eager to start doing experiments to test such possibilities.”

Other contributors to the study, “Terminal Axon Branching Is Regulated by the LKB1-NUAK1 Kinase Pathway via Presynaptic Mitochondrial Capture,” were Sohyon Lee, Virginie Courchet and Deng-Yuan Liou of TSRI, and Shinichi Aizawa of the RIKEN Institute of Kobe, Japan.

The study was funded in part by the National Institutes of Health (grants R01AG031524 and 5F32NS080464), ADI-Novartis, Fondation pour la Recherche Medicale, and the Philippe Foundation.

About The Scripps Research Institute

The Scripps Research Institute (TSRI) is one of the world’s largest independent, not-for-profit organizations focusing on research in the biomedical sciences. TSRI is internationally recognized for its contributions to science and health, including its role in laying the foundation for new treatments for cancer, rheumatoid arthritis, hemophilia, and other diseases. An institution that evolved from the Scripps Metabolic Clinic founded by philanthropist Ellen Browning Scripps in 1924, the institute now employs about 3,000 people on its campuses in La Jolla, CA, and Jupiter, FL, where its renowned scientists—including three Nobel laureates—work toward their next discoveries. The institute’s graduate program, which awards PhD degrees in biology and chemistry, ranks among the top ten of its kind in the nation. For more information, see www.scripps.edu.

# # #

For information:
Office of Communications
Tel: 858-784-2666
Fax: 858-784-8136
press@scripps.edu

http://www.scripps.edu/news/press/2013/20130620polleux.html



Long Noncoding RNA Network regulates PTEN Transcription

Author: Larry H Bernstein, MD, FCAP

Scientists Find Surprising New Influence On Cancer Genes

A pseudogene long noncoding RNA network regulates PTEN transcription and translation in human cells
Per Johnsson, A Ackley, L Vidarsdottir, Weng-Onn Lui, M Corcoran, D Grandér, and KV Morris
A new study led by scientists at The Scripps Research Institute (TSRI) shows how pseudogenes can regulate the activity of a cancer-related gene called PTEN. The study also shows that pseudogenes can be targeted to control PTEN’s activity.

Mol Cancer. 2011;10:38. Published online 13 April 2011. doi:10.1186/1476-4598-10-38. PMCID: PMC3098824

New Type of Gene That Regulates Tumour Suppressor PTEN Identified

Feb. 24, 2013 — Researchers at Karolinska Institutet in Sweden have identified a new so-called pseudogene that regulates the tumour-suppressing PTEN gene. They hope that this pseudogene can be used to control PTEN to:

  1. reverse the tumour process,
  2. make the cancer tumour more sensitive to chemotherapy, and
  3. prevent the development of resistance.

The findings, which are published in the scientific journal Nature Structural and Molecular Biology, can be of significance for the future development of cancer drugs.

The development of tumours coincides with the activation of several cancer genes as well as the inactivation of other tumour-suppressing genes, owing both to damage to the DNA and to the fact that the cancer cells manage to switch off the transcription of tumour-suppressor genes.

To identify what might be regulating this silencing, the researchers studied PTEN,

    • one of the most commonly inactivated tumour-suppressor genes.

It has long been believed that the switching-off process is irreversible, but the team has now shown that

  • silenced PTEN genes in tumour cells can be ‘rescued’ and
  • re-activated by a ‘pseudogene’,
    • a type of gene that, unlike normal genes,
    • does not encode an entire protein.

“We identified a new non-protein encoding pseudogene, which

  • determines whether the expression of PTEN
    • is to be switched on or off,”

says research team member Per Johnsson, at Karolinska Institutet’s Department of Oncology-Pathology. “What makes this case spectacular is that the gene

  • only produces RNA,
  • the protein’s template.

It is this RNA that, through a sequence of mechanisms,

    • regulates PTEN.

Pseudogenes have been known about for many years, but

  • it was thought that they were only junk material.”

No less than 98 per cent of human DNA consists of non-protein-encoding sequence, and by studying these formerly neglected regions the researchers

  • have begun to understand that they are very important and
    • can have an effect without encoding proteins.

Using model systems, the team has shown that the new pseudogene can

  • control the expression of PTEN and
    • make tumours more responsive to conventional chemotherapy.

Per Johnsson suggests, “We might one day be able to re-programme cancer cells

  • to proliferate less and
  • become more normal, and
  • resistance to chemotherapy can hopefully be avoided.”

“We also believe that our findings can be very important for the future development of cancer drugs.  The human genome conceals no less than 15,000 or so pseudogenes, and it’s not unreasonable to think

  • that many of them are relevant to diseases such as cancer.”

The study was conducted in collaboration with scientists at The Scripps Research Institute, USA, and the University of New South Wales, Australia, and was made possible with

  • grants from the Swedish Childhood Cancer Foundation, the Swedish Cancer Society, the Cancer Research Funds of Radiumhemmet, Karolinska Institutet’s KID programme for doctoral studies, the Swedish Research Council, the Erik and Edith Fernström Foundation for Medical Research, the National Institute of Allergy and Infectious Diseases, the National Cancer Institute and the National Institutes of Health.

The functional role of long non-coding RNA in human carcinomas
EA Gibb, CJ Brown, and WL Lam
Mol Cancer. 2011; 10: 38. Published online 2011 April 13. doi: 10.1186/1476-4598-10-38. PMCID: PMC3098824
Long non-coding RNAs (lncRNAs) are emerging as new players in the cancer paradigm demonstrating potential roles in both oncogenic and tumor suppressive pathways. These novel genes are frequently

    • aberrantly expressed in a variety of human cancers,

however the biological functions of the vast majority remain unknown. Recently, evidence has begun to accumulate describing the molecular mechanisms by which these RNA species function, providing insight into

    • the functional roles they may play in tumorigenesis.

In this review, we highlight the emerging functional role of lncRNAs in human cancer.

One of modern biology’s great surprises was the discovery that the human genome encodes only ~20,000 protein-coding genes, representing <2% of the total genome sequence [1,2]. However, with the advent of

  • tiling resolution genomic microarrays and
  • whole genome and transcriptome sequencing technologies
    • it was determined that at least 90% of the genome is actively transcribed [3,4].

The human transcriptome was found to be more complex than

  • a collection of protein-coding genes and their splice variants; showing
    • extensive antisense,
    • overlapping and non-coding RNA (ncRNA) expression [5-10].

Although initially argued to be spurious transcriptional noise, recent evidence suggests that the proverbial “dark matter” of the genome

  • may play a major biological role in cellular development and metabolism [11-17].

One such player, the newly discovered long non-coding RNA (lncRNA) genes, demonstrate

  1. developmental and tissue specific expression patterns, and
  2. aberrant regulation in a variety of diseases, including cancer [18-27].

NcRNAs are loosely grouped into two major classes based on transcript size: small ncRNAs and lncRNAs [28-30].

  1. Small ncRNAs are represented by a broad range of known and newly discovered RNA species, with many being associated
    • with 5′ or 3′ regions of genes [4,31,32].

This class includes the well-documented miRNAs, RNAs ~22 nucleotides (nt) long involved in the specific regulation of both

  1. protein-coding, and
  2. putatively non-coding genes,
    • by post-transcriptional silencing or infrequently
    • by activation [33-35].

miRNAs serve as major

  1. regulators of gene expression and as
  2. intricate components of the cellular gene expression network [33-38].

Another newly described subclass are the transcription initiation RNAs (tiRNAs), which at only 18 nt in length are the smallest functional RNAs [39,40].
While small ncRNA classes, including miRNAs, have established roles in tumorigenesis, an intriguing association between
the aberrant expression of ncRNA satellite repeats and cancer has also recently been demonstrated [41-46].
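The size conventions quoted above can be collected into a minimal sketch (the thresholds follow the figures given in this review; the function name and the catch-all category are illustrative):

```python
def classify_ncrna(length_nt: int) -> str:
    """Loosely classify a non-coding RNA by transcript length, using the
    size conventions quoted in this review: lncRNAs are >200 nt (up to
    ~100 kb), miRNAs are ~22 nt, and tiRNAs are the smallest at 18 nt."""
    if length_nt > 200:
        return "lncRNA"            # long non-coding RNA
    if length_nt == 18:
        return "tiRNA"             # transcription initiation RNA
    if 20 <= length_nt <= 24:
        return "miRNA"             # ~22 nt post-transcriptional regulator
    return "other small ncRNA"     # remaining small ncRNA species

print(classify_ncrna(22))    # miRNA
print(classify_ncrna(18))    # tiRNA
print(classify_ncrna(2300))  # lncRNA (e.g., the 2.3 kb H19 transcript)
```

The hard cutoffs are, of course, a simplification of a classification the review itself calls loose.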

Types of human non-coding RNAs

In contrast to miRNAs, lncRNAs, the focus of this article, are

    • mRNA-like transcripts ranging in length from 200 nt to ~100 kilobases (kb) lacking significant open reading frames.

Many identified lncRNAs are transcribed by RNA polymerase II (RNA pol II) and are polyadenylated, but this is not a hard-and-fast rule [47,48].
There are examples of lncRNAs, such as the

  • antisense asOct4-pg5 or the
  • brain-associated BC200,
    • which are functional, but not polyadenylated [49-51].
In general, lncRNA expression levels appear to be lower than those of protein-coding genes [52-55], and some lncRNAs are preferentially expressed in specific tissues [21].

Novel lncRNAs may contribute a significant portion of the aforementioned ‘dark matter’ of the human transcriptome [56,57]. In an exciting report
by Kapranov et al., it was revealed that the bulk of the relative mass of RNA in a human cell, exclusive of the ribosomal and mitochondrial RNA,
is represented by non-coding transcripts with no known function [57].

Like miRNAs and protein-coding genes, some

  • transcriptionally active lncRNA genes display
  • histone H3K4 trimethylation at their 5′-end and
  • histone H3K36 trimethylation in the body of the gene [8,58,59].

The small number of characterized human lncRNAs have been associated with a spectrum of biological processes, for example,

  • epigenetics,
  • alternative splicing,
  • nuclear import,
  • as structural components,
  • as precursors to small RNAs and
  • even as regulators of mRNA decay [4,60-70].

Furthermore, accumulating reports of misregulated lncRNA expression across numerous cancer types suggest that

    • aberrant lncRNA expression may be a major contributor to tumorigenesis [71].

This surge in publications reflects the increasing attention to the subject, and a number of useful lncRNA databases have been created.
In this review we highlight the emerging

    • functional role of aberrant lncRNA expression, including
    • transcribed ultraconserved regions (T-UCRs), within human carcinomas.

Publications describing cancer-associated ncRNAs. Entries are based on a National Library of Medicine PubMed search using the terms
“ncRNA” or “non-coding RNA” or “noncoding RNA” or “non-protein-coding RNA” with cancer and annual (Jan. 1-Dec. 31) date limitations. …
Publicly available long non-coding RNA online databases
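A search of the kind described in that caption can be reproduced programmatically; a minimal sketch of the query construction (the helper name is illustrative, and obtaining an actual count would require submitting the string to a live PubMed/Entrez search):

```python
def build_pubmed_query(year: int) -> str:
    """Build the boolean PubMed query described in the caption above,
    restricted to a single publication year (Jan. 1 - Dec. 31)."""
    ncrna_terms = ['"ncRNA"', '"non-coding RNA"', '"noncoding RNA"',
                   '"non-protein-coding RNA"']
    term_clause = "(" + " OR ".join(ncrna_terms) + ")"
    # [PDAT] is PubMed's publication-date field tag; the colon gives a range.
    date_clause = f'("{year}/01/01"[PDAT] : "{year}/12/31"[PDAT])'
    return f"{term_clause} AND cancer AND {date_clause}"

print(build_pubmed_query(2010))
```

Running the same query once per year and plotting the hit counts yields a publication-trend figure of the type the review presents.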

The definition ‘non-coding RNA’ is typically used to describe transcripts where

    • sequence analysis has failed to identify an open reading frame.

There are cases where ‘non-coding’ transcripts were found to encode short, functional peptides [72]. Currently, a
universal classification scheme to define lncRNAs does not exist. Terms such as

  • large non-coding RNA,
  • mRNA-like long RNA, and
  • intergenic RNA

all define cellular RNAs, exclusive of rRNAs,

    • greater than 200 nt in length and having no obvious protein-coding capacity [62].

This has led to confusion in the literature as to exactly which transcripts should constitute a lncRNA. One subclass of lncRNAs is called
large or long intergenic ncRNAs (lincRNAs). These lncRNAs are

  1. exclusively intergenic and are
  2. marked by a chromatin signature indicative of transcription [8,58].

RNA species that are bifunctional preclude categorization into either group of

  • protein-coding or
  • ncRNAs as

their transcripts function both at the RNA and protein levels [73].

Ideally, the term ‘lncRNA’ would be used only to describe transcripts with no protein-coding capacity. In the meantime, and for the purposes of this review,
we will consider lncRNAs as a blanket term to encompass

  1. mRNA-like ncRNAs,
  2. lincRNAs, as well as
  3. antisense and intron-encoded transcripts,
  4. T-UCRs and
  5. transcribed pseudogenes.

Discovery of LncRNAs

The earliest reports describing lncRNA predated the discovery of miRNAs, although the term ‘lncRNA‘ had not been coined at the time .
One of the first lncRNA genes reported was the imprinted H19 gene, which was quickly followed by the discovery of the

  • silencing X-inactive-specific transcript (XIST) lncRNA gene, which
    • plays a critical function in X-chromosome inactivation [74,75].

The discovery of the first miRNA, lin-4, dramatically redirected the focus of ncRNA research from long ncRNAs to miRNAs [76].
The discovery of miRNAs revealed that RNA could

  1. regulate gene expression and that
  2. entire gene networks could be affected by ncRNA expression.

Within the last decade miRNAs were discovered to be associated with cancer. At the time of this writing there are approximately
1049 human miRNAs described in miRBase V16 [80,81] with the potential of

    • affecting the expression of approximately 60% of protein-coding genes [82,83].

Conversely, the variety and dynamics of lncRNA expression was not to be fully appreciated until the introduction of whole transcriptome sequencing.
With the advent of the FANTOM and ENCODE transcript mapping projects, it was revealed that the mammalian genome is extensively transcribed,
although a large portion of this represented non-coding sequences [3,84]. Coupled with the novel functional annotation of a few lncRNAs, this discovery
promoted research focusing on lncRNA discovery and characterization. Recent reports have described new lncRNA classes such as lincRNAs and T-UCRs [8,58,85].
Current estimates of the lncRNA gene content in the human genome range from ~7,000 to 23,000 unique lncRNAs, implying that this class of ncRNA
represents an enormous, yet largely undiscovered, component of normal cellular networks that may be disrupted in cancer biology [62].

Emerging Role of Long Non-Coding RNA in Tumorigenesis

A role for differential lncRNA expression in cancer had been suspected for many years but lacked strong supporting evidence [86]. With advancements
in cancer transcriptome profiling and accumulating evidence supporting lncRNA function, a number of differentially expressed lncRNAs have been associated
with cancer. LncRNAs have been implicated to

  • regulate a range of biological functions and
  • the disruption of some of these functions, such as
    • genomic imprinting and transcriptional regulation,
    • plays a critical role in cancer development.

Here we describe some of the better characterized lncRNAs that have been associated with cancer biology.

Human cancer-associated lncRNAs

Imprinted lncRNA genes

Imprinting is a process whereby the copy of a gene inherited from one parent is epigenetically silenced [87,88]. Intriguingly, imprinted regions often
include multiple maternal and paternally expressed genes with a high frequency of ncRNA genes. The imprinted ncRNA genes are implicated in the
imprinting of the region by a variety of mechanisms including

  • enhancer competition and chromatin remodeling [89].

A key feature of cancer is the loss of this imprinting resulting in altered gene expression [90,91]. Two of the best known imprinted genes
are in fact lncRNAs.

H19

The H19 gene encodes a 2.3 kb lncRNA that is expressed exclusively from the maternal allele. H19 and its reciprocally imprinted protein-coding neighbor
the Insulin-Like Growth Factor 2 or IGF2 gene at 11p15.5 were among the first genes, non-coding or otherwise, found to demonstrate genomic imprinting [74,92].

The expression of H19 is high during vertebrate embryo development, but is

  • downregulated in most tissues shortly after birth with the exception of skeletal tissue and cartilage [20,93,94].
  • Loss of imprinting and subsequent strong gene expression has been well-documented in human cancers. Likewise,
  • loss of imprinting at the H19 locus results in high H19 expression in cancers of the esophagus, colon, liver, and bladder, and in hepatic metastases [95-97].

H19 has been implicated as having both oncogenic and tumor suppression properties in cancer. H19 is upregulated in a number of human cancers, including
hepatocellular, bladder and breast carcinomas, suggesting an oncogenic function for this lncRNA [97-99]. In colon cancer H19 was shown to be directly activated
by the oncogenic transcription factor c-Myc, suggesting

  • H19 may be an intermediate functionary between c-Myc and downstream gene expression [98].

Conversely, the tumor suppressor gene and transcriptional activator p53 has been shown to

  • down-regulate H19 expression [100,101].

H19 transcripts also serve as a precursor for miR-675, a miRNA involved in the regulation of developmental genes [102].
miR-675 is processed from the first exon of H19 and functionally

  • downregulates the tumor suppressor gene retinoblastoma (RB1) in human colorectal cancer, further implying an oncogenic role for H19 [103].

There is evidence suggesting H19 may also play a role in tumor suppression [104,105]. Using a mouse model for colorectal cancer, it was shown that
mice lacking H19 manifested an increased polyp count compared to wild-type [106]. Secondly, a mouse teratocarcinoma model demonstrated larger
tumor growth when the embryo lacked H19, and finally in a hepatocarcinoma model, mice developed cancer much earlier when H19 was absent [107].
The discrepancy as to whether H19 has oncogenic or tumor suppressive potential may be due in part to the bifunctional nature of the lncRNA or may
be context dependent. In either case, the precise functional and biological role of H19 remains to be determined.

XIST – X-inactive-specific transcript

The 17 kb lncRNA XIST is arguably an archetype for the study of functional lncRNAs in mammalian cells, having been studied for nearly two decades.
In female cells, the XIST transcript plays a critical role in X-chromosome inactivation by

  • physically coating one of the two X-chromosomes, and is necessary for the
  • cis-inactivation of the over one thousand X-linked genes [75,108-110].

Like the lncRNAs HOTAIR and ANRIL, XIST associates with polycomb-repressor proteins, suggesting

    • a common pathway of inducing silencing utilized by diverse lncRNAs.

Discovery of Molecular Mechanisms of Traditional Chinese Medicinal Formula Si-Wu-Tang Using Gene Expression Microarray and Connectivity Map
by Zhining Wen, Zhijun Wang, Steven Wang, Ranadheer Ravula, Lun Yang, et al.
PLoS ONE (2011); 6(3): e18278.    http://dx.doi.org/10.1371/journal.pone.0018278        PubMed: 21464939
To pursue a systematic approach to discovery of mechanisms of action of traditional Chinese medicine (TCM), we used

  • microarrays,
  • bioinformatics and the
  • Connectivity Map (CMAP)
    • to examine TCM-induced changes in gene expression.

We demonstrated that this approach can be used to elucidate new molecular targets using a model TCM herbal formula Si-Wu-Tang (SWT) which is

  • widely used for women’s health.

The human breast cancer MCF-7 cells treated with 0.1 µM estradiol or 2.56 mg/ml of SWT

  • showed dramatic gene expression changes, while
  • no significant change was detected for ferulic acid, a known bioactive compound of SWT.

Pathway analysis using

  • differentially expressed genes related to the treatment effect
  • identified that expression of genes in the nuclear factor erythroid 2-related factor 2 (Nrf2) cytoprotective pathway
  • was most significantly affected by SWT,
    • but not by estradiol or ferulic acid.
  • The Nrf2-regulated genes
    • HMOX1,
    • GCLC,
    • GCLM,
    • SLC7A11 and
    • NQO1 were
  • upregulated by SWT in a dose-dependent manner, which was validated by real-time RT-PCR. Consistently,
  • treatment with SWT and its four herbal ingredients resulted in an 
  • increased antioxidant response element (ARE)-luciferase reporter activity in MCF-7 and HEK293 cells.
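Dose-dependent inductions of the kind validated above by real-time RT-PCR are conventionally quantified with the comparative Ct (2^-ΔΔCt) method; a minimal sketch, with illustrative cycle-threshold values rather than data from the study:

```python
def fold_change_ddct(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Comparative Ct (2^-ddCt) method: normalize the target gene's cycle
    threshold (Ct) to a reference gene, then compare treated vs. control."""
    dct_treated = ct_target_treated - ct_ref_treated   # dCt, treated sample
    dct_control = ct_target_control - ct_ref_control   # dCt, control sample
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Hypothetical Ct values for an Nrf2 target such as HMOX1 vs. a reference gene:
# the target crosses threshold 3 cycles earlier after treatment -> ~8-fold induction.
print(fold_change_ddct(22.0, 18.0, 25.0, 18.0))  # 8.0
```

The method assumes roughly equal amplification efficiency for target and reference genes.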

Furthermore, the gene expression profile of differentially expressed genes related to SWT treatment was used to compare with those of

  • 1,309 compounds in the CMAP database.

The CMAP profiles of estradiol-treated MCF-7 cells showed an excellent match with SWT treatment,

  • consistent with SWT’s widely claimed use for women’s diseases and indicating a phytoestrogenic effect.
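Signature matching against CMAP of the kind described above is, at its core, a rank-based enrichment comparison: do the query's up-regulated genes sit near the top of a reference profile and its down-regulated genes near the bottom? A toy sketch of one common simplification, using mean normalized ranks (this is not the exact Kolmogorov-Smirnov-style CMAP statistic, and the gene names are hypothetical):

```python
def simple_connectivity(reference_ranking, up_genes, down_genes):
    """Toy connectivity score in [-1, 1]: approaches +1 when the reference
    profile places the query's up-genes near its top and down-genes near its
    bottom, and -1 for the reverse. reference_ranking lists genes from most
    up-regulated to most down-regulated in the reference experiment."""
    n = len(reference_ranking)
    pos = {gene: i for i, gene in enumerate(reference_ranking)}
    # Mean normalized rank in [0, 1]; 0 = top of the reference list.
    up_rank = sum(pos[g] for g in up_genes) / (len(up_genes) * (n - 1))
    down_rank = sum(pos[g] for g in down_genes) / (len(down_genes) * (n - 1))
    return down_rank - up_rank

ranking = ["g1", "g2", "g3", "g4", "g5", "g6"]  # hypothetical reference profile
print(simple_connectivity(ranking, up_genes=["g1", "g2"], down_genes=["g5", "g6"]))
```

A concordant match such as the SWT/estradiol pair described above would score near +1 against the estradiol reference profile.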

Read Full Post »


Breast Cancer and Mitochondrial Mutations

Author: Larry H Bernstein, MD, FCAP

How Aggressive Breast Tumors and Mitochondrial Mutations Are Linked

Feb 18, 2013  Brunhilde H. Felding, Ph.D., Scripps Research Institute (TSRI)
Mitochondrial complex I critically determines the energy output of cellular respiration. The Felding team discovered that the balance of the key metabolic cofactors processed by complex I, NAD+ and NADH (the form NAD+ takes after accepting a key electron in the energy production cycle), shapes how aggressive a breast tumor becomes.
The team altered genes tied to NAD+ production. The resulting shift again showed that

  • higher NADH levels meant more aggressive tumors,
  • while increased NAD+ had the opposite effect.
The scientists found that enhancing the NAD+/NADH balance through the nicotinamide treatment inhibited metastasis, and the mice lived longer.

Read Full Post »

What is the Future for Genomics in Clinical Medicine?

Author and Curator: Larry H Bernstein, MD, FCAP

 

Introduction

This is the last in a series of articles looking at the past and future of the genome revolution. It is a revolution indeed: a first phase of discovery leading to the Watson-Crick model, a second phase leading to the completion of the Human Genome Project, and a third phase in the elaboration of ENCODE. But we are entering a fourth phase, not so designated, except that it leads to designing a path to the patient clinical experience.
What is most remarkable on this journey, which has little to show in treatment results at this time, is that the boundary between metabolism and genomics is breaking down. The reality is that we are a magnificent “magical” experience in evolutionary time, functioning in a bioenvironment, put together like a truly complex machine with interacting parts. What are those parts? Organelles, and a genetic message that may be constrained and modified based on chemical structure, feedback, crosstalk, and signaling pathways. This brings in diet as a source of essential nutrients, exercise as a method for delaying structural loss (not in excess), oxidative stress, repair mechanisms, and an entirely unexpected impact of this knowledge on pharmacotherapy. I illustrate this with some very new observations.

Gutenberg Redone

The first is a recent talk on how genomic medicine has constructed a novel version of the “printing press”, that led us out of the dark ages.


In our series The Creative Destruction of Medicine, I’m trying to get into critical aspects of how we can Schumpeter or reboot the future of healthcare by leveraging the big innovations that are occurring in the digital world, including digital medicine.

We have this big thing about evidence-based medicine and, of course, the sanctimonious randomized, placebo-controlled clinical trial. Well, that’s great if one can do that, but often we’re talking about needing thousands, if not tens of thousands, of patients for these types of clinical trials. And things are changing so fast with respect to medicine and, for example, genomically guided interventions that it’s going to become increasingly difficult to justify these very large clinical trials.

For example, there was a drug trial for melanoma and the mutation of BRAF, which is the gene that is found in about 60% of people with malignant melanoma. When that trial was done, there was a placebo control, and there was a big ethical charge asking whether it is justifiable to have a body count. This was a matched drug for the biology underpinning metastatic melanoma, which is essentially a fatal condition within 1 year, and researchers were giving some individuals a placebo.

The next observation is a progression of what we have already learned. The genome has a role in cellular regulation that we could not have dreamed of 25 years ago, or less. The role is far more than the translation of a message from DNA to RNA to the construction of proteins, lipoproteins, and cellular and organelle structures, and more than the regulation of glycosidic and glycolytic pathways under the influence of endocrine and apocrine interactions. Despite what we have learned about the strength of inter-molecular interactions and the strong and weak chemical bonds essential for 3-D folding, we know little about the importance of trace metals, which have key roles in catalysis and, because of their orbital structures, are essential for organic-inorganic interplay. This understanding will not come soon, because we know almost nothing about the intracellular, interstitial, and intravesicular distributions of these metals and how they affect the truly metabolic events.

I shall however, use some new information that gives real cause for joy.

Reprogramming Alters Cells’ Fate

Kathy Liszewski
Gordon Conference  Report: June 21, 2012;32(11)
New and emerging strategies were showcased at Gordon Conference’s recent “Reprogramming Cell Fate” meeting. For example, cutting-edge studies described how only a handful of key transcription factors were needed to entirely reprogram cells.
M. Azim Surani, Ph.D., Marshall-Walton professor at the Gurdon Institute, University of Cambridge, U.K., is examining cellular reprogramming in a mouse model. Epiblast stem cells are derived from the early embryo shortly after implantation of the blastocyst, about six days into development, and retain the potential to undergo reversion to embryonic stem cells (ESCs) or to primordial germ cells (PGCs). They report two critical read-outs, both of which are needed for exploring epigenetic reprogramming: “Although there are two X chromosomes in females, the inactivation of one is necessary for cell differentiation. Only after epigenetic reprogramming of the X chromosome can pluripotency be acquired. Pluripotent stem cells can generate any fetal or adult cell type but are not capable of developing into a complete organism.”
The second read-out is the activation of Oct4, a key transcription factor involved in ESC development. The expression of Oct4 in epiSCs requires its proximal enhancer.  Dr. Surani said that their cell-based system demonstrates how a systematic analysis can be performed to analyze how other key genes contribute to the many-faceted events involved in reprogramming the germline.
Reprogramming Expressway
A number of other recent studies have shown the importance of Oct4 for self-renewal of undifferentiated ESCs. It is sufficient to induce pluripotency in neural tissues and somatic cells, among others. The expression of Oct4 must be tightly regulated to control cellular differentiation. But, Oct4 is much more than a simple regulator of pluripotency, according to Hans R. Schöler, Ph.D., professor in the department of cell and developmental biology at the Max Planck Institute for Molecular Biomedicine.
Oct4 has a critical role in committing pluripotent cells into the somatic cellular pathway. When embryonic stem cells overexpress Oct4, they undergo rapid differentiation and then lose their ability for pluripotency. Other studies have shown that Oct4 expression in somatic cells reprograms them for transformation into a particular germ cell layer and also gives rise to induced pluripotent stem cells (iPSCs) under specific culture conditions.
Oct4 is the gatekeeper into and out of the reprogramming expressway. By modifying experimental conditions, Oct4 plus additional factors can induce formation of iPSCs, epiblast stem cells, neural cells, or cardiac cells. Dr. Schöler suggests that Oct4 is a potentially key factor not only for inducing iPSCs but also for transdifferentiation. “Therapeutic applications might eventually focus less on pluripotency and more on multipotency, especially if one can dedifferentiate cells within the same lineage. Although fibroblasts are from a different germ layer, we recently showed that adding a cocktail of transcription factors induces mouse fibroblasts to directly acquire a neural stem cell identity.”

Pioneer Transcription Factors

Pioneer transcription factors take the lead in facilitating cellular reprogramming and responses to environmental cues. Multicellular organisms consist of functionally distinct cellular types produced by differential activation of gene expression. Pioneer factors seek out and bind specific regulatory sequences in DNA, even though DNA is coated with proteins and condensed into a thick chromatin fiber. The pioneer factor, discovered by Prof. KS Zaret at UPenn SOM in 1996, endows, he says, the competence for gene activity, being among the first transcription factors to engage and pry open the target sites in chromatin.
FoxA factors, expressed in the foregut endoderm of the mouse, are necessary for induction of the liver program. The team found that nearly one-third of the DNA sites bound by FoxA in the adult liver occur near silent genes.

A Nontranscriptional Role for HIF-1α as a Direct Inhibitor of DNA Replication

ME Hubbi, K Shitiz, DM Gilkes, S Rey,….GL Semenza. Johns Hopkins University SOM
Sci. Signal. 2013; 6(262): ra10. doi: 10.1126/scisignal.2003417   http://dx.doi.org/10.1126/scisignal.2003417


Many of the cellular responses to reduced O2 availability are mediated through the transcriptional activity of hypoxia-inducible factor 1 (HIF-1). We report a role for the isolated HIF-1α subunit as an inhibitor of DNA replication, and this role was independent of HIF-1β and transcriptional regulation. In response to hypoxia, HIF-1α bound to Cdc6, a protein that is essential for loading of the mini-chromosome maintenance (MCM) complex (which has DNA helicase activity) onto DNA, and promoted the interaction between Cdc6 and the MCM complex. The binding of HIF-1α to the complex decreased phosphorylation and activation of the MCM complex by the kinase Cdc7. As a result, HIF-1α inhibited firing of replication origins, decreased DNA replication, and induced cell cycle arrest in various cell types.
Citation: M. E. Hubbi, Kshitiz, D. M. Gilkes, S. Rey, C. C. Wong, W. Luo, D.-H. Kim, C. V. Dang, A. Levchenko, G. L. Semenza, A Nontranscriptional Role for HIF-1α as a Direct Inhibitor of DNA Replication. Sci. Signal. 6, ra10 (2013).

Identification of a Candidate Therapeutic Autophagy-inducing Peptide

Nature 2013; 494(7436).   http://www.ncbi.nlm.nih.gov/pubmed/23364696
http://www.readcube.com/articles/10.1038/nature11866

Beth Levine and colleagues have constructed a cell-permeable peptide derived from part of an autophagy protein called beclin 1. This peptide is a potent inducer of autophagy in mammalian cells and in vivo in mice and was effective in the clearance of several viruses including chikungunya virus, West Nile virus and HIV-1.

Could this small autophagy-inducing peptide be effective in the prevention and treatment of human diseases?

PR-Set7 Is a Nucleosome-Specific Methyltransferase that Modifies Lysine 20 of Histone H4 and Is Associated with Silent Chromatin

K Nishioka, JC Rice, K Sarma, H Erdjument-Bromage, …, D Reinberg.   Molecular Cell, Vol. 9, 1201–1213, June, 2002, Copyright 2002 by Cell Press   http://www.cell.com/molecular-cell/abstract/S1097-2765(02)00548-8

http://www.sciencedirect.com/science/article/pii/S1097276502005488           http://www.ncbi.nlm.nih.gov/pubmed/12086618
http://www.cienciavida.cl/publications/b46e8d324fa4aefa771c4d6ece4d2e27_PR-Set7_Is_a_Nucleosome-Specific.pdf

We have purified a human histone H4 lysine 20 methyltransferase and cloned the encoding gene, PR/SET07. A mutation in Drosophila pr-set7 is lethal: second-instar larval death coincides with the loss of H4 lysine 20 methylation, indicating a fundamental role for PR-Set7 in development. Transcriptionally competent regions lack H4 lysine 20 methylation, but the modification coincided with condensed chromosomal regions on polytene chromosomes, including the chromocenter and euchromatic arms. The Drosophila male X chromosome, which is hyperacetylated at H4 lysine 16, has significantly decreased levels of lysine 20 methylation compared to that of females. In vitro, methylation of lysine 20 and acetylation of lysine 16 on the H4 tail are competitive. Taken together, these results support the hypothesis that methylation of H4 lysine 20 maintains silent chromatin, in part, by precluding neighboring acetylation on the H4 tail.

Next-Generation Sequencing vs. Microarrays

Shawn C. Baker, Ph.D., CSO of BlueSEQ
GEN Feb 2013
With recent advancements and a radical decline in sequencing costs, the popularity of next-generation sequencing (NGS) has skyrocketed. As costs become less prohibitive and methods become simpler and more widespread, researchers are choosing NGS over microarrays for more of their genomic applications. Judging by the immense number of journal articles citing NGS technologies, NGS is no longer just for the early adopters. Once thought of as cost-prohibitive and technically out of reach, NGS has become a mainstream option for many laboratories, allowing researchers to generate more complete and scientifically accurate data than was previously possible with microarrays.

Gene Expression

Researchers have been eager to use NGS for gene expression experiments for a detailed look at the transcriptome. Arrays suffer from fundamental ‘design bias’ —they only return results from those regions for which probes have been designed. The various RNA-Seq methods cover all aspects of the transcriptome without any a priori knowledge of it, allowing for the analysis of such things as novel transcripts, splice junctions and noncoding RNAs. Despite NGS advancements, expression arrays are still cheaper and easier when processing large numbers of samples (e.g., hundreds to thousands).
Methylation
While NGS unquestionably provides a more complete picture of the methylome, whole-genome methods are still quite expensive. To reduce costs and increase throughput, some researchers are using targeted methods, which only look at a portion of the methylome. Because details of exactly how methylation impacts the genome and transcriptome are still being investigated, many researchers find a combination of NGS for discovery and microarrays for rapid profiling to be the most practical approach.

Diagnostics

Clinicians are interested in ease of use, consistent results, and regulatory approval, which microarrays offer. With NGS, there’s always the possibility of revealing something new and unexpected, and clinicians aren’t yet prepared for the extra information NGS offers. But the power and potential cost savings of NGS-based diagnostics are alluring, leading to their cautious adoption for certain tests such as non-invasive prenatal testing.
Cytogenetics
Perhaps the application that has made the least progress in transitioning to NGS is cytogenetics. Researchers and clinicians, who are used to using older technologies such as karyotyping, are just now starting to embrace microarrays. NGS has the potential to offer even higher resolution and a more comprehensive view of the genome, but it currently comes at a substantially higher price due to the greater sequencing depth. While dropping prices and maturing technology are causing NGS to make headway in becoming the technology of choice for a wide range of applications, the transition away from microarrays is a long and varied one. Different applications have different requirements, so researchers need to carefully weigh their options when making the choice to switch to a new technology or platform. Regardless of which technology they choose, genomic researchers have never had more options.

Sequencing Hones In on Targets

Greg Crowther, Ph.D.

GEN Feb 2013

Cliff Han, Ph.D., team leader at the Joint Genome Institute at Los Alamos National Laboratory, was one of a number of scientists who made presentations regarding target enrichment at the “Sequencing, Finishing, and Analysis in the Future” (SFAF) conference in Santa Fe, which was co-sponsored by the Los Alamos National Laboratory and the DOE Joint Genome Institute. One of the main challenges is target enrichment: the selective sequencing of genomic or transcriptomic regions. The polymerase chain reaction (PCR) can be considered the original target-enrichment technique and continues to be useful in contexts such as genome finishing. “One target set is the unique gaps—the gaps in the unique sequence regions. Another is to enrich the repetitive sequences…ribosomal RNA regions, which together are about 5 kb or 6 kb.” The unique-sequence gaps, targeted for PCR with 40-nucleotide primers complementary to sequences adjacent to the gaps, did not yield the several-hundred-fold enrichment expected on the basis of previously published work. “We got a maximum of 70-fold enrichment and generally in the dozens of fold of enrichment,” noted Dr. Han.
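Fold enrichment of the kind Dr. Han quotes is conventionally the on-target read fraction observed after capture divided by the fraction a random read would hit by chance. A minimal sketch of that arithmetic, using hypothetical read counts rather than any data from the study:

```python
# Hedged sketch: computing fold enrichment for a target-capture experiment.
# The read counts and genome sizes below are illustrative assumptions,
# not figures reported by Dr. Han.

def fold_enrichment(on_target_reads, total_reads, target_bp, genome_bp):
    """Observed on-target read fraction divided by the fraction expected by chance."""
    observed = on_target_reads / total_reads
    expected = target_bp / genome_bp   # chance that a random read lands in the target
    return observed / expected

# A hypothetical 6 kb target region in a 4 Mb genome,
# with 10,000 of 100,000 reads landing on target:
fe = fold_enrichment(10_000, 100_000, 6_000, 4_000_000)
# fe ≈ 66.7 — "dozens of fold," in the range the quote describes
```

The same formula shows why enrichment saturates: once most reads are on target, the observed fraction cannot exceed 1, capping fold enrichment at genome_bp / target_bp.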

“We enrich the genome, put the enriched fragments onto the Pacific Biosciences sequencer, and sequence the repeats,” continued Dr. Han. “In many parts of the sequence there will be a unique sequence anchored at one or both ends of it, and that will help us to link these scaffolds together.” This work, while promising, will remain unpublished for now, as the Joint Genome Institute has shifted its resources to other projects.
Also at the SFAF conference, Jennifer Carter Jones, Ph.D., a genomics field applications scientist at Agilent, focused on going beyond basic target enrichment and described new tools for more efficient NGS research. “Hybridization methods are flexible and have multiple stop-start sites, and you can capture very large sizes, but they require library prep,” said Dr. Jones. “With PCR-based methods, you have to design PCR primers and you’re doing multiplexed PCR, so it’s limited in the size that you can target. But the workflow is quick because there’s no library preparation; you’re just doing PCR.” She discussed Agilent’s recently acquired HaloPlex technology, a hybrid system that includes both a hybridization step and a PCR step. Because no library preparation is required, sequencing results can be obtained in about six hours, making it suitable for clinical uses. At the same time, the hybridization step allows capture of targets of up to 5 megabases—longer than purely PCR-based methods can deliver. The Agilent talk also provided details on the applications of SureSelect, the company’s hybridization technology, to Methyl-Seq and RNA-Seq research. With this technology, 120-mer baits hybridize to targets and are then pulled down with streptavidin-coated magnetic beads.
These selections from the SFAF conference are expected to give a boost to work on the microbiome and to lead to new therapeutic approaches to infectious disease.

Summary

We have finished a breathtaking ride through the genomic universe in several sessions.  This has been a thorough review of genomic structure and function in cellular regulation.  The items that have been discussed and can be studied in detail include:

  1.  the classical model of the DNA structure
  2. the role of ubiquitinylation in managing cellular function and in autophagy, mitophagy, macrophagy, and protein degradation
  3. the nature of the tight folding of the chromatin in the nucleus
  4. intramolecular bonds and short distance hydrophobic and hydrophilic interactions
  5. trace metals in molecular structure
  6. nuclear to membrane interactions
  7. the importance of the Human Genome Project followed by Encode
  8. the Fractal nature of chromosome structure
  9. the oligomeric formation of short sequences and single nucleotide polymorphisms (SNPs) and the potential to identify drug targets
  10. Enzymatic components of gene regulation (ligase, kinases, phosphatases)
  11. Methods of computational analysis in genomics
  12. Methods of sequencing that have become more accurate and are dropping in cost
  13. Chromatin remodeling
  14. Triplex and quadruplex models not possible to construct at the time of Watson-Crick
  15. sequencing errors
  16. propagation of errors
  17. oxidative stress and its expected and unintended effects
  18. origins of cardiovascular disease
  19. starvation and effect on protein loss
  20. ribosomal damage and repair
  21. mitochondrial damage and repair
  22. miscoding and mutational changes
  23. personalized medicine
  24. Genomics to the clinics
  25. Pharmacotherapy horizons
  26. driver mutations
  27. induced pluripotent stem cells (iPSCs)
  28. The association of key targets with disease
  29. The real possibility of moving genomic information to the bedside
  30. Requirements for the next generation of electronic health record to enable item 29

Other Related articles on this Open Access Online Scientific Journal, include the following:

https://pharmaceuticalintelligence.com/2013/01/14/oogonial-stem-cells-purified-a-view-towards-the-future-of-reproductive-biology/   SSaha

https://pharmaceuticalintelligence.com/2012/10/22/blood-vessel-generating-stem-cells-discovered/ RSaxena

https://pharmaceuticalintelligence.com/2012/08/22/a-possible-light-by-stem-cell-therapy-in-painful-dark-of-osteoarthritis-kartogenin-a-small-molecule-differentiates-stem-cells-to-chondrocyte-healthy-cartilage-cells/   ASarkar and RSaxena

https://pharmaceuticalintelligence.com/2012/08/07/human-embryonic-pluripotent-stem-cells-and-healing-post-myocardial-infarction/    LHB

https://pharmaceuticalintelligence.com/2013/02/03/genome-wide-detection-of-single-nucleotide-and-copy-number-variation-of-a-single-human-cell/  SJWilliams

https://pharmaceuticalintelligence.com/2013/01/09/gene-therapy-into-healthy-heart-muscle-reprogramming-scar-tissue-in-damaged-hearts/ ALev-Ari

https://pharmaceuticalintelligence.com/2013/01/03/differentiation-therapy-epigenetics-tackles-solid-tumors/  SJWilliams

https://pharmaceuticalintelligence.com/2012/12/09/naotech-therapy-for-breast-cancer/  TBarliya

Read Full Post »

Expanding the Genetic Alphabet and Linking the Genome to the Metabolome


English: The citric acid cycle, also known as the tricarboxylic acid cycle (TCA cycle) or the Krebs cycle. Produced at WikiPathways. (Photo credit: Wikipedia)

Reporter & Curator: Larry Bernstein, MD, FCAP

Unlocking the diversity of genomic expression within tumorigenesis and “tailoring” of therapeutic options

1. Reshaping the DNA landscape between diseases and within diseases by the linking of DNA to treatments

In the New York Times of 9/24/2012, Gina Kolata reports on four types of breast cancer and the reshaping of breast cancer treatment based on the findings of the genetically distinct types, each of which has common “cluster” features that are driving many cancers. The discoveries were published online in the journal Nature on Sunday (9/23). The study is considered the first comprehensive genetic analysis of breast cancer and has been called a road map to future breast cancer treatments. If this is a landmark study in cancer genomics leading to personalized drug management of patients, it is also a fitting of treatment to measurable “combinatorial feature sets” that tie into population biodiversity with respect to known conditions. The researchers caution that it will take years to establish transformative treatments, clearly because within the genetic types there are subsets that have a bearing on treatment “tailoring.” In addition, there is growing evidence that the Watson-Crick model of the gene is itself being modified by an expansion of the alphabet used to construct the DNA library, which will open opportunities to explain some of what has been considered junk DNA and which may carry essential information with respect to metabolic pathways and pathway regulation. The breast cancer study is tied to the “Cancer Genome Atlas” project, already reported. It is expected that this work will feed into maps of genetic changes in common cancers, such as breast, colon, and lung. What is not explicit, I presume, is a closely related concept: that the translational challenge is closely tied to the suppression of key proteomic processes through manipulation of the metabolome.

Saha S. Impact of evolutionary selection on functional regions: The imprint of evolutionary selection on ENCODE regulatory elements is manifested between species and within human populations. 9/12/2012. PharmaceuticalIntelligence.Wordpress.com

Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, et al. An anatomically comprehensive atlas of the adult human brain transcriptome. Nature  Sept 14-20, 2012

Sarkar A. Prediction of Nucleosome Positioning and Occupancy Using a Statistical Mechanics Model. 9/12/2012. PharmaceuticalIntelligence.WordPress.com

Heijden et al.   Connecting nucleosome positions with free energy landscapes. (Proc Natl Acad Sci U S A. 2012, Aug 20 [Epub ahead of print]).  http://www.ncbi.nlm.nih.gov/pubmed/22908247

2. Fiddling with an expanded genetic alphabet – greater flexibility in design of treatment (pharmaneogenesis?)

Diagram of DNA polymerase extending a DNA strand and proof-reading. (Photo credit: Wikipedia)

A clear indication of this emerging remodeling of the genetic alphabet is a new study led by scientists at The Scripps Research Institute, which appeared in the June 3, 2012 issue of Nature Chemical Biology and indicates that the genetic code as we know it may be expanded to include synthetic and unnatural sequence pairing (Study Suggests Expanding the Genetic Alphabet May Be Easier than Previously Thought, Genome). They infer that the genetic instruction set for living organisms, composed of four bases (C, G, A, and T), is open to unnatural letters. An expanded “DNA alphabet” could carry more information than natural DNA, potentially coding for a much wider range of molecules and enabling a variety of powerful applications. Such an expansion would extend the translation of portions of DNA into transcriptional proteins that are heretofore unknown but have metabolic relevance and therapeutic potential. The existence of such pairing in nature has been studied in eukaryotes for at least a decade and may have a role in biodiversity. The investigators show how a previously identified pair of artificial DNA bases can go through the DNA replication process almost as efficiently as the four natural bases. This could likewise be translated into human diversity and human diseases.

The Romesberg laboratory, which collaborated on the new study, has been trying to find a way to extend the DNA alphabet since the late 1990s. In 2008, the lab developed the efficiently replicating bases NaM and 5SICS, which come together as a complementary base pair within the DNA helix, much as, in normal DNA, the base adenine (A) pairs with thymine (T), and cytosine (C) pairs with guanine (G). It had been clear that their chemical structures lack the ability to form the hydrogen bonds that join natural base pairs in DNA. Such bonds had been thought to be an absolute requirement for successful DNA replication, but that is not the case, because other bonds can be in play.

The data strongly suggested that NaM and 5SICS do not even approximate the edge-to-edge geometry of natural base pairs—termed the Watson-Crick geometry, after the co-discoverers of the DNA double helix. Instead, they join in a looser, overlapping, “intercalated” fashion that resembles a ‘mispair.’ In test after test, the NaM-5SICS pair was efficiently replicable even though it appeared that the DNA polymerase didn’t recognize it. The structural data showed that the NaM-5SICS pair maintains an abnormal, intercalated structure within double-helix DNA—but remarkably adopts the normal, edge-to-edge, “Watson-Crick” positioning when gripped by the polymerase during the crucial moments of DNA replication. NaM and 5SICS, lacking hydrogen bonds, are held together in the DNA double helix by “hydrophobic” forces, which cause certain molecular structures (like those found in oil) to be repelled by water molecules and thus to cling together in a watery medium.

The finding suggests that NaM-5SICS and potentially other hydrophobically bound base pairs could be used to extend the DNA alphabet, and that evolution’s choice of the existing four-letter DNA alphabet on this planet may not be the only workable one, leaving open the possibility of life based on other genetic systems.

3.  Studies that consider a DNA triplet model that includes one or more NATURAL nucleosides and looks closely allied to the formation of the disulfide bond and oxidation-reduction reactions

This independent work is being conducted based on a similar concept. John Berger, founder of Triplex DNA, has commented on this. He emphasizes sulfur as the most important element for understanding the evolution of metabolic pathways in the human transcriptome: a combination of sulfur-34 and sulfur-32. S34 is element 16 + fluorine, while S32 is element 16 + phosphorus. The cysteine-cystine bond is the bridge and controller between inorganic chemistry (fluorine) and organic chemistry (phosphorus). He uses a dual spelling, “sulfphur,” to combine the two when referring to the master catalyst of oxidation-reduction reactions and its various isotopic alleles (please note the duality principle, which is nature’s most important pattern). Sulfphur appears in methionine, S-adenosylmethionine, cysteine, cystine, taurine, glutathione, acetyl coenzyme A, biotin, lipoic acid, H2S, H2SO4, HSO3-, cytochromes, thioredoxins, ferredoxins, purple sulfur anaerobic bacteria (prokaryotes), hydrocarbons, green sulfur bacteria, garlic, penicillin and many antibiotics, and hundreds of CSN drugs for parasite and fungus antagonists. These are but a few names that come to mind. It is at the heart of the Krebs cycle and oxidative phosphorylation, i.e., ATP. It is also a second pathway to purine metabolism and nucleic acids. It literally supplies the key link between RNA and DNA, i.e., the SH thiol bond oxidized to the SS (DNA) cysteine bond through thioredoxins, ferredoxins, and nitrogenase. The immune system is founded upon sulfur compounds and processes. In photosynthesis, Fe4S4 to Fe2S3 absorbs the entire electromagnetic spectrum, which is filtered by the Van Allen belt some 75 miles above Earth. Look up Chromatium vinosum or Allochromatium species. There is reasonable evidence that these are the first symbiotic species of sulfur anaerobic bacteria (Fe4S4), with high-potential millivolts, which drives photosynthesis while making glucose with H2S.
He envisions a sulfur control map to automate human metabolism with exact timing sequences, at specific three-dimensional coordinates on Bravais crystalline lattices. He proposes adding the inosine-xanthosine family to the current five-nucleotide genetic code. Finally, he adds, the expanded genetic code is populated with “synthetic nucleosides and nucleotides” with all kinds of customized functional side groups, which often reshape nature’s allosteric and physicochemical properties. The inosine family is nature’s natural evolutionary partner of the adenosine and guanosine families in purine synthesis de novo, salvage, and catabolic degradation. Inosine has three major enzyme systems: IMPDH1, 2 & 3 for purine ring closure; HGPRT for purine salvage; and xanthine oxidase and xanthine dehydrogenase.

English: DNA replication or DNA synthesis is the process of copying a double-stranded DNA molecule. This process is paramount to all life as we know it. (Photo credit: Wikipedia)

4. Nutritional regulation of gene expression, an essential role of sulfur, and metabolic control

Finally, the research carried out for decades by Yves Ingenbleek and the late Vernon Young warrants mention. According to their work, sulfur is again tagged as essential for health. Sulfur (S) is the seventh most abundant element measurable in human tissues, and its provision is mainly ensured by the intake of methionine (Met), found in plant and animal proteins. Met is endowed with unique functional properties, as it controls the ribosomal initiation of protein synthesis, governs a myriad of major metabolic and catalytic activities, and may be subjected to reversible redox processes that contribute to safeguarding protein integrity.

Diets with inadequate amounts of methionine (Met) produce overt or subclinical protein malnutrition, with serious morbid consequences. The result is a reduction in the size of the lean body mass (LBM), best identified by serial measurement of plasma transthyretin (TTR), which is seen with unachieved replenishment (chronic malnutrition, strict veganism) or excessive losses (trauma, burns, inflammatory diseases). This status is accompanied by a rise in homocysteine and a concomitant fall in methionine. The ratio of S to N is quite invariant but dependent on source: the S:N ratio is typically 1:20 for plant protein sources and 1:14.5 for animal protein sources. The key enzyme involved in the control of Met in man is cystathionine-β-synthase, which declines with inadequate dietary provision of S, and the loss is not compensated by cobalamin for CH3- transfer.
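The quoted S:N ratios make a simple back-of-the-envelope calculation possible: given a protein intake, estimate the nitrogen it supplies and, from the source-specific S:N ratio, the sulfur. A minimal sketch, assuming the ratios are by mass and using the classic approximation that protein is about 16% nitrogen (the familiar 6.25 conversion factor); the 80 g intake is an illustrative value:

```python
# Hedged sketch: estimating dietary sulfur supplied by protein, from the
# S:N mass ratios quoted above (1:20 plant, 1:14.5 animal).
# Assumptions: ratios are by mass; protein is ~16% nitrogen by mass.

def sulfur_from_protein(protein_g, source="animal"):
    """Estimate grams of sulfur supplied by protein_g grams of dietary protein."""
    nitrogen_g = protein_g * 0.16                    # ~16% of protein mass is N
    s_to_n = {"plant": 1 / 20, "animal": 1 / 14.5}[source]
    return nitrogen_g * s_to_n

# Example: 80 g of animal protein vs. 80 g of plant protein per day
animal_s = sulfur_from_protein(80, "animal")   # ≈ 0.88 g S
plant_s = sulfur_from_protein(80, "plant")     # = 0.64 g S
```

The gap between the two results illustrates the text's point: for equal protein (and nitrogen) intake, plant sources deliver roughly a quarter less sulfur than animal sources.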

As a result of the disordered metabolic state arising from inadequate sulfur intake (the S:N ratio is lower in plants than in animals), the transsulfuration pathway is depressed at the cystathionine-β-synthase (CβS) level, triggering the upstream sequestration of homocysteine (Hcy) in biological fluids and promoting its conversion back to Met. Both effects stimulate comparable remethylation reactions from Hcy, indicating that Met homeostasis benefits from high metabolic priority. Maintenance of this beneficial Met homeostasis is counterpoised by the drop of cysteine (Cys) and glutathione (GSH) values downstream of CβS, depleting the reducing molecules implicated in the regulation of the three desulfuration pathways.

5. The effect of protein malnutrition and/or the inflammatory state on accretion of LBM: in closer focus

Hepatic synthesis is influenced by nutritional and inflammatory circumstances working concomitantly, and liver production of TTR integrates the dietary and stressful components of any disease spectrum. Thus we have a depletion of visceral transport proteins made by the liver and fat-free weight loss secondary to protein catabolism. This is most accurately reflected by TTR, a rapid-turnover protein that is involved in transport, is essential for thyroid function (thyroxine-binding prealbumin), and is tied to retinol-binding protein. Furthermore, protein accretion is dependent on a sulfonation reaction consuming 2 ATP. Consequently, kwashiorkor is associated with thyroid goiter, as the pituitary-thyroid axis is a major sulfonation target. With this in mind, it is not surprising that TTR is the sole plasma protein whose course closely follows LBM fluctuations. Serial measurement of TTR therefore provides unequaled information on the alterations affecting overall protein nutritional status. Recent advances in TTR physiopathology emphasize the detecting power and preventive role played by the protein in hyperhomocysteinemic states.

Individuals subjected to N-restricted regimens are basically able to maintain N homeostasis until very late in the starvation process. But the N-balance study only provides an overall estimate of N gains and losses and fails to identify the tissue sites and specific interorgan fluxes involved. Using vastly improved methods, the LBM has been measured in its components. The LBM of the reference man contains 98% of total body potassium (TBK) and the bulk of total body sulfur (TBS). TBK and TBS reach equal intracellular amounts (140 g each) and share distribution patterns (half in skeletal muscle and half in the rest of the cell mass). The body content of K and S largely exceeds that of magnesium (19 g), iron (4.2 g), and zinc (2.3 g).

TBN and TBK are highly correlated in healthy subjects, and both parameters manifest an age-dependent curvilinear decline with an accelerated decrease after 65 years. Skeletal muscle (SM) undergoes a 15% reduction in size per decade, an involutive process. The trend toward sarcopenia is more marked and rapid in elderly men than in elderly women, decreasing strength and functional capacity. The downward SM slope may be somewhat prevented by physical training, or accelerated by supranormal cytokine status, as reported in apparently healthy aged persons suffering low-grade inflammation or in critically ill patients whose muscle mass undergoes proteolysis.

6.  The results of the events described are:

  • Declining generation of hydrogen sulfide (H2S) from enzymatic sources and in the non-enzymatic reduction of elemental S to H2S.
  • The biogenesis of H2S via non-enzymatic reduction is further inhibited in areas where earth’s crust is depleted in elemental sulfur (S8) and sulfate oxyanions.
  • Elemental S operates as co-factor of several (apo)enzymes critically involved in the control of oxidative processes.

The combination of protein and sulfur dietary deficiencies constitutes a novel clinical entity threatening plant-eating population groups. Affected individuals have defective production of the Cys, GSH, and H2S reductants, explaining the persistence of an oxidative burden.

7. The clinical entity increases the risk of developing:

  • cardiovascular diseases (CVD) and
  • stroke

in plant-eating populations regardless of Framingham criteria and vitamin-B status.
Met molecules supplied by dietary proteins are subjected to transmethylation processes resulting in the release of Hcy, which:

  • either undergoes Hcy-to-Met remethylation (RM) pathways or
  • is committed to transsulfuration decay.

Impairment of CβS activity, as described in protein malnutrition, entails supranormal accumulation of Hcy in body fluids, stimulation of remethylation activity, and maintenance of Met homeostasis. The data show that combined protein and S deficiencies work in concert to deplete Cys, GSH, and H2S from their body reserves, hence impeding these reducing molecules from properly facing the oxidative stress imposed by hyperhomocysteinemia.

Although unrecognized up to now, the nutritional disorder is one of the commonest worldwide, reaching top prevalence in populated regions of Southeastern Asia. Increased risk of hyperhomocysteinemia and oxidative stress may also affect individuals suffering from intestinal malabsorption or westernized communities having adopted vegan dietary lifestyles.

Ingenbleek Y. Hyperhomocysteinemia is a biomarker of sulfur-deficiency in human morbidities. Open Clin. Chem. J. 2009 ; 2 : 49-60.

8. The dysfunctional metabolism in transitional cell transformation

A third development is also important and possibly related. The transition a cell goes through in becoming cancerous tends to be driven by changes to the cell’s DNA, but that is not the whole story. Large-scale study of the metabolic processes going on in cancer cells is being carried out at Oxford, UK, in collaboration with Japanese workers. This thread will extend our insight into the metabolome. Otto Warburg, the pioneer in respiration studies, pointed out in the early 1900s that most cancer cells get the energy they need predominantly through high utilization of glucose with lower respiration (the metabolic process that breaks down glucose to release energy). This helps the cancer cells deal with the low oxygen levels that tend to be present in a tumor; the tissue reverts to a metabolic profile of anaerobiosis. Studies of the genetic basis of cancer and of dysfunctional metabolism in cancer cells are complementary. Tomoyoshi Soga’s large lab in Japan has been at the forefront of developing the technology for metabolomics research over the past couple of decades (metabolomics being the ugly-sounding term used to describe research that studies all metabolic processes at once, much as genomics is the study of the entire genome).

Their results have led to the idea that some metabolic compounds, or metabolites, when they accumulate in cells, can cause changes to metabolic processes and set cells off on a path toward cancer. The collaborators have published a perspective article in the journal Frontiers in Molecular and Cellular Oncology that proposes fumarate as such an ‘oncometabolite.’ Fumarate is a standard compound involved in cellular metabolism. The researchers summarize work showing how accumulation of fumarate, when an enzyme goes wrong, affects various biological pathways in the cell. It shifts the balance of metabolic processes and disrupts the cell in ways that could favor the development of cancer. This is of particular interest because fumarate is the intermediate in the TCA cycle that is converted to malate.

Animation of the structure of a section of DNA. The bases lie horizontally between the two spiraling strands. (Photo credit: Wikipedia)

The Keio group is able to label glucose or glutamine, basic biological sources of fuel for cells, and track the pathways cells use to burn up the fuel.  As these studies proceed, they could profile the metabolites in a cohort of tumor samples and matched normal tissue. This would produce a dataset of the concentrations of hundreds of different metabolites in each group. Statistical approaches could suggest which metabolic pathways were abnormal. These would then be the subject of experiments targeting the pathways to confirm the relationship between changed metabolism and uncontrolled growth of the cancer cells.
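The comparison described above, profiling metabolite concentrations in tumor versus matched normal tissue and letting statistics flag abnormal pathways, can be sketched with a per-metabolite Welch t-statistic. The concentration values, metabolite names, and the simple ranking below are illustrative assumptions, not the Keio group's actual pipeline:

```python
# Hedged sketch: ranking metabolites by the magnitude of a Welch t-statistic
# between tumor and matched-normal sample groups, as a first pass at flagging
# abnormal metabolic pathways. All numbers are invented for illustration.
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t-statistic for two samples with possibly unequal variances."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# metabolite -> (tumor concentrations, normal concentrations), arbitrary units
profiles = {
    "fumarate": ([9.1, 8.7, 9.4, 8.9], [2.1, 2.4, 1.9, 2.2]),
    "citrate":  ([5.0, 5.2, 4.9, 5.1], [5.1, 4.8, 5.2, 5.0]),
}

# Rank metabolites by how strongly the two groups differ
ranked = sorted(profiles, key=lambda m: abs(welch_t(*profiles[m])), reverse=True)
# fumarate ranks first: its tumor/normal difference dwarfs citrate's
```

In practice such a screen would use many samples, a multiple-testing correction, and pathway-level aggregation, but the ranking step captures the core idea of letting the data point to the disrupted pathway.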


Read Full Post »


Reporter: Aviva Lev-Ari, PhD, RN

Genomics and the State of Science Clarity

Projects supported by the US National Institutes of Health will have produced 68,000 total human genomes — around 18,000 of those whole human genomes — through the end of this year, National Human Genome Research Institute estimates indicate. And in his book, The Creative Destruction of Medicine, the Scripps Research Institute‘s Eric Topol projects that 1 million human genomes will have been sequenced by 2013 and 5 million by 2014.

“There’s a lot of inventory out there, and these things are being generated at a fiendish rate,” says Daniel MacArthur, a group leader in Massachusetts General Hospital‘s Analytic and Translational Genetics Unit. “From a capacity perspective … millions of genomes are not that far off. If you look at the rate that we’re scaling, we can certainly achieve that.”

The prospect of so many genomes has brought clinical interpretation into focus — and for good reason. Save for regulatory hurdles, it seems to be the single greatest barrier to the broad implementation of genomic medicine.

But there is an important distinction to be made between the interpretation of an apparently healthy person’s genome and that of an individual who is already affected by a disease, whether known or unknown.

In an April Science Translational Medicine paper, Johns Hopkins University School of Medicine‘s Nicholas Roberts and his colleagues reported that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals. The researchers then concluded that whole-genome sequencing was not likely to be clinically useful for that purpose.

“The Roberts paper was really about the value of omniscient interpretation of whole-genome sequences in asymptomatic individuals and what were the likely theoretical limits,” says Isaac Kohane, chair of the informatics program at Children’s Hospital Boston. “That was certainly an important study, and it was important to establish what those limits of knowledge are in asymptomatic populations. But, in fact, the major and most important use cases [for whole-genome sequencing] may be in cases of disease.”

Still, targeted clinical interpretations are not cut and dried. “Even in cases of disease, it’s not clear that we know now how to look across multiple genes and figure out which are relevant, which are not,” Kohane adds.

While substantial progress has been made — in particular, for genetic diseases, including certain cancers — ambiguities have clouded even the most targeted interpretation efforts to date. Technological challenges, meager sample sizes, and a need for increased, fail-safe automation all have hampered researchers’ attempts to reliably interpret the clinical significance of genomic variation. But perhaps the greatest problem, experts say, is a lack of community-wide standards for the task.

Genes to genomes

When scientists analyzed James Watson’s genome — his was the first personal sequence, completed in 2007 and published in Nature in 2008 — they were surprised to find that he harbored two putative homozygous SNPs matching Human Gene Mutation Database entries that, were they truly homozygous, would have produced severe clinical phenotypes.

But Watson was not sick.

As researchers search more and more genomes, such inconsistencies are increasingly common.

“My take on what has happened is that the people who were doing the interpretation of the raw sequence largely were coming from a SNPs world, where they were thinking about sequence variants that have been observed before, or that have an appreciable frequency, and weren’t thinking very much about the singleton sequence variants,” says Sean Tavtigian, associate professor of oncology at the University of Utah.

“There is a qualitative difference between looking at whole-genome sequences and looking at single genes or, even more typically, small numbers of variants that have been previously implicated in a disease,” Boston’s Kohane adds.
“Previously, because of the cost and time limitations around sequencing and genotyping, we only looked at variants in genes for which we had a clinical indication. Now, since we can essentially see that in the near future we will be able to do a full genome sequence for essentially the same cost as just a focused set-of-variants test, all of a sudden we have to ask ourselves: What is the meaning of variants that fall outside where we would have ordinarily looked for a given disease or, in fact, if there is no disease at all?”

Mass General’s MacArthur says it has been difficult to pinpoint causal variants because they are enriched for both sequencing and annotation errors. “In the genome era, we can generate those false positives at an amazing rate, and we need to work hard to filter them back out,” he says.

“Clinical geneticists have been working on rare diseases for a long time, and have identified many genes, and are used to working in a world where there is sequence data available only from, say, one gene with a strong biological hypothesis. Suddenly, they’re in this world where they have data from patients on all 20,000 genes,” MacArthur adds. “There’s a fundamental mind-shift there, in shifting from one gene through to every gene. My impression is that the community as a whole hasn’t really internalized that shift; people still have a sense in their head that if you see a strongly damaging variant that segregates with the disease, and maybe there’s some sort of biological plausibility around it as well, that that’s probably the causal variant.”

Studies have shown that that’s not necessarily so. Because of this, “I do worry that in the next year or so we’ll see increasing numbers of mutations published that later prove to just be benign polymorphisms,” MacArthur adds.

“The meaning of whole-genome sequence I think is very much front-and-center of where genomics is going to go. What is the true, clinical meaning? What is the interpretation? And, there’s really a double-edged sword,” Kohane says. On one hand, “if you only focus on the genes that you believe are relevant to the condition you’re studying, then you might miss some important findings,” he says. Conversely, “if you look at everything, the likelihood of a false positive becomes very, very high. Because, if you look at enough things, invariably you will find something abnormal,” he adds.
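Kohane’s double-edged sword is, at bottom, a multiple-testing problem, and a rough sketch of the arithmetic makes it concrete. The function and rates below are illustrative assumptions, not figures from the article: even a tiny per-site error rate compounds quickly when millions of variant sites are examined at once.

```python
# Illustrative sketch (assumed numbers, not from the article): the chance of
# at least one false positive grows with the number of sites examined.
def p_any_false_positive(per_site_fp_rate: float, n_sites: int) -> float:
    """P(>=1 false positive) = 1 - (1 - p)^n, assuming independent sites."""
    return 1.0 - (1.0 - per_site_fp_rate) ** n_sites

# A focused panel versus a genome-wide scan, at the same per-site error rate:
print(p_any_false_positive(1e-6, 100))        # small gene panel: near zero
print(p_any_false_positive(1e-6, 3_000_000))  # ~3M variant sites: near certainty
```

Under these assumed rates, a false positive is all but guaranteed genome-wide, which is why restricting the search space (or demanding much stronger evidence per site) matters.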

False positives are but one of several challenges facing scientists working to analyze genomes in a clinical context.

Technical difficulties

That advances in sequencing technologies are far outstripping researchers’ abilities to analyze the data they produce has become a truism of the field. But current sequencing platforms are still far from perfect, making most analyses complicated and nuanced. Among other things, improvements in both read length and quality are needed to enable accurate and reproducible interpretations.

“The most promising thing is the rate at which the cost-per-base-pair of massively parallel sequencing has dropped,” Utah’s Tavtigian says. Still, the cost of clinical sequencing is not inconsequential. “The $1,000, $2,000, $3,000 whole-genome sequences that you can do right now do not come anywhere close to a 99 percent probability of identifying a singleton sequence variant, especially a biologically severe singleton sequence variant,” he says. “Right now, the real price of just the laboratory sequencing to reach that quality is at least $5,000, if not $10,000.”

However, Tavtigian adds, “techniques for multiplexing many samples into a channel for sequencing have come along. They’re not perfect yet, but they’re going to improve over the next year or so.”

Using next-generation sequencing platforms, researchers have uncovered a variety of SNPs, copy-number variants, and small indels. But to MacArthur’s mind, current read lengths are not up to par when it comes to clinical-grade sequencing, and they have made supernumerary quality-control measures necessary.

“There’s no question that we’re already seeing huge improvements. … And as we add in to that changes in technology — for instance much, much longer sequencing reads, more accurate reads, possibly combining different platforms — I think these sorts of [quality-control] issues will begin to go away over the next couple of years,” MacArthur says. “But at this stage, there is still a substantial quality-control component in any sort of interpretation process. We don’t have perfect genomes.”

In a 2011 Nature Biotechnology paper, Stanford University’s Michael Snyder and his colleagues sought to examine the accuracy and completeness of single-nucleotide variant and indel calls from both the Illumina and Complete Genomics platforms by sequencing the genome of one individual using both technologies. Though the researchers found that more than 88 percent of the unique single-nucleotide variants they detected were concordant between the two platforms, only around one-quarter of the indel calls they generated matched up. Overall, the authors reported having found tens of thousands of platform-specific variant calls, around 60 percent of which they later validated by genotyping array.
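The cross-platform comparison Snyder’s team performed boils down to set bookkeeping over variant calls keyed by position and alleles. The helper and coordinates below are a hypothetical sketch of that bookkeeping, not the study’s actual pipeline:

```python
# Hypothetical sketch: compare variant call sets from two platforms, keyed by
# (chrom, pos, ref, alt). Platform-specific calls are candidates for validation.
def concordance(calls_a: set, calls_b: set) -> dict:
    shared = calls_a & calls_b
    union = calls_a | calls_b
    return {
        "concordant_fraction": len(shared) / len(union) if union else 0.0,
        "a_only": calls_a - calls_b,  # seen by platform A only
        "b_only": calls_b - calls_a,  # seen by platform B only
    }

# Toy call sets standing in for the two platforms' outputs:
platform_a = {("chr1", 1000, "A", "G"), ("chr2", 500, "T", "C"), ("chr3", 42, "G", "A")}
platform_b = {("chr1", 1000, "A", "G"), ("chr2", 500, "T", "C"), ("chr4", 7, "C", "T")}
result = concordance(platform_a, platform_b)
print(result["concordant_fraction"])  # fraction of unique calls seen by both
```

Indels fare worse than SNVs under exactly this kind of comparison partly because the same event can be represented at different coordinates, so naive key matching undercounts agreement.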

For clinical sequencing to ever become widespread, “we’re going to have to be able to show the same reproducibility and test characteristic modification as we have for, let’s say, an LDL cholesterol level,” Boston’s Kohane says. “And if you measure it in one place, it should not be too different from another place. … Even before we can get to the clinical meaning of the genomes, we’re going to have to get some industry-wide standards around quality of sequencing.”
Scripps’ Topol adds that when it comes to detecting rare variants, “there still needs to be a big upgrade in accuracy.”

Analytical issues

Beyond sequencing, technological advances must also be made on the analysis end. “The next thing, of course, is once you have better accuracy … being able to do all of the analytical work,” Topol says. “We’re getting better at the exome, but everything outside of protein-coding elements, there’s still a tremendous challenge.”

Indeed, that challenge has inspired another — a friendly competition among bioinformaticians working to analyze pediatric genomes in a pedigree study.

With enrollment closed and all sequencing completed, participants in the Children’s Hospital Boston-sponsored CLARITY Challenge have rolled up their shirtsleeves and begun to dig into the data — de-identified clinical summaries and exome or whole-genome sequences generated by Complete Genomics and Life Technologies for three children affected by rare diseases of unknown genetic basis, and their parents. According to its organizers, the competition aims to help set standards for genomic analysis and interpretation in a clinical setting, and for returning actionable results to clinicians and patients.

“A bunch of teams have signed up to provide clinical-grade reports that will be checked by a blue-ribbon panel of judges later this year to compare and contrast the different forms of clinical reporting at the genome-wide level,” Kohane says. The winning team will be announced this fall and will receive a $25,000 prize, he adds.

While the competition covers all aspects of clinical sequencing — from readout to reporting — it is important to recognize that, more generally, there may not be one right answer and that the challenges are far-reaching, affecting even the most basic aspects of analysis.

“There is a lot of algorithm investment still to be made in order to get very good at identifying the very rare or singleton sequence variants from the massively parallel sequencing reads efficiently, accurately, [and with] sensitivity,” Utah’s Tavtigian says.

Picking up a variant that has been seen before is one thing, but detecting a potentially causal, though as-yet-unclassified variant is a beast of another nature.

“Novel mutations usually need extensive knowledge but also validation. That’s one of the challenges,” says Zhongming Zhao, associate professor of biomedical informatics at Vanderbilt University. “Validation in terms of a disease study is most challenging right now, because it is very time-consuming, and usually you need to find a good number of samples with similar disease to show this is not by chance.”

Search for significance

Much as sequencing a human genome is far less laborious now than it was in the early to mid-2000s, genome interpretation has also become increasingly automated.

Beyond standard quality-control checks, the process of moving from raw data to calling variants is now semiautomatic. “There’s essentially no manual intervention required there, apart from running our eyes over [the calls], making sure nothing has gone horribly wrong,” says Mass General’s MacArthur. “The step that requires manual intervention now is all about taking that list of variants that comes out of that and looking at all the available biological data that exists on the Web, [coming] up with a short-list of genes, and then all of us basically have a look at all sorts of online resources to see if any of them have some kind of intuitive biological profile that fits with the disease we’re thinking about.”
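The manual triage step MacArthur describes — whittling the full variant list down to a shortlist worth inspecting by hand — typically starts with simple filters on population frequency and predicted consequence. The thresholds and field names below are illustrative assumptions, not his group’s actual criteria:

```python
# Hypothetical sketch of variant triage: keep only rare variants with a
# predicted-damaging consequence, then inspect the survivors manually.
DAMAGING = frozenset({"stop_gained", "frameshift", "missense"})

def shortlist(variants, max_pop_freq=0.001, damaging=DAMAGING):
    """Filter a variant list to rare, putatively damaging candidates."""
    return [v for v in variants
            if v["pop_freq"] <= max_pop_freq and v["consequence"] in damaging]

variants = [
    {"gene": "GENE_A", "pop_freq": 0.0001, "consequence": "stop_gained"},
    {"gene": "GENE_B", "pop_freq": 0.2500, "consequence": "missense"},    # too common
    {"gene": "GENE_C", "pop_freq": 0.0000, "consequence": "synonymous"},  # benign class
]
print([v["gene"] for v in shortlist(variants)])
```

The hazard MacArthur goes on to describe is precisely that a variant surviving filters like these can still be a benign polymorphism; the filters narrow the search, they do not establish causality.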

Of course, intuitive leads are not foolproof, nor are current mutation databases. (See sidebar, story end.) And so, MacArthur says, “we need to start replacing the sort of intuitive biological approach with a much more data-informed approach.”

Developing such an approach hinges in part on having more genomes. “If we get thousands — tens of thousands — of people sequenced with various different phenotypes that have been crisply identified, that’s going to be so important because it’s the coupling of the processing of the data with having rare variants, structural variants, all the other genomic variations to understand the relationship of whole-genome sequence of any particular phenotype and a sequence variant,” Scripps’ Topol says.

Vanderbilt’s Zhao says that sample size is still an issue. “Right now, the number of samples in each whole-genome sequencing-based publication is still very limited,” he says. At the same time, he adds, “when I read peers’ grant applications, they are proposing more and more whole-genome sequencing.”

When it comes to disease studies, sequencing a whole swath of apparently healthy people is not likely to ever be worthwhile. According to Utah’s Tavtigian, “the place where it is cost-effective is when you test cases and then, if something is found in the case, go on and test all of the first-degree relatives of the case — reflex testing for the first-degree relatives,” he says. “If there is something that’s pathogenic for heart disease or colon cancer or whatever is found in an index case, then there is a roughly 50 percent chance that the first-degree relatives are going to carry the same thing, whereas if you go and apply that same test to someone in the general population, the probability that they carry something of interest is a lot lower.”
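Tavtigian’s reflex-testing argument is a back-of-the-envelope expected-value calculation: for an autosomal dominant variant found heterozygous in an index case, each first-degree relative carries it with probability about one half, versus the variant’s frequency in the general population. The numbers below are assumed for illustration:

```python
# Illustrative sketch (assumed figures): expected number of carriers found
# per batch of tests, family reflex testing versus population screening.
def expected_carriers(n_tested: int, p_carrier: float) -> float:
    return n_tested * p_carrier

relatives = 4  # e.g., two siblings and two children of the index case
print(expected_carriers(relatives, 0.5))      # reflex testing the family
print(expected_carriers(relatives, 0.0001))   # testing random individuals
```

The yield per test is orders of magnitude higher in the family, which is what makes case-then-relatives testing cost-effective where population-wide screening is not.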

But more genomes, even familial ones, are not the only missing elements. To fill in the functional blanks, researchers require multiple data types.

“We’ve been pretty much sequence-centric in our thinking for many years now because that was where the attention [was],” Topol says. “But that leaves the other ‘omes out there.”

From the transcriptome to the proteome, the metabolome, the microbiome, and beyond — Topol says that because all the ‘omes contribute to human health, they all merit review.

“The ability to integrate information about the other ‘omics will probably be a critical direction to understand the underpinnings of disease,” he says. “I call it the ‘panoromic’ view — that is really going to become a critical future direction once we can do those other ‘omics readily. We’re quite a ways off from that right now.”

Mass General’s MacArthur envisages “rolling in data from protein-protein interaction networks and tissue expression data — pulling all of these together into a model that predicts, given the phenotype, given the systems that appear to be disrupted by this variant, what are the most likely set of genes to be involved,” he says. From there, whittling that set down to putative causal variants would be simpler.

“And at the end of that, I think we’ll end up with a relatively small number of variants, each of which has a probability score associated with it, along with a whole host of additional information that a clinician can just drill down into in an intuitive way in making a diagnosis in that individual,” he adds.

According to MacArthur, “we’re already moving in this direction — in five years I think we will have made substantial progress toward that.” He adds, “I certainly think within five years we will be diagnosing the majority of severe genetic disease patients; the vast majority of those we’ll be able to assign a likely causal variant using this type of approach.”

Tavtigian, however, highlights a potential pitfall. While he says that “integration of those [multivariate] data helps a lot with assessing unclassified variants,” it is not enough to help clinicians ascertain causality. Functional assays, which can be both inconclusive and costly, will be needed for some unclassified variant hits, particularly those that are thought to be clinically meaningful.

“I don’t see how you’re going to do a functional assay for less than like $1,000,” he says. “That means that unless the cost of the sequencing test also includes a whole bunch of money for assessing the unclassified variants, a sequencing test is going to create more of a mess than it cleans up.”

Rare, common

Despite the challenges, there have been plenty of clinical sequencing success stories. Already, Scripps’ Topol says, there have been “two big fronts in 2012: One is the unknown diseases [and] the other one, of course, is cancer.” Scientists also say that whole-genome sequencing might one day become clinically useful for asymptomatic individuals.

Down the line, scientists have their sights set on sequencing asymptomatic individuals to predict disease risk. “The long-term goal is to have any person walk off the street, be able to take a look at their genome and, without even looking at them clinically, say: ‘This is a person who will almost certainly have phenotype X,'” MacArthur says. “That is a long way away. And, of course, there are many phenotypes that can’t be predicted from genetic data alone.”

Nearer term, Boston’s Kohane imagines that newborns might have their genomes screened for a number of neonatal or pediatric conditions.

Overall, he says, it’s tough to say exactly where all of the chips might fall. “It’s going to be an interesting few years where the sequencing companies will be aligning themselves with laboratory testing companies and with genome interpretation companies,” Kohane says.

Even if clinical sequencing does not show utility for cases other than genetic diseases, it could still become common practice.

“Worldwide, there are certainly millions of people with severe diseases that would benefit from whole-genome sequencing, so the demand is certainly there,” MacArthur says. “It’s just a question of whether we can develop the infrastructure that is required to turn the research-grade genomes that we’re generating at the moment into clinical-grade genomes. Given the demand and the practical benefit of having this information … I don’t think there is any question that we will continue to drive, pretty aggressively, towards large-scale genome sequencing.”

Kohane adds that “although rare diseases are rare, in aggregate they’re actually not — 5 percent of the population, or 1 in 20, is beginning to look common.”

Despite conflicting reports as to its clinical value, given the rapid declines in cost, Kohane says it’s possible that a whole-genome sequence could be less expensive than a CT scan in the next five years. Confident that many of the interpretation issues will be worked out by then, he adds, “this soon-to-be-very-inexpensive test will actually have a lot of clinical value in a variety of situations. I think it will become part of the decision procedure of most doctors.”


[Sidebar] ‘Predictive Capacity’ Challenged

In Science Translational Medicine in April, Johns Hopkins University School of Medicine’s Nicholas Roberts and his colleagues showed that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals and concluded that whole-genome sequencing was unlikely to be useful for that purpose.

As the Scripps Research Institute’s Eric Topol says, that Roberts and his colleagues examined the predictive capacity of personal genome sequencing “without any genome sequences” was but one flaw of their interpretation.

In a comment appearing in the same journal in May, Topol elaborated on this criticism, and noted that the Roberts et al. study essentially showed nothing new. “We cannot know the predictive capacity of whole-genome sequencing until we have sequenced a large number of individuals with like conditions,” Topol wrote.

Elsewhere in the journal, Tel Aviv University’s David Golan and Saharon Rosset noted that slightly tweaking the gene-environment parameters of the mathematical model used by Roberts et al. showed that the “predictive capacity of genomes may be higher than their maximal estimates.”

Colin Begg and Malcolm Pike from Memorial Sloan-Kettering Cancer Center also commented on the study in Science Translational Medicine, reporting their alternative calculation of the predictive capacity of personal sequencing and their analysis of cancer occurrence in the second breast of breast cancer patients, both of which, they wrote, “offer a more optimistic view of the predictive value of genetic data.”

In response to those comments, Bert Vogelstein — who co-authored the Roberts et al. study — and his colleagues wrote in Science Translational Medicine that their “group was the first to show that unbiased genome-wide sequencing could illuminate the basis for a hereditary disease,” adding that they are “acutely aware of its immense power to elucidate disease pathogenesis.” However, Vogelstein and his colleagues also said that recognizing the potential limitations of personal genome sequencing is important to “minimize false expectations and foster the most fruitful investigations.”


[Sidebar] ‘The Single Biggest Problem’

That there is currently no comprehensive, accurate, and openly accessible database of human disease-causing mutations “is the single greatest failure of modern human genetics,” Massachusetts General Hospital’s Daniel MacArthur says.

“We’ve invested so much effort and so much money in researching these Mendelian diseases, and yet we have never managed as a community to centralize all of those mutations in a single resource that’s actually useful,” MacArthur says. While he notes that several groups have produced enormously helpful resources and that others are developing more, currently “none covers anywhere close to the whole of the literature with the degree of detail that is required to make an accurate interpretation.”

Because of this, he adds, researchers are pouring time and resources into rehashing one another’s efforts and chasing down false leads.

“As anyone at the moment who is sequencing genomes can tell you, when you look at a person’s genome and you compare it to any of these databases, you find things that just shouldn’t be there — homozygous mutations that are predicted to be severe, recessive, disease-causing variants and dominant mutations all over the place, maybe a dozen or more, that they’ve seen in every genome,” MacArthur says. “Those things are clearly not what they claim to be, in the sense that a person isn’t sick.” Most often, he adds, the researchers who reported that variant as disease-causing were mistaken. Less commonly, the database moderators are at fault.

“The single biggest problem is that the literature contains a lot of noise. There are things that have been reported to be mutations that just aren’t. And, of course, a lot of the databases are missing a lot of mutations as well,” MacArthur adds. “Until we have a complete database of severe disease mutations that we can trust, genome interpretation will always be far more complicated than it should be.”

Tracy Vence is a senior editor of Genome Technology.

Source: 

http://www.genomeweb.com/node/1098636/

NIST Consortium Embarks on Developing ‘Meter Stick of the Genome’ for Clinical Sequencing

September 05, 2012

The National Institute of Standards and Technology has founded a consortium, called “Genome in a Bottle,” to develop reference materials and performance metrics for clinical human genome sequencing.

Following an initial workshop in April, consortium members – which include stakeholders from industry, academia, and the government – met at NIST last month to discuss details and timelines for the project.

The current aim is to have the first reference genome — consisting of genomic DNA for a specific human sample and whole-genome sequencing data with variant calls for that sample — available by the end of next year, and another, more complete version by mid-2014.

“At present, there are no widely accepted genomics standards or quantitative performance metrics for confidence in variant calling,” the consortium wrote in its work plan, which was discussed at the meeting. Its main motivation is “to develop widely accepted reference materials and accompanying performance metrics to provide a strong scientific foundation for the development of regulations and professional standards for clinical sequencing.”

“This is like the meter stick of the genome,” said Marc Salit, leader of the Multiplexed Biomolecular Science group in NIST’s Materials Measurement Laboratory and one of the consortium’s organizers. He and his colleagues were approached by several vendors of next-generation sequencing instrumentation about the possibility of generating standards for assessing the performance of next-gen sequencing in clinical laboratories. The project, he said, will focus on whole-genome sequencing but will also include targeted sequencing applications.

The consortium, which receives funding from NIST and the Food and Drug Administration, is open for anyone to participate. About 100 people, representing 40 to 50 organizations, attended last month’s meeting, among them representatives from Illumina, Life Technologies, Pacific Biosciences, Complete Genomics, the FDA, the Centers for Disease Control and Prevention, commercial and academic clinical laboratories, and a number of large-scale sequencing centers.

Four working groups will be responsible for different aspects of the project: a group led by Andrew Grupe at Celera will select and design the reference materials; a group headed by Elliott Margulies at Illumina will characterize the reference materials experimentally, using multiple sequencing platforms; Steve Sherry at the National Center for Biotechnology Information is heading a bioinformatics, data integration, and data representation group to analyze and represent the experimental data; and Justin Johnson from EdgeBio is in charge of a performance metrics and “figures of merit” group to help laboratories use the reference materials to characterize their own performance.

The reference materials will include both human genomic DNA and synthetic DNA that can be used as spike-in controls. Eventually, NIST plans to release the references as Standard Reference Materials that will be “internationally recognized as certified reference materials of higher order.”

According to Salit, there was some discussion at the meeting about what sample to select for a national reference genome. The initial plan was to use a HapMap sample – NA12878, a female from the CEPH pedigree from Utah – but it turned out that HapMap samples are consented for research use only and not for commercial use, for example in an in vitro diagnostic or for potential re-identification from sequence data.

The genome of NA12878 has already been extensively characterized, and the CDC is developing it as a reference for clinical laboratories doing targeted sequencing. “We were going to build on that momentum and make our first reference material the same genome,” Salit said. But because of the consent issues, NIST’s institutional review board and legal experts are currently evaluating whether the sample can be used.

In the meantime, consortium members have been “quite enthusiastic” about using samples from the Harvard University’s Personal Genome Project, which are broadly consented, Salit said.

The reference material working group issued a recommendation to develop a set of genomes from eight ethnically diverse parent-child trios as references, he said. For cancer applications, the references may also potentially include a tumor-normal pair.

The consortium will characterize all reference materials by several sequencing platforms. Several instrument vendors, as well as a couple of academic labs, have offered to contribute to data production. According to Justin Zook, a biomedical engineer at NIST and another organizer of the consortium, the current plan is to use sequencing technology from Illumina, Life Technologies, Complete Genomics, and – at least for the first genome – PacBio. Some of the sequencing will be done internally at NIST, which has Life Tech’s 5500 and Ion Torrent PGM available. In addition, the consortium might consider fosmid sequencing, which would provide phasing information and lower the error rate, as well as optical mapping to gain structural information, Zook said.

He and his colleagues have developed new methods for calling consensus variants from different data sets already available for the NA12878 sample, which they are planning to submit for publication in the near future. A fraction of the genotype calls will be validated using other methods, such as microarrays and Sanger sequencing. Consensus genotypes with associated confidence levels will eventually be released publicly as NIST Reference Data.
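A minimal sketch of consensus genotype calling, in the spirit of what Zook describes: take the majority genotype across datasets at each site and attach an agreement-based confidence. The voting scheme and data structures here are assumptions for illustration, not NIST’s actual (then-unpublished) methods:

```python
# Hypothetical sketch: majority-vote consensus genotypes across datasets,
# with the fraction of agreeing datasets as a crude confidence score.
from collections import Counter

def consensus(calls_per_dataset):
    """calls_per_dataset: list of dicts mapping (chrom, pos) -> genotype."""
    sites = set().union(*calls_per_dataset)
    out = {}
    for site in sites:
        votes = Counter(d[site] for d in calls_per_dataset if site in d)
        genotype, count = votes.most_common(1)[0]
        out[site] = (genotype, count / sum(votes.values()))  # (call, confidence)
    return out

datasets = [
    {("chr1", 1000): "A/G", ("chr1", 2000): "C/C"},
    {("chr1", 1000): "A/G", ("chr1", 2000): "C/T"},
    {("chr1", 1000): "A/G", ("chr1", 2000): "C/C"},
]
print(consensus(datasets))  # unanimous site gets confidence 1.0; split site, 2/3
```

Real consensus calling must also weigh platform-specific error modes rather than treating every dataset’s vote equally, which is part of why probabilistic confidence estimates are the hard part.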

An important part of NIST’s work on the data analysis will be to develop probabilistic confidence estimates for the variant calls. It will also be important to distinguish between homozygous reference genotypes and areas in the genome “where you’re not sure what the genotype is,” Zook said, adding that this will require new data formats.
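The distinction Zook draws — confidently homozygous-reference versus genuinely uncertain — can be illustrated with a toy three-way site classifier. The thresholds and labels below are assumptions for illustration, not a NIST format:

```python
# Illustrative sketch (assumed thresholds): separate sites that confidently
# match the reference from sites where there is too little evidence to call.
def classify_site(depth: int, ref_fraction: float,
                  min_depth: int = 15, min_ref_fraction: float = 0.98) -> str:
    if depth < min_depth:
        return "no_call"            # not enough reads to support any genotype
    if ref_fraction >= min_ref_fraction:
        return "hom_ref"            # confidently homozygous reference
    return "variant_candidate"      # evidence of a non-reference allele

print(classify_site(depth=40, ref_fraction=0.99))  # hom_ref
print(classify_site(depth=3,  ref_fraction=1.00))  # no_call
```

A format that records only variant sites collapses the first two categories, which is exactly the information loss Zook says new data formats will need to avoid.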

Coming up with confidence estimates for the different types of variants will be challenging, Zook said, particularly for indels and structural variants. Also, representing complex variants has not been standardized yet.

Several meeting participants called for “reproducible research and transparency in the analysis,” Salit said, and there were discussions about how to implement that at the technical level, including data archives so anyone can re-analyze the reference data.

One of the challenges will be to establish the infrastructure for hosting the reference data, which will require help from the NCBI, Salit said. Also, analyzing the data collaboratively is “not a solved problem,” and the consortium is looking into cloud computing services for that.

The consortium will also develop methods that describe how to use the reference materials to assess the performance of a particular sequencing method, including both experimental protocols and open source software for comparing genotypes. “We could throw this over the fence and tell someone, ‘Here is the genome and here is the variant table,'” Salit said, but, he noted, the consortium would like to help clinical labs use those tools to understand their own performance.

Edge Bio’s Johnson, who is chairing the working group in charge of this effort, is also involved in developing bioinformatic tools to judge the quality of genomes for the Archon Genomics X Prize (CSN 11/2/2011). Salit said that NIST is “leveraging some excellent work coming out of the X Prize” and is collaborating with a member of the X Prize team on the consensus genotype calling project.

By the end of 2013, the consortium wants to have its first “genome in a bottle” and reference data with SNV and maybe indel calls available, which will not yet include all confidence estimates. Another version, to be released in mid-2014, will include further analysis of error rates and uncertainties, as well as additional types of variants, such as structural variation.

Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb’s In Sequence and Clinical Sequencing News. E-mail her here or follow her GenomeWeb Twitter accounts at @InSequence and @ClinSeqNews.
Source:

At AACC, NHGRI’s Green Lays out Vision for Genomic Medicine

July 16, 2012

LOS ANGELES – The age of genomic medicine is within “striking distance,” Eric Green, director of the National Human Genome Research Institute, told attendees of the American Association of Clinical Chemistry’s annual meeting here on Sunday.

Speaking at the conference’s opening plenary session, Green discussed NHGRI’s roadmap for moving genomic findings into clinical practice. While this so-called “helix to healthcare” vision may take many years to fully materialize, “I predict absolutely that it’s coming,” he said.

Green noted that rapid advances in DNA sequencing have put genomics on a similar development path as clinical chemistry, which is also a technology-driven field. “If you look over the history of clinical chemistry, whenever there were technology advances, it became incredibly powerful and new opportunities sprouted up left and right,” he said.

Green likened next-gen sequencing to the autoanalyzers that “changed the face of clinical chemistry” by providing a generic platform that enabled a range of applications. In a similar fashion, low-cost sequencing is becoming a “general purpose technology” that can not only read out DNA sequence but can also provide information about RNA, epigenetic modifications, and other associated biology, he said.

The “low-hanging fruit” for genomic medicine is cancer, where molecular profiling is already being used alongside traditional histopathology to provide information on prognosis and to help guide treatment, he said.

Another area where Green said that genomic medicine is already bearing fruit is pharmacogenomics, where genomic data is proving useful in determining which patients will respond to specific drugs.

Nevertheless, while it’s clear that “sequencing is already altering the clinical landscape,” Green urged caution. “We have to manage expectations and realize it’s going to be many years from going from the most basic information about our genome sequence to actually changing medical care in any serious way,” he said.

In particular, he noted that the clinical interpretation of genomic data is still a challenge. Not only are the data volumes formidable, but the functional role of most variants is still unknown, he noted.

This knowledge gap should be addressed over the next several years as NHGRI and other organizations worldwide sequence “hundreds of thousands” of human genomes as part of large-scale research studies.

“We’re increasingly thinking about how to use that data to actually do clinical care, but I want to emphasize that the great majority of this data being generated will and should be part of research studies and not part of primary clinical care quite yet,” Green said.

Source:

http://www.genomeweb.com/sequencing/aacc-nhgris-green-lays-out-vision-genomic-medicine

Startup Aims to Translate Hopkins Team’s Cancer Genomics Expertise into Patient Care

May 16, 2012

Researchers at Johns Hopkins University who helped pioneer cancer genome sequencing have launched a commercial effort intended to translate their experience into clinical care.

Personal Genome Diagnostics, founded in 2010 by Victor Velculescu and Luis Diaz, aims to commercialize a number of cancer genome analysis methods that have been developed at Hopkins over the past several decades. Velculescu, chief scientific officer of PGDx, is director of cancer genetics at the Ludwig Center for Cancer Genetics and Therapeutics at Hopkins; while Diaz, chief medical officer of the company, is director of translational medicine at the Ludwig Center.

Other founders include Ludwig Center Director Bert Vogelstein as well as Hopkins researchers Ken Kinzler, Nick Papadopoulos, and Shibin Zhou. The team has led a number of seminal cancer sequencing projects, including the first effort to apply large-scale sequencing to cancer genomes, one of the first cancer exome sequencing studies, and the discovery of a number of cancer-related genes, including TP53, PIK3CA, APC, IDH1, and IDH2.

Velculescu told Clinical Sequencing News that the 10-person company, headquartered in the Science and Technology Park at Johns Hopkins in Baltimore, is a natural extension of the Hopkins group’s research activities.

Several years ago, “we began receiving requests from other researchers, other physicians, collaborators, and then actually patients, family members, and friends, wanting us to do these whole-exome analyses on cancer samples,” he said. “We realized that doing this in the laboratory wasn’t really the best place to do it, so for that reason we founded Personal Genome Diagnostics.”

The goal of the company, he said, “is to translate this history of our group’s experience of cancer genetics and our understanding of cancer biology, together with the technology that has now become available, and to ultimately perform these analyses for individual patients.”

The fledgling company has reached two commercial milestones in the last several weeks. First, it gained CLIA certification for cancer exome sequencing using the HiSeq 2000. In addition, it secured exclusive licensing rights from Hopkins for a technology called digital karyotyping, developed by Velculescu and colleagues to analyze copy number changes in cancer genomes.

PGDx offers a comprehensive cancer genome analysis service that combines exome sequencing with digital karyotyping, which isolates short sequence tags from specific genomic loci in order to identify chromosomal changes as well as amplifications and deletions.
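For illustration, the counting logic behind a digital-karyotyping-style analysis can be sketched in a few lines of Python. This is a toy sketch, not PGDx's actual method: the bin size, ratio thresholds, and tag coordinates below are illustrative assumptions. Mapped tags are counted per genomic bin, and tumor-to-normal density ratios flag candidate amplifications and deletions.

```python
from collections import Counter

def copy_number_calls(tumor_tags, normal_tags, bin_size=100_000,
                      amp_ratio=2.0, del_ratio=0.5):
    """Toy digital-karyotyping-style caller: compare per-bin tag densities.

    tumor_tags / normal_tags: lists of (chromosome, position) tag mappings.
    Returns a dict mapping (chrom, bin_start) -> 'amplification' or 'deletion'.
    """
    def bin_counts(tags):
        # Group each tag into a fixed-size genomic window and count per window
        return Counter((chrom, pos // bin_size * bin_size) for chrom, pos in tags)

    tumor, normal = bin_counts(tumor_tags), bin_counts(normal_tags)
    calls = {}
    for bin_key, n_count in normal.items():
        ratio = tumor.get(bin_key, 0) / n_count
        if ratio >= amp_ratio:
            calls[bin_key] = "amplification"
        elif ratio <= del_ratio:
            calls[bin_key] = "deletion"
    return calls
```

In practice such calls would be normalized for total tag counts and mappability, and smoothed across adjacent bins; this sketch only shows the per-bin ratio idea.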

The company sequences tumor-normal pairs and promises a turnaround time of six to 10 weeks, though Velculescu said that ongoing improvements in sequencing technology and the team’s analysis methods promise to reduce that time “significantly.” The company is currently seeing turnaround times of under a month.

To date, the company has focused solely on the research market. Customers have included pharmaceutical and biotech companies, individual clinicians and researchers, and contract research organizations, while the scale of these projects has ranged from individual patients to thousands of exomes for clinical trials.

While the company performs its own sequencing for smaller projects, it relies on third-party service providers for larger studies.

PGDx specializes in all aspects of cancer genome analyses, but has a particular focus on the front and back end of the workflow, Velculescu said, including “library construction, pathologic review of the samples, dissection of tumor samples to enrich tumor purity, next generation sequencing, identification of tumor-specific alterations, and linking of these data to clinical and biologic information about human cancer.”

The sequencing step in the middle, however, “is really almost becoming a commodity,” he noted. “Although we’ve done it in house, we typically do outsource it and that allows us to scale with the size of these projects.”

He said that PGDx typically works with “a number of very high-quality sequence partners to do that part of it,” but he declined to disclose these partners.

On the front end, PGDx has developed “a variety of techniques that we’ve licensed and optimized from Hopkins that have allowed us to improve extraction of DNA from both frozen tissue and [formalin-fixed, paraffin-embedded] tissue, even at very small quantities,” Diaz said. The team has also developed methods “to maximize our ability to construct libraries, capture, and then perform exomic sequencing with digital karyotyping.”

Once the sequence data is in hand, “we have a pipeline that takes that information and deciphers the changes that are most likely to be related to the cancer and its genetic make-up,” he said. “That’s not trivial. It requires inspection by an experienced cancer geneticist.”

While the firm is working on automating the analysis, “it’s not something that is entirely automatable at this time and therefore cannot be commoditized,” Diaz said.
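The core tumor-normal comparison such a pipeline automates can be reduced to a minimal sketch. The variant tuples, coordinates, and read-support cutoff here are illustrative assumptions, not the company's actual pipeline: candidate somatic changes are the tumor variants absent from the matched normal sample, filtered for adequate read support.

```python
def somatic_candidates(tumor_variants, normal_variants, min_reads=10):
    """Toy somatic filter: keep tumor variants not seen in the matched normal.

    Variants are (chrom, pos, ref, alt, supporting_reads) tuples; positions
    and the min_reads cutoff are illustrative.
    """
    # Germline variants are those present in the normal sample
    germline = {(c, p, r, a) for c, p, r, a, _ in normal_variants}
    return [v for v in tumor_variants
            if v[:4] not in germline and v[4] >= min_reads]
```

Real pipelines add many more filters (base quality, strand bias, population databases) and, as the founders note, still require review by an experienced cancer geneticist.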

The firm issues a report for its customers that “provides information not only on the actual sequence changes which are of high quality, but what these changes are likely to do,” Velculescu said, including “information about diagnosis, prognosis, therapeutic targeting [information] or predictive information about the therapy, and clinical trials.”

So far, the company has relied primarily on word of mouth to raise awareness of its offerings. “We’ve literally been swamped with requests from people who just know us,” Velculescu said. “I think one of the major reasons people have been coming to us for either these small or very large contracts is that people are getting this type of NGS data and they don’t know what to do with it — whether it’s a researcher who doesn’t have a lot of experience in cancer or a clinician who hasn’t seen this type of data before.”

While there’s currently “a wealth in the ability to get data, there’s an inadequacy in being able to understand and interpret the data,” he said.

Pricing for the company’s services is on a case-by-case basis, but Diaz estimated that retail costs are currently between $5,000 and $10,000 per tumor-normal pair for research purposes. Clinical cases are more costly because they require deeper sequencing coverage, additional analyses, and a physician interpretation.

A Cautious Approach

While the company’s ultimate goal is to help oncologists use genomic information to inform treatment for their patients, PGDx is “proceeding cautiously” in that direction, Diaz said.

The firm has so far sequenced around 50 tumor-normal pairs for individual patients, but these have been for “informational purposes,” he said, stressing that the company believes the field of cancer genomics is still in the “discovery” phase.

“I think we’re really at the beginning of the genomic revolution in cancer,” Diaz said. “We are partnering with pharma, with researchers, and with certain clinicians to start bringing this forward — not only as a discovery tool but eventually as a clinical application.”

“We do think that rushing into this right now is too soon, but we are building the infrastructure — for example our recent CLIA approval for cancer genome analyses — to do that,” he added.

This cautious approach sets the firm apart from some competitors, including Foundation Medicine, which is about to launch a targeted sequencing test that it is marketing as a diagnostic aid to help physicians tailor therapy for their patients. Diagnostic firm Asuragen is also offering cancer sequencing services based on a targeted approach (CSN 1/12/12), as are a number of academic labs.

Diaz said that PGDx’s comprehensive approach also sets it apart from these groups. “We think there’s a lot of clinically actionable information in the genome … and we don’t want to limit ourselves by just looking at a set of genes and saying that these may or may not have importance.”

While the genes in targeted panels “may have some data surrounding them with regard to prognosis, or in relation to a therapy, that’s really only a small part of the story when it comes to the patient’s cancer,” Diaz said.

“That’s why we would like to remain the company that looks at the entire cancer genome in a comprehensive fashion, because we don’t know enough yet to break it down to a few genes,” he said.

The company’s proprietary use of digital karyotyping to find copy number alterations is another differentiator, Velculescu said, because many cancer-associated genes — such as p16, EGFR, MYC, and HER2/neu — are frequently altered by copy number changes rather than point mutations.

Ultimately, “we want to develop something that has value for the clinician,” Diaz said. “A clinician currently sees 20 to 30 patients a day and may have only a few minutes to look at a report. If [information from sequencing] doesn’t have immediate high-impact value, it’s going to be very hard to justify its use down the road.”

He added that the company is “thinking very hard about what we can squeeze out of the cancer genome to provide that high-impact clinical value — something that isn’t just going to improve the outcome of patients by a few months or weeks, but actually change the outlook of that patient substantially.”

Source:

http://www.genomeweb.com/sequencing/startup-aims-translate-hopkins-teams-cancer-genomics-expertise-patient-care

 
Bernadette Toner is editorial director for GenomeWeb’s premium content.

In Educational Symposium, Illumina to Sequence, Interpret Genomes of 50 Participants for $5K Each

June 27, 2012

This story was originally published June 25.

As part of a company-sponsored symposium this fall to “explore best practices for deploying next-generation sequencing in a clinical setting,” Illumina plans to sequence and analyze the genomes of around 50 participants for $5,000 each, Clinical Sequencing News has learned.

According to Matt Posard, senior vice president and general manager of Illumina’s translational and consumer genomics business, the event is part of a “multi-step process to engage experts in the field around whole-genome sequencing, and to support the conversation.”

The “Understand your Genome” symposium will take place Oct. 22-23 at Illumina’s headquarters in San Diego.

The company sent out invitations to the event over the last few months, targeting individuals with a professional interest in whole-genome sequencing, including medical geneticists, pathologists, academics, and industry or business leaders, Posard told CSN this week. To provide potential participants with more information about the symposium, Illumina also hosted a webinar this month that included a Q&A session.

Registration closed June 14 and has exceeded capacity — initially 50 spots, a number that may increase slightly, Posard said. Everyone else is currently waitlisted, and Illumina plans to host additional symposia next year.

“There has been quite a bit of unanticipated enthusiasm around this from people who are speaking at the event or planning to attend the event,” including postings on blogs and listservs, Posard said.

As part of their $5,000 registration fee, which does not include travel and lodging, participants will have their whole genome sequenced in Illumina’s CLIA-certified and CAP-accredited lab prior to the event. It is also possible to participate without having one’s genome sequenced, but only as a companion to a full registrant, according to Illumina’s website. The company prefers that participants submit their own sample, but as an alternative, they may submit a patient sample instead.

The general procedure is very similar to Illumina’s Individual Genome Sequencing, or IGS, service in that it requires a prescription from a physician, who also receives the results to review them with the participant. However, participants pay less than they would through IGS, where a single human genome currently costs $9,500.

Participants will also have a one-on-one session with an Illumina geneticist prior to being sequenced, and they can choose to not receive certain medical information as part of the genome interpretation.

Doctors will receive the results and review them with the participants sometime before the event. “There will be no surprises for these participants when they come to the symposium,” Posard said.

Results will include not only a list of variants but also a clinical interpretation of the data by Illumina geneticists. This is currently not part of IGS, which requires an interpretation of the data by a third party, but Illumina plans to start offering interpretation services for IGS before the symposium, Posard said.

“Our stated intent has always been that we want to fill in all of the pieces that the physicians require, so we are building a human resource, as well as an informatics team, to provide that clinical interpretation, and we are using that apparatus for the ‘Understand your Genome’ event,” Posard said.

The interpretation will include “a specified subset of genes relating to Mendelian conditions, drug response, and complex disease risks,” according to the website, which notes that “as with any clinical test, the patient and physician must discuss any medically significant results.”

The first day of the symposium will feature presentations on clinical, laboratory, ethical, legal, and social issues around whole-genome sequencing by experts in the field. Speakers include Eric Topol from the Scripps Translational Science Institute, Matthew Ferber from the Mayo Clinic, Robert Green from Brigham and Women’s Hospital and Harvard Medical School, Heidi Rehm from the Harvard Partners Center for Genetics and Genomics, Gregory Tsongalis from the Dartmouth Hitchcock Medical Center, Robert Best from the University of South Carolina School of Medicine, Kenneth Chahine from Ancestry.com, as well as Illumina’s CEO Jay Flatley and chief scientist David Bentley.

On the second day, participants will receive their genome data on an iPad and learn how to analyze their results using the iPad MyGenome application that Illumina launched in April.

The planned symposium stirred some controversy at the European Society of Human Genetics annual meeting in Nuremberg, Germany, this week. During a presentation in a session on the diagnostic use of next-generation sequencing, Gert Matthijs, head of the Laboratory for Molecular Diagnostics at the Center for Human Genetics in Leuven, Belgium, said he was upset because the invitation to Illumina’s event apparently not only reached selected individuals but also patient organizations.

“To me, personally, [the event] tells that some people are really exploring the limits of business, and business models, to get us to genome sequencing,” he said.

“We have to be very careful when we put next-generation sequencing direct to the consumer, or to patient testing, but it’s a free world,” he added later.

Posard said that Illumina welcomes questions about and criticism of the symposium. “This is another example of us being extremely responsible and transparent in how we’re handling this novel application that everybody acknowledges is the wave of the future,” he said. “We want to responsibly introduce that wave, and I believe we’re doing so, through such things as the ‘Understand your Genome’ event, but not limited to this event.”

Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb’s In Sequence and Clinical Sequencing News.

Federal Court Rules Helicos Patent Invalid; Company Reaches Payment Agreement with Lenders

August 30, 2012

NEW YORK (GenomeWeb News) – A federal court has ruled in Illumina’s favor in a lawsuit filed by Helicos BioSciences that had alleged patent infringement.

In a decision dated Aug. 28, District Judge Sue Robinson of the US District Court for the District of Delaware granted Illumina’s motion for summary judgment declaring US Patent No 7,593,109 held by Helicos invalid for “lack of written description.”

Titled “Apparatus and methods for analyzing samples,” the patent relates to an apparatus, systems, and methods for biological sample analysis.

The ‘109 patent was the last of three patents that Helicos accused Illumina of infringing, following Helicos’ voluntary dismissal, with prejudice, of the other two patents earlier this year. In October 2010, Helicos added Illumina and Life Technologies to a lawsuit that originally accused Pacific Biosciences of patent infringement.

Helicos dropped its lawsuit against Life Tech and settled with PacBio earlier this year, leaving Illumina as the sole defendant.

In seeking a motion for summary judgment, Illumina argued that the ‘109 patent does not disclose “a focusing light source operating with any one of the analytical light sources to focus said optical instrument on the sample.” Illumina’s expert witness further said that the patent “does not describe how focusing light source works” nor does it provide an illustration of such a system, according to court documents.

In handing down her decision, Robinson said, “In sum, and based on the record created by the parties, the court concludes that Illumina has demonstrated, by clear and convincing evidence, that the written description requirement has not been met.”

In a statement, Illumina President and CEO Jay Flatley said he was pleased with the court’s decision.

“The court’s ruling on the ‘109 patent, and Helicos’ voluntary dismissal of the other patents in the suit, vindicates our position that we do not infringe any valid Helicos patent,” he said. “While we respect valid and enforceable intellectual property rights of others, Illumina will continue to vigorously defend against unfounded claims of infringement.”

After the close of the market Wednesday, Helicos also disclosed that it had reached an agreement with lenders to waive defaults arising from Helicos’ failure to pay certain risk premium payments in connection with prior liquidity transactions. The transactions are part of a risk premium payment agreement Helicos entered into with funds affiliated with Atlas Venture and Flagship Ventures in November 2010.

The lenders have agreed to defer the risk premium payments “until [10] business days after receipt of a written notice from the lenders demanding the payment of such risk premium payments,” Helicos said in a document filed with the US Securities and Exchange Commission.

The Cambridge, Mass.-based firm also disclosed that Noubar Afeyan and Peter Barrett have resigned from its board.

Helicos said two weeks ago that its second-quarter revenues dipped 29 percent year over year to $577,000. In an SEC document, it also warned that existing funds were not sufficient to support its operations and related litigation expenses through the planned September trial date for its dispute with Illumina.

In Thursday trade on the OTC market, shares of Helicos closed down 20 percent at $0.04.

Source:

http://www.genomeweb.com/sequencing/federal-court-rules-helicos-patent-invalid-company-reaches-payment-agreement-len

State of the Science: Genomics and Cancer Research

April 2012
Basic research allows for a better understanding of cancer and, eventually, improved patient outcomes. Zhu Chen, China’s minister of health, and Shanghai Jiao Tong University’s Zhen-Yi Wang received the seventh annual Szent-Györgyi prize from the National Foundation for Cancer Research for their work on a treatment for acute promyelocytic leukemia. Genome Technology‘s Ciara Curtin spoke to Chen, Wang, and past prize winners about the state of cancer research.

Genome Technology: Doctors Wang and Chen, can you tell me a bit about the work you did that led to you receiving the Szent-Györgyi prize?

Zhen-Yi Wang: I am a physician. I am working in the clinic, so I have to serve the patients. … I know the genes very superficially, not very deeply, but the question raised to me is: There are so many genes, but how are [we] to judge what is the most important?

Zhu Chen: The work that is recognized by this year’s Szent-Györgyi Prize concerns … acute promyelocytic leukemia. Over the past few decades, we have been involved in developing new treatment strategies against this disease.

You have two [therapies — all-trans retinoic acid and arsenic trioxide] — that target the same protein but with slightly different mechanisms, so we call this synergistic targeting. When the two drugs combine together for the induction therapy, then we see very nice response in terms of the complete remission rate. But more importantly, we see that this synergistic targeting, together with the effect of the chemotherapy, can achieve a very high five-year disease-free survival — as high as 90 percent.

But we were more interested in the functional aspects of the genome, to understand what each gene does and also to particularly understand the network behavior of the genes.

GT: There are a number of consortiums looking at the genome sequences of many cancer types. What do you hope to see from such studies?

Webster Cavenee: This is a way that tumors are being sequenced in a rational kind of way. It would have been done anyway by labs individually, which would have taken a lot more money and taken a lot longer, too. The human genome sequence, everybody said, ‘Why are you going to do that?’ … But that now turns out to be a tremendous resource. … From the point of view of The Cancer Genome Atlas, having the catalog of all of the kinds of mutations which are present in tumors can be very useful because you can see patterns. For example, in the glioblastoma cancer genome project, they found an unexpected association of some mutations and combinations of mutations with drug sensitivity. Nobody would have thought that.

The problem, of course, is that when you are sequencing all these tumors, it’s a very static thing. You get one point in time and you sequence whatever comes out of this big lump of tissue. That big lump is made up of a lot of different kinds of pieces, so when you see a mutation, you can’t know where it came from and you don’t know whether it actually does anything. That then leads into what’s going to be the functionalizing of the genome. Because in the absence of knowing that it has a function, it’s not going to be of very much use to develop drugs or anything like that. And that’s a much bigger exercise because that involves a lot of experiments, not just stuffing stuff into a sequencer.

Peter Vogt: [The genome] has to be used primarily to determine function. Without function, there’s not much you can do with these mutations, because the distinction between a driver mutation and a passenger mutation can’t be made just on the basis of sequence.

Carlo Croce: After that, you have to be able to validate all of the genetic operations in model systems where you can reproduce the same changes and see whether there are the same consequences. Otherwise, without validation, to develop therapy doesn’t make much sense because maybe those so-called driver mutations will turn out to be something else.

GT: Will sequencing of patient’s tumors come to the clinic?

CC: It is inevitable. Naturally, there are a lot of bottlenecks. To do the sequencing is the, quote, trivial part and it is going to cost less and less. But then interpreting the data might be a little bit more cumbersome.

Sujuan Ba: Dr. Chen, there is an e-health card in China right now. Do you think some day gene sequencing will be stored in that card?

ZC: We are developing a digital healthcare in China. We started with electronic health records and now by providing the e-health card to the people, that will facilitate the individualized health management and also the supervision of our healthcare system. In terms of the use of genetic information for clinical purposes, as Professor Croce said, it’s going to happen.

GT: What do you think are the major questions in cancer research that still need to be addressed?

PV: There are increasingly two schools of thought on cancer. One is that it is all an engineering problem: We have all the information we need, we just need to engineer the right drugs. The other school says it’s still a basic knowledge problem. I think more and more people think it’s just an engineering problem — give us the money and we’ll do it all. A lot of things can be done, but we still don’t have complete knowledge.

Roundtable Participants
Sujuan Ba, National Foundation for Cancer Research
Webster Cavenee, University of California, San Diego
Zhu Chen, Ministry of Health, China
Carlo Croce, Ohio State University
Peter Vogt, Scripps Research Institute
Zhen-Yi Wang, Shanghai Jiao Tong University

Source:



Reporter: Aviva Lev-Ari, PhD, RN

Game On

September 2012

Combining gaming and genomics may sound odd, but play can produce useful data. By presenting complex biological problems as games and distributing those games far and wide, researchers can take advantage of a large network of virtual computation in the form of thousands of players. Games can also harness the natural human ability for pattern recognition and take advantage of the ways in which the human brain is better at that than even the most powerful supercomputer.

One of the first games to harness the power of the crowd was Foldit, which was developed by the University of Washington’s David Baker in 2008. Foldit encourages players to solve protein structure prediction problems by folding proteins into stable shapes.

Last September, an elite contingent of 15 Foldit players used molecular replacement to solve the crystal structure of a retroviral protease from the Mason-Pfizer Monkey Virus, which causes simian AIDS.

Baker’s team published a paper in Nature Structural & Molecular Biology describing how the solutions facilitated the identification of novel structural features that could provide a foundation for the design of new antiretroviral drugs. According to the authors, this marked the first time gamers solved a longstanding biological problem.

Then in December 2010, a team from McGill University rolled out Phylo, a Sudoku-like game that utilizes players’ abilities to match visual patterns between regions of similarity in multiple sequence alignments. Phylo’s designers reported in a PLOS One paper published in March that, since the game’s launch, they had received more than 350,000 solutions produced by more than 12,000 registered players.
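The underlying optimization problem that Phylo presents as a puzzle can be illustrated with a simple sum-of-pairs alignment scorer. The match, mismatch, and gap values below are illustrative assumptions; the game's actual scoring scheme differs. Players are effectively rearranging gaps to maximize a column-by-column score like this one.

```python
from itertools import combinations

def alignment_score(rows, match=1, mismatch=-1, gap=-2):
    """Score a multiple sequence alignment column by column (sum of pairs).

    rows: equal-length aligned sequences, with '-' marking gaps. Every pair
    of sequences contributes to each column's score.
    """
    assert len({len(r) for r in rows}) == 1, "rows must be aligned to equal length"
    score = 0
    for col in zip(*rows):                 # iterate over alignment columns
        for a, b in combinations(col, 2):  # all sequence pairs in the column
            if a == "-" or b == "-":
                score += gap if a != b else 0  # a pair of gaps scores nothing
            elif a == b:
                score += match
            else:
                score += mismatch
    return score
```

Finding the gap placement that maximizes this score across many long sequences is exactly the kind of combinatorial search where, as Waldispühl suggests, human pattern recognition can improve on a computed starting alignment.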

“We don’t know right now what other interesting problems we could solve using these crowdsourcing techniques. We still have a lack of deep understanding about when human intuition or help is very useful,” says Jérôme Waldispühl, an assistant professor at McGill University and lead developer of Phylo. “But what I like is the involvement of society digging into the most meaningful and deep scientific questions — you are trying to involve society into the very process of scientific discovery.”

Last year, a group from Carnegie Mellon University and Stanford University released an online game called EteRNA, the purpose of which is to help investigators envision RNA knots, polyhedra, and any other RNA shapes that have yet to be identified. Top designs are analyzed each week to determine if the molecules can fold themselves into the 3D shapes predicted by RNA modeling software.

Purposeful play

More “games with a purpose” — as they are sometimes called — aimed at solving biological problems are in development.

“No matter how big your super-computer is, you can’t try all gene combinations within a 20,000-gene space,” says Benjamin Good, a research associate at the Scripps Research Institute. “Humans have a role to play here because maybe we can do better than what happens when you just compute for just many, many random datasets.”

Good and his colleagues have developed a suite of games, called Gene Games, which they hope to use to build out genomic databases and to improve existing algorithms. This suite includes GenESP, a two-player game in which both players see the same disease and each must guess what gene the other is typing from a dropdown list of possible genes. This game is aimed at an audience with some knowledge of the field and takes advantage of players’ expertise.

Another game is Combo, which challenges players to find the ideal combination of genes for phenotype prediction. Combo players can choose to start at an introductory level where they separate mammals from other animals or divide the animal kingdom into five classes, or begin at a more challenging level where they have to identify gene expression signatures in tumors to predict survival or metastasis.
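A toy version of the search Combo players perform might look like the following; the expression values, labels, and scoring rule are illustrative assumptions, not the game's actual mechanics. It exhaustively scores small gene subsets by how well a simple summed-expression rule separates two phenotype classes.

```python
from itertools import combinations

def best_gene_combo(expression, labels, combo_size=2):
    """Exhaustively score gene combinations by how well summed expression
    separates two classes; returns the best-scoring combination.

    expression: {gene: [value per sample]}; labels: [0/1 per sample].
    """
    genes = list(expression)
    n = len(labels)

    def accuracy(combo):
        # Sum expression over the combo's genes for each sample
        scores = [sum(expression[g][i] for g in combo) for i in range(n)]
        threshold = sum(scores) / n
        preds = [1 if s > threshold else 0 for s in scores]
        # Allow either class to sit on either side of the threshold
        return max(
            sum(p == y for p, y in zip(preds, labels)),
            sum(p != y for p, y in zip(preds, labels)),
        ) / n

    return max(combinations(genes, combo_size), key=accuracy)
```

The point of the game is that this brute-force enumeration becomes infeasible in a 20,000-gene space, which is where human intuition is meant to help.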

“The goal of GenESP is to provide a new way of building out gene annotation databases, and Combo is specifically made to enrich a machine-learning algorithm. It’s all an attempt to use games to tap into large communities of people to get after what they know,” Good says.

But he is quick to point out that there is no specific critical mass of players that will provide the desired data — instead, it is about finding the right players just as the Foldit project seems to have done.

“I can’t say when we hit 1,000 users or 10,000 users, we’ll have X units of compute available to us — it just doesn’t work like that. But if we end up getting the right 100 people playing these games, we can make a lot of progress,” Good adds.

Diagnostic disport

Whether taking a crowdsourcing approach to access networks of thousands of players or to get at a handful of skilled players, gaming can not only provide new data for researchers, but can also provide a handy diagnostic resource for clinicians.

In May, a group from the University of California, Los Angeles, released Molt, a game in which players aid in the diagnosis of malaria-infected red blood cells. After completing training, players are presented with frames of red blood cell images and use a virtual syringe tool to eliminate infected cells and then use a collect-all tool to designate the remaining cells in the frame as healthy. Results from the games are sent back to the point-of-care or hospital setting.

Popular online games are also providing models for new bioinformatics applications. A new software tool called ImageJS marries bioinformatics with pathology, which its developers say takes its cue from Rovio Entertainment’s popular Angry Birds game. Developed by a team at the University of Alabama at Birmingham, this free app allows pathologists to drag pathology slides into a Web app to identify malignancies based on color.
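Though ImageJS itself is written in JavaScript, the color-based flagging idea can be sketched in a few lines of Python. The RGB values and dominance threshold below are illustrative assumptions, not the app's actual algorithm: pixels whose stain color dominates are flagged and reported as a fraction of the field.

```python
def classify_pixels(pixels, red_threshold=1.5):
    """Toy color-based classifier in the spirit of ImageJS's approach:
    flag pixels whose red channel dominates as candidate malignant stain.

    pixels: list of (r, g, b) tuples; the threshold is an illustrative
    assumption. Returns the fraction of flagged pixels.
    """
    flagged = sum(1 for r, g, b in pixels
                  if r > red_threshold * max(g, b, 1))  # max(..., 1) avoids flagging black pixels
    return flagged / len(pixels)
```

A real module would operate on decoded slide images and calibrated stain colors, but the appeal Almeida describes is the same: the logic is small enough to ship to the browser as script, so the data never has to leave the machine.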

“There are two bioinformatics problems at the point of care that an Angry Birds-approach would solve. The first is that it delivers the code to the machine rather than forcing the data to travel, so the code does the traveling,” says Jonas Almeida, a professor at UAB. “The second problem it solves is that it doesn’t require installation of any application; they are written in JavaScript, which is a native language of Web development, and the application has no access to the local file system, so we have an application that does not make the IT people nervous.”

Almeida and his team have also designed a module for ImageJS that can analyze genomes using the same visual interface as the malignancy–diagnosis module to provide clinicians with even better diagnostic accuracy. In the spirit of crowdsourcing, Almeida adds that the success of ImageJS will rely upon the willingness of its users to develop their own modules to solve their own specific problems using the game-like interface.

“Funding agencies are also interested in these out-of-the-box solutions. However, there is also some skepticism that will probably need more time to be overcome through clear success stories with these new gaming based solutions to existing problems,” says Molt’s designer Aydogan Ozcan, an associate professor at UCLA.

Despite the funding limitations — and the formidable challenge of designing a game that people will want to play — gaming is gaining traction and enjoying a favorable reception from the research community.

“On a one-to-one basis, everyone seems to love it. They get the potential and are just waiting to see what is going to happen,” Scripps’ Good says. “But developing these games is an enormous challenge. Our background is in bioinformatics, it’s not in making things that are fun. It’s hard to make a fun game all by itself, let alone make one that will solve a difficult problem.”

Matthew Dublin is a senior writer at Genome Technology.

http://www.genomeweb.com/node/1122001

 

Related topic on this Scientific Web Site

‘Gamifying’ Drug R&D: Boehringer Ingelheim, Sanofi, Eli Lilly

https://pharmaceuticalintelligence.com/2012/08/22/gamifying-drug-rd-boehringer-ingelheim-sanofi-eli-lilly/

