
Archive for the ‘Biological Networks, Gene Regulation and Evolution’ Category

 

Reporter: Aviva Lev-Ari, PhD

Mucosal CD30-positive T-cell lymphoproliferations of the head and neck show a clinicopathologic spectrum similar to cutaneous CD30-positive T-cell lymphoproliferative disorders

Modern Pathology 25, 983-992 (July 2012) doi:10.1038/modpathol.2012.38

Andrew P Sciallis, Mark E Law, David J Inwards, Rebecca F McClure, William R Macon, Paul J Kurtin, Ahmet Dogan and Andrew L Feldman

Abstract

CD30-positive T-cell lymphoproliferative disorders are classified as cutaneous (primary cutaneous anaplastic large cell lymphoma and lymphomatoid papulosis) or systemic. As extent of disease dictates prognosis and treatment, patients with skin involvement need clinical staging to determine whether systemic lymphoma also is present. Similar processes may involve mucosal sites of the head and neck, constituting a spectrum that includes both neoplasms and reactive conditions (eg, traumatic ulcerative granuloma with stromal eosinophilia). However, no standard classification exists for mucosal CD30-positive T-cell lymphoproliferations. To improve our understanding of these processes, we identified 15 such patients and examined clinical presentation, treatment and outcome, morphology, phenotype using immunohistochemistry, and genetics using gene rearrangement studies and fluorescence in situ hybridization. The 15 patients (11 M, 4 F; mean age, 57 years) had disease involving the oral cavity/lip/tongue (9), orbit/conjunctiva (3) or nasal cavity/sinuses (3). Of 14 patients with staging data, 7 had mucosal disease only; 2 had mucocutaneous disease; and 5 had systemic anaplastic large cell lymphoma. Patients with mucosal or mucocutaneous disease only had a favorable prognosis and none developed systemic spread (follow-up, 4–93 months). Three of five patients with systemic disease died of lymphoma after 1–48 months. Morphologic and phenotypic features were similar regardless of extent of disease. One anaplastic lymphoma kinase-positive case was associated with systemic disease. Two cases had rearrangements of the DUSP22-IRF4 locus on chromosome 6p25.3, seen most frequently in primary cutaneous anaplastic large cell lymphoma. Our findings suggest mucosal CD30-positive T-cell lymphoproliferations share features with cutaneous CD30-positive T-cell lymphoproliferative disorders, and require clinical staging for stratification into primary and secondary types. Primary cases have clinicopathologic features closer to primary cutaneous disease than to systemic anaplastic large cell lymphoma, including indolent clinical behavior. Understanding the spectrum of mucosal CD30-positive T-cell lymphoproliferations is important to avoid possible overtreatment resulting from a diagnosis of overt T-cell lymphoma.

 

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN
UPDATED on 12/6/2013

23andMe Suspends Health Interpretations

December 06, 2013

Direct-to-consumer genetic testing company 23andMe has stopped offering its health-related test to new customers, bringing it in line with a request from the US Food and Drug Administration.

In a letter sent on Nov. 22, FDA said that 23andMe had not adequately responded to its concerns regarding the validity of its Personal Genome Service. The letter instructed 23andMe to “immediately discontinue marketing” the service until it receives authorization from the agency.

According to a post at the company’s blog from CEO Anne Wojcicki, 23andMe customers who purchased their kits on or after Nov. 22 “will not have access to health-related results.” They will, though, have access to ancestry information and their raw genetic data. Wojcicki notes that the customers may have access to the health interpretations in the future depending on FDA marketing authorization. Those customers are also being offered a refund.

Customers who purchased their kits before Nov. 22 will have access to all reports.

“We remain firmly committed to fulfilling our long-term mission to help people everywhere have access to their own genetic data and have the ability to use that information to improve their lives,” a notice at the 23andMe site says.

In a letter appearing in the Wall Street Journal earlier this week, FDA Commissioner Margaret Hamburg wrote that the agency “supports the development of innovative tests.” As an example, she pointed to its recent clearance of sequencing-based tests from Illumina.

She added that the agency also understands that some consumers do want to know more about their genomes and their genetic risk of disease, and that a DTC model would let consumers take an active role in their health.

“The agency’s desire to review these particular tests is solely to ensure that they are safe, do what they claim to do and that the results are communicated in a way that a consumer can understand,” Hamburg said.

In a statement, 23andMe’s Wojcicki says that the company remains committed to its ethos of allowing people access to their genetic information. “Our goal is to work cooperatively with the FDA to provide that opportunity in a way that clearly demonstrates the benefit to people and the validity of the science that underlies the test,” Wojcicki adds.

SOURCE: 23andMe

23andMe Takes First Step Toward FDA Clearance

Company Provides Leadership in Direct-to-Consumer Genetic Testing

Mountain View, CA – July 30, 2012 — 23andMe, the leading personal genetics company, today announced that it has delivered its first round of 510(k) documentation to the Food and Drug Administration (FDA). Since its 2006 inception, 23andMe largely created the direct-to-consumer market for genetic analysis. As a leader in personal genetics, the company is now the first in the industry to announce it is working towards FDA clearance. The FDA will review the filing over the next several months and the process of gaining clearance will take time as both the FDA and 23andMe attempt to apply current regulations to a new and growing industry.

“23andMe has pioneered the direct-to-consumer genetic testing industry and we are committed to helping individuals understand their own genetic information through proven DNA analysis technologies and web-based interactive tools,” stated 23andMe CEO and Co-Founder Anne Wojcicki. “23andMe is working proactively with the FDA to ensure the industry delivers high quality information that consumers can trust.”

23andMe’s Personal Genome Service® enables individuals to explore their own DNA and now provides more than 200 health and trait reports as well as genetic ancestry information. The extensive package of health and ancestry reports offered by 23andMe has grown dramatically as the body of research in the general scientific community has continued to make significant advances in assessing the role of genetics in health and diseases. That body of peer-reviewed, published research is regularly curated by the team of 23andMe scientists to determine which information meets the rigorous 23andMe criteria to be incorporated into its health and trait reporting as detailed in https://www.23andme.com/for/scientists/.

“23andMe has always valued the guidance of the FDA and, in fact, engaged the agency in conversations prior to launching the Personal Genome Service® in 2007. Our ongoing conversations with the FDA in the last year, in particular, resulted in a focused approach that resulted in our ability to compile a comprehensive analysis of 23andMe’s direct-to-consumer testing for FDA consideration,” stated 23andMe VP Corporate Development and Chief Legal Officer Ashley Gould.

In providing personalized health reports, 23andMe believes that individuals have a fundamental right to their personal genetic data and that genetic data is an essential complement to family history for people to make informed decisions in conjunction with their healthcare provider.

The 23andMe platform is designed to be both fluid and transparent and the filing with the FDA is designed to accommodate this data-driven paradigm. The body of information provided by 23andMe grows over time, not only in adding more traits and health reports, but also in interpreting results based on the continued evolution of scientific literature. 23andMe uses a CLIA-certified laboratory to process customer DNA samples. The 510(k) documentation provided to the FDA builds upon the company’s scientifically sound practices by demonstrating the clinical and analytical validity of its reporting.

“FDA clearance is an important step on the path towards getting genetic information integrated with routine medical care,” explained Ms. Wojcicki. “As the knowledge around personalized medicine continues to grow, consumers should expect their healthcare providers to begin to incorporate genetic information into their treatments and preventative care.”

“We believe our ongoing conversations with the FDA and ultimately securing clearance will be very important as we continue to serve our customers with genetic information that is an essential consideration in their personal health, and continue to grow our community, which is now more than 150,000 strong,” concluded Ms. Wojcicki.

An ongoing service, 23andMe’s Personal Genome Service® provides a wealth of information about an individual’s DNA and updates about new research. Customers can also choose to participate in the company’s unique research programs. By completing online surveys, customers contribute directly to genetic research that can potentially lead to better understanding of and new treatments for a variety of health conditions.

To learn more, visit www.23andMe.com.

About 23andMe

23andMe, Inc. is a leading personal genetics company dedicated to helping individuals understand their own genetic information through DNA analysis technologies and web-based interactive tools. The company’s Personal Genome Service® enables individuals to gain deeper insights into their ancestry and inherited traits. The vision for 23andMe is to personalize healthcare by making and supporting meaningful discoveries through genetic research. 23andMe, Inc., was founded in 2006, and the company is advised by a group of renowned experts in the fields of human genetics, bioinformatics and computer science. More information is available at www.23andme.com.

Seeking 510(k) Clearance for Genomic Testing Service, 23andMe Maintains Direct-to-Consumer Ethos

July 31, 2012

23andMe this week submitted the first of several 510(k) applications it plans to file in order to gain clearance from the US Food and Drug Administration for its Personal Genome Service. However, despite acquiescing to regulatory oversight, the firm hopes to keep marketing its genomic testing service directly to consumers.

“The fundamental philosophy of 23andMe is that people have the right to access their genomic information directly, and nothing has changed in that regard” now that the company is filing for 510(k) clearance, Ashley Gould, 23andMe’s VP of corporate development and chief legal officer, told PGx Reporter. “This submission to the FDA is under our existing business model where individuals can directly access their information.”

The de novo 510(k) application 23andMe submitted this week represents the first of several the company plans to file this year with the FDA related to its Personal Genome Service. The first submission, made to the Office of In Vitro Diagnostic Device Evaluation and Safety at the FDA’s Center for Devices and Radiological Health, included information about seven genetic tests that are included as part of its service.

23andMe said that its genetic tests provide information on the effects of specific gene variants on health conditions based on peer-reviewed, published literature. Each test that 23andMe submits to the FDA for clearance may contain more than one genetic marker or gene, but Gould explained that these tests don’t report on the combined effect of multiple genes on a particular condition unless such multi-gene effects are supported by the literature.

Gould added that 23andMe has submitted as part of its 510(k) application analytical validation data for its tests, as well as clinical validation data supported by published literature. By year end, 23andMe plans to file information with the agency on as many as 100 tests.

The company’s Personal Genome Service, performed in a CLIA lab by the Laboratory Corporation of America, currently provides so-called “health reports” for 242 diseases and conditions, including genetic associations with carrier status, disease risk, drug response, and physical traits.

23andMe declined to disclose which of these diseases or conditions would be among the tests that the company is submitting for FDA clearance. Gould noted that the agency has provided input on which tests needed to be reviewed and cleared.

“The FDA is now in the process of reviewing our submission, and it will be an iterative process where we go back and forth. They’ll have questions and we’ll answer them,” Gould said. The decision to file the first 510(k) application is the “culmination of an ongoing process” and wasn’t triggered by a particular event, she added.

A Rocky Regulatory Road

The company noted in a statement that its interactions with the FDA began before it launched its genotyping service in 2007. In the intervening years, however, the nascent DTC genomic testing services industry raised alarms among state and federal health regulators and became the subject of scrutiny that ultimately caused most DTC firms to modify their business models and require a physician’s prescription for their tests, leaving 23andMe as the only US-based firm marketing its service directly to consumers.

The regulatory kerfuffle began in 2008 when health regulators in New York and California asked DTC genomics companies to get the proper state certification and a doctor’s prescription in order to market medical tests to state residents. Then, in 2010, when DTC genomics company Pathway Genomics announced plans to market its online testing service via brick-and-mortar pharmacies, the FDA asked several DTC genomics firms why their tests weren’t cleared through the agency for marketing as medical devices (PGx Reporter 6/25/2008; 6/16/2010).

After this, the FDA held a public hearing on DTC genomic testing services, where stakeholders from the broader diagnostics industry asked the agency to promulgate regulations that would bring more consistency to the genetic risk information sold by DTC genomics firms. Meanwhile, 23andMe and other supporters of the DTC model maintained that people are capable of understanding genomic data and should have unfettered access to their genomic information, without the “paternalistic” intervention of health regulators and physicians (PGx Reporter 7/21/2010).

A few days after the FDA public meeting, the House Committee on Energy and Commerce held a hearing to discuss findings from an undercover Government Accountability Office investigation that found that the test results provided by DTC genomics companies were “misleading and of little or no practical use to consumers.” (PGx Reporter 7/28/2010)

By this time, many industry observers were already predicting the demise of the DTC genomics industry. Some regulatory officials and stakeholders had proposed at the time that certain types of medical testing offered by genomic testing services – such as pharmacogenomic testing – would have to become prescription-only, while other types of testing, such as those for learning about ancestry, could continue to be available directly to consumers.

In fact, the FDA’s Medical Devices Advisory Committee’s Molecular and Clinical Genetics Panel last year came to a similar conclusion. After discussing the regulatory issues affecting the DTC genomics services industry, the committee members concluded that consumers should get a prescription from a doctor before purchasing genetic tests that could potentially be used to inform healthcare decisions. The panel was more comfortable maintaining direct consumer access to certain nutrigenetic tests, but felt that carrier testing, genetic testing to gauge disease risk, and pharmacogenetic testing should be routed through a physician (PGx Reporter 3/9/2011).

After undergoing significant regulatory scrutiny, by the end of last year, half of the major players in the DTC genomics sector, including Navigenics and Pathway, had abandoned the DTC model and chosen to market their tests through physicians. Navigenics was recently acquired by Life Technologies for its CLIA lab, a key piece of Life Tech’s plans to develop its own molecular diagnostics products. Having shifted its strategic focus under Life Tech, Navigenics will not be taking on any more customers for its genomic testing service (PGx Reporter 7/18/2012).

Meanwhile, as one of the last remaining firms still holding on to the DTC model, 23andMe has publicly expressed its willingness to meet FDA regulations, but has also insisted that the agency’s oversight shouldn’t necessarily preclude consumer access to genetic testing. FDA’s OIVD ensures the safety and efficacy of complex IVDs that are marketed through healthcare professionals, such as genetic tests that predict whether a person will respond to a particular treatment, though it also oversees tests that are available over-the-counter for consumers to use at home, such as pregnancy tests.

While it’s still unknown how OIVD intends to categorize 23andMe’s service, it’s likely that with regulatory approval, the company may need to change the language it uses to market its tests. “Part of any 510(k) review process includes a review of product ‘labeling,'” Gould said in an e-mail. “It is possible that some language may need to be modified based on the FDA labeling review.”

And even though 23andMe believes that it will be able to continue providing its customers with unfettered access to its testing services, the FDA of course could still delineate certain portions of its service as prescription only. The FDA does not discuss applications it is reviewing and did not respond to questions from PGx Reporter about 23andMe’s 510(k) submission.

For the time being, the company will continue to market the Personal Genome Service as a single, direct-to-consumer offering for $299.

The agency has 90 days to review the 510(k) submission. Gould said 23andMe is already working on its second application.

Seeking Validation

With FDA’s blessing to market its service, 23andMe is hoping to deflect the negative light in which the genomic testing service industry has been portrayed by some in the past. “We’re hopeful that FDA clearance will provide increased confidence in genetic testing services generally, [result in] increased understanding of what these services have to offer, and [establish] that these are valid tests,” Gould said.

“A big motivation for us seeking FDA clearance is to try to pave this pathway toward personalized medicine,” she added. “So, we’re absolutely proponents of people taking their DNA [information] to their healthcare providers and talking to them about the data, and being more individually empowered and knowledgeable about their own bodies.”

The 510(k) filing comes during a time when 23andMe is expanding its business. The firm earlier this month bolstered its potential customer base and strengthened its ability to conduct genome-wide association studies through the acquisition of CureTogether, a website where patients share qualitative information about more than 500 health conditions. The purchase marked 23andMe’s first acquisition.

The company is also working with pharmaceutical firms that are using the genomic and phenotypic information it has curated through its more than 150,000 customers to advance understanding of diseases and inform the development of new drugs. For example, 23andMe and Genentech announced last year that they are conducting research to learn about genes that might protect people against Alzheimer’s disease (PGx Reporter 6/29/2011).

Gould explained this week that 23andMe’s work with drug companies is separate from the Personal Genome Service that it markets to customers. “Our collaborations [with pharma] are not designed to launch companion diagnostics,” Gould said, adding that those partnerships are focused on advancing knowledge about the gene-disease or gene-drug relationship in specific populations.

 http://www.genomeweb.com/mdx/seeking-510k-clearance-genomic-testing-service-23andme-maintains-direct-consumer

Media Contacts

Rubenstein Communications
1345 Ave of the Americas
New York, NY 10105
Jane Rubinstein, 212-843-8287, jrubinstein@rubenstein.com
Alison Hendrie, 212-843-8029, ahendrie@rubenstein.com

Press Releases

Read Full Post »

Mitochondrial Mechanisms of Disease in Diabetes Mellitus

Reporter: Aviva Lev-Ari, PhD, RN

Mitochondrial Mechanisms of Disease in Diabetes Mellitus

By Mark Abrahams, MD

Reviewed by Loren Wissner Greene, MD, MA (Bioethics), Clinical Associate Professor of Medicine, NYU School of Medicine, New York, NY

Published: 03/13/2012

http://www.medpagetoday.com/resource-center/diabetes/Mitochondrial-Mechanisms-Disease-Diabetes-Mellitus/a/31636 

Mitochondria are found in every cell in the human body.1 Known as the “power plant of the cell,” mitochondria are central to the conversion of fatty acids and glucose to usable energy in the form of ATP (adenosine triphosphate).1, 2 A growing body of evidence now demonstrates a link between various disturbances in mitochondrial functioning and type 2 diabetes.1

In patients with type 2 diabetes, the size, number, and efficiency of mitochondria are reduced.3 This can have pathogenic effects in the tissues central to glucose metabolism — the pancreas, liver, and skeletal muscle.

In pancreatic beta cells, mitochondria are central to insulin secretion. As the amount of glucose in the circulation increases, so does the mitochondrial production of ATP inside the cell. When this occurs, ATP-sensitive potassium channels close, leading to membrane depolarization and the secretion of insulin.1

Much data support the concept that mitochondrial function is required for appropriate glucose-induced insulin secretion.4 Studies in beta cell lines have shown that when mitochondrial function is experimentally decreased, insulin secretion shows a similar reduction.4 Supporting studies in humans have shown that individuals with disabling mutations in mitochondrial DNA (e.g., the A3243G mutation) demonstrate impaired pancreatic insulin secretion in response to glucose challenge.

Mitochondrial dysfunction in skeletal muscle and the liver might also contribute to the development of diabetes. As part of its cellular respiratory function, mitochondria utilize (and break down) fatty acids. When mitochondrial function is reduced, intracellular fats may accumulate.2

One hypothesis is that excessive accumulation of intracellular fat may have a central role in insulin resistance. This hypothesis is supported by the observation that excessive lipids lead to reductions in numbers and function of insulin receptors.2

The link between obesity, inactivity, and type 2 diabetes is well established — and weight loss remains a cornerstone of diabetes management.3 The role of mitochondria as cellular “power plant” makes a compelling case for a causative relationship between mitochondrial dysfunction and clinical disease.3

Reduced mitochondrial capacity has been demonstrated in patients with type 2 diabetes.3 In one study, patients who lost weight demonstrated an increase in mitochondrial density and insulin sensitivity. Patients achieved an average weight loss of 7.1% and experienced a decrease in mean HbA1c from 7.9% to 6.5%, as well as significant improvements in both fasting and postprandial blood glucose.3

Strategies that focus on increasing mitochondrial function could represent important new approaches in the treatment of diabetes.

One agent under investigation is coenzyme Q10 (CoQ10). In animal studies, CoQ10 significantly reduced fasting and 2-hour postprandial glucose levels. In humans, early, uncontrolled studies of diabetic patients receiving CoQ10 have demonstrated improvements in blood glucose and insulin synthesis and secretion. Furthermore, the clinical benefit of CoQ10 has been evident in a number of therapeutic trials in patients with maternally inherited mitochondrial defects like MELAS (Mitochondrial Encephalomyopathy, Lactic Acidosis, and Stroke-like episodes).1 The therapeutic advantage of supplementary CoQ10 may be especially helpful in patients taking statins, as these patients have been shown to have decreased production of endogenous CoQ10.5

Impaired mitochondrial function in tissues central to glucose metabolism (pancreas, muscle, liver) may be partly responsible for diabetes pathogenesis.2 The failure to appropriately manage cellular energy needs may result in impaired insulin secretion and/or insulin resistance.2 Targeting mitochondrial dysfunction may represent a promising path forward in the development of novel treatments for diabetes.

REFERENCES

  1. Lamson DW, et al. Mitochondrial Factors in the Pathogenesis of Diabetes: A Hypothesis for Treatment. Altern Med Rev. 2002;7:94-111.
  2. Patti ME, et al. The Role of Mitochondria in the Pathogenesis of Type 2 Diabetes. Endocr Rev. 2010;31:364-395.
  3. Toledo FG, et al. Effects of Physical Activity and Weight Loss on Skeletal Muscle Mitochondria and Relationship With Glucose Control in Type 2 Diabetes. Diabetes. 2007;56:2142-2147.
  4. Maassen JA, et al. Mitochondrial Diabetes: Molecular Mechanisms and Clinical Presentation. Diabetes. 2004;53(suppl 1):S103-S109.
  5. Ghirlanda G, et al. Evidence of Plasma CoQ10-Lowering Effect by HMG-CoA Reductase Inhibitors: A Double-Blind, Placebo-Controlled Study. J Clin Pharmacol. 1993;33:226-229.

 

Read Full Post »

Reporter: Prabodh Kandala, PhD

Scientists are reporting another reason — besides possible liver damage, stomach bleeding and other side effects — to avoid drinking alcohol while taking certain medicines. Their report in ACS’ journal Molecular Pharmaceutics describes laboratory experiments in which alcohol made several medications up to three times more available to the body, effectively tripling the original dose.

Christel Bergström and colleagues explain that beverage alcohol, or ethanol, can cause an increase in the amount of non-prescription and prescription drugs that are “available” to the body after taking a specific dose. Alcohol can change how enzymes and other substances in the body interact with many of the 5,000 such medications on the market. Some of these medications don’t dissolve well in the gastrointestinal tract — especially in the stomach and intestines. The researchers sought to test whether ethanol made these drugs dissolve more easily. If so, this would make the drugs more available in the body, possibly intensifying their effects when combined with alcohol.

To find out, the scientists used a simulated environment of the small intestine to test how rapidly medications dissolved when alcohol was and was not present. Almost 60 percent of the 22 medications in their tests dissolved much faster in the presence of alcohol. In addition, they found that certain types of substances, such as those that were acidic, were more affected. Some common acidic drugs include warfarin, the anticoagulant; Tamoxifen, used to treat certain forms of cancer; and naproxen, which relieves pain and inflammation.

Abstract:

Ethanol intake can lead to an unexpected and possibly problematic increase in the bioavailability of druglike compounds. In this work we investigated the effect of ethanol on the apparent solubility and dissolution rate of poorly soluble compounds in simulated intestinal fluid representing a preprandial state. A series of 22 structurally diverse, poorly soluble compounds were measured for apparent solubility and intrinsic dissolution rate (37 °C) in phosphate buffer pH 6.5 (PhB 6.5) and fasted state simulated intestinal fluid (FaSSIF, pH 6.5) with and without ethanol at 5% v/v or 20% v/v. The obtained data were used to understand for which molecules ethanol results in an increased apparent solubility and, therefore, may increase the amount of drug absorbed. In FaSSIF with 20% ethanol, 59% of the compounds displayed >3-fold higher apparent solubility than in pure FaSSIF, whereas the effects of 5% ethanol on solubility were, in most cases, negligible. Acidic and neutral compounds were more solubilized by the addition of ethanol than by lecithin/taurocholate aggregates, whereas bases showed a more substance-specific response to the additives in the buffer. The stronger solubilizing capacity of ethanol as compared to the mixed lipid aggregates in FaSSIF was further identified through Spearman rank analyses, which showed a stronger relationship between FaSSIF with 20% ethanol and PhB 6.5 with 20% ethanol (rS of 0.97) than between FaSSIF with 20% ethanol and FaSSIF (rS of 0.86). No relationships were found between solubility changes in media containing ethanol and single physicochemical properties, but multivariate data analysis showed that inclusion of ethanol significantly reduced the negative effect of compound lipophilicity on solubility. For this data set, the higher concentration of ethanol gave a dose number (Do) <1 for 30% of the compounds that showed incomplete dissolution in FaSSIF. Significant differences were shown in the melting point, lipophilicity, and dose profiles between the compounds having a Do <1 and Do >1, with the latter having higher absolute values in all three parameters. In conclusion, this study showed that significant effects of ethanol on apparent solubility in the preprandial state can be expected for lipophilic compounds. The results herein indicate that acidic and neutral compounds are more sensitive to the addition of ethanol than to the mixed lipid aggregates present in the fasted intestine.
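The dose number criterion mentioned in the abstract can be made concrete with a short calculation. By the standard definition, Do is the administered dose divided by the amount that can dissolve in the available fluid volume, Do = (dose/volume)/solubility, with roughly 250 mL commonly assumed for the fasted gastrointestinal fluid; Do < 1 suggests the full dose can dissolve. The sketch below uses that standard definition with invented numbers, not values from the paper.

```python
# Minimal sketch (not from the paper): the standard dose number,
# Do = (dose / volume) / solubility; Do < 1 suggests the full dose can
# dissolve in the available intestinal fluid volume.

def dose_number(dose_mg: float, solubility_mg_per_ml: float, volume_ml: float = 250.0) -> float:
    """Return the dose number Do for a given dose and apparent solubility."""
    return (dose_mg / volume_ml) / solubility_mg_per_ml

# Hypothetical compound: a 200 mg dose with an apparent solubility of 0.5 mg/mL
# in plain FaSSIF versus 2.0 mg/mL with 20% ethanol (illustrative values only).
print(dose_number(200, 0.5))   # 1.6 -> incomplete dissolution expected
print(dose_number(200, 2.0))   # 0.4 -> the full dose could dissolve
```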

Ref:

http://www.sciencedaily.com/releases/2012/07/120726153953.htm

http://pubs.acs.org/doi/abs/10.1021/mp2006467

Read Full Post »

Reporter: Prabodh Kandala, PhD

Two men with longstanding HIV infections no longer have detectable HIV in their blood cells following bone marrow transplants. The virus was easily detected in blood lymphocytes of both men prior to their transplants but became undetectable by eight months post-transplant. The men, who were treated at Brigham and Women’s Hospital (BWH), have remained on anti-retroviral therapy.

Their cases will be presented on July 26, 2012 at the International AIDS Conference by Timothy Henrich, MD and Daniel Kuritzkes, MD, physician-researchers in the Division of Infectious Diseases at BWH.

“This gives us some important information,” said Dr. Kuritzkes. “It suggests that under the cover of anti-retroviral therapy, the cells that repopulated the patient’s immune system appear to be protected from becoming re-infected with HIV.”

One patient’s bone marrow transplant was two years ago, the other was four years ago. Both were performed at the Dana-Farber/Brigham and Women’s Cancer Center. Over time, as the patients’ cells were replaced by donor cells, traces of HIV were lost. Currently, both patients have no detectable HIV DNA or RNA in their blood. The level of HIV antibody, a measure of exposure to HIV, also declined in both men.

“We expected HIV to vanish from the patients’ plasma, but it is surprising that we can’t find any traces of HIV in their cells,” said Dr. Henrich. “The next step is to determine if there are any traces of HIV in their tissue.”

The research team is currently designing studies that would enable them to look for HIV in the tissues. Researchers also plan to study additional HIV-positive patients who have undergone a bone marrow transplant.

Researchers point out that there are two key differences between the Brigham patients and the “Berlin patient,” a man who was functionally cured of HIV after a stem cell transplant. In the Berlin patient’s case, his donor was specifically chosen because the donor had a genetic mutation that resisted HIV. The Brigham patients’ bone marrow transplants were done without any thought to selecting an HIV-resistant donor. Second, the Berlin patient ceased anti-retroviral therapy after his transplant, while the Brigham patients have remained on anti-retroviral therapy.

Ref:

http://www.sciencedaily.com/releases/2012/07/120726153945.htm

Read Full Post »

Reporter: Prabodh Kandala, PhD

Word cloud by Danielle Smolyar

Men diagnosed with prostate cancer are less likely to die from the disease than from largely preventable conditions such as heart disease, according to a new study from Harvard School of Public Health (HSPH). It is the largest study to date that looks at causes of death among men with prostate cancer, and suggests that encouraging healthy lifestyle changes should play an important role in prostate cancer management.

Prostate cancer is the most frequently diagnosed form of cancer, affecting one in six men during their lifetime. While incidence of prostate cancer has greatly increased in the United States, Sweden, and other Western countries in recent decades, the likelihood that a newly diagnosed man in these countries will die from the disease has declined. The researchers attribute this to the widespread use of the prostate-specific antigen (PSA) test, which has resulted in a higher proportion of men diagnosed with lower-risk forms of the disease.

The researchers examined causes of death among prostate cancer cases recorded in the U.S. Surveillance, Epidemiology, and End Results Program (over 490,000 men from 1973 to 2008) and the nationwide Swedish Cancer and Cause of Death registries (over 210,000 men from 1961 to 2008).

The results showed that during the study period, prostate cancer accounted for 52% of all reported deaths in Sweden and 30% of reported deaths in the United States among men with prostate cancer; however, only 35% of Swedish men and 16% of U.S. men diagnosed with prostate cancer died from this disease. In both populations, the risk of prostate cancer-specific death declined, while the risk of death from heart disease and non-prostate cancer remained constant. The five-year cumulative incidence of death from prostate cancer was 29% in Sweden and 11% in the United States.
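The two kinds of percentages in the preceding paragraph measure different things: the share of observed deaths that were due to prostate cancer, versus the share of all diagnosed men who died of it (many diagnosed men were still alive, or died of other causes). A toy calculation with invented numbers shows why the two figures diverge.

```python
# Toy illustration (invented numbers, not study data) of the two measures
# quoted above for a cohort of men diagnosed with prostate cancer.

diagnosed = 1000        # men diagnosed with prostate cancer
died_prostate = 160     # deaths from prostate cancer during follow-up
died_other = 340        # deaths from heart disease, other cancers, etc.
total_deaths = died_prostate + died_other

share_of_deaths = died_prostate / total_deaths   # prostate cancer's share of reported deaths
share_of_diagnosed = died_prostate / diagnosed   # share of all diagnosed men who died of it

print(f"Share of deaths due to prostate cancer: {share_of_deaths:.0%}")     # 32%
print(f"Diagnosed men who died of the disease:  {share_of_diagnosed:.0%}")  # 16%
```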

Death rates from prostate cancer varied by age and calendar year of diagnosis, with the highest number of deaths from the disease among men diagnosed at older ages and those diagnosed in the earlier years of the surveys (especially in the years before the introduction of PSA screening).

Ref:

http://www.sciencedaily.com/releases/2012/07/120726135230.htm

Read Full Post »

Reporter: Prabodh Kandala, PhD

Researchers in Newcastle and Singapore have identified a new type of white blood cell which activates a killing immune response to an external source — providing a new potential target for vaccines for conditions such as cancer or Hepatitis B.

Publishing in the journal Immunity, the team of researchers from Newcastle University in collaboration with A*STAR’s Singapore Immunology Network (SIgN) describe a new human tissue dendritic cell with cross-presenting function.

Dendritic cells (DCs) are a type of white blood cell that orchestrate our body’s immune responses to infectious agents such as bacteria and viruses, as well as cancer cells. They are also very important for eliciting the immune response generated by vaccines.

DCs kick start an immune response by presenting small fragments of molecules from micro-organisms such as bacteria and viruses, or from vaccines or tumours, called antigens on their surface. This leads to activation of another white blood cell subset called T cells, which specialise in killing cells and are crucial for eliminating cancerous or infected cells. Most cells are only able to present antigens from within themselves, and so will only elicit an immune response if they are infected themselves. Only a specialised subset of DCs is able to generate a response to an external source of antigen, for example bacteria, vaccines and tumours.

The identity of human tissue DCs that are capable of presenting external antigen to activate the cell-killing response by T cells — a process termed ‘cross-presentation’ — has remained a mystery. Their discovery, as revealed by this research, will help scientists to design better targeted vaccine strategies to treat cancer and infections such as Hepatitis B.

“These are the cells we need to be targeting for anti-cancer vaccines,” said Dr Muzlifah Haniffa, a Wellcome Trust Intermediate Fellow and Senior Clinical Lecturer at Newcastle University. “Our discovery offers an accessible, easily targetable system which makes the most of the natural ability of the cell.” The researchers also showed for the first time that dendritic cell subsets are conserved between species and have in effect created a map, facilitating the translation of mouse studies to the human immune system.

“The cross-species map is in effect a Rosetta stone that deciphers the language of mouse into human,” explains Matthew Collin, Professor of Haematology from Newcastle University.

In the paper the researchers describe how the cross-presenting DCs were first isolated from surplus plastic surgery skin which was digested to melt the gelatinous collagen to isolate the cells. This research will have significant impact on the design of vaccines and other targeted immunotherapies.

The Rosetta Stone of our immune system: Mapping Human and Mouse dendritic cells

The Newcastle University team, in collaboration with A*STAR’s Singapore Immunology Network (SIgN), has for the first time aligned the dendritic cell subsets between mouse and human, allowing accurate translation of mouse studies into the human setting.

The researchers isolated the dendritic cells from human blood and skin and those from mouse blood, lung and liver. Using gene expression analysis, they identified gene signatures for each human dendritic cell subset. Mouse orthologues of these genes were identified and a computational analysis was performed to match subsets across species.
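As a rough illustration of that matching step, the sketch below is a generic stand-in, not the authors' pipeline: it assumes per-subset expression matrices and a human-to-mouse orthologue table are already in hand, and scores every human-mouse subset pair by Spearman correlation over shared signature genes.

```python
# Generic sketch of cross-species subset matching (not the authors' actual pipeline).
# Assumes pandas DataFrames with rows = signature genes and columns = DC subsets,
# plus a dict mapping human gene symbols to their mouse orthologue symbols.

import pandas as pd

def match_subsets(human_expr: pd.DataFrame,
                  mouse_expr: pd.DataFrame,
                  orthologues: dict) -> pd.DataFrame:
    """Spearman correlation between every human and mouse DC subset,
    computed over signature genes with a known one-to-one orthologue."""
    pairs = [(h, m) for h, m in orthologues.items()
             if h in human_expr.index and m in mouse_expr.index]
    h = human_expr.loc[[h for h, _ in pairs]].reset_index(drop=True)
    m = mouse_expr.loc[[m for _, m in pairs]].reset_index(drop=True)

    corr = pd.DataFrame(index=human_expr.columns, columns=mouse_expr.columns, dtype=float)
    for hc in human_expr.columns:
        for mc in mouse_expr.columns:
            corr.loc[hc, mc] = h[hc].corr(m[mc], method="spearman")
    return corr

# Each human subset would then be aligned to the mouse subset with which it
# correlates most strongly, e.g. corr.idxmax(axis=1).
```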

Abstract:

Dendritic cell (DC)-mediated cross-presentation of exogenous antigens acquired in the periphery is critical for the initiation of CD8+ T cell responses. Several DC subsets are described in human tissues but migratory cross-presenting DCs have not been isolated, despite their potential importance in immunity to pathogens, vaccines, and tumors and tolerance to self. Here, we identified a CD141hi DC present in human interstitial dermis, liver, and lung that was distinct from the majority of CD1c+ and CD14+ tissue DCs and superior at cross-presenting soluble antigens. Cutaneous CD141hi DCs were closely related to blood CD141+ DCs, and migratory counterparts were found among skin-draining lymph node DCs. Comparative transcriptomic analysis with mouse showed tissue DC subsets to be conserved between species and permitted close alignment of human and mouse DC subsets. These studies inform the rational design of targeted immunotherapies and facilitate translation of mouse functional DC biology to the human setting.

Ref:

http://www.sciencedaily.com/releases/2012/07/120728151028.htm

http://www.cell.com/immunity/retrieve/pii/S1074761312002798

Read Full Post »

Reporter: Prabodh Kandala, PhD

Using a new assay method to study tumor cells, researchers at the University of California, San Diego School of Medicine and UC San Diego Moores Cancer Center have found evidence of clonal evolution in chronic lymphocytic leukemia (CLL). The assay method distinguishes features of leukemia cells that indicate whether the disease will be aggressive or slow-moving, a key factor in when and how patients are treated.

The findings are published in the July 26, 2012 First Edition online issue of Blood.

The progression of CLL is highly variable, dependent upon the rate and effects of accumulating monoclonal B cells in the blood, marrow, and lymphoid tissues. Some patients are symptom-free for years and do not require treatment, which involves the use of drugs that can cause significant side effects and are not curative. In other patients, however, CLL is relatively aggressive and demands therapeutic intervention soon after diagnosis.

“Our study shows that there may not be a sharp dividing line between the more aggressive and less aggressive forms of CLL,” said Thomas J. Kipps, MD, PhD, Evelyn and Edwin Tasch Chair in Cancer Research and senior author of the study. “Instead, it seems that over time the leukemia cells of patients with indolent disease begin to use genes similar to those that are generally used by CLL cells of patients with aggressive disease. In other words, prior to requiring therapy, the patterns of genes expressed by CLL cells appear to converge, regardless of whether or not the patient had aggressive versus indolent disease at diagnosis.”

Existing markers for aggressive or indolent disease are mostly fixed and have declining predictive value the longer the patient is from his or her initial diagnosis. When the blood sample is collected, these markers cannot reliably predict whether a CLL patient will need therapy soon, particularly when the patient has had the diagnosis of CLL for many years.

Kipps and colleagues studied thousands of genes, particularly those that code for proteins, in a group of 130 CLL patients with varying risks of disease progression. They identified 38 prognostic subnetworks of interacting genes and proteins that, at the time of sample collection, indicate the relative aggressiveness of the disease and predict when the patient will require therapy. They confirmed their work using the method on two other, smaller CLL patient cohorts in Germany and Italy.

The subnetworks offer greater predictive value because they are based not on expression levels of individual genes or proteins, but on how they dynamically interact and change over time, influencing the course of the CLL and patient symptoms.
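One simple way to picture subnetwork-level scoring, greatly simplified relative to the published method, is to collapse each subnetwork into a single per-patient score, for example the mean z-scored expression of its member genes, and use those scores as features for a downstream risk model. The sketch below illustrates that idea with hypothetical inputs; it does not capture the dynamic, interaction-based weighting the study describes.

```python
# Minimal illustration of subnetwork-level scoring (a simplification, not the
# published method). Expression matrix: rows = patients, columns = genes.

import pandas as pd

def subnetwork_scores(expr: pd.DataFrame, subnetworks: dict) -> pd.DataFrame:
    """Summarize each subnetwork as the mean z-scored expression of its member genes."""
    z = (expr - expr.mean()) / expr.std(ddof=0)         # z-score each gene across patients
    scores = {}
    for name, genes in subnetworks.items():
        members = [g for g in genes if g in z.columns]  # ignore genes missing from the array
        scores[name] = z[members].mean(axis=1)          # one score per patient
    return pd.DataFrame(scores)

# Hypothetical usage: an expression matrix for 130 patients and a dict of 38
# curated subnetworks. The resulting per-patient scores could then feed a
# survival model (e.g. Cox regression) of time from sample collection to therapy.
# scores = subnetwork_scores(expr, subnetworks)
```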

“In a sense, we looked at families rather than individuals,” said Kipps. “If you find in an interconnected family where most genes or proteins are expressed at higher levels, it becomes more likely that these genes and proteins have functional significance.”

He added that while the subnetworks abound in data, their complexity actually makes them easy to interpret and understand. “It’s like when you look out of a window and see the sky, clouds, trees, people, cars. You’re getting tremendous amounts of information that individually doesn’t tell you much. But when you look at the scene as a whole, you see patterns and networks. This work is similar. We’re taking all of the individual gene expression patterns and making sense of them as a whole. We’re more able to more clearly see how they control and regulate function.”

The findings help define how CLL — and perhaps other cancers — evolve over time, becoming more aggressive and deadly. “It’s as if each tumor has a clock which determines how frequently it may acquire the chance changes that make it behave more aggressively. Although the rates can vary, it appears that tumors march down similar pathways, which converge over time to a point where they become aggressive enough to require therapy.”

The study may alter how scientists think about CLL and how clinicians treat the disease: whether it is better to wait for later stages of the disease when tumor cells are more fragile and easier to kill, or treat early-stage indolent tumor cells aggressively, when they are fewer in number but harder to find and more resistant to therapy.

 

Abstract:

The clinical course of patients with chronic lymphocytic leukemia (CLL) is heterogeneous. Several prognostic factors have been identified that can stratify patients into groups that differ in their relative tendency for disease progression and/or survival. Here, we pursued a subnetwork-based analysis of gene expression profiles to discriminate between groups of patients with disparate risks for CLL progression. From an initial cohort of 130 patients, we identified 38 prognostic subnetworks that could predict the relative risk for disease progression requiring therapy from the time of sample collection, more accurately than established markers. The prognostic power of these subnetworks then was validated on two other cohorts of patients. We noted reduced divergence in gene expression between leukemia cells of CLL patients classified at diagnosis with aggressive versus indolent disease over time. The predictive subnetworks vary in levels of expression over time but exhibit increased similarity at later time points prior to therapy, suggesting that degenerate pathways apparently converge into common pathways that are associated with disease progression. As such, these results have implications for understanding cancer evolution and for the development of novel treatment strategies for patients with CLL.

 

Ref:

http://www.sciencedaily.com/releases/2012/07/120727154020.htm

http://bloodjournal.hematologylibrary.org/content/early/2012/07/26/blood-2012-03-416461

Read Full Post »

 

Reporter: Aviva Lev-Ari, PhD, RN

July 20, 2012

The 2012 Intelligent Systems for Molecular Biology conference held this week in Long Beach, Calif., marked the 20th anniversary of what is considered the largest meeting in computational biology.

As part of the festivities at this year’s meeting, two founding members of the International Society for Computational Biology, which plans and manages ISMB, presented an anniversary keynote.

Lawrence Hunter, who directs the computational bioscience program and the center for computational pharmacology at the University of Colorado School of Medicine, and Richard Lathrop, a professor in the department of computer science at the University of California, Irvine, delivered the keynote, which traced the early days of the meeting, with its initial focus on artificial intelligence, to its current focus on computational biology.

BioInform caught up with Hunter, who was the first president of ISCB, after his talk to discuss the history of the conference and possible future directions for the community. What follows is an edited version of the conversation.

It’s been 20 years since the first ISMB. How has the meeting evolved over the years?

ISMB has gone through several stages. In the very beginning it was almost entirely computer scientists and there were really clear themes that emerged from the meeting. [For example, at] the third meeting … half of the papers were about hidden Markov models. As the field has grown and changed, there is a much less clear division between the computer scientist and the biologist. We’ve really become computational biologists and so the level of biological sophistication has gone up and the field has diversified so that there is really rarely a clear theme anymore; it’s multifaceted and diverse.

Another thing that’s changed is the orientation toward medicine. In the early days of the field, we were grappling with much more basic science problems and while there is still a lot of that, there is a much higher proportion of work that’s translational or clinical. Whether it’s drug repositioning, where I think there is real potential to change the pharmaceutical industry based on the kind of informatics work that’s done here, to an increase in the use of clinical data in the techniques that are being proposed here — whether it’s text mining or patient records or formalin-fixed, paraffin-embedded samples and the challenges in doing transcriptomics in those kinds of clinical samples — we are much more tightly connected to human health than we were 20 years ago.

Is that a good thing? Does the focus on health mean that bioinformatics tool development in other areas is being neglected?

I think it’s a good thing. Everybody wants to be relevant. Scientists don’t want to do things in the abstract; they want to do things that make a difference in people’s lives. One of the biggest ways to make a difference in people’s lives with bioinformatics is through medicine or pharmacology. There has never been a big contingent of folks working in agriculture but there are always a few … so far, the agricultural impacts have been smaller than the medical ones. And there are plenty of people doing basic science who are trying to understand how life works, [and] not so much trying to affect disease. I think there is a good balance and it will shift around from time to time. It would be great if there were more agricultural kinds of applications … [but] there is much more funding for things with medical applications than there are for ones with ag applications.

Following up on comments about funding, do you find that researchers have gotten better at including a budget for informatics in their grant proposals?

I think reviewers demand pretty sophisticated informatics in a lot of grants. For NIH grants, especially for the bigger, more prestigious ones — R01s or program projects or the [Clinical and Translational Science Awards] — all of those require a pretty good degree of informatics sophistication, I think, in order to do well. Looking over the last 20 years, one thing that has improved, although it could still use work, is study sections at [the National Institutes of Health], the review panels, becoming more sophisticated about computation. For a long time there was no standing study section at NIH that was specifically computational. Now there are two. There is also increasing sophistication on other study sections, so if you sit on an NIGMS panel, for example, there are going to be at least a couple of people who are pretty sophisticated about the informatics looking at those applications.

For the really large center proposals, and I am thinking now about the CTSA awards, there was such an emphasis on the informatics in the program announcement from the NIH that it changed institutions. Medical schools started adding divisions or departments of biomedical informatics in response to NIH requirements that the grant proposals be more sophisticated.

You mentioned earlier that ISMB has evolved since it first launched. Do you think that the meeting and ISCB in general have stayed true to their initial mandates?

It’s evolved. When we first put it together, we were thinking about artificial intelligence and robotics in molecular biology. It was much narrower. There were already conferences on, say, biological databases and we didn’t think that it was our topic. There was also the RECOMB [Conference on Research in Computational Molecular Biology] community, the algorithms community, and we separated from them too so that original vision was much narrower. ISMB has turned into a much more inclusive conference and ISCB a more inclusive society.

ISCB and ISMB both start with ‘IS’ but the ‘IS’es are different. ISMB, the conference, was about intelligent systems, that is, about AI. ISCB is the International Society for Computational Biology; it’s a much broader mandate. It includes databases and algorithms and visualization and all kinds of things that aren’t intelligent systems. That’s been a big change from the initial vision and, I think, ultimately a good one. I think the boundary lines were not productive and while I am still very interested in the artificial intelligence question, the blending of people working from different areas of computer science all sort of pulling towards solving problems motivated by biology has really been productive and so I am glad we’ve changed a bit from the initial vision.

Is there still room for AI?

The AI stuff has never gone away. There is tons of machine learning here, text mining, ontology, and knowledge representation here. One of the reasons I think this conference and this field and the original AI-in-molecular-biology idea has been so successful is that the technology works. It works in molecular biology almost better than it works in any other application area. So there is no shortage of intelligent systems at ISMB. It’s just more than that now.

Are there any computational issues that the community was dealing with 20 years ago that are still being dealt with today?

We go in cycles. If you go back to the very early ISMBs there was a lot of sequence analysis and alignment questions and relatively little dynamics. Fast forward 10 years, everything was microarrays and time series and concentration levels and sequence analysis was a boring solved problem. Fast forward 10 more years and we’ve gone back in a circle. Right now, microarrays are kind of a boring solved problem and sequence analysis is really interesting and hot again. The technology changes and so the problems change, nothing ever seems to stay solved. Either our ability to peer into the biology lets us know that we were naïve or oversimplistic about something that we now need to go back and look at much more carefully. For example, the assumption that only protein-coding bits of the genome were transcribed underlay a lot of science for a long time. Now it turns out that a huge portion of the genome is transcribed and there is a lot of action going on in RNA editing and microRNAs, and long non-coding RNAs are starting to look interesting again. As you look deeper, more interesting problems come up that you didn’t notice when you were making assumptions about how biology works.

It’s rare in our field that we prove some technique optimal. The best we can do is prove that my way of doing it is better than X,Y, and Z and so it’s a step forward but that always leaves the possibility that there is yet a still better way to do it and we still see people who are working on topics that have been well studied for a long time [such as] splice site identification, transcription start sites, structure prediction, function prediction problems that have been studied for a long time, yet new methods that are generally better come out. Even after working on it for 20 years, there is still the potential to do better.

Looking ahead 20 years from now, what do you see as the future of bioinformatics?

Let me take [a prediction] from my keynote. I think that we will see computer programs as increasingly independent and individuated intellectual partners. Right now, everybody using, say, Cufflinks uses the same version and it does the same thing every time. 20 years from now, I would expect that my computer program would be so customized to my way of thinking and what’s going on in my lab that the same computer program would do something different in somebody else’s lab. That doesn’t mean it’s not reproducible, we’ll know what it did and why, but that rather than having tens of thousands of copies that do the same thing, it’ll be more like having a computational member of the lab. It will know what we are after and what our interests are and what my collaborators want and who my competitors are and be much more individualized. I am not going to say that we’ll have a program that everyone thinks is a mind 20 years from now … but I think along the path to developing genuine artificial intelligence, all minds are unique, everybody is different, and that’s going to be increasingly true to programs too.

http://www.genomeweb.com//node/1108711?hq_e=el&hq_m=1314078&hq_l=7&hq_v=e1df6f3681

Uduak Grace Thomas is the editor of GenomeWeb’s BioInform. She covers bioinformatics, computational biology, and life science informatics. E-mail her here or follow her GenomeWeb Twitter account at @BioInformGW.

 

Read Full Post »

Stanford Study Finds miRNA-320a a Broad Regulator of Glycolysis, Potential Drug Target

 Reporter: Aviva Lev-Ari, PhD, RN

A study by Stanford researchers has found that microRNA-320a appears to regulate glycolysis in response to oxidative stress in several biological systems, including lung cancer and wasting of disused muscle.

The Stanford team was initially interested in better understanding the wasting of diaphragm muscles due to mechanical ventilation, but expanded its study to look at lung cancer and an experimental in vitro model of oxidative stress, as well as the similarity of pathogenic glycolytic pathways across these biological systems.

The group profiled miRNA and protein expression in samples from human diaphragm muscles under mechanical ventilation to identify miRNAs associated with the rate-limiting glycolytic enzyme, muscle-type phosphofructokinase (PFKm), without which glycolysis is impaired.

The group initially identified 28 miRNAs that were significantly downregulated and three that were upregulated in the ventilated human diaphragm samples. Using predictive software, the group pinpointed miR-320a as being potentially involved in the regulation of PFKm.
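Target-prediction software of this kind generally looks for complementarity between a miRNA's seed region (nucleotides 2-8) and sites in a transcript's 3′ untranslated region, often combined with conservation and site-context scores. The sketch below is a deliberately minimal stand-in for that idea; the miRNA and UTR sequences are invented placeholders, not the real miR-320a or PFKm sequences.

```python
# Deliberately minimal stand-in for miRNA target prediction (illustrative only;
# real tools such as TargetScan also weigh conservation, site context, and
# binding energy). miRNA sequence is expected in RNA letters (A/U/G/C),
# the UTR in DNA letters (A/T/G/C).

def seed_match_count(mirna_seq: str, utr_seq: str) -> int:
    """Count perfect 7-mer seed matches (miRNA nucleotides 2-8) in a 3' UTR."""
    complement = {"A": "T", "U": "A", "G": "C", "C": "G"}   # RNA base -> paired DNA base
    seed = mirna_seq.upper()[1:8]                           # nucleotides 2-8 of the miRNA
    # A canonical target site is the reverse complement of the seed, read 5'->3'.
    site = "".join(complement[nt] for nt in reversed(seed))
    utr = utr_seq.upper()
    return sum(1 for i in range(len(utr) - 6) if utr[i:i + 7] == site)

# Both sequences below are invented placeholders, not the real miR-320a or PFKm 3' UTR.
example_mirna = "ACGUACGUACGUACGUACGUAC"
example_utr = "TTACGTACGTTTTACGTACGAA"
print(seed_match_count(example_mirna, example_utr))   # finds 2 seed-match sites
```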

To validate miR-320a, the researchers looked at all three experimental systems — samples of diaphragm tissue, lung cancer, and an in vitro cell model under oxidative stress. In all three, miR-320a was down-regulated in the samples versus the control.

The group also confirmed that miR-320a influences PFKm in each system, and further demonstrated that miR-320a knockdown increased lactate levels in vitro and that higher miR-320a levels reduced lactate levels in in vivo mouse experiments.

The group wrote that the study shows for the first time that glycolytic activity “is increased in diaphragm tissue that is noncontractile as a result of full mechanical ventilator support.” The results also confirmed that glycolysis up-regulation, or the Warburg effect, is present in lung adenocarcinoma, and that both otherwise divergent disorders are in fact linked by the influence of miR-320a.

The finding has implications for cancer treatment, as well as more effective treatment for dysfunctional diaphragm muscles following breathing support using a ventilator, according to the team, which published the study online in the FASEB Journal earlier this month.

Glycolysis is the process of converting sugar into energy, and is implicated in the growth of some cancers through a process called the Warburg effect. To the Stanford team, the Warburg effect seen in lung adenocarcinoma “appears to closely mimic” that of dysfunctional human diaphragm tissue after mechanical ventilation therapy, a condition called ventilator-induced diaphragm dysfunction, or VIDD.

The Stanford researchers claim that their study shows that these very divergent biological systems share the same glycolysis regulatory apparatus involving miR-320a, which the authors believe they are the first to identify.

Additionally, “miR-320 regulation of glycolysis may represent a general mechanism underlying other clinical diseases that are associated with changes in energy supply,” the researchers wrote, citing conditions ranging from cardiac ischemia to insulin resistance.

In cancer specifically, down-regulation of miR-320a has been previously reported in a number of malignancies, the group reported. Coupled with the fact that the Warburg effect is thought to be important in many cancers, and the results of the group’s study in adenocarcinoma, this suggests that miR-320a “may be directly related” to the development of cancer, and that the associated glycolysis may be a potential drug target.

FASEB J. 2012 Jul 5. [Epub ahead of print]

Oxidative stress-responsive microRNA-320 regulates glycolysis in diverse biological systems.

Tang H, Lee M, Sharpe O, Salamone L, Noonan EJ, Hoang CD, Levine S, Robinson WH, Shrager JB.

Source

Division of Thoracic Surgery, Department of Cardiothoracic Surgery.

Abstract

Glycolysis is the initial step of glucose catabolism and is up-regulated in cancer cells (the Warburg effect). Such shifts toward a glycolytic phenotype have not been explored widely in other biological systems, and the molecular mechanisms underlying the shifts remain unknown. With proteomics, we observed increased glycolysis in disused human diaphragm muscle. In disused muscle, lung cancer, and H2O2-treated myotubes, we show up-regulation of the rate-limiting glycolytic enzyme muscle-type phosphofructokinase (PFKm, >2-fold, P<0.05) and accumulation of lactate (>150%, P<0.05). Using microRNA profiling, we identify miR-320a as a regulator of PFKm expression. Reduced miR-320a levels (to ∼50% of control, P<0.05) are associated with the increased PFKm in each of these diverse systems. Manipulation of miR-320a levels both in vitro and in vivo alters PFKm and lactate levels in the expected directions. Further, miR-320a appears to regulate oxidative stress-induced PFKm expression, and reduced miR-320a allows greater induction of glycolysis in response to H2O2 treatment. We show that this microRNA-mediated regulation occurs through PFKm’s 3′ untranslated region and that Ets proteins are involved in the regulation of PFKm via miR-320a. These findings suggest that oxidative stress-responsive microRNA-320a may regulate glycolysis broadly within nature.

Read Full Post »
