
Archive for the ‘Pharmaceutical Drug Discovery’ Category

Eight Subcellular Pathologies driving Chronic Metabolic Diseases – Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics: Impact on Pharmaceuticals in Use

Curators:

 

THE VOICE of Aviva Lev-Ari, PhD, RN

In this curation we wish to present two breakthrough goals:

Goal 1:

Exposition of a new direction of research leading to a more comprehensive understanding of the Metabolic Dysfunctional Diseases implicated in the emergence of the two leading causes of human mortality in the World in 2023: (a) Cardiovascular Diseases, and (b) Cancer

Goal 2:

Development of Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics for these eight subcellular causes of chronic metabolic diseases. It is anticipated that these methods will have an impact on the Pharmaceuticals of the future, changing the current treatment protocols for Metabolic Dysfunctional Diseases.

According to Dr. Robert Lustig, M.D., an American pediatric endocrinologist and Professor Emeritus of Pediatrics in the Division of Endocrinology at the University of California, San Francisco, where he specialized in neuroendocrinology and childhood obesity, there are eight subcellular pathologies that drive chronic metabolic diseases.

These eight subcellular pathologies cannot be measured at the present time.

In this curation we will attempt to explore methods of measurement for each of these eight pathologies by harnessing the promise of the emerging field known as Bioelectronics.

The eight currently unmeasurable subcellular pathologies that drive chronic metabolic diseases

  1. Glycation
  2. Oxidative Stress
  3. Mitochondrial dysfunction [beta-oxidation Ac CoA malonyl fatty acid]
  4. Insulin resistance/sensitivity [more important than BMI], known as a driver of cancer development
  5. Membrane instability
  6. Inflammation in the gut [mucin layer and tight junctions]
  7. Epigenetics/Methylation
  8. Autophagy [AMPKbeta1 improvement in health span]

Diseases that are not Diseases: no drugs for them, only diet modification will help

Image source

Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease

https://www.youtube.com/watch?v=Ee_uoxuQo0I

 

Exercise will not undo Unhealthy Diet

Image source

Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease

https://www.youtube.com/watch?v=Ee_uoxuQo0I

 

These eight Subcellular Pathologies driving Chronic Metabolic Diseases are the focus of our exploration of the promise of Bioelectronics, with two pursuits:

  1. Will Bioelectronics be deemed helpful in the measurement of each of the eight pathological processes that underlie and drive the chronic metabolic syndrome(s) and disease(s)?
  2. IF we are able to suggest new measurements for currently unmeasurable health-harming processes, THEN we will attempt to conceptualize new therapeutic targets and new modalities for therapeutics delivery – WE ARE HOPEFUL

In the Bioelectronics domain we are inspired by the work of the following three research sources:

  1. Biological and Biomedical Electrical Engineering (B2E2) at Cornell University, School of Engineering https://www.engineering.cornell.edu/bio-electrical-engineering-0
  2. Bioelectronics Group at MIT https://bioelectronics.mit.edu/
  3. The work of Michael Levin @Tufts, The Levin Lab
Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and Tufts Center for Regenerative and Developmental Biology. Wikipedia
Born: 1969 (age 54 years), Moscow, Russia
Education: Harvard University (1992–1996), Tufts University (1988–1992)
Awards: Cozzarelli prize (2020)
Doctoral advisor: Clifford Tabin
Most recent 20 Publications by Michael Levin, PhD
SOURCE
  • The nonlinearity of regulation in biological networks. npj Systems Biology and Applications 9(1), 1 Dec 2023. Co-authors: Manicka S, Johnson K, Levin M
  • Toward an ethics of autopoietic technology: Stress, care, and intelligence. BioSystems 231, 1 Sep 2023. Co-authors: Witkowski O, Doctor T, Solomonova E
  • Closing the Loop on Morphogenesis: A Mathematical Model of Morphogenesis by Closed-Loop Reaction-Diffusion. Frontiers in Cell and Developmental Biology 11:1087650, 14 Aug 2023. Co-authors: Grodstein J, McMillen P, Levin M
  • Biochim Biophys Acta Gen Subj 1867(10):130440, 30 Jul 2023. Co-authors: Cervera J, Levin M, Mafe S
  • Regulative development as a model for origin of life and artificial life studies. BioSystems 229, 1 Jul 2023. Co-authors: Fields C, Levin M
  • The Yin and Yang of Breast Cancer: Ion Channels as Determinants of Left–Right Functional Differences. International Journal of Molecular Sciences 24(13), 1 Jul 2023. Co-authors: Masuelli S, Real S, McMillen P
  • Bioelectricidad en agregados multicelulares de células no excitables – modelos biofísicos [Bioelectricity in multicellular aggregates of non-excitable cells – biophysical models]. Revista Española de Física 32(2), Jun 2023. Co-authors: Cervera J, Levin M, Mafé S
  • Bioelectricity: A Multifaceted Discipline, and a Multifaceted Issue! Bioelectricity 5(2):75, 1 Jun 2023. Co-authors: Djamgoz MBA, Levin M
  • Control Flow in Active Inference Systems – Part I: Classical and Quantum Formulations of Active Inference. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications 9(2):235-245, 1 Jun 2023. Co-authors: Fields C, Fabrocini F, Friston K
  • Control Flow in Active Inference Systems – Part II: Tensor Networks as General Models of Control Flow. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications 9(2):246-256, 1 Jun 2023. Co-authors: Fields C, Fabrocini F, Friston K
  • Darwin’s agential materials: evolutionary implications of multiscale competency in developmental biology. Cellular and Molecular Life Sciences 80(6), 1 Jun 2023. Co-authors: Levin M
  • Morphoceuticals: Perspectives for discovery of drugs targeting anatomical control mechanisms in regenerative medicine, cancer and aging. Drug Discovery Today 28(6), 1 Jun 2023. Co-authors: Pio-Lopez L, Levin M
  • Cellular signaling pathways as plastic, proto-cognitive systems: Implications for biomedicine. Patterns 4(5), 12 May 2023. Co-authors: Mathews J, Chang A, Devlin L
  • Making and breaking symmetries in mind and life. Interface Focus 13(3), 14 Apr 2023. Co-authors: Safron A, Sakthivadivel DAR, Sheikhbahaee Z
  • The scaling of goals from cellular to anatomical homeostasis: an evolutionary simulation, experiment and analysis. Interface Focus 13(3), 14 Apr 2023. Co-authors: Pio-Lopez L, Bischof J, LaPalme JV
  • The collective intelligence of evolution and development. Collective Intelligence 2(2):263391372311683, SAGE Publications, Apr 2023. Co-authors: Watson R, Levin M
  • Bioelectricity of non-excitable cells and multicellular pattern memories: Biophysical modeling. Physics Reports 1004:1-31, 13 Mar 2023. Co-authors: Cervera J, Levin M, Mafe S
  • There’s Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-Scale Machines. Biomimetics 8(1), 1 Mar 2023. Co-authors: Bongard J, Levin M
  • Transplantation of fragments from different planaria: A bioelectrical model for head regeneration. Journal of Theoretical Biology 558, 7 Feb 2023. Co-authors: Cervera J, Manzanares JA, Levin M
  • Bioelectric networks: the cognitive glue enabling evolutionary scaling from physiology to mind. Animal Cognition, 1 Jan 2023. Co-authors: Levin M
  • Biological Robots: Perspectives on an Emerging Interdisciplinary Field. Soft Robotics, 1 Jan 2023. Co-authors: Blackiston D, Kriegman S, Bongard J
  • Cellular Competency during Development Alters Evolutionary Dynamics in an Artificial Embryogeny Model. Entropy 25(1), 1 Jan 2023. Co-authors: Shreesha L, Levin M
  • BIOLOGICAL JOURNAL OF THE LINNEAN SOCIETY 138(1):141, 1 Jan 2023. Co-authors: Clawson WP, Levin M
  • Future medicine: from molecular pathways to the collective intelligence of the body. Trends in Molecular Medicine, 1 Jan 2023. Co-authors: Lagasse E, Levin M

THE VOICE of Dr. Justin D. Pearlman, MD, PhD, FACC

PENDING

THE VOICE of  Stephen J. Williams, PhD

Ten Takeaway Points of Dr. Lustig’s talk on the role of diet in the incidence of Type II Diabetes

 

  1. 25% of US children have fatty liver
  2. Type II diabetes can manifest from fatty liver, with 151 million people worldwide affected, rising to a projected 568 million in 7 years
  3. A common myth is that diabetes is due to an overweight condition driving the metabolic disease
  4. There is a trend of ‘lean’ diabetes, or diabetes in lean people; therefore body mass index is not a reliable biomarker of diabetes risk
  5. Thirty percent of ‘obese’ people just have high subcutaneous fat; the visceral fat is more problematic
  6. There are people who are ‘fat’ but insulin sensitive while having growth hormone receptor defects. This points to issues related to metabolic state other than insulin, and potentially to the insulin-like growth factors
  7. At any BMI some patients are insulin sensitive while some are resistant
  8. Visceral fat accumulation may be due more to a chronic stress condition
  9. Fructose can decrease liver mitochondrial function
  10. A methionine- and choline-deficient diet can lead to rapid NASH development

 

Read Full Post »

Artificial Intelligence (AI) Used to Successfully Determine Most Likely Repurposed Antibiotic Against Deadly Superbug Acinetobacter baumannii

Reporter: Stephen J. Williams, Ph.D.

The World Health Organization has identified 3 superbugs, or infective microorganisms displaying resistance to common antibiotics and multidrug resistance, as threats to humanity:

Three bacteria were listed as critical:

  • Acinetobacter baumannii bacteria that are resistant to important antibiotics called carbapenems. Acinetobacter baumannii are highly-drug resistant bacteria that can cause a range of infections for hospitalized patients, including pneumonia, wound, or blood infections.
  • Pseudomonas aeruginosa, which are resistant to carbapenems. Pseudomonas aeruginosa can cause skin rashes and ear infections in healthy people but also severe blood infections and pneumonia when contracted by sick people in the hospital.
  • Enterobacteriaceae — a family of bacteria that live in the human gut — that are resistant to both carbapenems and another class of antibiotics, cephalosporins.

 

The development of antibiotics against these pathogens has been designated a critical need. Now researchers at McMaster University and collaborators in the US have used artificial intelligence (AI) to screen a library of over 7,000 chemicals to find a drug that could be repurposed to kill off the pathogen.

Liu et al. (1) recently published in Nature Chemical Biology the results of an AI screen to narrow down potential chemicals that could work against Acinetobacter baumannii.

Abstract

Acinetobacter baumannii is a nosocomial Gram-negative pathogen that often displays multidrug resistance. Discovering new antibiotics against A. baumannii has proven challenging through conventional screening approaches. Fortunately, machine learning methods allow for the rapid exploration of chemical space, increasing the probability of discovering new antibacterial molecules. Here we screened ~7,500 molecules for those that inhibited the growth of A. baumannii in vitro. We trained a neural network with this growth inhibition dataset and performed in silico predictions for structurally new molecules with activity against A. baumannii. Through this approach, we discovered abaucin, an antibacterial compound with narrow-spectrum activity against A. baumannii. Further investigations revealed that abaucin perturbs lipoprotein trafficking through a mechanism involving LolE. Moreover, abaucin could control an A. baumannii infection in a mouse wound model. This work highlights the utility of machine learning in antibiotic discovery and describes a promising lead with targeted activity against a challenging Gram-negative pathogen.

Schematic workflow for incorporation of AI for antibiotic drug discovery for A. baumannii from 1. Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

Figure source: https://www.nature.com/articles/s41589-023-01349-8

Article Source: https://www.nature.com/articles/s41589-023-01349-8

  1. Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

 

 

For reference to WHO and lists of most pathogenic superbugs see https://www.scientificamerican.com/article/who-releases-list-of-worlds-most-dangerous-superbugs/

The finding was first reported by the BBC.

Source: https://www.bbc.com/news/health-65709834

By James Gallagher

Health and science correspondent

Scientists have used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly species of superbug.

The AI helped narrow down thousands of potential chemicals to a handful that could be tested in the laboratory.

The result was a potent, experimental antibiotic called abaucin, which will need further tests before being used.

The researchers in Canada and the US say AI has the power to massively accelerate the discovery of new drugs.

It is the latest example of how the tools of artificial intelligence can be a revolutionary force in science and medicine.

Stopping the superbugs

Antibiotics kill bacteria. However, there has been a lack of new drugs for decades and bacteria are becoming harder to treat, as they evolve resistance to the ones we have.

More than a million people a year are estimated to die from infections that resist treatment with antibiotics.

The researchers focused on one of the most problematic species of bacteria – Acinetobacter baumannii, which can infect wounds and cause pneumonia.

You may not have heard of it, but it is one of the three superbugs the World Health Organization has identified as a “critical” threat.

It is often able to shrug off multiple antibiotics and is a problem in hospitals and care homes, where it can survive on surfaces and medical equipment.

Dr Jonathan Stokes, from McMaster University, describes the bug as “public enemy number one” as it’s “really common” to find cases where it is “resistant to nearly every antibiotic”.

 

Artificial intelligence

To find a new antibiotic, the researchers first had to train the AI. They took thousands of drugs where the precise chemical structure was known, and manually tested them on Acinetobacter baumannii to see which could slow it down or kill it.

This information was fed into the AI so it could learn the chemical features of drugs that could attack the problematic bacterium.

The AI was then unleashed on a list of 6,680 compounds whose effectiveness was unknown. The results – published in Nature Chemical Biology – showed it took the AI an hour and a half to produce a shortlist.

The researchers tested 240 in the laboratory, and found nine potential antibiotics. One of them was the incredibly potent antibiotic abaucin.
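
To make the shape of this workflow concrete, here is a minimal, illustrative Python sketch of a fingerprint-based virtual screen of the same general kind: train a classifier on compounds with measured growth inhibition, then rank an unscreened library so that only the top-scoring candidates go to the bench. This is not the authors’ pipeline — the published study trained a neural network on its in vitro data — and the file names, column names and model choice below are hypothetical stand-ins (assuming RDKit, pandas and scikit-learn are installed).

# Illustrative sketch only -- not the pipeline used in the Nature Chemical Biology study.
# Assumes hypothetical CSV files: 'training_set.csv' with columns [smiles, inhibits_growth]
# and 'screening_library.csv' with column [smiles].
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list, n_bits=2048):
    """Convert SMILES strings into Morgan fingerprint bit vectors."""
    fingerprints = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
        fingerprints.append(list(fp))
    return fingerprints

# 1. Train on compounds whose growth inhibition against the pathogen was measured in vitro.
train = pd.read_csv("training_set.csv")
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(featurize(train["smiles"]), train["inhibits_growth"])

# 2. Score the unscreened library and keep the top-ranked compounds for laboratory testing.
library = pd.read_csv("screening_library.csv")
library["predicted_activity"] = model.predict_proba(featurize(library["smiles"]))[:, 1]
shortlist = library.sort_values("predicted_activity", ascending=False).head(240)
print(shortlist[["smiles", "predicted_activity"]])

The value of the approach lies in the ordering rather than the absolute scores: a model trained on a few thousand measured compounds lets the laboratory test a few hundred ranked candidates instead of the entire library.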

Laboratory experiments showed it could treat infected wounds in mice and was able to kill A. baumannii samples from patients.

However, Dr Stokes told me: “This is when the work starts.”

The next step is to perfect the drug in the laboratory and then perform clinical trials. He expects it could be 2030 before the first AI antibiotics are available to be prescribed.

Curiously, this experimental antibiotic had no effect on other species of bacteria, and works only on A. baumannii.

Many antibiotics kill bacteria indiscriminately. The researchers believe the precision of abaucin will make it harder for drug-resistance to emerge, and could lead to fewer side-effects.

 

In principle, the AI could screen tens of millions of potential compounds – something that would be impractical to do manually.

“AI enhances the rate, and in a perfect world decreases the cost, with which we can discover these new classes of antibiotic that we desperately need,” Dr Stokes told me.

The researchers tested the principles of AI-aided antibiotic discovery in E. coli in 2020, but have now used that knowledge to focus on the big nasties. They plan to look at Staphylococcus aureus and Pseudomonas aeruginosa next.

“This finding further supports the premise that AI can significantly accelerate and expand our search for novel antibiotics,” said Prof James Collins, from the Massachusetts Institute of Technology.

He added: “I’m excited that this work shows that we can use AI to help combat problematic pathogens such as A. baumannii.”

Prof Dame Sally Davies, the former chief medical officer for England and government envoy on anti-microbial resistance, told Radio 4’s The World Tonight: “We’re onto a winner.”

She said the idea of using AI was “a big game-changer, I’m thrilled to see the work he (Dr Stokes) is doing, it will save lives”.

Other related articles and books published in this Online Scientific Journal include the following:

Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases, Reproductive Genomic Endocrinology

(3 book series: Volume 1, 2&3, 4)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks

  • The Immune System, Stress Signaling, Infectious Diseases and Therapeutic Implications:

 

  • Series D, VOLUME 2

Infectious Diseases and Therapeutics

and

  • Series D, VOLUME 3

The Immune System and Therapeutics

(Series D: BioMedicine & Immunology) Kindle Edition.

On Amazon.com since September 4, 2017

(English Edition) Kindle Edition – as one Book

https://www.amazon.com/dp/B075CXHY1B $115

 

Bacterial multidrug resistance problem solved by a broad-spectrum synthetic antibiotic

The Journey of Antibiotic Discovery

FDA cleared Clever Culture Systems’ artificial intelligence tech for automated imaging, analysis and interpretation of microbiology culture plates speeding up Diagnostics

Artificial Intelligence: Genomics & Cancer

Read Full Post »

 

The Vibrant Philly Biotech Scene: Recent Happenings & Deals

Curator: Stephen J. Williams, Ph.D.

 

As the office and retail commercial real estate market has been drying up since the COVID pandemic, commercial real estate developers in the Philadelphia area have been turning to the health science industry and its need for lab space. This includes refurbishing old office space as well as new construction.

Gattuso secures $290M construction loan for life sciences building on Drexel campus

Source: https://www.bizjournals.com/philadelphia/news/2022/12/19/construction-loan-gattuso-drexel-life-sciences.html?utm_source=st&utm_medium=en&utm_campaign=BN&utm_content=pl&ana=e_pl_BN&j=30034971&senddate=2022-12-20

 

By Ryan Mulligan  –  Reporter, Philadelphia Business Journal

Dec 19, 2022

Gattuso Development Partners and Vigilant Holdings of New York have secured a $290 million construction loan for a major life sciences building set to be developed on Drexel University’s campus.

The funding comes from Houston-based Corebridge Financial, with an additional equity commitment from Boston-based Baupost Group, which is also a partner on the project. JLL’s Capital Markets group arranged the loan.

Plans for the University City project at 3201 Cuthbert St. carry a price tag of $400 million. The 11-story building will total some 520,000 square feet, making it the largest life sciences research and lab space in the city when it comes online.

The building at 3201 Cuthbert will rise on what had served as a recreation field used by Drexel and is located next to the Armory. Gattuso Development, which will lease the parcel from Drexel, expects to complete the project by fall 2024. Robert A.M. Stern Architects designed the building.

 

A rendering of a $400 million lab and research facility Drexel University and Gattuso Development Partners plan to build at 3201 Cuthbert St. in Philadelphia.

The building is 45% leased by Drexel and SmartLabs, an operator of life sciences labs. Drexel plans to occupy about 60,000 square feet, while SmartLabs will lease two floors totaling 117,000 square feet.

“We believe the project validates Philadelphia’s emergence as a global hub for life sciences research, and we are excited to begin construction,” said John Gattuso, the co-founder and president of Philadelphia-based Gattuso Development.

Ryan Ade, Brett Segal and Christopher Peck of JLL arranged the financing.

The project is another play in what amounts to an arms race for life sciences space and tenants in University City. Spark Therapeutics plans to build a $575 million, 500,000-square-foot gene therapy manufacturing plant on Drexel’s campus. One uCity Square, a $280 million, 400,000-square-foot life sciences building, was recently completed at 38th and Market streets. At 3151 Market St., a $307 million, 417,000-square-foot life sciences building is proposed as part of the Schuylkill Yards development.

Tmunity CEO Usman Azam departing to lead ‘stealth’ NYC biotech firm

 

By John George  –  Senior Reporter, Philadelphia Business Journal

Feb 7, 2022

The CEO of one of Philadelphia’s oldest cell therapy companies is departing to take a new job in the New York City area.

Usman “Oz” Azam, who has been CEO of Tmunity Therapeutics since 2016, will lead an unnamed biotechnology company currently operating in stealth mode.

In a posting on his LinkedIn page, Azam said, “After a decade immersed in cell therapies and immuno-oncology, I am now turning my attention to a new opportunity, and will be going back to where I started my life sciences career in neurosciences.”

Tmunity, a University of Pennsylvania spinout, is looking to apply CAR T-cell therapy, which has proved to be successful in treating liquid cancers, for the treatment of solid tumors.

Last summer, Tmunity suspended clinical testing of its lead cell therapy candidate targeting prostate cancer after two patients in the study died. Azam, in an interview with the Business Journal in June, said the company, which had grown to about 50 employees since its launch in 2015, laid off an undisclosed number of employees as a result of the setback.

Azam said on LinkedIn he is still a big believer in CAR T-cell therapy, noting Tmunity co-founder Dr. Carl June and his colleagues at Penn just published in Nature the 10-year landmark clinical outcomes study with the first CD19 CAR-T patients and programs.

“It’s just the beginning,” he stated. “I’m excited about the prospect of so many new cell- and gene-based therapies emerging in the next five to 10 years to tackle many solid and liquid tumors, and I hope we all continue to see the remarkable impact this makes on patients and families around the world.”

Azam could not be reached for comment Monday. Tmunity has engaged a search firm to identify his successor.

Tmunity, which is based in Philadelphia, has its own manufacturing operations in East Norriton. Tmunity’s founders include June and fellow Penn cell therapy pioneer Bruce Levine, who led the development of a CAR T-cell therapy now marketed by Novartis as Kymriah, a treatment for certain types of blood cancers.

In therapy using CAR-T cells, a patient’s T cells — part of their immune system — are removed and genetically modified in the laboratory. After they are re-injected into a patient, the T cells are better able to attack and destroy tumors. CAR is an acronym for chimeric antigen receptor. Chimeric antigen receptors are receptor proteins that have been engineered to give T cells their improved ability to target tumors.

Source: https://www.bizjournals.com/philadelphia/news/2022/02/07/tmunity-therapeutics-philadelphia-cell-azam-oz.html?utm_source=st&utm_medium=en&utm_campaign=BN&utm_content=pl&ana=e_pl_BN&j=30034971&senddate=2022-12-20

 

PIDC names U.S. Department of Treasury veteran, Philadelphia native as next president

 
By   –  Reporter, Philadelphia Business Journal

 

The Philadelphia Industrial Development Corp. has tapped U.S. Department of Treasury veteran Jodie Harris to be its next president.

Harris succeeds Anne Bovaird Nevins, who spent 15 years in the organization and took over as president in January 2020 before stepping down at the end of last year. Executive Vice President Sam Rhoads has been interim president.

Harris, a Philadelphia native who currently serves as director of the Community Development Financial Institutions Fund for the Department of Treasury, was picked after a regional and national search and will begin her tenure as president on June 1. She becomes the 12th head of PIDC and the first African-American woman to lead the organization.

PIDC is a public-private economic development corporation founded by the city and the Chamber of Commerce for Greater Philadelphia in 1958. It mainly uses industrial and commercial real estate projects to attract jobs, foster business opportunities and spur overall community growth. The organization has spurred over $18.5 billion in financing across its 65 years.

PIDC has its hand in development projects spanning the city, including master planning roles in expansive campuses like the Philadelphia Navy Yard and the Lower Schuylkill Biotech Campus in Southwest Philadelphia.

In a statement, Harris said that it is “a critical time for Philadelphia’s economy.”

“I’m especially excited for the opportunity to lead such an important and impactful organization in my hometown of Philadelphia,” Harris said. “As head of the CDFI Fund, I know first-hand what it takes to drive meaningful, sustainable, and equitable economic growth, especially in historically underserved communities.”

Harris is a graduate of the University of Maryland and received an MBA and master of public administration from New York University. In the Treasury Department, Harris’ most recent work aligns with PIDC’s economic development mission. At the Community Development Financial Institutions Fund, she oversaw a $331 million budget, mainly comprised of grant and administrative funding for various economic programs. Under Harris’ watch, the fund distributed over $3 billion in pandemic recovery funding, its highest level of appropriated grants ever.

Harris has been a part of the Treasury Department for 15 years, including as director of community and economic development policy.

In addition to government work, Harris has previously spent time in the private, academic and nonprofit sectors. At the beginning of her career, Harris worked at Meridian Bank and Accenture before becoming a social and education policy researcher at New York University. She also spent two years as president of the Urban Business Assistance Corporation in New York.

Mayor Jim Kenney said that Philadelphia is “poised for long-term growth” and Harris will help drive it.

Source: https://www.bizjournals.com/philadelphia/news/2023/02/23/pidc-names-next-president-treasury.html 

$250M life sciences conversion planned for Philadelphia’s historic Quartermaster site

 
By   –  Reporter, Philadelphia Business Journal


Real estate company SkyREM plans to spend $250 million converting the historic Quartermaster site in South Philadelphia to a life sciences campus with restaurants and a hotel.

The redevelopment would feature wet and dry lab space for research, development and bio-manufacturing.

The renamed Quartermaster Science + Technology Park is near the southwest corner of Oregon Avenue and South 20th Street in the city’s Girard Estates neighborhood. It’s east of the Quartermaster Plaza retail center, which sold last year for $100 million.

The 24-acre campus is planned to have six acres of green space, an Aldi grocery store opening by March and already is the headquarters for Indego, the bicycle share program in Philadelphia.

Six buildings totaling 1 million square feet of space would be used for research and development labs. There’s 500,000 square feet of vacant space available for life sciences and high technology companies with availabilities as small as 1,000 square feet up to 250,000 square feet contiguous. There’s also 150,000 square feet of retail space available.

The office park has 200,000 square feet already occupied by tenants. The Philadelphia Job Corps Center and Delaware Valley Intelligence Center are tenants at the site.

The campus was previously used by the military as a place to produce clothing, footwear and personal equipment during World War I and II. The clothing factory closed in 1994. The Philadelphia Quartermaster Depot was listed on the National Register of Historic Places in 2010.

“We had a vision to preserve the legacy of this built-to-last historic Philadelphia landmark and transform it to create a vibrant space where the best and brightest want to innovate, collaborate, and work,” SkyREM CEO and Founder Alex Dembitzer said in a statement.

SkyREM, a real estate investor and developer, has corporate offices in New York and Philadelphia. The company acquired the site in 2001.

Vered Nohi, SkyREM’s regional executive director of new business development, called the redevelopment “transformational” for Philadelphia.

 
 

Quartermaster would join a wave of new life sciences projects being developed in the surrounding area and across the region.

The site is near both interstates 76 and 95 and is about 2 miles north of the Philadelphia Navy Yard, which has undergone a similar transformation from a military hub to a major life sciences and mixed-use redevelopment project. The Philadelphia Industrial Development Corp. is also in the process of selecting a developer to create a massive cell and gene therapy manufacturing complex across two sites totaling about 40 acres on Southwest Philadelphia’s Lower Schuylkill riverfront.

At 34th Street and Grays Ferry Avenue, the University of Pennsylvania is teaming with Longfellow Real Estate Partners on a proposed $365 million, 455,000-square-foot life sciences and biomanufacturing building at Pennovation Works.

 

SkyREM is working with Maryland real estate firm Scheer Partners to lease the science and technology space. Philadelphia’s MPN Realty will handle leasing of the retail space. Architecture firm Fifteen is working on the project’s design.

Scheer Partners Senior Vice President Tim Conrey said the Quartermaster conversion will help companies solve for “speed to market” as demand for life science space in the region has been strong.

Brandywine pauses new spec office development, continues to bet big on life sciences

By   –  Reporter, Philadelphia Business Journal

 

Brandywine Realty Trust originally planned to redevelop a Radnor medical office into lab and office space, split 50-50 between the two uses.

After changes in demand for lab and office space, Brandywine (NYSE: BDN) recently completed the 168,000-square-foot, four-story building at 250 King of Prussia Road in Radnor fully for life sciences.

“The pipeline is now 100% life sciences, which, while requiring more capital, is also generating longer term leases at a higher return on cost,” Brandywine CEO Jerry Sweeney of the project said during the company’s fourth-quarter earnings call on Thursday.

At the same time, Brandywine is holding off on developing new office buildings unless it has a tenant lined up in advance.

The shift reflects how Philadelphia-based Brandywine continues to lean into — and bet big — on life sciences.

Brandywine is the city’s largest owner of trophy office buildings and has several major development projects in the works. The company is planning to eventually develop 3 million square feet of life sciences space. For now, 800,000 square feet of life sciences space is under development, including a 12-story, 417,000-square-foot life sciences building at 3151 Market St. and a 29-story building with 200,000 square feet of life sciences space at 3025 John F. Kennedy Blvd. Both are part of the multi-phase Schuylkill Yards project underway near 30th Street Station in University City.

Once its existing projects are completed, Brandywine would have 800,000 square feet of life sciences space, making up 8% of its portfolio. Sweeney said the company wants to grow that figure to 21%.

Brandywine is developing a 145,000-square-foot, build-to-suit office building at 155 King of Prussia Road in Radnor for Arkema, a France-based global supplier of specialty materials. The building will be Arkema’s North American headquarters. Construction began in January and is scheduled to be completed in late 2024.

Brandywine reported that since November it raised over $705 million through fourth-quarter asset sales, an unsecured bond transaction and a secured loan. The company has “complete availability” on its $600 million unsecured line of credit, Sweeney said.

Brandywine sold a 95% leased, 86,000-square-foot office building at 200 Barr Harbor Drive in West Conshohocken for $30.5 million. The company also sold its 50% ownership interest in the 1919 Market joint venture for $83.2 million to an undisclosed buyer. 1919 Market St. is a 29-story building with apartments, office and commercial space. Brandywine co-developed the property with LCOR and the California State Teacher’s Retirement System.

Brandywine declined to comment and LCOR could not be reached.

Brandywine’s core portfolio is 91% leased.

The project at 250 King of Prussia Road cost $103.7 million and was recently completed. The renovation included 12-foot high floor-to-ceiling glass on the second floor, a new roof, lobby, elevator core, common area with a skylight and an added structured parking deck.

Located in the Radnor Life Science Center, a new campus with nearly 1 million square feet of lab, research and office space, Sweeney said it’s a “magnet” for biotech companies. Avantor, a global manufacturer and distributor of life sciences products, is headquartered in the complex.

 

Sweeney said Brandywine is “very confident” demand will stay strong for life sciences in Radnor. The building at 250 King of Prussia Road is projected to be fully leased by early 2024.

“Larger users we’re talking to, they just tend to take a little bit more time than we would like as they go through technical requirements and space planning requirements,” Sweeney said.

While Brandywine is aiming to increase its life sciences footprint, the company is being selective about what it builds next. The company may steer away from developments other than life sciences. The Schuylkill Yards project, for example, features a significant life sciences portion in University City.

“Other than fully leased build-to-suit opportunities, our future development starts are on hold,” Sweeney said, “pending more leasing on the existing joint venture pipeline and more clarity on the cost of debt capital and cap rates.”

 

Brandywine said about 70% to 75% of suburban tenants have returned to offices while that number has been around 50% in Philadelphia. At this point, though, it hasn’t yet affected demand when leasing space. Some tenants, for example, have moved out of the city while others have moved in.

In the fourth quarter, Brandywine had $55.7 million funds from operations, or 32 cents per share. That’s down from $60.4 million, or 35 cents per share, in the fourth quarter of 2021. Brandywine generated $129 million in revenue in the fourth quarter, up slightly from $125.5 million in the year-ago period.

Brandywine stock is up 6.4% since the start of the year to $6.70 per share on Monday afternoon.

Many of Brandywine’s properties are in desirable locations, which have seen demand remain strong despite challenges facing offices, on par with industry trends.

Brandywine’s 12-story, 417,000-square-foot building at 3151 Market St. is on budget for $308 million and on schedule to be completed in the second quarter of 2024. Sweeney said Brandywine anticipates entering a construction loan in the second half of 2023, which would help complete the project. The building, being developed along with a global institutional investor, would be used for life sciences, innovation and office space as part of the larger Schuylkill Yards development in University City.

The company’s 29-story building at 3025 John F. Kennedy Blvd., with 200,000 square feet of life sciences space and 326 luxury apartments, is also on budget, costing $287.3 million, and on time, eyeing completion in the third quarter of this year.

Source: https://www.bizjournals.com/philadelphia/news/2023/02/06/brandywine-realty-life-sciences-development.html

Read Full Post »

Bacterial multidrug resistance problem solved by a broad-spectrum synthetic antibiotic

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

There is an increasing demand for new antibiotics that effectively treat patients with refractory bacteremia, do not evoke bacterial resistance, and can be readily modified to address current and anticipated patient needs. Recently scientists described a promising compound of the conjugated oligoelectrolyte (COE) family, COE2-2hexyl, that exhibited broad-spectrum antibacterial activity. COE2-2hexyl effectively treated mice infected with bacteria derived from sepsis patients with refractory bacteremia, including a CRE K. pneumoniae strain resistant to nearly all clinical antibiotics tested. Notably, this lead compound did not evoke drug resistance in several pathogens tested. COE2-2hexyl has specific effects on multiple membrane-associated functions (e.g., septation, motility, ATP synthesis, respiration, membrane permeability to small molecules) that may act together to abrogate bacterial cell viability and the evolution of drug resistance. Impeding these bacterial properties may occur through alteration of vital protein–protein or protein-lipid membrane interfaces – a mechanism of action distinct from many membrane-disrupting antimicrobials or detergents that destabilize membranes to induce bacterial cell lysis. The diversity and ease of COE design and chemical synthesis have the potential to establish a new standard for drug design and personalized antibiotic treatment.

Recent studies have shown that small molecules can preferentially target bacterial membranes due to significant differences in lipid composition, presence of a cell wall, and the absence of cholesterol. The inner membranes of Gram-negative bacteria are generally more negatively charged at their surface because they contain more anionic lipids such as cardiolipin and phosphatidylglycerol within their outer leaflet compared to mammalian membranes. In contrast, membranes of mammalian cells are largely composed of more-neutral phospholipids, sphingomyelins, as well as cholesterol, which affords membrane rigidity and ability to withstand mechanical stresses; and may stabilize the membrane against structural damage to membrane-disrupting agents such as COEs. Consistent with these studies, COE2-2hexyl was well tolerated in mice, suggesting that COEs are not intrinsically toxic in vivo, which is often a primary concern with membrane-targeting antibiotics. The COE refinement workflow potentially accelerates lead compound optimization by more rapid screening of novel compounds for the iterative directed-design process. It also reduces the time and cost of subsequent biophysical characterization, medicinal chemistry and bioassays, ultimately facilitating the discovery of novel compounds with improved pharmacological properties.

Additionally, COEs provide an approach to gain new insights into microbial physiology, including membrane structure/function and mechanism of drug action/resistance, while also generating a suite of tools that enable the modulation of bacterial and mammalian membranes for scientific or manufacturing uses. Notably, further COE safety and efficacy studies need to be conducted on a larger scale to ensure adequate understanding of the clinical benefits, toxicity and risks before COEs can be added to the therapeutic armamentarium. Despite these limitations, the ease of molecular design, the simplicity of synthesis and the modular nature of COEs offer many advantages over conventional antimicrobials, making synthesis scalable and affordable. This enables the construction of a spectrum of compounds with the potential for development as a new versatile therapy against the emergence and rapid global spread of pathogens that are resistant to all, or nearly all, existing antimicrobial medicines.

References:

https://www.thelancet.com/journals/ebiom/article/PIIS2352-3964(23)00026-9/fulltext#%20

https://pubmed.ncbi.nlm.nih.gov/36801104/

https://www.sciencedaily.com/releases/2023/02/230216161214.htm

https://www.nature.com/articles/s41586-021-04045-6

https://www.nature.com/articles/d43747-020-00804-y

Read Full Post »

2022 FDA Drug Approval List, 2022 Biological Approvals and Approved Cellular and Gene Therapy Products

 

 

Reporter: Aviva Lev-Ari, PhD, RN

SOURCE

Tal Bahar’s post on LinkedIn on 1/17/2023

Novel Drug Approvals for 2022

FDA’s Center for Drug Evaluation and Research (CDER)

New Molecular Entities (“NMEs”)

  • Some of these products have never been used in clinical practice. Below is a listing of new molecular entities and new therapeutic biological products that CDER approved in 2022. This listing does not contain vaccines, allergenic products, blood and blood products, plasma derivatives, cellular and gene therapy products, or other products that the Center for Biologics Evaluation and Research approved in 2022. 
  • Others are the same as, or related to, previously approved products, and they will compete with those products in the marketplace. See Drugs@FDA for information about all of CDER’s approved drugs and biological products. 

Certain drugs are classified as new molecular entities (“NMEs”) for purposes of FDA review. Many of these products contain active moieties that FDA had not previously approved, either as a single ingredient drug or as part of a combination product. These products frequently provide important new therapies for patients. Some drugs are characterized as NMEs for administrative purposes, but nonetheless contain active moieties that are closely related to active moieties in products that FDA has previously approved. FDA’s classification of a drug as an “NME” for review purposes is distinct from FDA’s determination of whether a drug product is a “new chemical entity” or “NCE” within the meaning of the Federal Food, Drug, and Cosmetic Act. 

INNOVATION   PREDICTABILITY   ACCESS FDA’s Center for Drug Evaluation and Research

January 2023


 SOURCE

2022 Biological Approvals

The Center for Biologics Evaluation and Research (CBER) regulates products under a variety of regulatory authorities.  See the Development & Approval Process page for a description of what products are approved as Biologics License Applications (BLAs), Premarket Approvals (PMAs), New Drug Applications (NDAs) or 510Ks.

Biologics License Applications and Supplements

New BLAs (except those for blood banking), and BLA supplements that are expected to significantly enhance the public health (e.g., for new/expanded indications, new routes of administration, new dosage formulations and improved safety).

Other Applications Approved or Cleared by the Center for Biologics Evaluation and Research (CBER)

Medical devices involved in the collection, processing, testing, manufacture and administration of licensed blood, blood components and cellular products.

Key Resources

SOURCE

https://www.fda.gov/vaccines-blood-biologics/development-approval-process-cber/2022-biological-approvals

 

Approved Cellular and Gene Therapy Products

Below is a list of licensed products from the Office of Tissues and Advanced Therapies (OTAT).


Approved Products


 

Resources For You


SOURCE

https://www.fda.gov/vaccines-blood-biologics/cellular-gene-therapy-products/approved-cellular-and-gene-therapy-products

 

2022 forecast: Cell, gene therapy makers push past regulatory, payer hurdles to set up high hopes for next year

There are five FDA-approved CAR-T treatments for blood cancers and two gene therapies to treat rare diseases now on the market in the U.S. The late-stage pipeline could produce several more cancer CAR-Ts and gene therapies to treat a range of diseases.

RELATED: ASH: Bristol Myers’ Breyanzi, Gilead’s Yescarta lock horns in race to move CAR-T therapy to earlier lymphoma

One of the biggest races to watch in the cell therapy space will be that between Gilead Sciences’ Yescarta and Bristol Myers Squibb’s Breyanzi, both of which are gunning to move their CAR-Ts into earlier lines of treatment in large B-cell lymphoma (LBCL). At ASH, both companies rolled out impressive data from their trials in the second-line setting, but Gilead could have the upper hand by virtue of its three-year head start in the market, analysts said. Gilead expects to hear from the FDA on a label expansion in the second-line setting in April.

Read Full Post »

The drug efflux pump MDR1 promotes intrinsic and acquired resistance to PROTACs in cancer cells

Reporter: Stephen J. Williams, PhD.
Below is one of the first reports on the potential mechanisms of intrinsic and acquired resistance to PROTAC therapy in cancer cells.
Proteolysis-targeting chimeras (PROTACs) are a promising new class of drugs that selectively degrade cellular proteins of interest. PROTACs that target oncogene products are avidly being explored for cancer therapies, and several are currently in clinical trials. Drug resistance is a substantial challenge in clinical oncology, and resistance to PROTACs has been reported in several cancer cell models. Here, using proteomic analysis, we found intrinsic and acquired resistance mechanisms to PROTACs in cancer cell lines mediated by greater abundance or production of the drug efflux pump MDR1. PROTAC-resistant cells were resensitized to PROTACs by genetic ablation of ABCB1 (which encodes MDR1) or by coadministration of MDR1 inhibitors. In MDR1-overexpressing colorectal cancer cells, degraders targeting either the kinases MEK1/2 or the oncogenic mutant GTPase KRASG12C synergized with the dual epidermal growth factor receptor (EGFR/ErbB)/MDR1 inhibitor lapatinib. Moreover, compared with single-agent therapies, combining MEK1/2 degraders with lapatinib improved growth inhibition of MDR1-overexpressing KRAS-mutant colorectal cancer xenografts in mice. Together, our findings suggest that concurrent blockade of MDR1 will likely be required with PROTACs to achieve durable protein degradation and therapeutic response in cancer.

INTRODUCTION

Proteolysis-targeting chimeras (PROTACs) have emerged as a revolutionary new class of drugs that use cancer cells’ own protein destruction machinery to selectively degrade essential tumor drivers (1). PROTACs are small molecules with two functional ends, wherein one end binds to the protein of interest, whereas the other binds to an E3 ubiquitin ligase (2, 3), bringing the ubiquitin ligase to the target protein, leading to its ubiquitination and subsequent degradation by the proteasome. PROTACs have enabled the development of drugs against previously “undruggable” targets and require neither catalytic activity nor high-affinity target binding to achieve target degradation (4). In addition, low doses of PROTACs can be highly effective at inducing degradation, which can reduce off-target toxicity associated with high dosing of traditional inhibitors (3). PROTACs have been developed for a variety of cancer targets, including oncogenic kinases (5), epigenetic proteins (6), and, recently, KRASG12C proteins (7). PROTACs targeting the androgen receptor or estrogen receptor are avidly being evaluated in clinical trials for prostate cancer (NCT03888612) or breast cancer (NCT04072952), respectively.
However, PROTACs may not escape the overwhelming challenge of drug resistance that befalls so many cancer therapies (8). Resistance to PROTACs in cultured cells has been shown to involve genomic alterations in their E3 ligase targets, such as decreased expression of Cereblon (CRBN), Von Hippel Lindau (VHL), or Cullin2 (CUL2) (9–11). Up-regulation of the drug efflux pump encoded by ABCB1—MDR1 (multidrug resistance 1), a member of the superfamily of adenosine 5′-triphosphate (ATP)–binding cassette (ABC) transporters—has been shown to convey drug resistance to many anticancer drugs, including chemotherapy agents, kinase inhibitors, and other targeted agents (12). Recently, PROTACs were shown to be substrates for MDR1 (10, 13), suggesting that drug efflux represents a potential limitation for degrader therapies. Here, using degraders (PROTACs) against bromodomain and extraterminal (BET) bromodomain (BBD) proteins and cyclin-dependent kinase 9 (CDK9) as a proof of concept, we applied proteomics to define acquired resistance mechanisms to PROTAC therapies in cancer cells after chronic exposure. Our study reveals a role for the drug efflux pump MDR1 in both acquired and intrinsic resistance to protein degraders in cancer cells and supports combination therapies involving PROTACs and MDR1 inhibitors to achieve durable protein degradation and therapeutic responses.

Fig. 1. Proteomic characterization of degrader-resistant cancer cell lines.
(A) Workflow for identifying protein targets up-regulated in degrader-resistant cancer cells. Single-run proteome analysis was performed, and changes in protein levels among parent and resistant cells were determined by LFQ. m/z, mass/charge ratio. (B and C) Cell viability assessed by CellTiter-Glo in parental and dBET6- or Thal SNS 032–resistant A1847 cells treated with increasing doses of dBET6 (B) or Thal SNS 032 (C) for 5 days. Data were analyzed as % of DMSO control, presented as means ± SD of three independent assays. Growth inhibitory 50% (GI50) values were determined using Prism software. (D to G) Immunoblotting for degrader targets and downstream signaling in parental A1847 cells and their derivative dBET6-R or Thal-R cells treated with increasing doses of dBET6 or Thal SNS 032 for 4 hours. The dBET6-R and Thal-R cells were continuously cultured in 500 nM PROTAC. Blots are representative, and densitometric analyses are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 values, quantitating either (E) the dose of dBET6 that reduces BRD2, BRD3, or BRD4 or (G) the dose of Thal SNS 032 that reduces CDK9 protein levels 50% of the DMSO control treatment, were determined with Prism software. Pol II, polymerase II. (H to K) Volcano plot of proteins with increased or reduced abundance in dBET6-R (H) or Thal-R (I) A1847 cells relative to parental cells. Differences in protein log2 LFQ intensities among degrader-resistant and parental cells were determined by paired t test permutation-based adjusted P values at FDR of <0.05 using Perseus software. The top 10 up-regulated proteins in each are shown in (J) and (K), respectively. FC, fold change. (L and M) ABCB1 log2 LFQ values in dBET6-R cells from (H) and Thal-R cells from (I) compared with those in parental A1847 cells. Data are presented as means ± SD from three independent assays. By paired t test permutation-based adjusted P values at FDR of <0.05 using Perseus software, ***P ≤ 0.001. (N) Cell viability assessed by CellTiter-Glo in parental and MZ1-resistant SUM159 cells treated with increasing doses of MZ1 for 5 days. Data were analyzed as % of DMSO control, presented as means of three independent assays. GI50 values were determined using Prism software. (O and P) Immunoblotting for degrader targets and downstream signaling in parental or MZ1-R SUM159 cells treated with increasing doses of MZ1 for 24 hours. The MZ1-R cells were continuously cultured in 500 nM MZ1. Blots are representative, and densitometric analyses are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 values were determined in Prism software. (Q and R) Top 10 up-regulated proteins (Q) and ABCB1 log2 LFQ values (R) in MZ1-R cells relative to parental SUM159 cells
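
The GI50 and DC50 values quoted throughout these figure legends are half-maximal concentrations interpolated from dose-response data; the legends cite Prism for the curve fits. As a rough illustration of what such a fit involves (not the authors’ analysis), the short Python sketch below fits a four-parameter logistic (Hill) curve to an invented viability series and reads out the half-maximal dose; the doses and responses are hypothetical.

# Illustrative only: fit a four-parameter logistic curve to hypothetical dose-response
# data and interpolate the half-maximal concentration (the quantity reported as GI50/DC50).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, bottom, top, log_c50, hill):
    """Response as a function of log10(dose); decreases with dose for hill > 0."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_dose - log_c50) * hill))

doses_nM = np.array([1, 3, 10, 30, 100, 300, 1000, 3000])   # hypothetical doses
viability = np.array([98, 95, 88, 70, 45, 22, 12, 8])       # hypothetical % of DMSO control

params, _ = curve_fit(four_pl, np.log10(doses_nM), viability,
                      p0=[5.0, 100.0, 2.0, 1.0])            # initial guesses
bottom, top, log_c50, hill = params
print(f"Half-maximal concentration ≈ {10 ** log_c50:.0f} nM, Hill slope {hill:.2f}")

The same kind of fit applied to normalized immunoblot band intensities, rather than viability, yields the DC50 values (the degrader dose reducing the target protein to 50% of the DMSO control) cited in the legends.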

Fig. 2. Chronic exposure to degraders induces MDR1 expression and drug efflux activity.
(A) ABCB1 mRNA levels in parental and degrader-resistant cell lines as determined by qRT-PCR. Data are means ± SD of three independent experiments. ***P ≤ 0.001 by Student’s t test. (B) Immunoblot analysis of MDR1 protein levels in parental and degrader-resistant cell lines. Blots are representative of three independent experiments. (C to E) Immunofluorescence (“IF”) microscopy of MDR1 protein levels in A1847 dBET6-R (C), SUM159 MZ1-R (D), and Thal-R A1847 cells (E) relative to parental cells. Nuclear staining by DAPI. Images are representative of three independent experiments. Scale bars, 100 μm. (F) Drug efflux activity in A1847 dBET6-R, SUM159 MZ1-R, and Thal-R A1847 cells relative to parental cells (Par.) using rhodamine 123 efflux assays. Bars are means ± SD of three independent experiments. ***P ≤ 0.001 by Student’s t test. (G) Intracellular dBET6 levels in parental or dBET-R A1847 cells transfected with a CRBN sensor and treated with increasing concentrations of dBET6. Intracellular dBET6 levels measured using the CRBN NanoBRET target engagement assay. Data were analyzed as % of DMSO control, presented as means ± SD of three independent assays. *P ≤ 0.05, **P ≤ 0.01, and ***P ≤ 0.001 by Student’s t test. (H and I) FISH analysis of representative drug-sensitive parental and drug-resistant A1847 (H) and SUM159 (I) cells using ABCB1 and control XCE 7 centromere probes. Images of interphase nuclei were captured with a Metasystems Metafer microscope workstation, and the raw images were extracted and processed to depict ABCB1 signals in magenta, centromere 7 signals in cyan, and DAPI-stained nuclei in blue. (J and K) CpG methylation status of the ABCB1 downstream promoter (coordinates: chr7.87,600,166-87,601,336) by bisulfite amplicon sequencing in parent and degrader-resistant A1847 (J) and SUM159 (K) cells. Images depict the averaged percentage of methylation for each region of the promoter, where methylation status is depicted by color as follows: red, methylated; blue, unmethylated. Schematic of the ABCB1 gene with the location of individual CpG sites is shown. Graphs are representative of three independent experiments. (L and M) Immunoblot analysis of MDR1 protein levels after short-term exposure [for hours (h) or days (d) as indicated] to BET protein degraders dBET6 or MZ1 (100 nM) in A1847 (L) and SUM159 (M) cells, respectively. Blots are representative of three independent experiments. (N to P) Immunoblot analysis of MDR1 protein levels in A1847 and SUM159 cells after long-term exposure (7 to 30 days) to BET protein degraders dBET6 (N), Thal SNS 032 (O), or MZ1 (P), each at 500 nM. Blots are representative of three independent experiments. (Q and R) Immunoblot analysis of MDR1 protein levels in degrader-resistant A1847 (Q) and SUM159 (R) cells after PROTAC removal for 2 or 7 days. Blots are representative of three independent experiments.

 

Fig. 3. Blockade of MDR1 activity resensitizes degrader-resistant cells to PROTACs.
(A and B) Cell viability by CellTiter-Glo assay in parental and degrader-resistant A1847 (A) and SUM159 (B) cells transfected with control siRNA or siRNAs targeting ABCB1 and cultured for 120 hours. Data were analyzed as % of control, presented as means ± SD of three independent assays. ***P ≤ 0.001 by Student’s t test. (C and D) Immunoblot analysis of degrader targets after ABCB1 knockdown in parental and degrader-resistant A1847 (C) and SUM159 (D) cells. Blots are representative, and densitometric analyses using ImageJ are means ± SD of three blots, each normalized to the loading control, GAPDH. (E) Drug efflux activity, using the rhodamine 123 efflux assay, in degrader-resistant cells after MDR1 inhibition by tariquidar (0.1 μM). Data are means ± SD of three independent experiments. ***P ≤ 0.001 by Student’s t test. (F to H) Cell viability by CellTiter-Glo assay in parental and dBET6-R (F) or Thal-R (G) A1847 cells or MZ1-R SUM159 cells (H) treated with increasing concentrations of tariquidar. Data are % of DMSO control, presented as means ± SD of three independent assays. GI50 value determined with Prism software. (I to K) Immunoblot analysis of degrader targets after MDR1 inhibition (tariquidar, 0.1 μM for 24 hours) in parental and degrader-resistant A1847 cells (I and J) and SUM159 cells (K). Blots are representative, and densitometric analyses are means ± SD from three blots, each normalized to the loading control, GAPDH. (L and M) A 14-day colony formation assessed by crystal violet staining of (L) A1847 cells or (M) SUM159 cells treated with degrader (0.1 μM; dBET6 or MZ1, respectively) and MDR1 inhibitor tariquidar (0.1 μM). Images are representative of three biological replicates. (N) Immunoblotting for MDR1 in SUM159 cells stably expressing FLAG-MDR1 after selection with hygromycin. (O) Long-term 14-day colony formation assay of SUM159 cells expressing FLAG-MDR1 that were treated with DMSO, MZ1 (0.1 μM), or MZ1 and tariquidar (0.1 μM) for 14 days, assessed by crystal violet staining. Representative images of three biological replicates are shown. (P and Q) RT-PCR (P) and immunoblot (Q) analysis of ABCB1 mRNA and MDR1 protein levels, respectively, in parental or MZ1-R HCT116, OVCAR3, and MOLT4 cells.

 

Fig. 4. Overexpression of MDR1 conveys intrinsic resistance to degrader therapies in cancer cells.
(A) Frequency of ABCB1 mRNA overexpression in a panel of cancer cell lines, obtained from cBioPortal for Cancer Genomics using Z-score values of >1.2 for ABCB1 mRNA levels (30). (B) Immunoblot for MDR1 protein levels in a panel of 10 cancer cell lines. Blots are representative of three independent experiments. (C) Cell viability by CellTiter-Glo assay in cancer cell lines expressing high or low MDR1 protein levels and treated with Thal SNS 032 for 5 days. Data were analyzed as % of DMSO control, presented as means ± SD of three independent assays. GI50 values were determined with Prism software. (D to F) Immunoblot analysis of CDK9 in MDR1-low (D) or MDR1-high (E) cell lines after Thal SNS 032 treatment for 4 hours. Blots are representative, and densitometric analyses using ImageJ are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 value determined with Prism. (G and H) Immunoblotting of control and MDR1-knockdown DLD-1 cells treated for 4 hours with increasing concentrations of Thal SNS 032 [indicated in (H)]. Blots are representative, and densitometric analysis data are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 value determined with Prism. (I) Drug efflux activity using rhodamine 123 efflux assays in DLD-1 cells treated with DMSO or 0.1 μM tariquidar. Data are means ± SD of three independent experiments. ***P ≤ 0.001 by Student’s t test. (J) Intracellular Thal SNS 032 levels, using the CRBN NanoBRET target engagement assay, in MDR1-overexpressing DLD-1 cells treated with DMSO or 0.1 μM tariquidar and increasing doses of Thal SNS 032. Data are % of DMSO control, presented as means ± SD of three independent assays. **P ≤ 0.01 and ***P ≤ 0.001 by Student’s t test. (K to N) Immunoblotting in DLD-1 cells treated with increasing doses of Thal SNS 032 (K and L) or dBET6 (M and N) alone or with tariquidar (0.1 μM) for 4 hours. Blots are representative, and densitometric analyses are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 value of Thal SNS 032 for CDK9 reduction (L) or of dBET6 for BRD4 reduction (N) determined with Prism. (O to T) Bliss synergy scores based on cell viability by CellTiter-Glo assay, colony formation, and immunoblotting in DLD-1 cells treated with the indicated doses of Thal SNS 032 (O to Q) or dBET6 (R to T) alone or with tariquidar. Cells were treated for 14 days for colony formation assays and 24 hours for immunoblotting.
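The Fig. 4 and Fig. 6 legends mention Bliss synergy scores derived from viability data. Under the Bliss independence model, two non-interacting drugs with fractional inhibitions E_A and E_B are expected to give a combined inhibition of E_A + E_B − E_A·E_B; the excess of the observed combination effect over that expectation is the synergy score. The sketch below illustrates the arithmetic on invented numbers; it is not the published analysis code, and real analyses usually compute the excess over a full dose matrix rather than a single dose pair.

```python
# Bliss independence: expected fractional inhibition of a non-interacting
# combination, and the "excess over Bliss" synergy score. Values are invented.

def bliss_excess(inhib_a, inhib_b, inhib_combo):
    """All inputs are fractional inhibition in [0, 1] relative to vehicle control."""
    expected = inhib_a + inhib_b - inhib_a * inhib_b   # Bliss independence expectation
    return inhib_combo - expected                       # >0 suggests synergy, <0 antagonism

# Hypothetical single-agent and combination inhibition values
# (e.g., a degrader alone, an MDR1 inhibitor alone, and the combination)
score = bliss_excess(inhib_a=0.20, inhib_b=0.10, inhib_combo=0.55)
print(f"Bliss excess: {score:+.2f}")   # 0.55 - 0.28 = +0.27
```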

 

Fig. 5. Repurposing dual kinase/MDR1 inhibitors to overcome degrader resistance in cancer cells.
(A and B) Drug efflux activity by rhodamine 123 efflux assays in degrader-resistant [dBET-R (A) or Thal-R (B)] A1847 cells after treatment with tariquidar, RAD001, or lapatinib (each 2 μM). Data are means ± SD of three independent experiments. *P ≤ 0.05 by Student’s t test. (C and D) CellTiter-Glo assay for the cell viability of parental, dBET6-R, or Thal-R A1847 cells treated with increasing concentrations of RAD001 (C) or lapatinib (D). Data were analyzed as % of DMSO control, presented as means ± SD of three independent assays. GI50 values were determined with Prism software. (E to I) Immunoblot analysis of degrader targets in parental (E), dBET6-R (F and G), and Thal-R (H and I) A1847 cells treated with increasing concentrations of RAD001 or lapatinib for 4 hours. Blots are representative, and densitometric analyses are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 value of dBET6 for BRD4 reduction (G) or of Thal SNS 032 for CDK9 reduction (I) determined with Prism. (J) Immunoblotting for cleaved PARP in dBET6-R or Thal-R A1847 cells treated with RAD001, lapatinib, or tariquidar (each 2 μM) for 24 hours. Blots are representative of three independent blots. (K to N) Immunoblotting for BRD4 in DLD-1 cells treated with increasing doses of dBET6 alone or in combination with either RAD001 or lapatinib [each 2 μM (K and L)] or KU-0063794 or afatinib [each 2 μM (M and N)] for 4 hours. Blots are representative of three independent experiments and, in (L), are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 value for BRD4 reduction (L) determined in Prism. (O) Colony formation by DLD-1 cells treated with DMSO, dBET6 (0.1 μM), lapatinib (2 μM), afatinib (2 μM), RAD001 (2 μM), KU-0063794 (2 μM), or the combination of inhibitor and dBET6 for 14 days. Images representative of three independent assays. (P and Q) Immunoblotting for CDK9 in DLD-1 cells treated with increasing doses of Thal SNS 032 and/or RAD001 (2 μM) or lapatinib (2 μM) for 4 hours. Blots are representative, and densitometric analyses are means ± SD from three blots, each normalized to the loading control, GAPDH. DC50 value for CDK9 reduction determined with Prism (Q). (R) Colony formation in DLD-1 cells treated with DMSO, Thal SNS 032 (0.5 μM), lapatinib (2 μM), and/or RAD001 (2 μM) as indicated for 14 days.

 

Fig. 6. Combining MEK1/2 degraders with lapatinib synergistically kills MDR1-overexpressing KRAS-mutant CRC cells and tumors.
(A and B) ABCB1 expression in KRAS-mutant CRC cell lines from cBioPortal (30) (A) and MDR1 abundance in select KRAS-mutant CRC cell lines (B). (C) Cell viability assessed by CellTiter-Glo in CRC cells treated with increasing doses of MS432 for 5 days, analyzed as % of DMSO control. GI50 value determined with Prism software. (D) Colony formation by CRC cells 14 days after treatment with 1 μM MS432. (E) MEK1/2 protein levels assessed by immunoblot in CRC lines SKCO1 (low MDR1) or LS513 (high MDR1) treated with increasing doses of MS432 for 4 hours. (F) Rhodamine 123 efflux in LS513 cells treated with DMSO, 2 μM tariquidar, or 2 μM lapatinib. (G and H) Immunoblotting analysis in LS513 cells treated with increasing doses of MS432 alone or in combination with tariquidar (0.1 μM) or lapatinib (5 μM) for 24 hours. DC50 value for MEK1 levels determined with Prism. (I) Immunoblotting in LS513 cells treated with DMSO, PD0325901 (0.01 μM), lapatinib (5 μM), or the combination for 48 hours. (J and K) Immunoblotting in LS513 cells treated either with DMSO, MS432 (1 μM), tariquidar (0.1 μM) (J), or lapatinib (5 μM) (K), alone or in combination. (L) Bliss synergy scores determined from cell viability assays (CellTiter-Glo) in LS513 cells treated with increasing concentrations of MS432, lapatinib, or the combination. (M and N) Colony formation by LS513 cells (M) and others (N) treated with DMSO, lapatinib (2 μM), MS432 (1 μM), or the combination for 14 days. (O and P) Immunoblotting in LS513 cells treated with increasing doses of MS934 alone (O) or combined with lapatinib (5 μM) (P) for 24 hours. (Q and R) Tumor volume of LS513 xenografts (Q) and the body weights of the tumor-bearing nude mice (R) treated with vehicle, MS934 (50 mg/kg), lapatinib (100 mg/kg), or the combination. n = 5 mice per treatment group. In (A) to (R), blots and images are representative of three independent experiments, and quantified data are means ± SD [SEM in (Q) and (R)] of three independent experiments; ***P ≤ 0.001 by Student’s t test.

 

Fig. 7. Lapatinib treatment improves KRASG12C degrader therapies in MDR1-overexpressing CRC cell lines.
(A and B) Colony formation by SW1463 (A) or SW837 (B) cells treated with DMSO, LC-2 (1 μM), or MRTX849 (1 μM) for 14 days. Images representative of three independent assays. (C to E) Immunoblotting in SW1463 cells (C and D) and SW837 cells (E) treated with DMSO, LC-2 (1 μM), tariquidar (0.1 μM) (C), or lapatinib (5 μM) (D and E) alone or in combination for 48 hours. Blots are representative of three independent experiments. (F and G) Bliss synergy scores based on CellTiter-Glo assay for the cell viability of SW1463 (F) or SW837 (G) cells treated with increasing concentrations of LC-2, lapatinib, or the combination. Data are means of three experiments ± SD. (H and I) Colony formation of SW1463 (H) or SW837 (I) cells treated as indicated (−, DMSO; LC-2, 1 μM; lapatinib, 2 μM; tariquidar, 0.1 μM) for 14 days. Images representative of three independent assays. (J) Rationale for combining lapatinib with MEK1/2 or KRASG12C degraders in MDR1-overexpressing CRC cell lines. Simultaneous blockade of MDR1 and ErbB receptor signaling overcomes degrader resistance and ErbB receptor kinome reprogramming, resulting in sustained inhibition of KRAS effector signaling.

SOURCE

Other articles in this Open Access Scientific Journal on PROTAC therapy in cancer include

Accelerating PROTAC drug discovery: Establishing a relationship between ubiquitination and target protein degradation

The Vibrant Philly Biotech Scene: Proteovant Therapeutics Using Artificial Intelligence and Machine Learning to Develop PROTACs

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Read Full Post »

AI enabled Drug Discovery and Development: The Challenges and the Promise

Reporter: Aviva Lev-Ari, PhD, RN

 

Early Development

Caroline Kovac (the first IBM general manager of Life Sciences) started in silico drug development in 2000, using a large database of substances and IBM's computing power. She turned the idea into a roughly $2 billion business, most of it funded by big pharma: she would ask companies which new drugs they were planning to develop and, based on the in silico work, would propose the four most probable combinations of substances.

Carol Kovac

General Manager, Healthcare and Life Sciences, IBM

From her speaker biography at a 2005 conference:

Carol Kovac is General Manager of IBM Healthcare and Life Sciences responsible for the strategic direction of IBM′s global healthcare and life sciences business. Kovac leads her team in developing the latest information technology solutions and services, establishing partnerships and overseeing IBM investment within the healthcare, pharmaceutical and life sciences markets. Starting with only two employees as an emerging business unit in the year 2000, Kovac has successfully grown the life sciences business unit into a multi-billion dollar business and one of IBM′s most successful ventures to date with more than 1500 employees worldwide. Kovac′s prior positions include general manager of IBM Life Sciences, vice president of Technical Strategy and Division Operations, and vice president of Services and Solutions. In the latter role, she was instrumental in launching the Computational Biology Center at IBM Research. Kovac sits on the Board of Directors of Research!America and Africa Harvest. She was inducted into the Women in Technology International Hall of Fame in 2002, and in 2004, Fortune magazine named her one of the 50 most powerful women in business. Kovac earned her Ph.D. in chemistry at the University of Southern California.

SOURCE

https://www.milkeninstitute.org/events/conferences/global-conference/2005/speaker-detail/1536

 

In 2022

The use of artificial intelligence in drug discovery, when coupled with new genetic insights and the increase of patient medical data of the last decade, has the potential to bring novel medicines to patients more efficiently and more predictably.

WATCH VIDEO

https://www.youtube.com/watch?v=b7N3ijnv6lk

SOURCE

https://engineering.stanford.edu/magazine/promise-and-challenges-relying-ai-drug-development?utm_source=Stanford+ALL

Conversation among three experts:

Jack Fuchs, MBA ’91, an adjunct lecturer who teaches “Principled Entrepreneurial Decisions” at Stanford School of Engineering, moderated and explored how clearly articulated principles can guide the direction of technological advancements like AI-enabled drug discovery.

Kim Branson, Global head of AI and machine learning at GSK.

Russ Altman, the Kenneth Fong Professor of Bioengineering, of genetics, of medicine (general medical discipline), of biomedical data science and, by courtesy, of computer science.

 

Synthetic Biology Software applied to development of Galectins Inhibitors at LPBI Group

 

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

Using Structural Computation Models to Predict Productive PROTAC Ternary Complexes

Ternary complex formation is necessary but not sufficient for target protein degradation. In this research, Bai et al. have addressed questions to better understand the rate-limiting steps between ternary complex formation and target protein degradation. They have developed a structure-based computer model approach to predict the efficiency and sites of target protein ubiquitination by CRBN-binding PROTACs. Such models will allow a more complete understanding of PROTAC-directed degradation and allow crafting of increasingly effective and specific PROTACs for therapeutic applications.

Another major feature of this research is that it a result of collaboration between research groups at Amgen, Inc. and Promega Corporation. In the past commercial research laboratories have shied away from collaboration, but the last several years have found researchers more open to collaborative work. This increased collaboration allows scientists to bring their different expertise to a problem or question and speed up discovery. According to Dr. Kristin Riching, Senior Research Scientist at Promega Corporation, “Targeted protein degraders have broken many of the rules that have guided traditional drug development, but it is exciting to see how the collective learnings we gain from their study can aid the advancement of this new class of molecules to the clinic as effective therapeutics.”

Literature Reviewed

Bai, N., Riching, K.M., et al. (2022) Modeling the CRL4A ligase complex to predict target protein ubiquitination induced by cereblon-recruiting PROTACs. J. Biol. Chem.

The researchers used NanoBRET assays as part of their model validation. Learn more about NanoBRET technology at the Promega.com website.

SOURCE

https://www.promegaconnections.com/protac-ternary-complex/?utm_campaign=ms-2022-pharma_tpd&utm_source=linkedin&utm_medium=Khoros&utm_term=sf254230485&utm_content=030822ct-blogsf254230485&sf254230485=1

Read Full Post »

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

UPDATED on 11/5/2021

Introducing Isomorphic Labs

I believe we are on the cusp of an incredible new era of biological and medical research. Last year DeepMind’s breakthrough AI system AlphaFold2 was recognised as a solution to the 50-year-old grand challenge of protein folding, capable of predicting the 3D structure of a protein directly from its amino acid sequence to atomic-level accuracy. This has been a watershed moment for computational and AI methods for biology.
Building on this advance, today, I’m thrilled to announce the creation of a new Alphabet company –  Isomorphic Labs – a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach and, ultimately, to model and understand some of the fundamental mechanisms of life.

For over a decade DeepMind has been in the vanguard of advancing the state-of-the-art in AI, often using games as a proving ground for developing general purpose learning systems, like AlphaGo, our program that beat the world champion at the complex game of Go. We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself. One of the most important applications of AI that I can think of is in the field of biological and medical research, and it is an area I have been passionate about addressing for many years. Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.

An AI-first approach to drug discovery and biology
The pandemic has brought to the fore the vital work that brilliant scientists and clinicians do every day to understand and combat disease. We believe that the foundational use of cutting edge computational and AI methods can help scientists take their work to the next level, and massively accelerate the drug discovery process. AI methods will increasingly be used not just for analysing data, but to also build powerful predictive and generative models of complex biological phenomena. AlphaFold2 is an important first proof point of this, but there is so much more to come. 
At its most fundamental level, I think biology can be thought of as an information processing system, albeit an extraordinarily complex and dynamic one. Taking this perspective implies there may be a common underlying structure between biology and information science – an isomorphic mapping between the two – hence the name of the company. Biology is likely far too complex and messy to ever be encapsulated as a simple set of neat mathematical equations. But just as mathematics turned out to be the right description language for physics, biology may turn out to be the perfect type of regime for the application of AI.

What’s next for Isomorphic Labs
This is just the beginning of what we hope will become a radical new approach to drug discovery, and I’m incredibly excited to get this ambitious new commercial venture off the ground and to partner with pharmaceutical and biomedical companies. I will serve as CEO for Isomorphic’s initial phase, while remaining as DeepMind CEO, partially to help facilitate collaboration between the two companies where relevant, and to set out the strategy, vision and culture of the new company. This will of course include the building of a world-class multidisciplinary team, with deep expertise in areas such as AI, biology, medicinal chemistry, biophysics, and engineering, brought together in a highly collaborative and innovative environment. (We are hiring!)
As pioneers in the emerging field of ‘digital biology’, we look forward to helping usher in an amazingly productive new age of biomedical breakthroughs. Isomorphic’s mission could not be a more important one: to use AI to accelerate drug discovery, and ultimately, find cures for some of humanity’s most devastating diseases.

SOURCE

https://www.isomorphiclabs.com/blog

DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

DeepMind plans to release hundreds of millions of protein structures for free

James Vincent July 22, 2021 11:00 am

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.
Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.
“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”



Two examples of protein structures predicted by AlphaFold (in blue) compared with experimental results (in green). Image: DeepMind

There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”
After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.
The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

The benefits of protein folding


Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.
New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.
Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.
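For models deposited in the public AlphaFold database hosted with EMBL-EBI, that per-residue confidence (pLDDT) is stored in the B-factor column of the PDB files, so it can be summarized with a few lines of standard-library Python. The URL pattern, file version suffix, and the UniProt accession below are assumptions for illustration; the database documentation should be checked for current naming, and the script needs network access to run.

```python
# Fetch one AlphaFold model from the public database and summarize per-residue
# confidence (pLDDT), which is stored in the B-factor column of the PDB file.
# URL pattern and accession are illustrative assumptions.
import urllib.request

accession = "P69905"  # hypothetical example accession
url = f"https://alphafold.ebi.ac.uk/files/AF-{accession}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

plddt = []
seen_residues = set()
for line in pdb_text.splitlines():
    if line.startswith("ATOM"):
        res_id = (line[21], line[22:26])          # chain ID + residue number
        if res_id not in seen_residues:           # one value per residue
            seen_residues.add(res_id)
            plddt.append(float(line[60:66]))      # B-factor column holds pLDDT

print(f"{len(plddt)} residues, mean pLDDT {sum(plddt) / len(plddt):.1f}")
print(f"fraction with pLDDT > 90 (very high confidence): "
      f"{sum(v > 90 for v in plddt) / len(plddt):.2f}")
```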

Example protein structures predicted by AlphaFold. Image: DeepMind

Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.
Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.
Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 different varieties in the human body. As any individual protein can be comprised of hundreds of individual amino acids, each of which can fold and twist in different directions, it means a molecule’s final structure has an incredibly large number of possible configurations. One estimate is that the typical protein can be folded in 10^300 ways — that’s a 1 followed by 300 zeroes.
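One way an estimate of that magnitude can arise is a Levinthal-style count: if each of n residues can independently adopt roughly c distinct local conformations, the chain has about c^n configurations. The numbers plugged in below are illustrative assumptions, not the source of the article's figure.

```latex
% Illustrative back-of-envelope count (c and n are assumed values)
N_{\mathrm{conf}} \approx c^{\,n}, \qquad c = 10,\; n = 300 \;\Rightarrow\; N_{\mathrm{conf}} \approx 10^{300}
```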


Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.
In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition. Image: DeepMind

Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis.


The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.


Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”
Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


SOURCE

https://www.theverge.com/platform/amp/2021/7/22/22586578/deepmind-alphafold-ai-protein-folding-human-proteome-released-for-free?__twitter_impression=true

Potential Use of Protein Folding Predictions for Drug Discovery

PROTAC Technology: Opportunities and Challenges

  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240. Publication Date: March 12, 2020. https://doi.org/10.1021/acsmedchemlett.9b00597. Copyright © 2020 American Chemical Society

Abstract

PROTACs-induced targeted protein degradation has emerged as a novel therapeutic strategy in drug development and attracted the favor of academic institutions, large pharmaceutical enterprises (e.g., AstraZeneca, Bayer, Novartis, Amgen, Pfizer, GlaxoSmithKline, Merck, and Boehringer Ingelheim, etc.), and biotechnology companies. PROTACs opened a new chapter for novel drug development. However, any new technology will face many new problems and challenges. Perspectives on the potential opportunities and challenges of PROTACs will contribute to the research and development of new protein degradation drugs and degrader tools.

Although PROTAC technology has a bright future in drug development, it also has many challenges as follows:
(1) Until now, there is only one example of PROTAC reported for an “undruggable” target; (18) more cases are needed to prove the advantages of PROTAC in “undruggable” targets in the future.
(2) “Molecular glue”, existing in nature, represents the mechanism of stabilized protein–protein interactions through small molecule modulators of E3 ligases. For instance, auxin, the plant hormone, binds to the ligase SCF-TIR1 to drive recruitment of Aux/IAA proteins and subsequently triggers its degradation. In addition, some small molecules that induce targeted protein degradation through “molecular glue” mode of action have been reported. (21,22) Furthermore, it has been recently reported that some PROTACs may actually achieve target protein degradation via a mechanism that includes “molecular glue” or via “molecular glue” alone. (23) How to distinguish between these two mechanisms and how to combine them to work together is one of the challenges for future research.
(3) Since PROTAC acts in a catalytic mode, traditional methods cannot accurately evaluate the pharmacokinetics (PK) and pharmacodynamics (PD) properties of PROTACs. Thus, more studies are urgently needed to establish PK and PD evaluation systems for PROTACs (an illustrative turnover sketch of this catalytic mode appears after this list).
(4) How to quickly and effectively screen for target protein ligands that can be used in PROTACs, especially those targeting protein–protein interactions, is another challenge.
(5) How to understand the degradation activity, selectivity, and possible off-target effects (based on different targets, different cell lines, and different animal models) and how to rationally design PROTACs etc. are still unclear.
(6) The human genome encodes more than 600 E3 ubiquitin ligases. However, there are only very few E3 ligases (VHL, CRBN, cIAPs, and MDM2) used in the design of PROTACs. How to expand E3 ubiquitin ligase scope is another challenge faced in this area.

PROTAC technology is rapidly developing, and with the joint efforts of the vast number of scientists in both academia and industry, these problems shall be solved in the near future.
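Challenge (3) above notes that PROTACs act catalytically and sub-stoichiometrically, so occupancy-based PK/PD thinking fits poorly. One way to see the difference is a toy target-turnover model in which a single degrader molecule can process many copies of the target, so the degradation rate tracks ternary-complex formation rather than stoichiometric binding. The sketch below uses invented rate constants and a simple hyperbolic ternary-complex proxy; it is an illustration under stated assumptions, not an established PK/PD framework.

```python
# Toy model: target protein level under event-driven (catalytic) degradation.
# dT/dt = k_syn - k_basal*T - k_cat * f(D) * T, where f(D) approximates fractional
# ternary-complex formation at degrader concentration D. All parameters are invented.
import numpy as np

def simulate_target(D, hours=24.0, dt=0.01,
                    k_syn=1.0, k_basal=0.1, k_cat=2.0, kd=0.05):
    occupancy = D / (kd + D)            # hyperbolic ternary-complex proxy
    T = k_syn / k_basal                 # start at the untreated steady-state level
    for _ in np.arange(0.0, hours, dt): # simple forward-Euler integration
        dT = k_syn - k_basal * T - k_cat * occupancy * T
        T += dT * dt
    return T / (k_syn / k_basal)        # remaining fraction of baseline target

for D in (0.0, 0.01, 0.05, 0.5):        # degrader concentration, arbitrary units
    print(f"D = {D:5.2f}  remaining target after 24 h: {simulate_target(D):.2f}")
```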

PROTACs have opened a new chapter for the development of new drugs and novel chemical knockdown tools and brought unprecedented opportunities to the industry and academia, which are mainly reflected in the following aspects:
(1) Overcoming drug resistance of cancer. In addition to traditional chemotherapy, kinase inhibitors have been developing rapidly in the past 20 years. (12) Although kinase inhibitors are very effective in cancer therapy, patients often develop drug resistance and disease recurrence, consequently. PROTACs showed greater advantages in drug resistant cancers through degrading the whole target protein. For example, ARCC-4 targeting androgen receptor could overcome enzalutamide-resistant prostate cancer (13) and L18I targeting BTK could overcome the C481S mutation. (14)
(2) Eliminating both the enzymatic and nonenzymatic functions of kinase. Traditional small molecule inhibitors usually inhibit the enzymatic activity of the target, while PROTACs affect not only the enzymatic activity of the protein but also nonenzymatic activity by degrading the entire protein. For example, FAK possesses the kinase dependent enzymatic functions and kinase independent scaffold functions, but regulating the kinase activity does not successfully inhibit all FAK function. In 2018, a highly effective and selective FAK PROTAC reported by Craig M. Crews’ group showed a far superior activity to clinical candidate drug in cell migration and invasion. (15) Therefore, PROTAC can expand the druggable space of the existing targets and regulate proteins that are difficult to control by traditional small molecule inhibitors.
(3) Degrade the “undruggable” protein target. At present, only 20–25% of the known protein targets (including kinases, G protein-coupled receptors (GPCRs), nuclear hormone receptors, and ion channels) can be targeted by using conventional drug discovery technologies. (16,17) The proteins that lack catalytic activity and/or have catalytic independent functions are still regarded as “undruggable” targets. The involvement of Signal Transducer and Activator of Transcription 3 (STAT3) in the multiple signaling pathway makes it an attractive therapeutic target; however, the lack of an obviously druggable site on the surface of STAT3 limited the development of STAT3 inhibitors. Thus, there are still no effective drugs directly targeting STAT3 approved by the Food and Drug Administration (FDA). In November 2019, Shaomeng Wang’s group first reported a potent PROTAC targeting STAT3 with potent biological activities in vitro and in vivo. (18) This successful case confirms the key potential of PROTAC technology, especially in the field of “undruggable” targets, such as K-Ras, a tricky tumor target activated by multiple mutations such as G12A, G12C, G12D, G12S, G12V, G13C, and G13D in the clinic. (19)
(4) Fast and reversible chemical knockdown strategy in vivo. Traditional genetic protein knockout technologies, zinc-finger nuclease (ZFN), transcription activator-like effector nuclease (TALEN), or CRISPR-Cas9, usually have a long cycle, irreversible mode of action, and high cost, which brings a lot of inconvenience for research, especially in nonhuman primates. In addition, these genetic animal models sometimes produce phenotypic misunderstanding due to potential gene compensation or gene mutation. More importantly, the traditional genetic method cannot be used to study the function of embryonic-lethal genes in vivo. Unlike DNA-based protein knockout technology, PROTACs knock down target proteins directly, rather than acting at the genome level, and are suitable for the functional study of embryonic-lethal proteins in adult organisms. In addition, PROTACs provide exquisite temporal control, allowing the knockdown of a target protein at specific time points and enabling the recovery of the target protein after withdrawal of drug treatment. As a new, rapid and reversible chemical knockdown method, PROTAC can be used as an effective supplement to the existing genetic tools. (20)

SOURCE

PROTAC Technology: Opportunities and Challenges
  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240

Goal in Drug Design: Eliminating both the enzymatic and nonenzymatic functions of kinase.

Work-in-Progress

Induction and Inhibition of Protein in Galectins Drug Design

Work-in-Progress

Screening Proteins in DeepMind’s AlphaFold Database

The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.

Work-in-Progress

Other related research published in this Open Access Online Journal include the following:

Synthetic Biology in Drug Discovery

Peroxisome proliferator-activated receptor (PPAR-gamma) Receptors Activation: PPARγ transrepression  for Angiogenesis in Cardiovascular Disease and PPARγ transactivation for Treatment of Diabetes

Read Full Post »

A laboratory for the use of AI for drug development has been launched in collaboration with Pfizer, Teva, AstraZeneca, Merck and Amazon

Reporter: Aviva Lev-Ari, PhD, RN

AION Labs unites pharmaceutical, technology, and investment companies, including IBF, to invest in startups that integrate advances in cloud computing and artificial intelligence into drug development. AION Labs, the first innovation lab of its kind in the world and a pioneer in adopting cloud technologies, artificial intelligence, and computer science to solve the R&D challenges of the pharma industry, today announces its launch. The alliance brings together four leading pharmaceutical companies – AstraZeneca, Merck, Pfizer, and Teva – and two leading players in high-tech and biotech investment, respectively: AWS (Amazon Web Services Inc.) and the Israeli investment fund IBF (Israel Biotech Fund), which have joined together to establish groundbreaking ventures that use artificial intelligence and computer science to change the way new therapies are discovered and developed.

“We are excited to launch the new innovation lab in support of the discovery of drugs and medical devices using groundbreaking computational tools,” said Mati Gill, CEO of AION Labs. “We are prepared and ready to make a difference in the process of therapeutic discovery and development. With a strong pool of talent from Israel and around the world, cloud technology and artificial intelligence at the heart of our activities, and a significant commitment by the State of Israel, we are ready to contribute to the health and well-being of humanity and to promote industry in Israel. I thank the partners for their trust, and it is an honor for me to lead such a significant initiative.”

In addition, AION Labs has announced a strategic partnership with BioMed X, an independent biomedical research institute operating in Heidelberg, Germany. BioMed X has a proven track record of advancing research innovation in biomedicine at the interface between academic research and the pharmaceutical industry. BioMed X’s innovation model, based on worldwide crowdsourcing and incubators that cultivate the most brilliant talent and ideas, will serve as the R&D engine driving AION Labs’ venture model.

SOURCE

Read Full Post »

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Curator: Stephen J. Williams, PhD

Marc W. Kirschner*

Department of Systems Biology
Harvard Medical School

Boston, Massachusetts 02115

With the new excitement about systems biology, there is understandable interest in a definition. This has proven somewhat difficult. Scientific fields, like species, arise by descent with modification, so in their earliest forms even the founders of great dynasties are only marginally different than their sister fields and species. It is only in retrospect that we can recognize the significant founding events. Before embarking on a definition of systems biology, it may be worth remembering that confusion and controversy surrounded the introduction of the term “molecular biology,” with claims that it hardly differed from biochemistry. Yet in retrospect molecular biology was new and different. It introduced both new subject matter and new technological approaches, in addition to a new style.

As a point of departure for systems biology, consider the quintessential experiment in the founding of molecular biology, the one gene one enzyme hypothesis of Beadle and Tatum. This experiment first connected the genotype directly to the phenotype on a molecular level, although efforts in that direction can certainly be found in the work of Archibald Garrod, Sewell Wright, and others. Here a protein (in this case an enzyme) is seen to be a product of a single gene, and a single function; the completion of a specific step in amino acid biosynthesis is the direct result. It took the next 30 years to fill in the gaps in this process. Yet the one gene one enzyme hypothesis looks very different to us today. What is the function of tubulin, of PI-3 kinase or of rac? Could we accurately predict the phenotype of a nonlethal mutation in these genes in a multicellular organism? Although we can connect structure to the gene, we can no longer infer its larger purpose in the cell or in the organism. There are too many purposes; what the protein does is defined by context. The context also includes a history, either developmental or physiological. Thus the behavior of the Wnt signaling pathway depends on the previous lineage, the “where and when” questions of embryonic development. Similarly the behavior of the immune system depends on previous experience in a variable environment. All of these features stress how inadequate an explanation for function we can achieve solely by trying to identify genes (by annotating them!) and characterizing their transcriptional control circuits.

That we are at a crossroads in how to explore biology is not at all clear to many. Biology is hardly in its dotage; the process of discovery seems to have been perfected, accelerated, and made universally applicable to all fields of biology. With the completion of the human genome and the genomes of other species, we have a glimpse of many more genes than we ever had before to study. We are like naturalists discovering a new continent, enthralled with the diversity itself. But we have also at the same time glimpsed the finiteness of this list of genes, a disturbingly small list. We have seen that the diversity of genes cannot approximate the diversity of functions within an organism. In response, we have argued that combinatorial use of small numbers of components can generate all the diversity that is needed. This has had its recent incarnation in the simplistic view that the rules of cis-regulatory control on DNA can directly lead to an understanding of organisms and their evolution. Yet this assumes that the gene products can be linked together in arbitrary combinations, something that is not assured in chemistry. It also downplays the significant regulatory features that involve interactions between gene products, their localization, binding, posttranslational modification, degradation, etc. The big question to understand in biology is not regulatory linkage but the nature of biological systems that allows them to be linked together in many nonlethal and even useful combinations. More and more we come to realize that understanding the conserved genes and their conserved circuits will require an understanding of their special properties that allow them to function together to generate different phenotypes in different tissues of metazoan organisms. These circuits may have certain robustness, but more important they have adaptability and versatility. The ease of putting conserved processes under regulatory control is an inherent design feature of the processes themselves. Among other things it loads the deck in evolutionary variation and makes it more feasible to generate useful phenotypes upon which selection can act.

Systems biology offers an opportunity to study how the phenotype is generated from the genotype and with it a glimpse of how evolution has crafted the phenotype. One aspect of systems biology is the development of techniques to examine broadly the level of protein, RNA, and DNA on a gene by gene basis and even the posttranslational modification and localization of proteins. In a very short time we have witnessed the development of high-throughput biology, forcing us to consider cellular processes in toto. Even though much of the data is noisy and today partially inconsistent and incomplete, this has been a radical shift in the way we tear apart problems one interaction at a time. When coupled with gene deletions by RNAi and classical methods, and with the use of chemical tools tailored to proteins and protein domains, these high-throughput techniques become still more powerful.

High-throughput biology has opened up another important area of systems biology: it has brought us out into the field again or at least made us aware that there is a world outside our laboratories. Our model systems have been chosen intentionally to be of limited genetic diversity and examined in a highly controlled and reproducible environment. The real world of ecology, evolution, and human disease is a very different place. When genetics separated from the rest of biology in the early part of the 20th century, most geneticists sought to understand heredity and chose to study traits in the organism that could be easily scored and could be used to reveal genetic mechanisms. This was later extended to powerful effect to use genetics to study cell biological and developmental mechanisms. Some geneticists, including a large school in Russia in the early 20th century, continued to study the genetics of natural populations, focusing on traits important for survival. That branch of genetics is coming back strongly with the power of phenotypic assays on the RNA and protein level. As human beings we are most concerned not with using our genetic misfortunes to unravel biology’s complexity (important as that is) but with the role of our genetics in our individual survival. The context for understanding this is still not available, even though the data are now coming in torrents, for many of the genes that will contribute to our survival will have small quantitative effects, partially masked or accentuated by other genetic and environmental conditions. To understand the genetic basis of disease will require not just mapping these genes but an understanding of how the phenotype is created in the first place and the messy interactions between genetic variation and environmental variation.

Extracts and explants are relatively accessible to synthetic manipulation. Next there is the explicit reconstruction of circuits within cells or the deliberate modification of those circuits. This has occurred for a while in biology, but the difference is that now we wish to construct or intervene with the explicit purpose of describing the dynamical features of these synthetic or partially synthetic systems. There are more and more tools to intervene and more and more tools to measure. Although these fall short of total descriptions of cells and organisms, the detailed information will give us a sense of the special life-like processes of circuits, proteins, cells in tissues, and whole organisms in their environment. This meso-scale systems biology will help establish the correspondence between molecules and large-scale physiology.

You are probably running out of patience for some definition of systems biology. In any case, I do not think the explicit definition of systems biology should come from me but should await the words of the first great modern systems biologist. She or he is probably among us now. However, if forced to provide some kind of label for systems biology, I would simply say that systems biology is the study of the behavior of complex biological organization and processes in terms of the molecular constituents. It is built on molecular biology in its special concern for information transfer, on physiology for its special concern with adaptive states of the cell and organism, on developmental biology for the importance of defining a succession of physiological states in that process, and on evolutionary biology and ecology for the appreciation that all aspects of the organism are products of selection, a selection we rarely understand on a molecular level. Systems biology attempts all of this through quantitative measurement, modeling, reconstruction, and theory. Systems biology is not a branch of physics but differs from physics in that the primary task is to understand how biology generates variation. No such imperative to create variation exists in the physical world. It is a new principle that Darwin understood and upon which all of life hinges. That sounds different enough for me to justify a new field and a new name. Furthermore, the success of systems biology is essential if we are to understand life; its success is far from assured—a good field for those seeking risk and adventure.

Source: “Meaning of Systems Biology” Cell, Vol. 121, 503–504, May 20, 2005, DOI 10.1016/j.cell.2005.05.005

Old High-throughput Screening, Once the Gold Standard in Drug Development, Gets a Systems Biology Facelift

From Phenotypic Hit to Chemical Probe: Chemical Biology Approaches to Elucidate Small Molecule Action in Complex Biological Systems

Quentin T. L. Pasquer, Ioannis A. Tsakoumagkos and Sascha Hoogendoorn 

Molecules 2020, 25(23), 5702; https://doi.org/10.3390/molecules25235702

Abstract

Biologically active small molecules have a central role in drug development, and as chemical probes and tool compounds to perturb and elucidate biological processes. Small molecules can be rationally designed for a given target, or a library of molecules can be screened against a target or phenotype of interest. Especially in the case of phenotypic screening approaches, a major challenge is to translate the compound-induced phenotype into a well-defined cellular target and mode of action of the hit compound. There is no “one size fits all” approach, and recent years have seen an increase in available target deconvolution strategies, rooted in organic chemistry, proteomics, and genetics. This review provides an overview of advances in target identification and mechanism of action studies, describes the strengths and weaknesses of the different approaches, and illustrates the need for chemical biologists to integrate and expand the existing tools to increase the probability of evolving screen hits to robust chemical probes.

5.1.5. Large-Scale Proteomics

While FITExP is based on protein expression regulation during apoptosis, a study of Ruprecht et al. showed that proteomic changes are induced both by cytotoxic and non-cytotoxic compounds, which can be detected by mass spectrometry to give information on a compound’s mechanism of action. They developed a large-scale proteome-wide mass spectrometry analysis platform for MOA studies, profiling five lung cancer cell lines with over 50 drugs. Aggregation analysis over the different cell lines and the different compounds showed that one-quarter of the drugs changed the abundance of their protein target. This approach allowed target confirmation of molecular degraders such as PROTACs or molecular glues. Finally, this method yielded unexpected off-target mechanisms for the MAP2K1/2 inhibitor PD184352 and the ALK inhibitor ceritinib [97]. While such a mapping approach clearly provides a wealth of information, it might not be easily attainable for groups that are not equipped for high-throughput endeavors.
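The aggregation step described above, looking for proteins whose abundance shifts consistently for a given drug across several cell lines, can be sketched as a simple rank-by-consistency analysis. The snippet below is a schematic illustration on an invented log2 fold-change matrix, not the published pipeline of Ruprecht et al.; the cell line names and values are assumptions.

```python
# Schematic aggregation across cell lines: for one drug, rank proteins by how
# consistently their abundance changes (log2 fold-change vs. DMSO). Data invented.
import pandas as pd

log2fc = pd.DataFrame(
    {   # columns: hypothetical lung cancer cell lines; rows: proteins
        "A549":  {"BRD4": -2.1, "CDK9": -0.1, "MAP2K1": 0.2, "GAPDH": 0.0},
        "H1975": {"BRD4": -1.8, "CDK9": -0.3, "MAP2K1": 0.1, "GAPDH": 0.1},
        "H358":  {"BRD4": -2.4, "CDK9":  0.0, "MAP2K1": -0.2, "GAPDH": 0.0},
    }
)

summary = pd.DataFrame({
    "median_log2fc": log2fc.median(axis=1),
    "consistent_down": (log2fc < -1).all(axis=1),   # depleted in every cell line
})
print(summary.sort_values("median_log2fc"))
# A protein depleted consistently across lines (here BRD4) is a candidate
# direct target or degrader substrate for that compound.
```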

All-in-all, mass spectrometry methods have gained a lot of traction in recent years and have been successfully applied for target deconvolution and MOA studies of small molecules. As with all high-throughput methods, challenges lie in the accessibility of the instruments (both from a time and cost perspective) and data analysis of complex and extensive data sets.

5.2. Genetic Approaches

Both label-based and mass spectrometry proteomic approaches are based on the physical interaction between a small molecule and a protein target, and focus on the proteome for target deconvolution. It has been long realized that genetics provides an alternative avenue to understand a compound’s action, either through precise modification of protein levels, or by inducing protein mutations. First realized in yeast as a genetically tractable organism over 20 years ago, recent advances in genetic manipulation of mammalian cells have opened up important opportunities for target identification and MOA studies through genetic screening in relevant cell types [98]. Genetic approaches can be roughly divided into two main areas, with the first centering on the identification of mutations that confer compound resistance (Figure 3a), and the second on genome-wide perturbation of gene function and the concomitant changes in sensitivity to the compound (Figure 3b). While both methods can be used to identify or confirm drug targets, the latter category often provides many additional insights in the compound’s mode of action.

Figure 3. Genetic methods for target identification and mode of action studies. Schematic representations of (a) resistance cloning, and (b) chemogenetic interaction screens.

5.2.1. Resistance Cloning

The “gold standard” in drug target confirmation is to identify mutations in the presumed target protein that render it insensitive to drug treatment. Conversely, different groups have sought to use this principle as a target identification method based on the concept that cells grown in the presence of a cytotoxic drug will either die or develop mutations that will make them resistant to the compound. With recent advances in deep sequencing it is now possible to then scan the transcriptome [99] or genome [100] of the cells for resistance-inducing mutations. Genes that are mutated are then hypothesized to encode the protein target. For this approach to be successful, there are two initial requirements: (1) the compound needs to be cytotoxic for resistant clones to arise, and (2) the cell line needs to be genetically unstable for mutations to occur in a reasonable timeframe.

In 2012, the Kapoor group demonstrated in a proof-of-concept study that resistance cloning in mammalian cells, coupled to transcriptome sequencing (RNA-seq), yields the known polo-like kinase 1 (PLK1) target of the small molecule BI 2536. For this, they used the cancer cell line HCT-116, which is deficient in mismatch repair and consequently prone to mutations. They generated and sequenced multiple resistant clones, and clustered the clones based on similarity. PLK1 was the only gene that was mutated in multiple groups. Of note, one of the groups did not contain PLK1 mutations, but rather developed resistance through upregulation of ABCB1, a drug efflux transporter, which is a general and non-specific resistance mechanism [101]. In a following study, they optimized their pipeline “DrugTargetSeqR”, by counter-screening for these types of multidrug resistance mechanisms so that these clones were excluded from further analysis (Figure 3a). Furthermore, they used CRISPR/Cas9-mediated gene editing to determine which mutations were sufficient to confer drug resistance, and as independent validation of the biochemical relevance of the obtained hits [102].
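The clustering logic behind such resistance-cloning pipelines, prioritizing genes mutated independently in multiple resistant clones while discarding generic efflux-pump mechanisms, can be illustrated in a few lines of Python. The clone names, gene lists, and exclusion set below are invented for the sketch and are not data from the cited studies.

```python
# Toy illustration of prioritizing resistance-cloning hits: count how many
# independent resistant clones carry a mutation in each gene, excluding known
# generic multidrug-resistance genes. All values are invented.
from collections import Counter

mutated_genes_per_clone = {
    "clone_1": {"PLK1", "TP53BP1"},
    "clone_2": {"PLK1"},
    "clone_3": {"ABCB1"},          # efflux-pump mediated, non-specific resistance
    "clone_4": {"PLK1", "KRAS"},
}
generic_resistance_genes = {"ABCB1", "ABCG2"}   # counter-screened / excluded

counts = Counter(
    gene
    for genes in mutated_genes_per_clone.values()
    for gene in genes
    if gene not in generic_resistance_genes
)
for gene, n_clones in counts.most_common():
    print(f"{gene}: mutated in {n_clones} independent clones")
# Genes recurrently mutated across clones (here PLK1) are the candidate targets,
# to be validated e.g. by knocking the individual mutations back in.
```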

While HCT-116 cells are a useful model cell line for resistance cloning because of their genomic instability, they may not always be the cell line of choice, depending on the compound and process that is studied. Povedana et al. used CRISPR/Cas9 to engineer mismatch repair deficiencies in Ewing sarcoma cells and small cell lung cancer cells. They found that deletion of MSH2 results in hypermutations in these normally mutationally silent cells, resulting in the formation of resistant clones in the presence of bortezomib, MLN4924, and CD437, which are all cytotoxic compounds [103]. Recently, Neggers et al. reasoned that CRISPR/Cas9-induced non-homologous end-joining repair could be a viable strategy to create a wide variety of functional mutants of essential genes through in-frame mutations. Using a tiled sgRNA library targeting 75 target genes of investigational antineoplastic drugs in HAP1 and K562 cells, they generated several KPT-9274 (an anticancer agent with unknown target)-resistant clones, and subsequent deep sequencing showed that the resistant clones were enriched in NAMPT sgRNAs. Direct target engagement was confirmed by co-crystallizing the compound with NAMPT [104]. In addition to these genetic mutation strategies, an alternative method is to grow the cells in the presence of a mutagenic chemical to induce higher mutagenesis rates [105,106].

When there is already a hypothesis on the pathway involved in compound action, the resistance cloning methodology can be extended to non-cytotoxic compounds. Sekine et al. developed a fluorescent reporter model for the integrated stress response, and used this cell line for target deconvolution of a small molecule inhibitor towards this pathway (ISRIB). Reporter cells were chemically mutagenized, and ISRIB-resistant clones were isolated by flow cytometry, yielding clones with various mutations in the delta subunit of guanine nucleotide exchange factor eIF2B [107].

While there are certainly successful examples of resistance cloning yielding a compound’s direct target as discussed above, resistance could also be caused by mutations or copy number alterations in downstream components of a signaling pathway. This is illustrated by clinical examples of acquired resistance to small molecules, nature’s way of “resistance cloning”. For example, resistance mechanisms in Hedgehog pathway-driven cancers towards the Smoothened inhibitor vismodegib include compound-resistant mutations in Smoothened, but also copy number changes in downstream activators SUFU and GLI2 [108]. It is, therefore, essential to conduct follow-up studies to confirm a direct interaction between a compound and the hit protein, as well as a lack of interaction with the mutated protein.

5.2.3. “Chemogenomics”: Examples of Gene-Drug Interaction Screens

When genetic perturbations are combined with small molecule drugs in a chemogenetic interaction screen, the effect of a gene's perturbation on compound action is studied. Gene perturbation can render the cells resistant to the compound (suppressor interaction), or conversely, result in hypersensitivity and enhanced compound potency (synergistic interaction) [5,117,121]. Typically, cells are treated with the compound at a sublethal dose to ensure that both types of interactions can be found in the final dataset, and often it is necessary to use a variety of compound doses (e.g., LD20, LD30, LD50) and timepoints to obtain reliable insights (Figure 3b).
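To make the suppressor/synergistic distinction concrete, the sketch below calls hits from sgRNA abundances in drug- versus vehicle-treated populations. The counts, gene names, and log2 fold-change cutoffs are illustrative assumptions, not values from any of the cited screens.

```python
# Minimal sketch of hit calling in a chemogenetic interaction screen.
# Assumes normalized sgRNA read counts from drug-treated (sublethal dose) and
# vehicle-treated populations; counts and thresholds below are illustrative.
import numpy as np

sgrnas = ["GENE1_sg1", "GENE1_sg2", "GENE2_sg1", "GENE2_sg2"]
counts_vehicle = np.array([500.0, 450.0, 300.0, 320.0])
counts_drug    = np.array([1500.0, 1300.0, 40.0, 55.0])

# Log2 fold-change of sgRNA abundance under drug vs. vehicle treatment.
lfc = np.log2((counts_drug + 1) / (counts_vehicle + 1))

for name, x in zip(sgrnas, lfc):
    if x > 1:
        label = "suppressor (perturbation confers resistance)"
    elif x < -1:
        label = "synergistic (perturbation sensitizes to the drug)"
    else:
        label = "no interaction"
    print(f"{name}: log2FC = {x:+.2f} -> {label}")
```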

An early example of successfully coupling a phenotypic screen to downstream genetic screening for target identification is the study of Matheny et al. They identified STF-118804 as a compound with antileukemic properties. MV4-11 cells stably transduced with a high-complexity, genome-wide shRNA library were treated with STF-118804 (four rounds of increasing concentration) or DMSO control, resulting in a marked depletion of cells containing shRNAs against nicotinamide phosphoribosyltransferase (NAMPT) [122].

The Bassik lab subsequently directly compared the performance of shRNA-mediated knockdown versus CRISPR/Cas9-knockout screens for the target elucidation of the antiviral drug GSK983. The data coming out of both screens were complementary, with the shRNA screen resulting in hits leading to the direct compound target and the CRISPR screen giving information on cellular mechanisms of action of the compound. A reason for this is likely the level of protein depletion that is reached by these methods: shRNAs lead to decreased protein levels, which is advantageous when studying essential genes. However, knockdown may not result in a phenotype for non-essential genes, in which case a full CRISPR-mediated knockout is necessary to observe effects [123].

Another NAMPT inhibitor was identified in a CRISPR/Cas9 "haplo-insufficiency (HIP)"-like approach [124]. Haploinsufficiency profiling is a well-established system in yeast, performed in a ~50% protein background through heterozygous deletions [125]. As there is no control over CRISPR-mediated loss of alleles, compound treatment was performed at several timepoints after addition of the sgRNA library to HCT-116 cells stably expressing Cas9, in the hope that editing would be incomplete at early timepoints, resulting in residual protein levels. Indeed, NAMPT was found to be the target of the phenotypic hit LB-60-OF61, especially at earlier timepoints, confirming the hypothesis that some level of protein needs to be present to identify a compound's direct target [124]. This approach was confirmed in another study, showing that direct target identification through CRISPR-knockout screens is indeed possible [126].

An alternative strategy was employed by the Weissman lab, which combined genome-wide CRISPR-interference and -activation screens to identify the target of the phase 3 drug rigosertib. They focused on hits with opposite behavior in the two screens, i.e., sensitizing in one but protective in the other, which were related to microtubule stability. In a next step, they created chemical-genetic profiles of a variety of microtubule-destabilizing agents, reasoning that compounds with the same target will have similar drug-gene interactions. For this, they made a focused library of sgRNAs based on the highest-ranking hits in the rigosertib genome-wide CRISPRi screen, and compared the focused-screen results of the different compounds. The profile for rigosertib clustered well with that of ABT-751, and rigorous target validation studies confirmed rigosertib binding to the colchicine binding site of tubulin, the same site occupied by ABT-751 [127].
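A minimal sketch of the profile-comparison idea follows: chemical-genetic profiles are compared by correlation, and compounds with highly similar profiles are hypothesized to share a target. The gene names and interaction scores are invented for illustration and do not reproduce the published rigosertib data.

```python
# Minimal sketch: compare chemical-genetic profiles of compounds by correlation,
# as in the focused-library strategy described above. Gene-level interaction
# scores (one vector per compound) are illustrative placeholders.
import numpy as np

genes = ["TUBB", "KIF11", "MAPRE1", "TP53", "CDK1"]   # hypothetical hit genes
profiles = {
    "rigosertib": np.array([ 2.1, -0.3,  1.8, 0.1, -0.2]),
    "ABT-751":    np.array([ 2.0, -0.1,  1.6, 0.0, -0.3]),
    "paclitaxel": np.array([-1.5,  0.4, -1.2, 0.2,  0.1]),
}

ref = profiles["rigosertib"]
for name, prof in profiles.items():
    if name == "rigosertib":
        continue
    r = np.corrcoef(ref, prof)[0, 1]
    print(f"rigosertib vs {name}: Pearson r = {r:.2f}")
# Compounds whose profiles correlate strongly are hypothesized to share a target.
```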

From the above examples, it is clear that genetic screens hold a lot of promise for target identification and MOA studies for small molecules. The CRISPR screening field is rapidly evolving, sgRNA libraries are continuously improving and increasingly commercially available, and new tools for data analysis are being developed [128]. The challenge lies in applying these screens to study compounds that are not cytotoxic, where finding the right dosage regimen will not be trivial.

SYSTEMS BIOLOGY AND CANCER RESEARCH & DRUG DISCOVERY

Integrative Analysis of Next-Generation Sequencing for Next-Generation Cancer Research toward Artificial Intelligence

Youngjun Park, Dominik Heider and Anne-Christin Hauschild. Cancers 2021, 13(13), 3148; https://doi.org/10.3390/cancers13133148

Abstract

The rapid improvement of next-generation sequencing (NGS) technologies and their application in large-scale cohorts in cancer research led to common challenges of big data. It opened a new research area incorporating systems biology and machine learning. As large-scale NGS data accumulated, sophisticated data analysis methods became indispensable. In addition, NGS data have been integrated with systems biology to build better predictive models to determine the characteristics of tumors and tumor subtypes. Therefore, various machine learning algorithms were introduced to identify underlying biological mechanisms. In this work, we review novel technologies developed for NGS data analysis, and we describe how these computational methodologies integrate systems biology and omics data. Subsequently, we discuss how deep neural networks outperform other approaches, the potential of graph neural networks (GNN) in systems biology, and the limitations in NGS biomedical research. To reflect on the various challenges and corresponding computational solutions, we will discuss the following three topics: (i) molecular characteristics, (ii) tumor heterogeneity, and (iii) drug discovery. We conclude that machine learning and network-based approaches can add valuable insights and build highly accurate models. However, a well-informed choice of learning algorithm and biological network information is crucial for the success of each specific research question.

1. Introduction

The development and widespread use of high-throughput technologies founded the era of big data in biology and medicine. In particular, it led to an accumulation of large-scale data sets that opened a vast amount of possible applications for data-driven methodologies. In cancer, these applications range from fundamental research to clinical applications: molecular characteristics of tumors, tumor heterogeneity, drug discovery, and potential treatment strategies. Therefore, data-driven bioinformatics research areas have tailored data mining technologies such as systems biology, machine learning, and deep learning, elaborated in this review paper (see Figure 1 and Figure 2). For example, in systems biology, data-driven approaches are applied to identify vital signaling pathways [1]. This pathway-centric analysis is particularly crucial in cancer research to understand the characteristics and heterogeneity of the tumor and tumor subtypes. Consequently, this high-throughput data-based analysis enables us to explore characteristics of cancers from a systems biology and a systems medicine point of view [2]. Combining high-throughput techniques, especially next-generation sequencing (NGS), with appropriate analytical tools has allowed researchers to gain a deeper systematic understanding of cancer at various biological levels, most importantly genomics, transcriptomics, and epigenetics [3,4]. Furthermore, more sophisticated analysis tools based on computational modeling have been introduced to decipher underlying molecular mechanisms in various cancer types. The increasing size and complexity of the data required the adaptation of bioinformatics processing pipelines for higher efficiency and sophisticated data mining methodologies, particularly for large-scale NGS datasets [5]. Nowadays, more and more NGS studies integrate a systems biology approach and combine sequencing data with other types of information, for instance, protein family information, pathways, or protein–protein interaction (PPI) networks, in an integrative analysis. Experimentally validated knowledge in systems biology may enhance analysis models and guide them to uncover novel findings. Such integrated analyses have been useful to extract essential information from high-dimensional NGS data [6,7]. In order to deal with the increasing size and complexity, the application of machine learning, and specifically deep learning methodologies, has become state of the art in NGS data analysis.

Figure 1. Next-generation sequencing data can originate from various experimental and technological conditions. Depending on the purpose of the experiment, one or more of the depicted omics types (Genomics, Transcriptomics, Epigenomics, or Single-Cell Omics) are analyzed. These approaches have led to an accumulation of large-scale NGS datasets addressing various challenges of cancer research: molecular characterization, tumor heterogeneity, and drug target discovery. For instance, The Cancer Genome Atlas (TCGA) dataset contains multi-omics data from tens of thousands of patients and has facilitated a broad range of cancer research for more than a decade. Additionally, there are independent tumor datasets, which are frequently analyzed and compared with the TCGA dataset. As large-scale omics data accumulated, various machine learning techniques have been applied, e.g., graph algorithms and deep neural networks, for dimensionality reduction, clustering, or classification. (Created with BioRender.com.)

Figure 2. (a) A multitude of different types of data is produced by next-generation sequencing, for instance, in the fields of genomics, transcriptomics, and epigenomics. (b) Biological networks for biomarker validation: The in vivo or in vitro experiment results are considered ground truth. Statistical analysis on next-generation sequencing data produces candidate genes. Biological networks can validate these candidate genes and highlight the underlying biological mechanisms (Section 2.1). (c) De novo construction of Biological Networks: Machine learning models that aim to reconstruct biological networks can incorporate prior knowledge from different omics data. Subsequently, the model will predict new unknown interactions based on new omics information (Section 2.2). (d) Network-based machine learning: Machine learning models integrating biological networks as prior knowledge to improve predictive performance when applied to different NGS data (Section 2.3). (Created with BioRender.com).

Therefore, a large number of studies integrate NGS data with machine learning and propose a novel data-driven methodology in systems biology [8]. In particular, many network-based machine learning models have been developed to analyze cancer data and help to understand novel mechanisms in cancer development [9,10]. Moreover, deep neural networks (DNN) applied for large-scale data analysis improved the accuracy of computational models for mutation prediction [11,12], molecular subtyping [13,14], and drug repurposing [15,16]. 

2. Systems Biology in Cancer Research

Genes and their functions have been classified into gene sets based on experimental data. Our understanding of cancer has been distilled into cancer hallmarks that define the characteristics of a tumor. This collective knowledge is used for the functional analysis of unseen data. Furthermore, the regulatory relationships among genes have been investigated, and, based on these, pathways can be composed. In this manner, the accumulation of public high-throughput sequencing data raised many big-data challenges and opened new opportunities and areas of application for computer science. Two of the most vibrantly evolving areas are systems biology and machine learning, which tackle different tasks such as understanding cancer pathways [9], finding crucial genes in pathways [22,53], or predicting functions of unidentified or understudied genes [54]. Essentially, these models include prior knowledge to develop an analysis and enhance interpretability for high-dimensional data [2]. In addition to understanding cancer pathways with in silico analysis, pathway activity analysis incorporating two different types of data, pathways and omics data, has been developed to understand the heterogeneous characteristics of tumors and for cancer molecular subtyping. Due to their advantage in interpretability, various pathway-oriented methods have been introduced and have become useful tools to understand complex diseases such as cancer [55,56,57].

In this section, we will discuss how two related research fields, systems biology and machine learning, can be integrated through three different approaches (see Figure 2): biological network analysis for biomarker validation, the use of machine learning with systems biology, and network-based models.

2.1. Biological Network Analysis for Biomarker Validation

The detection of potential biomarkers indicative of specific cancer types or subtypes is a frequent goal of NGS data analysis in cancer research. For instance, a variety of bioinformatics tools and machine learning models aim at identifying lists of genes that are significantly altered on a genomic, transcriptomic, or epigenomic level in cancer cells. Typically, statistical and machine learning methods are employed to find an optimal set of biomarkers, such as single nucleotide polymorphisms (SNPs), mutations, or differentially expressed genes crucial in cancer progression. Traditionally, resource-intensive in vitro analysis was required to discover or validate those markers. Therefore, systems biology offers in silico solutions to validate such findings using biological pathways or gene ontology information (Figure 2b) [58]. Subsequently, gene set enrichment analysis (GSEA) [50] or gene set analysis (GSA) [59] can be used to evaluate whether these lists of genes are significantly associated with cancer types and their specific characteristics. GSA, for instance, is available via web services like DAVID [60] and g:Profiler [61]. Moreover, other applications use gene ontology directly [62,63]. In addition to gene-set-based analysis, there are other methods that focus on the topology of biological networks. These approaches evaluate various network structure parameters and analyze the connectivity of two genes or the size and interconnection of their neighbors [64,65]. The underlying idea is that a mutated gene will show dysfunction and can affect its neighboring genes. Thus, the goal is to find abnormalities in a specific set of genes linked by an edge in a biological network. For instance, KeyPathwayMiner can extract informative network modules from various omics data [66]. In summary, these approaches aim at predicting the effect of dysfunctional genes on their neighbors according to their connectivity or distance from specific genes such as hubs [67,68]. During the past few decades, the focus of cancer systems biology has extended towards the analysis of cancer-related pathways, since those pathways tend to carry more information than a gene set. Such analysis is called Pathway Enrichment Analysis (PEA) [69,70]. PEA incorporates the topology of biological networks. At the same time, however, the limited coverage of pathway data needs to be considered: because pathway data do not yet cover all known genes, an integrative analysis of omics data can lose a substantial number of genes when restricted to pathways. Genes that cannot be mapped to any pathway are called ‘pathway orphans.’ Rahmati et al. introduced a possible solution to overcome this ‘pathway orphan’ issue [71]. Ultimately, regardless of whether researchers choose gene-set- or pathway-based enrichment analysis, the performance and accuracy of both methods depend strongly on the quality of the external gene-set and pathway data [72].
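As a concrete example of the simplest form of gene set analysis mentioned above, the sketch below performs a hypergeometric over-representation test for a single gene set; all gene counts are illustrative placeholders.

```python
# Minimal sketch of a gene-set over-representation test, the simplest form of
# the gene set analysis (GSA) discussed above. All counts are illustrative.
from scipy.stats import hypergeom

background_size = 20000   # all assayed genes
pathway_size = 150        # genes annotated to the gene set / pathway of interest
hits = 400                # significantly altered genes from the NGS analysis
hits_in_pathway = 12      # overlap between the two lists

# P(X >= hits_in_pathway) under random sampling without replacement.
p_value = hypergeom.sf(hits_in_pathway - 1, background_size, pathway_size, hits)
print(f"over-representation p-value: {p_value:.3e}")
```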

2.2. De Novo Construction of Biological Networks

While the known fraction of existing biological networks barely scratches the surface of the whole system of mechanisms occurring in each organism, machine learning models can improve on known network structures and can guide potential new findings [73,74]. This area of research is called de novo network construction (Figure 2c), and its predictive models can accelerate experimental validation by lowering time costs [75,76]. This interplay between in silico network construction and mining expands our knowledge of biological systems. For instance, a gene co-expression network helps discover gene modules having similar functions [77]. Because gene co-expression networks are based on expression changes under specific conditions, inferring a co-expression network commonly requires many samples. The WGCNA package implements a representative model using weighted correlation for network construction, and it has driven the development of the network biology field [78]. Due to NGS developments, the analysis of gene co-expression networks subsequently moved from microarray-based to RNA-seq-based experimental data [79]. However, integration of these two types of data remains tricky. Ballouz et al. compared microarray and NGS-based co-expression networks and found a bias originating from batch effects between the two technologies [80]. Nevertheless, such approaches are well suited to find disease-specific co-expression gene modules. Thus, various studies based on the TCGA cancer co-expression network discovered characteristics of prognostic genes in the network [81]. Accordingly, a gene co-expression network is a condition-specific network rather than a general network for an organism. Gene regulatory networks can be inferred from gene co-expression networks when data from various conditions in the same organism are available. Additionally, with various NGS applications, we can obtain multi-modal datasets about regulatory elements and their effects, such as epigenomic influences on transcription and chromatin structure. Consequently, a gene regulatory network can consist solely of protein-coding genes or of different regulatory node types such as transcription factors, inhibitors, promoter interactions, DNA methylation, and histone modifications affecting the gene expression system [82,83]. More recently, researchers have been able to build networks based on a particular experimental setup. For instance, functional genomics or CRISPR technology enables the construction of high-resolution regulatory networks in an organism [84]. Beyond gene co-expression and regulatory networks, drug-target and drug-repurposing studies are active research areas focusing on the de novo construction of drug-to-target networks to allow the potential repurposing of drugs [76,85].
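The sketch below illustrates the weighted co-expression idea in the spirit of WGCNA: pairwise gene-gene correlations are raised to a soft-threshold power to form an adjacency matrix. The simulated expression matrix, the chosen power, and the edge cutoff are assumptions for demonstration only, not a substitute for the WGCNA package itself.

```python
# Minimal sketch of weighted co-expression network construction, WGCNA-style:
# pairwise correlation raised to a soft-threshold power. Expression values are
# simulated; in practice the power (beta) is chosen so that the resulting
# network approximates scale-free topology.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 50, 6
expr = rng.normal(size=(n_samples, n_genes))                 # samples x genes
expr[:, 1] = expr[:, 0] + 0.1 * rng.normal(size=n_samples)   # one correlated pair

corr = np.corrcoef(expr, rowvar=False)     # gene-gene correlation matrix
beta = 6
adjacency = np.abs(corr) ** beta           # soft-thresholded edge weights
np.fill_diagonal(adjacency, 0.0)

# Edges above a cutoff define the co-expression network / a candidate module.
edges = np.argwhere(adjacency > 0.5)
print(edges)
```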

2.3. Network Based Machine Learning

A network-based machine learning model directly integrates the insights of biological networks within the algorithm (Figure 2d) to ultimately improve predictive performance concerning cancer subtyping or susceptibility to therapy. Following the establishment of high-quality biological networks based on NGS technologies, these networks became suitable for integration into advanced predictive models. In this manner, Zhang et al. categorized network-based machine learning approaches by their usage into three groups: (i) model-based integration, (ii) pre-processing integration, and (iii) post-analysis integration [7]. Network-based models map the omics data onto a biological network, and appropriate algorithms traverse the network while considering the values of nodes and edges as well as the network topology. In pre-processing integration, pathway or other network information is typically processed based on its topological importance. In post-analysis integration, by contrast, omics data are first processed on their own; subsequently, omics data and networks are merged and interpreted. Network-based models have advantages in multi-omics integrative analysis. Due to the different sensitivity and coverage of the various omics data types, a multi-omics integrative analysis is challenging. However, focusing on gene-level or protein-level information enables a straightforward integration [86,87]. Consequently, when machine learning approaches integrate two or more different data types to find novel biological insights, one solution is to reduce the search space to the gene or protein level and integrate the heterogeneous data types there [25,88].

In summary, using network information opens new possibilities for interpretation. However, as mentioned earlier, several challenges remain, such as the coverage issue. Current databases for biological networks do not cover the entire set of genes, transcripts, and interactions. Therefore, the use of networks can lead to loss of information for gene or transcript orphans. The following section will focus on network-based machine learning models and their application in cancer genomics. We will put network-based machine learning into the perspective of the three main areas of application, namely, molecular characterization, tumor heterogeneity analysis, and cancer drug discovery.

3. Network-Based Learning in Cancer Research

As introduced previously, the integration of machine learning with the insights of biological networks (Figure 2d) ultimately aims at improving predictive performance and interpretability concerning cancer subtyping or treatment susceptibility.

3.1. Molecular Characterization with Network Information

Various network-based algorithms are used in genomics and focus on quantifying the impact of genomic alterations. By employing prior knowledge in biological network algorithms, performance can be improved compared to non-network models. A prominent example is HotNet. The algorithm uses a heat-diffusion model on a biological network and identifies driver genes, or prognostic genes, in pan-cancer data [89]. Another study introduced a network-based stratification method to integrate somatic alterations and expression signatures with network information [90]. These approaches use network topology and network-propagation-like algorithms. Network propagation presumes that genomic alterations can affect the function of neighboring genes. Two genes will show a mutually exclusive alteration pattern if they complement each other and the function they carry is essential to the organism [91]. This exclusive pattern among genomic alterations has been further investigated in cancer-related pathways. Recently, Ku et al. developed network-centric approaches and tackled robustness issues while studying synthetic lethality [92]. Although synthetic lethality was initially discovered in genetic model organisms, it helps us to understand cancer-specific mutations and their functions in tumor characteristics [91].
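The sketch below illustrates network propagation with a random walk with restart, the general idea behind HotNet-style diffusion of alteration scores over a gene network. The toy network, initial scores, and restart probability are illustrative assumptions rather than the published algorithm.

```python
# Minimal sketch of network propagation (random walk with restart): alteration
# scores diffuse along edges so that genes near many altered genes accumulate
# "heat". Network and scores are illustrative.
import numpy as np

genes = ["A", "B", "C", "D", "E"]
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)       # adjacency matrix
W = A / A.sum(axis=0, keepdims=True)               # column-normalized transitions

p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])           # initial alteration scores
alpha = 0.5                                        # restart probability
p = p0.copy()
for _ in range(100):                               # iterate until convergence
    p = alpha * p0 + (1 - alpha) * W @ p

print(dict(zip(genes, np.round(p, 3))))            # propagated scores per gene
```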

Furthermore, in transcriptome research, network information is used to measure pathway activity and applied to cancer subtyping. For instance, when comparing the data of two or more conditions such as cancer types, GSEA, as introduced in Section 2, is a useful approach to get an overview of systematic changes [50]. It is typically used at the beginning of a data evaluation [93]. An experimentally validated gene set can provide information about how different conditions affect molecular systems in an organism. In addition to gene sets, different approaches integrate complex interaction information into GSEA and build network-based models [70]. In contrast to GSEA, pathway activity analysis considers transcriptome data, other omics data, and the structural information of a biological network. For example, PARADIGM uses pathway topology and integrates various omics in the analysis to infer a patient-specific status of pathways [94]. A recent benchmark study with pan-cancer data revealed that using network structure can improve performance [57]. In conclusion, although some data are lost due to the incompleteness of biological networks, their integration has improved performance and increased interpretability in many cases.

3.2. Tumor Heterogeneity Study with Network Information

Tumor heterogeneity can originate from two sources: clonal heterogeneity and tumor impurity. Clonal heterogeneity covers genomic alterations within the tumor [95]. As de novo mutations accumulate, the tumor acquires genomic alterations with mutually exclusive patterns. When these genomic alterations are projected onto pathways, it is possible to observe exclusive relationships among disease-related genes. For instance, the CoMEt and MEMo algorithms examine mutual exclusivity on protein–protein interaction networks [96,97]. Moreover, the relationship between genes can be essential for an organism. Therefore, models analyzing such alterations integrate network-based analysis [98].

In contrast, tumor purity depends on the tumor microenvironment, including immune-cell infiltration and stromal cells [99]. In tumor microenvironment studies, network-based models are applied, for instance, to find immune-related gene modules. Although the importance of the interaction between tumors and immune cells is well known, the detailed mechanisms are still unclear. Thus, many recent NGS studies employ network-based models to investigate the underlying mechanisms of tumor–immune interactions. For example, McGrail et al. identified a relationship between DNA damage response proteins and immune cell infiltration in cancer; the analysis is based on curated interaction pairs in a protein–protein interaction network [100]. Most recently, Darzi et al. discovered a prognostic gene module related to immune cell infiltration by using network-centric approaches [101]. Tu et al. presented a network-centric model for mining gene subnetworks beyond immune cell infiltration by taking tumor purity into account [102].

3.3. Drug Target Identification with Network Information

In drug target studies, network biology is integrated into pharmacology [103]. For instance, Yamanishi et al. developed novel computational methods to investigate the pharmacological space by integrating a drug-target protein network with genomic and chemical information. The proposed approaches investigated such drug-target network information to identify potential novel drug targets [104]. Since then, the field has continued to develop methods to study drug targets and drug responses by integrating networks with chemical and multi-omics datasets. In a recent survey study, Chen et al. compared 13 computational methods for drug response prediction and found that gene expression profiles are crucial information for drug response prediction [105].

Moreover, drug-target studies are often extended to drug-repurposing studies. In cancer research, drug-repurposing studies aim to find novel interactions between non-cancer drugs and molecular features in cancer. Drug-repurposing (or repositioning) studies apply computational approaches and pathway-based models, aiming to discover potential new cancer drugs with a higher probability of success than de novo drug design [16,106]. Specifically, drug-repurposing studies can consider various areas of cancer research, such as tumor heterogeneity and synthetic lethality. As an example, Lee et al. found clinically relevant synthetic lethality interactions by integrating multiple screening NGS datasets [107]. Such synthetic lethality and related drug datasets can be integrated to combine anticancer therapeutic strategies with non-cancer drug repurposing.

4. Deep Learning in Cancer Research

DNN models have developed rapidly and become increasingly sophisticated, and they are now frequently used in all areas of biomedical research. Initially, their development was facilitated by large-scale imaging and video data. While most data sets in the biomedical field would not typically be considered big data, the rapid data accumulation enabled by NGS made the field suitable for the application of DNN models requiring large amounts of training data [108]. For instance, in 2019, Samiei et al. used TCGA-based large-scale cancer data as benchmark datasets for bioinformatics machine learning research, analogous to ImageNet in the computer vision field [109]. Subsequently, large-scale public cancer data sets such as TCGA encouraged the wide usage of DNNs in the cancer domain [110]. Over the last decade, these state-of-the-art machine learning methods have been applied to many different biological questions [111].

In addition to public cancer databases such as TCGA, the genetic information of normal tissues is stored in well-curated databases such as GTEx [112] and 1000 Genomes [113]. These databases are frequently used as control or baseline training data for deep learning [114]. Moreover, other non-curated large-scale data sources such as GEO (https://www.ncbi.nlm.nih.gov/geo/, accessed on 20 May 2021) can be leveraged to tackle critical aspects of cancer research. They store large amounts of biological data produced under various experimental setups (Figure 1); therefore, integrating GEO data with other data requires careful preprocessing. Overall, the increasing number of datasets facilitates the development of deep learning in bioinformatics research [115].

4.1. Challenges for Deep Learning in Cancer Research

Many studies in biology and medicine have used NGS and produced large amounts of data during the past few decades, moving the field into the big data era. Nevertheless, researchers still face a lack of data, in particular when investigating rare diseases or disease states. Researchers have developed a variety of potential solutions to overcome this lack-of-data challenge, such as imputation, augmentation, and transfer learning (Figure 3b). Data imputation aims at handling data sets with missing values [116]. It has been studied on various NGS omics data types to recover missing information [117]. It is known that gene expression levels can be altered by different regulatory elements, such as DNA-binding proteins, epigenomic modifications, and post-transcriptional modifications. Therefore, various models integrating such regulatory schemes have been introduced to impute missing omics data [118,119]. Some DNN-based models aim to predict gene expression changes based on genomic or epigenomic alterations. For instance, TDimpute aims at generating missing RNA-seq data by training a DNN on methylation data. The authors used TCGA and TARGET (https://ocg.cancer.gov/programs/target/data-matrix, accessed on 20 May 2021) data as proof of concept of the applicability of DNNs for data imputation in a multi-omics integration study [120]. Because this integrative model can exploit information at different levels of regulatory mechanisms, it can build a more detailed model and achieve better performance than a model built on a single-omics dataset [117,121]. The generative adversarial network (GAN) is a DNN structure for generating simulated data that is different from the original data but shows the same characteristics [122]. GANs can impute missing omics data from other multi-omics sources. Recently, GAN algorithms have been getting more attention in single-cell transcriptomics because they have been recognized as a complementary technique to overcome the limitations of scRNA-seq [123]. In contrast to data imputation and generation, other machine learning approaches aim to cope with a limited dataset in different ways. Transfer learning or few-shot learning, for instance, aims to reduce the search space with similar but unrelated datasets and guide the model to solve a specific set of problems [124]. These approaches first train models on data that are similar in characteristics and type but distinct from the problem set. After pre-training the model, it can be fine-tuned with the dataset of interest [125,126]. Thus, researchers are trying to introduce few-shot learning models and meta-learning approaches to omics and translational medicine. For example, Select-ProtoNet applied the Prototypical Network [127] model to TCGA transcriptome data and classified patients into two groups according to their clinical status [128]. AffinityNet predicts kidney and uterus cancer subtypes with gene expression profiles [129].
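The transfer-learning strategy described above can be sketched as pre-training on a large dataset and fine-tuning only the final layer on a small dataset of interest. The PyTorch snippet below uses random placeholder data and an arbitrary architecture; it demonstrates the generic freeze-and-fine-tune pattern, not any specific published model.

```python
# Minimal sketch of transfer learning: pre-train a network on a large, related
# dataset, then fine-tune only the task head on a small dataset of interest.
# Data here are random placeholders; shapes and layer sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1000, 128), nn.ReLU(),   # "encoder" layers, pre-trained below
    nn.Linear(128, 2),                 # task-specific head
)
loss_fn = nn.CrossEntropyLoss()

# Pre-training phase on the large dataset (sketched with random data).
x_big, y_big = torch.randn(512, 1000), torch.randint(0, 2, (512,))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(x_big), y_big).backward()
    opt.step()

# Fine-tuning phase: freeze the encoder, train only the head on the small set.
for p in model[0].parameters():
    p.requires_grad = False
x_small, y_small = torch.randn(32, 1000), torch.randint(0, 2, (32,))
opt = torch.optim.Adam(model[2].parameters(), lr=1e-4)
for _ in range(5):
    opt.zero_grad()
    loss_fn(model(x_small), y_small).backward()
    opt.step()
```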

Figure 3. (a) In various studies, NGS data are transformed into different forms. The 2-D transformed form is used for the convolution layer. Omics data are transformed to the pathway level, GO enrichment scores, or functional spectra. (b) DNN applications for different ways to handle the lack of data: imputation for missing data in multi-omics datasets; GANs for data imputation and in silico data simulation; transfer learning, which pre-trains the model on other datasets and then fine-tunes it. (c) Various types of information in biology. (d) Graph neural network examples. A GCN is applied to aggregate neighbor information. (Created with BioRender.com).

4.2. Molecular Characterization with Network and DNN Models

DNNs have been applied in multiple areas of cancer research. For instance, a DNN model trained on TCGA cancer data can aid molecular characterization by identifying cancer driver genes. At a very early stage, Yuan et al. built DeepGene, a cancer-type classifier. They implemented data sparsity reduction methods and trained the DNN model with somatic point mutations [130]. Lyu et al. [131] and DeepGx [132] embedded a 1-D gene expression profile into a 2-D array, ordered by chromosome position, to implement the convolution layer (Figure 3a). Other algorithms, such as deepDriver, use k-nearest neighbors for the convolution layer: a predefined number of neighboring gene mutation profiles is the input for the convolution layer, which aggregates mutation information of the k-nearest neighboring genes [11]. Instead of embedding to a 2-D image, DeepCC transformed gene expression data into functional spectra; the resulting model was able to capture molecular characteristics by training on cancer subtypes [14].
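The 2-D embedding trick used by these models can be sketched as follows: a chromosome-ordered expression vector is reshaped into an image-like array and passed through a small convolutional network. The grid size, layer sizes, and simulated data are assumptions for illustration, not the architectures of Lyu et al. or DeepGx.

```python
# Minimal sketch of the 2-D embedding strategy: a 1-D gene expression profile,
# ordered by chromosome position, is reshaped into a 2-D array so that a
# convolution layer can be applied. Values are simulated; sizes are arbitrary.
import numpy as np
import torch
import torch.nn as nn

n_genes = 10000
expression = np.random.rand(n_genes).astype(np.float32)   # chromosome-ordered

side = 100                                                 # 100 x 100 "image"
image = expression[: side * side].reshape(1, 1, side, side)

conv = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 5),                                       # e.g., 5 cancer subtypes
)
logits = conv(torch.from_numpy(image))
print(logits.shape)   # torch.Size([1, 5])
```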

Another DNN model was trained to infer the tissue of origin from the single-nucleotide variant (SNV) information of metastatic tumors. The authors built a model using TCGA/ICGC data and analyzed SNV patterns and corresponding pathways to predict the origin of cancer. They discovered that metastatic tumors retain the signature mutation pattern of their original cancer. In this context, their DNN model obtained even better accuracy than a random forest model [133] and, more importantly, better accuracy than human pathologists [12].

4.3. Tumor Heterogeneity with Network and DNN Models

As described in Section 4.1, cancer heterogeneity raises several issues, e.g., the tumor microenvironment. Thus, there are only a few applications of DNNs in intratumoral heterogeneity research. For instance, Menden et al. developed 'Scaden', a DNN model for the investigation of intratumor heterogeneity that deconvolves cell types in bulk-cell sequencing data. To overcome the lack of training datasets, the authors generated in silico simulated bulk-cell sequencing data based on single-cell sequencing data [134]. It is presumed that deconvolving cell types can be achieved by knowing all possible expression profiles of the cells [36]. However, this information is typically not available. Recently, to tackle this problem, single-cell sequencing-based studies were conducted. Because of technical limitations, single-cell sequencing data contain a large amount of missing data, noise, and batch effects [135]. Thus, various machine learning methods were developed to process single-cell sequencing data. They aim at mapping single-cell data onto a latent space. For example, scDeepCluster implemented an autoencoder and trained it on gene-expression levels from single-cell sequencing. During the training phase, the encoder and decoder work as a denoiser while embedding high-dimensional gene-expression profiles into lower-dimensional vectors [136]. This autoencoder-based method can produce biologically meaningful feature vectors in various contexts, from tissue cell types [137] to different cancer types [138,139].

4.4. Drug Target Identification with Networks and DNN Models

In addition to NGS datasets, large-scale anticancer drug assays have enabled the training of DNNs. Moreover, non-cancer drug response assay datasets can also be incorporated with cancer genomic data. In cancer research, a multidisciplinary approach has been widely applied to repurpose non-oncology drugs for cancer treatment. This drug repurposing is faster than de novo drug discovery. Furthermore, combination therapy with a non-oncology drug can be beneficial to overcome the heterogeneous properties of tumors [85]. The deepDR algorithm integrated ten drug-related networks and trained deep autoencoders. It used a random-walk-based algorithm to represent graph information as feature vectors. This approach integrated network analysis with a DNN model and was validated with an independent drug-disease dataset [15].

The authors of CDRscan performed an integrative analysis of cell-line-based assay datasets and other drug and genomics datasets, showing that DNN models can enhance computational models for improved drug sensitivity predictions [140]. Additionally, similar to previous network-based models, multi-omics applications of drug-targeted DNN studies can show higher prediction accuracy than single-omics methods. MOLI integrated genomic and transcriptomic data to predict the drug responses of TCGA patients [141].

4.5. Graph Neural Network Model

In general, the advantage of using a biological network is that it can produce more comprehensive and interpretable results from high-dimensional omics data. Furthermore, in an integrative multi-omics data analysis, network-based integration can improve interpretability over traditional approaches. Instead of pre- or post-integration of a network, recently developed graph neural networks use biological networks as the base structure for the learning network itself. For instance, various pathways or interactome information can be integrated as the learning structure of a DNN, aggregating heterogeneous information. In a GNN, the convolution is performed on the provided network structure of the data. The convolution on a biological network therefore allows the GNN to focus on the relationships among neighboring genes. In the graph convolution layer, the convolution process integrates information from neighboring genes and learns topological information (Figure 3d). Consequently, such a model can aggregate information from far-distant neighbors and thus can outperform other machine learning models [142].
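One graph-convolution step of the kind described above can be written compactly: features of each gene are averaged over its network neighbors (with self-loops and symmetric normalization) and passed through a learned linear map. The sketch below uses a toy network and random weights purely to show the aggregation.

```python
# Minimal sketch of one graph-convolution step on a biological network:
# each gene's feature vector is updated from its neighbors via a normalized
# adjacency matrix, then passed through a linear map. Network and features
# are illustrative; the weights would normally be learned.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)        # gene-gene interaction network
A_hat = A + np.eye(4)                            # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt         # symmetric normalization

X = np.random.rand(4, 8)                         # per-gene omics features
W = np.random.rand(8, 4)                         # weight matrix (random here)

H = np.maximum(A_norm @ X @ W, 0.0)              # one GCN layer with ReLU
print(H.shape)                                   # (4 genes, 4 hidden features)
```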

In the context of the gene expression inference problem, the main question is whether a gene's expression level can be explained by aggregating the neighboring genes. A single-gene inference study by Dutil et al. showed that the GNN model outperformed other DNN models [143]. Moreover, in cancer research, such GNN models can identify cancer-related genes with better performance than other network-based models, such as HotNet2 and MutSigCV [144]. A recent GNN study with a multi-omics integrative analysis identified 165 new cancer genes as interaction partners of known cancer genes [145]. Additionally, in the synthetic lethality area, a dual-dropout GNN outperformed previous bioinformatics tools for predicting synthetic lethality in tumors [146]. GNNs were also able to classify cancer subtypes based on pathway activity measures with RNA-seq data: Lee et al. implemented a GNN for cancer subtyping, tested it on five cancer types, and selected informative pathways for subtype classification [147]. Furthermore, GNNs are also getting more attention in drug repositioning studies. As described in Section 3.3, drug discovery requires integrating various networks in both chemical and genomic spaces (Figure 3d). Chemical structures, protein structures, pathways, and other multi-omics data have been used in drug-target identification and repurposing studies (Figure 3c). Each of the proposed applications specializes in a different drug-related task. Sun et al. summarized GNN-based drug discovery studies and categorized them into four classes: molecular property and activity prediction, interaction prediction, synthesis prediction, and de novo drug design. The authors also point out four challenges in GNN-mediated drug discovery. First, as described before, there is a lack of drug-related datasets. Second, current GNN models cannot fully represent the 3-D structures of chemical molecules and proteins. The third challenge is integrating heterogeneous network information; drug discovery usually requires a multi-modal integrative analysis with various networks, and GNNs can improve this integrative analysis. Lastly, although GNNs operate on graphs, their stacked layers still make the models hard to interpret [148].

4.6. Shortcomings in AI and Revisiting Validity of Biological Networks as Prior Knowledge

The previous sections reviewed a variety of DNN-based approaches that perform well on numerous applications. However, deep learning is hardly a panacea for all research questions. In the following, we will discuss potential limitations of DNN models. In general, DNN models with NGS data have two significant issues: (i) data requirements and (ii) interpretability. Usually, deep learning needs a large amount of training data for reasonable performance, which is more difficult to obtain for biomedical omics data than, for instance, for image data. Today, there are not many NGS datasets that are well curated and annotated for deep learning. This may explain why most DNN studies are in cancer research [110,149]. Moreover, deep learning models are hard to interpret and are typically considered black boxes. Highly stacked layers in a deep learning model make it hard to interpret its decision-making rationale. Although the methodology to understand and interpret deep learning models has improved, the ambiguity in the models' decision-making still hinders the transition of deep learning models into translational medicine [149,150].

As described before, biological networks are employed in various computational analyses for cancer research. The studies applying DNNs demonstrated many different approaches to using prior knowledge for systematic analyses. Before discussing GNN applications, the validity of biological networks in a DNN model needs to be examined. The LINCS program analyzed data of 'The Connectivity Map (CMap) project' to understand the regulatory mechanisms of gene expression by inferring whole gene expression profiles from a small set of genes (https://lincsproject.org/, accessed on 20 May 2021) [151,152]. The LINCS program found that genome-wide expression levels can be inferred from only about 1000 genes, which they called 'landmark genes'. Subsequently, Chen et al. started with these 978 landmark genes and tried to predict the remaining gene expression levels with DNN models. Trained by integrating large-scale public NGS data, these models showed better performance than a linear regression model. The authors conclude that the performance advantage originates from the DNN's ability to model non-linear relationships between genes [153].

Following this study, Beltin et al. extensively investigated various biological networks in the same context of inferring gene expression levels. They set up a simplified representation of gene expression status and solved a binary classification task. To show the relevance of a biological network, they compared gene expression levels inferred from different sets of genes: neighboring genes in a PPI network, random genes, and all genes. However, in their study incorporating TCGA and GTEx datasets, the random-network model outperformed the model built on a known biological network, such as StringDB [154]. While network-based approaches can add valuable insights to an analysis, this study shows that they cannot be seen as a panacea, and a careful evaluation is required for each data set and task. In particular, this result may not reflect full biological complexity because of the oversimplified problem setup, which did not consider relative gene-expression changes. Additionally, the incorporated biological networks may not be suitable for inferring gene expression profiles because they consist of expression-regulating interactions, non-expression-regulating interactions, and various in vivo and in vitro interactions.

“However, although recently sophisticated applications of deep learning showed improved accuracy, it does not reflect a general advancement. Depending on the type of NGS data, the experimental design, and the question to be answered, a proper approach and specific deep learning algorithms need to be considered. Deep learning is not a panacea. In general, to employ machine learning and systems biology methodology for a specific type of NGS data, a certain experimental design, a particular research question, the technology, and network data have to be chosen carefully.”

References

  1. Janes, K.A.; Yaffe, M.B. Data-driven modelling of signal-transduction networks. Nat. Rev. Mol. Cell Biol. 2006, 7, 820–828.
  2. Kreeger, P.K.; Lauffenburger, D.A. Cancer systems biology: A network modeling perspective. Carcinogenesis 2010, 31, 2–8.
  3. Vucic, E.A.; Thu, K.L.; Robison, K.; Rybaczyk, L.A.; Chari, R.; Alvarez, C.E.; Lam, W.L. Translating cancer ‘omics’ to improved outcomes. Genome Res. 2012, 22, 188–195.
  4. Hoadley, K.A.; Yau, C.; Wolf, D.M.; Cherniack, A.D.; Tamborero, D.; Ng, S.; Leiserson, M.D.; Niu, B.; McLellan, M.D.; Uzunangelov, V.; et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell 2014, 158, 929–944.
  5. Hutter, C.; Zenklusen, J.C. The cancer genome atlas: Creating lasting value beyond its data. Cell 2018, 173, 283–285.
  6. Chuang, H.Y.; Lee, E.; Liu, Y.T.; Lee, D.; Ideker, T. Network-based classification of breast cancer metastasis. Mol. Syst. Biol. 2007, 3, 140.
  7. Zhang, W.; Chien, J.; Yong, J.; Kuang, R. Network-based machine learning and graph theory algorithms for precision oncology. NPJ Precis. Oncol. 2017, 1, 25.
  8. Ngiam, K.Y.; Khor, W. Big data and machine learning algorithms for health-care delivery. Lancet Oncol. 2019, 20, e262–e273.
  9. Creixell, P.; Reimand, J.; Haider, S.; Wu, G.; Shibata, T.; Vazquez, M.; Mustonen, V.; Gonzalez-Perez, A.; Pearson, J.; Sander, C.; et al. Pathway and network analysis of cancer genomes. Nat. Methods 2015, 12, 615.
  10. Reyna, M.A.; Haan, D.; Paczkowska, M.; Verbeke, L.P.; Vazquez, M.; Kahraman, A.; Pulido-Tamayo, S.; Barenboim, J.; Wadi, L.; Dhingra, P.; et al. Pathway and network analysis of more than 2500 whole cancer genomes. Nat. Commun. 2020, 11, 729.
  11. Luo, P.; Ding, Y.; Lei, X.; Wu, F.X. deepDriver: Predicting cancer driver genes based on somatic mutations using deep convolutional neural networks. Front. Genet. 2019, 10, 13.
  12. Jiao, W.; Atwal, G.; Polak, P.; Karlic, R.; Cuppen, E.; Danyi, A.; De Ridder, J.; van Herpen, C.; Lolkema, M.P.; Steeghs, N.; et al. A deep learning system accurately classifies primary and metastatic cancers using passenger mutation patterns. Nat. Commun. 2020, 11, 728.
  13. Chaudhary, K.; Poirion, O.B.; Lu, L.; Garmire, L.X. Deep learning–based multi-omics integration robustly predicts survival in liver cancer. Clin. Cancer Res. 2018, 24, 1248–1259.
  14. Gao, F.; Wang, W.; Tan, M.; Zhu, L.; Zhang, Y.; Fessler, E.; Vermeulen, L.; Wang, X. DeepCC: A novel deep learning-based framework for cancer molecular subtype classification. Oncogenesis 2019, 8, 44.
  15. Zeng, X.; Zhu, S.; Liu, X.; Zhou, Y.; Nussinov, R.; Cheng, F. deepDR: A network-based deep learning approach to in silico drug repositioning. Bioinformatics 2019, 35, 5191–5198.
  16. Issa, N.T.; Stathias, V.; Schürer, S.; Dakshanamurthy, S. Machine and deep learning approaches for cancer drug repurposing. In Seminars in Cancer Biology; Elsevier: Amsterdam, The Netherlands, 2020.
  17. Weinstein, J.N.; Collisson, E.A.; Mills, G.B.; Shaw, K.R.M.; Ozenberger, B.A.; Ellrott, K.; Shmulevich, I.; Sander, C.; Stuart, J.M.; Network, C.G.A.R.; et al. The cancer genome atlas pan-cancer analysis project. Nat. Genet. 2013, 45, 1113.
  18. The ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium. Pan-cancer analysis of whole genomes. Nature 2020, 578, 82.
  19. King, M.C.; Marks, J.H.; Mandell, J.B. Breast and ovarian cancer risks due to inherited mutations in BRCA1 and BRCA2. Science 2003, 302, 643–646.
  20. Courtney, K.D.; Corcoran, R.B.; Engelman, J.A. The PI3K pathway as drug target in human cancer. J. Clin. Oncol. 2010, 28, 1075.
  21. Parker, J.S.; Mullins, M.; Cheang, M.C.; Leung, S.; Voduc, D.; Vickery, T.; Davies, S.; Fauron, C.; He, X.; Hu, Z.; et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J. Clin. Oncol. 2009, 27, 1160.
  22. Yersal, O.; Barutca, S. Biological subtypes of breast cancer: Prognostic and therapeutic implications. World J. Clin. Oncol. 2014, 5, 412.
  23. Zhao, L.; Lee, V.H.; Ng, M.K.; Yan, H.; Bijlsma, M.F. Molecular subtyping of cancer: Current status and moving toward clinical applications. Brief. Bioinform. 2019, 20, 572–584.
  24. Jones, P.A.; Issa, J.P.J.; Baylin, S. Targeting the cancer epigenome for therapy. Nat. Rev. Genet. 2016, 17, 630.
  25. Huang, S.; Chaudhary, K.; Garmire, L.X. More is better: Recent progress in multi-omics data integration methods. Front. Genet. 2017, 8, 84.
  26. Chin, L.; Andersen, J.N.; Futreal, P.A. Cancer genomics: From discovery science to personalized medicine. Nat. Med. 2011, 17, 297.

Use of Systems Biology in Anti-Microbial Drug Development

Genomics, Computational Biology and Drug Discovery for Mycobacterial Infections: Fighting the Emergence of Resistance. Asma Munir, Sundeep Chaitanya Vedithi, Amanda K. Chaplin and Tom L. Blundell. Front. Genet., 04 September 2020 | https://doi.org/10.3389/fgene.2020.00965

In an earlier review article (Waman et al., 2019), we discussed various computational approaches and experimental strategies for drug target identification and structure-guided drug discovery. In this review we discuss the impact of the era of precision medicine, where the genome sequences of pathogens can give clues about the choice of existing drugs, and repurposing of others. Our focus is directed toward combatting antimicrobial drug resistance with emphasis on tuberculosis and leprosy. We describe structure-guided approaches to understanding the impacts of mutations that give rise to antimycobacterial resistance and the use of this information in the design of new medicines.

Genome Sequences and Proteomic Structural Databases

In recent years, there have been many focused efforts to define the amino-acid sequences of the M. tuberculosis pan-genome and then to define the three-dimensional structures and functional interactions of these gene products. This work has led to essential genes of the bacteria being revealed and to a better understanding of the genetic diversity in different strains that might lead to a selective advantage (Coll et al., 2018). This will help with our understanding of the mode of antibiotic resistance within these strains and aid structure-guided drug discovery. However, only ∼10% of the ∼4128 proteins have structures determined experimentally.

Several databases have been developed to integrate the genomic and/or structural information linked to drug resistance in Mycobacteria (Table 1). These invaluable resources can contribute to better understanding of molecular mechanisms involved in drug resistance and improvement in the selection of potential drug targets.

There is a dearth of information related to structural aspects of proteins from M. leprae and their oligomeric and hetero-oligomeric organization, which has limited the understanding of physiological processes of the bacillus. The structures of only 12 proteins have been solved and deposited in the protein data bank (PDB). However, the high sequence similarity in protein coding genes between M. leprae and M. tuberculosis allows computational methods to be used for comparative modeling of the proteins of M. leprae. Mainly monomeric models using single template modeling have been defined and deposited in the Swiss Model repository (Bienert et al., 2017), in Modbase (Pieper et al., 2014), and in a collection with other infectious disease agents (Sosa et al., 2018). There is a need for multi-template modeling and building homo- and hetero-oligomeric complexes to better understand the interfaces, druggability and impacts of mutations.

We are now exploiting Vivace, a multi-template modeling pipeline developed in our lab for modeling the proteomes of M. tuberculosis (CHOPIN, see above) and M. abscessus [Mabellini Database (Skwark et al., 2019)], to model the proteome of M. leprae. We emphasize the need for understanding the protein interfaces that are critical to function. An example of this is the RNA-polymerase holoenzyme complex from M. leprae. We first modeled the structure of this hetero-hexamer complex and later deciphered the binding patterns of rifampin (Vedithi et al., 2018; Figures 1A,B). Rifampin is a known drug used to treat tuberculosis and leprosy. Owing to high rifampin resistance in tuberculosis and emerging resistance in leprosy, we used an approach known as “Computational Saturation Mutagenesis” to identify sites on the protein that are less impacted by mutations. In this study, we were able to understand the association between the predicted impacts of mutations on the structure and phenotypic rifampin-resistance outcomes in leprosy.

FIGURE 2

Figure 2. (A) Stability changes predicted by mCSM for systematic mutations in the ß-subunit of RNA polymerase in M. leprae. The maximum destabilizing effect from among all 19 possible mutations at each residue position is used as a weighting factor for the color map, which runs from red (highly destabilizing) to white (neutral to stabilizing) (Vedithi et al., 2020). (B) One of the known mutations in the ß-subunit of RNA polymerase, the S437H substitution, which resulted in the maximum destabilizing effect [-1.701 kcal/mol (mCSM)] among all 19 possibilities at this position. In the mutant, histidine (residue in green) forms hydrogen bonds with S434 and Q438, aromatic interactions with F431, and other ring-ring and π interactions with the surrounding residues, which can impact the shape of the rifampin binding pocket and rifampin affinity for the ß-subunit [-0.826 log(affinity fold change) (mCSM-lig)]. Orange dotted lines represent weak hydrogen bond interactions. Ring-ring and intergroup interactions are depicted in cyan. Aromatic interactions are represented in sky blue and carbonyl interactions in pink dotted lines. Green dotted lines represent hydrophobic interactions (Vedithi et al., 2020).
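The per-residue weighting used for the color map in Figure 2A can be reproduced schematically: for each residue, take the most destabilizing predicted stability change among the possible substitutions. The snippet below assumes mCSM-like ΔΔG predictions are already available as input; the positions and values are invented for illustration and are not real predictions.

```python
# Minimal sketch of deriving a per-residue weighting from computational
# saturation mutagenesis output. Assumes a predictor such as mCSM has already
# produced a ddG (kcal/mol) value for substitutions at each position; in
# practice the wild-type residue is excluded, leaving 19 substitutions.
# The positions and values below are illustrative placeholders.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# residue position -> {mutant amino acid: predicted stability change}
predicted_ddg = {
    434: {aa: 0.2 - 0.05 * i for i, aa in enumerate(AMINO_ACIDS)},
    437: {aa: -0.1 * i for i, aa in enumerate(AMINO_ACIDS)},
}

for position, ddg_by_mutant in predicted_ddg.items():
    worst_mutant, worst_ddg = min(ddg_by_mutant.items(), key=lambda kv: kv[1])
    # The most destabilizing substitution weights the color map at this residue.
    print(f"residue {position}: max destabilization {worst_ddg:.2f} kcal/mol "
          f"({worst_mutant})")
```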

Examples of Understanding and Combatting Resistance

The availability of whole genome sequences in the present era has greatly enhanced the understanding of emergence of drug resistance in infectious diseases like tuberculosis. The data generated by the whole genome sequencing of clinical isolates can be screened for the presence of drug-resistant mutations. A preliminary in silico analysis of mutations can then be used to prioritize experimental work to identify the nature of these mutations.

FIGURE 3

Figure 3. (A) Mechanism of isoniazid activation and INH-NAD adduct formation. (B) Mutations mapped (Munir et al., 2019) on the structure of KatG (PDB ID:1SJ2; Bertrand et al., 2004).

Other articles related to Computational Biology, Systems Biology, and Bioinformatics on this online journal include:

20th Anniversary and the Evolution of Computational Biology – International Society for Computational Biology

Featuring Computational and Systems Biology Program at Memorial Sloan Kettering Cancer Center, Sloan Kettering Institute (SKI), The Dana Pe’er Lab

Quantum Biology And Computational Medicine

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

