Leaders in Pharmaceutical Business Intelligence (LPBI) Group

Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com


Artificial Intelligence: Genomics & Cancer


 

Artificial Intelligence Definitions

As a useful reference, HAI Associate Director Chris Manning has created an easy-to-use guide to key AI definitions.

Download the guide (PDF):

https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf?utm_source=Stanford+HAI&utm_campaign=1f98de67ae-EMAIL_CAMPAIGN_2020_11-02-20_01&utm_medium=email&utm_term=0_aaf04f4a4b-1f98de67ae-199837539

 

Updated on 8/27/2021

Workshop on Foundation Models

Streamed live by Stanford HAI (YouTube)

 
The Center for Research on Foundation Models (CRFM), a new initiative of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), invites you to the Workshop on Foundation Models, August 23-24, 2021. By foundation model (e.g. BERT, GPT-3, DALL-E), we mean a single model that is trained on raw data, potentially across multiple modalities, and that can be usefully adapted to a wide range of tasks. These models have demonstrated clear potential, which we see as the beginnings of a sweeping paradigm shift in AI. They represent a dramatic increase in capability in terms of accuracy, generation quality, and extrapolation to new tasks, but they also pose clear risks, such as use for widespread disinformation, potential exacerbation of historical inequities, and problematic centralization of power.

Given their anticipated impact, we invite you to join us at this workshop, where scholars reflecting a diverse array of perspectives, disciplinary backgrounds (e.g. social science, economics, computer science, law, philosophy, information science) and sectors (academia and industry) will convene to provide vital expertise on the many dimensions of foundation models. Broadly, we will address the opportunities, challenges, limitations, and societal impact of foundation models. Given that future AI systems will likely rely heavily on foundation models, it is imperative that we, as a community, come together to develop more rigorous principles for foundation models and guidance for their responsible development and deployment. Specific points of emphasis include:

  • What applications and communities might benefit the most from foundation models, and what are some of the unique application-specific obstacles?
  • How do we characterize and mitigate the disparate, and likely inequitable, effects of foundation models?
  • How do multimodal methods and grounding impact conversations around meaning and semantics in foundation models?
  • When foundation models are used in applications that cause harm, how do we handle matters of responsibility, accountability, and recourse?
  • What should be the professional norms and ethical and legal considerations around the release and deployment of foundation models?
  • How should various groups (e.g. academia, industry, government), given their complementary strengths, productively collaborate on developing foundation models?
  • Given that foundation models must be adapted for specific tasks, how do we evaluate them in ways that capture the needs of diverse stakeholders?
  • Foundation models generally coincide with the centralization of power: how do we reason about this centralization and its potential harms, and build ecosystems that better distribute the benefits of foundation models?
  • Data plays a central role in foundation models: how do we think about data sourcing, selection, and documentation, and how do we build principles to guide how data shapes foundation models?
  • The scale of foundation models complicates principled scientific study: how do we build foundation models in a sound manner given the potential inability to run comprehensive experiments, and how do we reaffirm our commitments to open and reproducible science in spite of this scale?
WATCH VIDEO
https://www.youtube.com/watch?v=AYPOzc50PHw
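
By "adapted to a wide range of tasks," the workshop description refers to taking a single pretrained model and specializing it with relatively little task-specific data. As a minimal, hedged sketch of what such adaptation can look like in code (our illustration, not part of the workshop program), the Python example below fine-tunes a pretrained BERT checkpoint for sentiment classification using the Hugging Face transformers and datasets libraries; the model name, dataset, and hyperparameters are illustrative assumptions.

# Minimal sketch (illustrative only): adapting a pretrained foundation model
# (BERT) to a downstream classification task by fine-tuning on a small sample.
# Assumes: pip install transformers datasets torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer

model_name = "bert-base-uncased"                       # pretrained "foundation" checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")                         # stand-in downstream task (review sentiment)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train = dataset["train"].shuffle(seed=0).select(range(2000)).map(tokenize, batched=True)

args = TrainingArguments(output_dir="bert-sentiment", num_train_epochs=1, per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=train)
trainer.train()                                        # a small labeled sample adapts the large pretrained model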
 

Updated on 8/25/2021

Anthropic raises $124 million to build more reliable, general AI systems

Anthropic, an AI safety and research company, has raised $124 million in a Series A. The financing round will support Anthropic in executing against its research roadmap and building prototypes of reliable and steerable AI systems.

The company is led by siblings Dario Amodei (CEO) and Daniela Amodei (President). The Anthropic team has previously conducted research into GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Anthropic will use the funding for computationally-intensive research to develop large-scale AI systems that are steerable, interpretable, and robust.

“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people. We’re thrilled to be working with investors that support us in this mission and expect to concentrate on research in the immediate term,” said Anthropic CEO Dario Amodei.

Anthropic will focus on research into increasing the safety of AI systems; specifically, the company is focusing on increasing the reliability of large-scale AI models, developing the techniques and tools to make them more interpretable, and building ways to more tightly integrate human feedback into the development and deployment of these systems.

The Series A round was led by Jaan Tallinn, technology investor and co-founder of Skype. The round included participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research, Eric Schmidt, and others.

To find out more about Anthropic’s research agenda and approach, you can read the company’s website and its job postings. The company is hiring researchers, engineers, and operational experts to support it in executing against its research roadmap. Find out more here: Anthropic.com.

SOURCE

https://www.anthropic.com/news/announcement

Updated on 6/11/2021

GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters

Wu Dao 2.0 is 10x larger than GPT-3. Imagine what it can do.

 

By Alberto Romero (Towards Data Science), 6 min read

 
 
 

Final thoughts: Wu Dao 2.0 towards AGI

Some of BAAI’s most important members expressed their thoughts on Wu Dao 2.0’s role on the road towards AGI (artificial general intelligence):

“The way to artificial general intelligence is big models and big computer. […] What we are building is a power plant for the future of AI. With mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”

— Dr. Zhang Hongjiang, chairman of BAAI

“These sophisticated models, trained on gigantic data sets, only require a small amount of new data when used for a specific feature because they can transfer knowledge already learned into new tasks, just like human beings. […] Large-scale pre-trained models are one of today’s best shortcuts to artificial general intelligence.”

— Blake Yan, AI researcher

“Wu Dao 2.0 aims to enable machines to think like humans and achieve cognitive abilities beyond the Turing test.”

— Tang Jie, lead researcher behind Wu Dao 2.0

They bet on GPT-like multimodal and multitasking models to reach AGI. Without a doubt, Wu Dao 2.0 — as GPT-3 before it — is an important step towards AGI. Yet, how much closer it will take us is debatable. Some experts argue we’ll need hybrid models to reach AGI. Others defend embodied AI, rejecting traditional bodiless paradigms, such as neural networks, entirely.

No one knows which is the right path. Even if larger pre-trained models are the logical trend today, we may be missing the forest for the trees, and we may end up reaching a less ambitious ceiling ahead. The only clear thing is that if the world has to suffer from environmental damage, harmful biases, or high economic costs, not even reaching AGI would be worth it.

 

SOURCE

https://towardsdatascience.com/gpt-3-scared-you-meet-wu-dao-2-0-a-monster-of-1-75-trillion-parameters-832cd83db484

What Computers Can’t Do: A Critique of Artificial Reason

by Hubert L. Dreyfus
 
https://www.goodreads.com/book/show/1039575.What_Computers_Can_t_Do

Updated on 6/7/2021

The State Of Data, May 2021

By Gil Press

Data is eating the World, one byte at a time

https://www.forbes.com/sites/gilpress/2021/05/31/the-state-of-data-may-2021/?sh=198964881cb4

 

Updated on 3/28/2021

Watch video

Kira Radinsky: Using Data To Fix the World #OCSummit19​

Mar 10, 2019
 
 

Kira Radinsky, Director of Data Science and Chief Scientist at eBay, details how she’s using algorithm-based techniques to predict – and thus prevent – calamities such as disease outbreaks and genocide.
 
 
 
 

Updated on 2/2/2021

Gartner: The future of AI is not as rosy as some might think

A Gartner report predicts that the second-order consequences of widespread AI will have massive societal impacts, to the point of making us unsure if and when we can trust our own eyes.

By Brandon Vigliarolo | January 25, 2021, 11:53 AM PST

AI ethics: The future of AI could become scary

Gartner has released a series of Predicts 2021 research reports, including one that outlines the serious, wide-reaching ethical and social problems it predicts artificial intelligence (AI) will cause in the next several years. In Predicts 2021: Artificial Intelligence and Its Impact on People and Society, five Gartner analysts report on different predictions they believe will come to fruition by 2025. The report calls particular attention to what it calls the second-order consequences of artificial intelligence, which arise as unintended results of new technologies.


Generative AI, for example, is now able to create amazingly realistic photographs of people and objects that don’t actually exist; Gartner predicts that by 2023, 20% of account takeovers will use deepfakes generated by this type of AI. “AI capabilities that can create and generate hyper-realistic content will have a transformational effect on the extent to which people can trust their own eyes,” the report said.

The report tackles five different predictions for the AI market, and gives recommendations for how businesses can address those challenges and adapt to the future: 

  • By 2025, pre-trained AI models will be largely concentrated among 1% of vendors, making responsible use of AI a societal concern
  • In 2023, 20% of successful account takeover attacks will use deepfakes as part of social engineering attacks
  • By 2024, 60% of AI providers will include harm/misuse mitigation as a part of their software
  • By 2025, 10% of governments will avoid privacy and security concerns by using synthetic populations to train AI 
  • By 2025, 75% of workplace conversations will be recorded and analyzed for use in adding organizational value and assessing risk

Each of those analyses is enough to make AI-watchers sit up and take notice, but combined they paint a picture of a grim future rife with ethical concerns, potential misuse of AI, and loss of privacy in the workplace.

How businesses can respond 

Concerns over AI’s effect on privacy and truth are sure to be major topics in the coming years if Gartner’s analysts are accurate in their predictions, and successful businesses will need to be ready to adapt quickly to those concerns.

A recurring theme in the report is the establishment of ethics boards at companies that rely on AI, whether as a service or a product. This is mentioned particularly for businesses that plan to record and analyze workplace conversations: boards with employee representation should be established to ensure fair use of conversation data, Gartner said.


Gartner also recommends that businesses establish criteria for responsible AI consumption and prioritize vendors that “can demonstrate responsible development of AI and clarity in addressing related societal concerns.”

As for security concerns surrounding deepfakes and generative AI, Gartner recommends that organizations should schedule training about deepfakes. “We are now entering a zero-trust world. Nothing can be trusted unless it is certified as authenticated using cryptographic digital signatures,” the report said. 
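
To make the point about cryptographic digital signatures concrete, here is a minimal Python sketch (our illustration, not Gartner’s recommendation) of signing a piece of content and verifying it later with the cryptography library; real deployments would add key management and certificate infrastructure, which are omitted here.

# Minimal sketch of content authentication with digital signatures (illustrative only).
# Assumes: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()      # held by the content creator
public_key = private_key.public_key()           # shared with anyone who needs to verify

content = b"bytes of an original, unaltered video"
signature = private_key.sign(content)           # published alongside the content

def is_authentic(data: bytes, sig: bytes) -> bool:
    """Return True only if the data is byte-for-byte what the creator signed."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, signature))                       # True
print(is_authentic(b"edited or deepfaked bytes", signature))  # False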

There’s a lot to digest in this report, from figures saying that the best deepfake detection software will top out at a 50% identification rate in the long term, to the prediction that in 2023 a major US corporation will adopt conversation analysis to determine employee compensation. There’s much to be worried about in these analyses, but potential antidotes are included as well. The full report is available at Gartner, but interested parties will need to pay for access.


SOURCE

https://www.techrepublic.com/google-amp/article/gartner-the-future-of-ai-is-not-as-rosy-as-some-might-think/?__twitter_impression=true

Updated on 1/17/2021

2020 Fall Conference Triangulating Intelligence: Melding Neuroscience, Psychology, and AI

This conference zeroed in on the latest research on cognitive science, neuroscience, vision, language, and thought, informing the pursuit of artificial intelligence.

Watch Videos
  • AI, NLP, and ML for text analysis of LPBI’s four e-Books: a collaboration between scientists and CS/AI specialists on what to look for and on setting guiding rules for algorithms searching the English text in:

Cancer: Volume One

Cancer: Volume Two

Genomics: Volume One

Genomics: Volume Two

 

https://pharmaceuticalintelligence.com/2020-summer-internship/

 

PORTAL for 2020 SUMMER INTERNSHIP @LPBI on Data Curation and Data Annotation

LPBI’s 2020 Summer Interns in Data Science:

Daniel Menzin, BSc BioMedical Engineering, expected, May 2021, Research Assistant 4, Core Applications Developer and Acting CTO 

https://pharmaceuticalintelligence.com/contributors-biographies/research-assistants/daniel-menzin-bsc-biomedical-engineering-expected-may-2021-research-assistant-4-core-applications-developer-and-acting-cto/

Noam Steiner Tomer, Summer Internship, Research Assistant 1

https://pharmaceuticalintelligence.com/contributors-biographies/research-assistants/noam-steiner-tomer-summer-internship-research-assistant-1/

 

UPDATED on 7/14/2020

NLP Resources

  • University of New South Wales NLP Dictionary

https://www.cse.unsw.edu.au/~billw/nlpdict.html

  • Princeton WordNet 3.1: Thesaurus/Dictionary

http://wordnetweb.princeton.edu/perl/webwn | https://wordnet.princeton.edu/download/current-version

  • Natural Language Toolkit (NLP Python library): https://www.nltk.org/
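
For orientation, here is a minimal sketch showing how the resources above are typically used together: tokenizing and part-of-speech tagging a sentence with NLTK, then looking a term up in Princeton WordNet. The sentence is an invented example, and the NLTK data package names may vary slightly between NLTK versions.

# Minimal sketch using the NLP resources listed above (NLTK + WordNet); illustrative only.
# Assumes: pip install nltk, plus the one-time data downloads below.
import nltk
nltk.download("punkt")                          # tokenizer models
nltk.download("averaged_perceptron_tagger")     # POS tagger model
nltk.download("wordnet")                        # Princeton WordNet data
from nltk.corpus import wordnet as wn

sentence = "Mutations in BRCA1 increase the risk of breast cancer."
tokens = nltk.word_tokenize(sentence)           # split the sentence into word tokens
print(nltk.pos_tag(tokens))                     # tag each token with its part of speech

# Look up "cancer" in the WordNet thesaurus/dictionary.
for synset in wn.synsets("cancer"):
    print(synset.name(), "-", synset.definition())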

UPDATED ON 6/19/2020

From @MIT

This week we released our annual list of 35 Innovators under 35. At a time when the news has felt relentless, getting to read about so many talented, mission-driven technologists has been a sight for sore eyes. It’s especially exciting to see how many of them are working directly with AI—either to advance the field through fundamental research or by applying it responsibly across their respective industries. Here are just a few of the people that inspired me:

Inioluwa Deborah Raji, 24, AI Now Institute. Inioluwa Deborah Raji was interning at machine-learning startup Clarifai when she had an experience she remembers as “horrible.” While building a computer vision model to help clients flag inappropriate images as “not safe for work,” she discovered it flagged photos of people of color much more often than those of white people. The realization led her down a path of algorithmic accountability research, where she is a leading voice today. Most recently, Amazon’s decision to implement a one-year moratorium on police use of Rekognition stemmed directly from a paper she co-authored with Joy Buolamwini (one of our innovators in 2018) to demonstrate the product’s discrimination. Read more here.

Manuel Le Gallo, 34, IBM Research. Much of the energy use in modern computing comes from the fact that data needs to be constantly transferred back and forth between the memory and the processor. So Manuel Le Gallo worked with his team at IBM Research to develop a system that uses memory itself to process data. Their early work has shown they can achieve both precision and huge energy savings. They recently completed a process using just 1% as much energy as would have been needed with conventional methods. Read more here.

Bo Li, 32, University of Illinois at Urbana-Champaign. A few years ago, Bo Li and her colleagues fooled an autonomous vehicle into reading a stop sign as one posting a 45 mph speed limit. All they’d done was place small black-and-white stickers on the sign in a pattern that seems random to the human eye. But to a neural network, the graffiti-like arrangement turned into a powerful “adversarial attack”—making it see something that was not there. It was one of the first times researchers had demonstrated how such vulnerabilities could translate into consequences in the physical world. Li now works on devising these subtle changes to better understand and prevent such attacks in the future. Read more here.
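
The stop-sign result above is specific to Bo Li’s research; as a generic illustration of the underlying idea of an adversarial perturbation, the hedged sketch below applies a fast-gradient-sign step to a toy, randomly initialized classifier and a random image, which stand in for a real traffic-sign model and photo.

# Minimal, illustrative sketch of a gradient-based adversarial perturbation (FGSM-style),
# not Bo Li's method. Assumes: pip install torch. Model and image are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # toy "sign classifier"
image = torch.rand(1, 3, 32, 32, requires_grad=True)              # stand-in for a sign photo
true_label = torch.tensor([0])                                     # class 0 = "stop sign"

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge each pixel slightly in the direction that increases the classifier's loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())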

Andrej Karpathy, 33, Tesla. As a graduate student at Stanford, Andrej Karpathy extended techniques for building what are known as convolutional neural networks (CNNs)—systems that broadly mimic the neuron structure in the visual cortex. By combining CNNs with other deep-learning approaches, he created a system that was not just better at recognizing individual items in images, but capable of seeing an entire scene full of objects and effectively building a story of what was happening in it and what might happen next. Karpathy is now applying his knowledge to Tesla, where he oversees neural networks for the cars’ Autopilot feature. Read more here.

Leila Pirhaji, 34, Revivemed. Our bodies contain 100,000 metabolites, tiny molecules involved in our metabolism that show the effects of our genes and lifestyle. Such metabolites include everything from blood sugars and cholesterol to obscure molecules that appear in significant numbers only when someone is sick. The problem is that measuring and identifying these molecules is expensive and time consuming, and fewer than 5% of them in a patient can be identified using common technologies. So Leila Pirhaji developed a platform that uses machine learning to do it much more quickly. Her work could help us better detect and treat diseases. Read more here.

See the full list of 35 Innovators Under 35 here.

SOURCE

The Algorithm newsletter, MIT Technology Review, June 19, 2020: “Meet these brilliant young AI innovators under 35”

 

Important resources

 

  • Natural Language Processing (NLP) with Python: The Free eBook

https://www.kdnuggets.com/2020/06/natural-language-processing-python-free-ebook.html#.Xt4-SgW3r4w.linkedin

  • Data Curation and Data Annotation 

https://www.oreilly.com/library/view/natural-language-annotation/9781449332693/

Natural Language Annotation for Machine Learning

by James Pustejovsky, Amber Stubbs
Released October 2012
Publisher(s): O’Reilly Media, Inc.
ISBN: 9781449306663


 

Breaking News May 2020

Artificial Intelligence Cannot Be Inventors, US Patent Office Rules

An AI system called DABUS “invented” two new devices, but the USPTO says only humans can do that.

By Samantha Cole
Apr 28 2020, 11:46am

On Monday, the United States Patent and Trademark Office published a decision that claims artificial intelligences cannot be inventors. Only “natural persons” currently have the right to get a patent.

Last year, two relatively mundane patents—one for a shape-shifting food container and another for an emergency flashlight—posed an existential question for international patent regulations around the world: Does an inventor have to be a human?

These two inventions were the work of DABUS, an artificial intelligence system created by physicist and AI researcher Stephen Thaler. Now, the USPTO has decided that DABUS and any other AI cannot be listed as an inventor on a patent filing.

Until now, US patent law was vague about whether machines could invent, referring to “individuals” as eligible inventors. Thaler, along with a group of patent law experts, argued that because Thaler didn’t have any expertise in containers or flashlights, and didn’t help DABUS make the inventions, it wouldn’t be right for him to be listed as the inventor.

“If I teach my Ph.D. student that and they go on to make a final complex idea, that doesn’t make me an inventor on their patent, so it shouldn’t with a machine,” Ryan Abbott, a law and health-sciences professor at the University of Surrey in the UK who led the group of legal experts in the AI patent project, told the Wall Street Journal last year.

In the UK, the DABUS patents were rejected under patent laws there that forbid non-natural persons from inventing. With this week’s announcement, the US has followed suit, stating that “only natural persons may be named as an inventor in a patent application.”

The DABUS case brings up similar questions of ownership and non-human rights—and even what makes us human, and what makes other entities not—as the infamous monkey-selfie copyright case, in which PETA argued that a monkey could own the rights to a selfie. Ultimately, under U.S. Copyright Office regulations stating that only photographs taken by humans can be copyrighted, PETA’s case was dismissed.

SOURCE

https://www.vice.com/en_us/article/akw5g4/artificial-intelligence-cannot-be-inventors-us-patent-office-rules

 
 

AI PORTAL on Genomics & Cancer @ LPBI

BACKGROUND ON AI and LPBI

  • Aviva was exposed to BioInformatics in Pharmaceutics through the work of Dr. Philip L. Lorenzi, Systems Biology Department, MD Anderson Cancer Center, in May 2014 at the BioIT Conference
  • Aviva and Larry published the following article in September 2014:

https://pharmaceuticalintelligence.com/2014/09/18/autophagy-modulating-proteins-and-small-molecules-candidate-targets-for-cancer-therapy-commentary-of-bioinformatics-approaches/

 

  • Aviva was exposed to NLP in Medicine in September 2014 and at the 2019 World Medical Innovation Forum Conference.
  • The presentation in April 2019 was given by Linguamatics:

https://www.linguamatics.com/

 

From an email by Aviva Lev-Ari, Saturday, February 8, 2020:

UPDATED on 7/19/2020

The concept of a HyperGraphDB and a set of rules need to be created for Dr. Larry’s 1,400 curations, more than half of which are about signaling pathways used in therapeutics via receptors.

Then we will have the software for automated, independent semantic text analysis of Dr. Larry’s UNIVERSE of articles – this was Dr. Williams’ great idea.

As of 7/19/2020, LHB’s UNIVERSE comprises 1,400 of the 5,875 articles (23.8%) of our IP in Journal articles.
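
One lightweight way to prototype the hypergraph idea before committing to a system such as HyperGraphDB is an in-memory hypergraph in Python, in which each hyperedge connects one curation to all of the pathway and receptor terms it mentions; the titles and terms below are placeholders, not Dr. Larry’s actual curations.

# Minimal sketch of a hypergraph over curations (placeholder data, illustrative only).
# Each hyperedge links one curated article to every pathway/receptor term it covers,
# so articles can be grouped by shared therapeutic pathway with a single lookup.
from collections import defaultdict

hyperedges = {
    "Curation A (placeholder)": {"EGFR", "MAPK signaling"},
    "Curation B (placeholder)": {"EGFR", "PI3K/AKT signaling"},
    "Curation C (placeholder)": {"VEGF receptor", "angiogenesis"},
}

# Invert the hypergraph: term -> curations whose hyperedge contains it.
term_index = defaultdict(set)
for curation, terms in hyperedges.items():
    for term in terms:
        term_index[term].add(curation)

print(term_index["EGFR"])   # curations that share the EGFR receptor pathway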

Please read the article below and send me your comments on its relevance to our current search for a Partner with expertise in Machine Learning, Natural Language Processing, and AI for text analysis.

https://www.wired.com/story/code-obsessed-novelist-builds-writing-bot-the-plot-thickens/

 

THE TASK AT HAND for 2020 SUMMER is FOUR BOOKS: 2 in Genomics and 2 in Cancer

I quote from 2/10/2020:

You need to invest in a very good Python coder who could:

1) Harvest the site

2) Clean it from HTML

3) Look at the markup of categories and titles and subtitles and store them separately 

4) Transform the data to XML 
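
A minimal Python sketch of the four quoted steps follows, using the requests and beautifulsoup4 packages and the standard-library xml module; the post URL and the CSS selectors are assumptions about a typical WordPress theme and have not been verified against PharmaceuticalIntelligence.com.

# Minimal sketch of the quoted pipeline: 1) harvest a post, 2) clean it from HTML,
# 3) keep title/category markup separately, 4) transform the data to XML.
# Assumes: pip install requests beautifulsoup4. URL and selectors are illustrative.
import requests
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup

url = "https://pharmaceuticalintelligence.com/example-post/"      # placeholder post URL

html = requests.get(url, timeout=30).text                          # 1) harvest
soup = BeautifulSoup(html, "html.parser")                          # 2) parse/clean HTML

title = soup.select_one("h1.entry-title")                          # 3) titles and categories
categories = [a.get_text(strip=True) for a in soup.select("a[rel~='category']")]
body_text = soup.get_text(" ", strip=True)

root = ET.Element("article")                                       # 4) transform to XML
ET.SubElement(root, "title").text = title.get_text(strip=True) if title else ""
cats = ET.SubElement(root, "categories")
for name in categories:
    ET.SubElement(cats, "category").text = name
ET.SubElement(root, "body").text = body_text

ET.ElementTree(root).write("article.xml", encoding="utf-8", xml_declaration=True)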

Assignments for June 16, 2020 11AM EST and during 2020 Summer 

 

  • Research Text Analysis with NLP algorithms 

Please LOOK at these four books, to which we will apply NLP algorithms this Summer:

  • Series B: Frontiers in Genomics Research

Volume One: Genomics Orientations for Personalized Medicine

Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology

  • Series C: e-Books on Cancer & Oncology

Volume One: Cancer Biology and Genomics for Disease Diagnosis

Volume Two: Cancer Therapies: Metabolic, Genomics, Interventional, Immunotherapy and Nanotechnology in Therapy Delivery

 

Four of LPBI Group’s 16 volumes in Medicine deal with Genomics, and two of LPBI Group’s 16 volumes in Medicine deal with Cancer and Oncology.

In June 2020 we are launching a new initiative on Artificial Intelligence in GENOMICS and in CANCER for pharmacological & therapeutics aims.

  • Applying NLP, ML, and AI to our six books is the goal, for automating the exploration of novel pathways and potential off-label indications for existing therapies (a minimal sketch of this idea follows this list)
  • For Artificial Intelligence in Medicine, search the SELECT CATEGORY field in the right-hand column of the Home Page
  • New content is forthcoming
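
As a first, heavily simplified step toward the automation goal named in the first bullet above, the sketch below counts sentence-level co-occurrences of drug and pathway terms as a crude signal for candidate pathway links and off-label hypotheses; the term lists and sentences are invented placeholders, and real use would substitute text extracted from the six e-Books.

# Minimal sketch: sentence-level co-occurrence of drug and pathway terms
# (placeholder data only; a crude stand-in for full NLP/ML relation extraction).
from collections import Counter
from itertools import product

drugs = {"imatinib", "metformin", "rapamycin"}            # illustrative term lists
pathways = {"mtor signaling", "ampk pathway", "bcr-abl"}

sentences = [                                             # stand-ins for e-Book sentences
    "Metformin activates the AMPK pathway and indirectly inhibits mTOR signaling.",
    "Rapamycin is a well-characterized inhibitor of mTOR signaling.",
    "Imatinib targets BCR-ABL in chronic myeloid leukemia.",
]

pairs = Counter()
for s in sentences:
    low = s.lower()
    hit_drugs = {d for d in drugs if d in low}
    hit_paths = {p for p in pathways if p in low}
    pairs.update(product(sorted(hit_drugs), sorted(hit_paths)))

for (drug, pathway), n in pairs.most_common():
    print(f"{drug} <-> {pathway}: {n} co-occurrence(s)")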

 
