
Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I published two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was about the importance of scientific curation of findings within the realm of social media and Web 2.0, a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. And through this new media, the process of curation would, in itself, generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system sat above a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 model, scientific dissemination took the form of hard-print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way into this new media as an outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad sides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

 

Even PeerJ, in a column celebrating an anniversary of a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang into full force BEFORE anyone had thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]
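To make the decentralization-by-blockchain idea above a bit more concrete, here is a minimal, hypothetical Python sketch of the core mechanism most Web3 visions rely on: blocks that reference their predecessor by cryptographic hash, so that altering any earlier record becomes detectable. This is an illustration only, not any production protocol or library.

```python
# Toy illustration (not a real blockchain protocol): blocks reference their
# predecessor by hash, which is what makes tampering with history detectable.
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose identity depends on its contents and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block_bytes = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(block_bytes).hexdigest()
    return block

genesis = make_block({"note": "first block"}, prev_hash="0" * 64)
second = make_block({"note": "a later record"}, prev_hash=genesis["hash"])

# Changing the genesis data would change its hash, breaking the link stored in `second`.
print(second["prev_hash"] == genesis["hash"])  # True
```

In a decentralized network, many independent nodes hold copies of such a chain, which is why proponents argue no single "Big Tech" gatekeeper is required to vouch for the record.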

Below is an article from MarketWatch.com’s Distributed Ledger series about the different blockchains and cryptocurrencies involved.

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin in 2022 and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities from NFT (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74%, from about $94 to about $164, after DeSo was listed on Coinbase Pro on Monday, before falling back to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application that was created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy, saying that DeSo now supports more than 200 social-media applications including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”
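As a companion to the smart-contract idea Gardner mentions above (programs stored on a blockchain that enforce rules automatically), below is a hedged, toy Python sketch of the kind of logic a simple token contract encodes. Real contracts run on-chain (for example, written in Solidity on Ethereum); the class, account names, and amounts here are purely illustrative.

```python
# Minimal sketch of the *idea* of a smart contract: program logic plus state
# that enforces rules automatically. This Python class only mimics a
# token-transfer rule; it does not run on any blockchain.
class ToyTokenContract:
    def __init__(self):
        self.balances = {}

    def mint(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        # The "contract" enforces the rule: no overdrafts, no exceptions.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

contract = ToyTokenContract()
contract.mint("alice", 10)
contract.transfer("alice", "bob", 4)
print(contract.balances)  # {'alice': 6, 'bob': 4}
```

On a real chain, this rule would be executed and verified by every participating node rather than by a single trusted server, which is the property the "creator coin" applications described above depend on.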

 

But before I get to the “selling monkeys to morons” quote, I want to talk about:

THE GOOD, THE BAD, AND THE UGLY

THE GOOD

My foray into Science 2.0, and my pondering of what the movement toward a Science 3.0 might look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who has also created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club-type format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, and he has published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009. As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm”, is on the Cornell preprint server at https://arxiv.org/abs/1301.2522

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.
As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals, including Science, Nature, and Scientific American, endorsed the term and suggested that scientists move their discussions online. However, even though thousands of scientists are at present on Science 2.0 platforms, Dr. Teif notes that the number of scientists subscribed to many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is:
  1. good because it multiplies the efforts of many scientists, including experts, and adds to scientific discourse unavailable in a 1.0 format
  2. good because online data sharing assists the process of discovery (as is evident with preprint servers, bio-curated databases, and GitHub projects)
  3. beneficial because open-access publishing provides free access to professional articles, and open access may be the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access” format which is not truly open access)
  4. good because sharing unfinished works, critiques, or opinions creates visibility for scientists, who can then receive credit for their expert commentary

There are a few concerns about Science 3.0 that Dr. Teif articulates:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be imperative in the dissemination of well-done, properly controlled scientific discovery.  Just as Science 2.0 relies on an army of scientific volunteers, the peer review process also involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data are presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical.  Many of these papers on such preprint servers were later either retracted or failed a stringent peer review process.

Now, many journals of the 1.0 format do not generally reward their peer reviewers, other than the self-credit that researchers note on their curricula vitae.  Some journals, like the MDPI journal family, do issue peer-reviewer credits which can be used to defray the high publication costs of open access (one area that many scientists lament about the open access movement: the burden of publication cost lies on the individual researcher).

An issue which is highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.
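As a purely hypothetical illustration of one way a curation workflow could flag information noise, the short Python sketch below scores new posts against an already-curated corpus using TF-IDF cosine similarity, so a curator could prioritize reviewing anything that closely overlaps existing material. The corpus, posts, and the idea of a similarity threshold are invented for this example and are not this platform's actual methodology.

```python
# Hypothetical sketch: flag candidate "noise" (near-duplicate posts) by comparing
# new submissions against a curated corpus with TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

curated_corpus = [
    "Open access publishing shifts costs to individual researchers.",
    "Peer review safeguards the credibility of preprint findings.",
]
new_posts = [
    "Open-access publishing moves publication costs onto the researcher.",  # overlaps corpus
    "A new nucleosome positioning dataset for chromatin studies.",          # likely novel
]

vectorizer = TfidfVectorizer().fit(curated_corpus + new_posts)
corpus_vecs = vectorizer.transform(curated_corpus)
post_vecs = vectorizer.transform(new_posts)

# A curator could sort by this score and set a review threshold that suits the corpus.
for post, sims in zip(new_posts, cosine_similarity(post_vecs, corpus_vecs)):
    print(f"max similarity to curated corpus = {sims.max():.2f} :: {post}")
```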

B.  Open Access Publishing Model leads to biases and inequalities in the idea selection

The open access publishing model has been compared to the model applied by the advertising industry years ago, in which publishers considered the journal articles to be “advertisements.”  However, NOTHING could be further from the truth.  In advertising, the publishers claim, the companies, not the consumer, pay for the ads.  In scientific open access publishing, however, although the consumer (the libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now put on the individual researcher.  Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers.  And although many universities try to pay the publishers’ fees when their researchers publish open access, it then still costs the consumer (the institution) as well as the individual researcher, limiting the cost savings to either.

However, this sets up a situation in which young researchers, who in general are not well funded, struggle with publication costs, creating a biased, inequitable system that rewards well-funded senior researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet, although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used.  In fact, they show just one comment per 100 views of a journal article on these systems.  In the traditional journals, editors are the referees of comments and have the ability to censor comments or discourse.  The article laments that commenting on journal articles should be as easy as commenting on other social sites; even so, scientists are not offering their comments or opinions on the matter.

In a personal experience, a well-written commentary goes through editors, who often reject a comment as if they were rejecting an original research article.  Thus, I believe, many scientists who fashion a well-researched and referenced reply never see it reach the light of day if it is not in the editor’s interest.

Therefore, anonymity is greatly needed, and its absence may be the reason scientific discourse is so limited on these types of Science 2.0 platforms.  Platforms that have had success in this arena include anonymous platforms like Wikipedia and certain closed LinkedIn professional groups, whereas more open platforms like Google Knowledge have been failures.

A great example on this platform was a very spirited conversation on LinkedIn about genomics, tumor heterogeneity, and personalized medicine, which we curated from the LinkedIn discussion (unfortunately LinkedIn has since closed many groups), seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized.  However, researchers would then not get any credit for their opinions or commentaries.

A major problem is how to take these intangible contributions and turn them into tangible assets that would both promote the discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!
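To sketch how an intangible contribution might become a tangible, creditable asset, the snippet below fingerprints a comment and wraps it in a timestamped record attributed to a (possibly pseudonymous) author. This is a plain-Python illustration using invented names; it is not an NFT standard such as ERC-721, nor any existing platform's API, but it captures the idea of attributing credit without forcing disclosure of identity.

```python
# Hedged illustration only: a timestamped, attributable record of a contribution.
# A real decentralized system would anchor this record on a shared ledger.
import hashlib
import json
import datetime

def mint_contribution_record(author, text):
    """Create a verifiable record that credits `author` for `text`."""
    fingerprint = hashlib.sha256(text.encode()).hexdigest()
    return {
        "author": author,                      # can be a pseudonym, preserving anonymity
        "content_hash": fingerprint,           # proves what was said without publishing it
        "minted_at": datetime.datetime.utcnow().isoformat() + "Z",
    }

record = mint_contribution_record(
    "anon-reviewer-42",
    "The reported tumor heterogeneity likely reflects clonal selection under therapy.",
)
print(json.dumps(record, indent=2))
```

The design point is that the content hash lets a reviewer later prove authorship of a specific comment (by revealing the text) while remaining pseudonymous in the meantime, which speaks to both the credit and anonymity concerns raised above.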

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

 

UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out; they gave their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like, in light of this new Metaverse.  However, we have to define each of these terms in light of Science, and not treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair amount of impressions (60) as well as interactors (50).

For the recording, on Twitter as well as in an audio format, please see below.

Tweet from Stephen J Williams (@StephenJWillia2), April 28, 2022: “Set a reminder for my upcoming Space! https://t.co/7mOpScZfGN @Pharma_BI @PSMTempleU #science3_0 @science2_0” (https://twitter.com/StephenJWillia2/status/1519776668176502792)

 

 

To introduce this discussion, here first are a few start-off materials which will frame this discourse.

 






The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, in which the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and what paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. However, this literature was in analog format, and we have moved to a more accessible, digital, open access format for both publications and raw data. Just as there was a structure for 1.0, like the Dewey Decimal System and indexing, 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 it relied on an army of mostly volunteers who did not have much in the way of incentivization to co-curate and organize the findings and the massive literature.

Each version of Science has its caveats: its benefits as well as its deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, seemed to somehow exacerbate this siloing. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0 or metaverse so that similar issues do not get in the way: namely, we need to create an organizing structure (maybe as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!

Are these new technologies the cure or is it just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The Following are some slides from representative Presentations

Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 

Read Full Post »

Greylock Partners Announces Unique $500 Million Venture to act as Seed Capital Funding for Earliest Stage Startups

Reporter: Stephen J. Williams, Ph.D.

Greylock Partners CEO Reid Hoffman announces a $500 million fund to help the earliest stage startups find capital.

See video below:

https://www.bloomberg.com/multimedia/api/embed/iframe?id=798828e9-7850-4c83-9348-a35d5fad3e1c

https://www.bloomberg.com/news/videos/2021-09-24/intv-sara-guoh-greylock-partners-video

See transcript from Bloomberg.com

00:00 This is a lot of money for seed-stage deals, which are typically smaller. Why do you want to make seed such a priority?

00:09 So seed has always been a priority for us. We’ve been active at this stage for a long time, and some of our biggest wins historically have been incubation and seed. So I think companies like Workday and Palo Alto Networks and, more recently, Abnormal and Snorkel. And then this year 70 percent of our investments were incubations or seeds before we announced this fund. And so when we saw this level of opportunity, we also wanted to make sure we had enough funding to really back entrepreneurs and to support them through their journey, and make sure entrepreneurs also know they have different options at the seed for the type of partners they work with.

00:41 Now at the seed stage you’re talking about companies in their infancy. How early are you investing? I mean, is this ideas-on-a-napkin stage with a couple of entrepreneurs that you believe in, or is it beyond that?

00:58 So there definitely is a whole range. We don’t catch every single person, like, the day they left their job. Right. But you know, Abnormal was a seed in 2018 when it was a slide deck and two co-founders. We backed another company recently as well on first capital; that was a repeat founder we have history with. Similarly no product yet, just an idea and an early team. And so the range of when we do seed really depends on when we encounter companies. We do like to get to know people as early as possible, and sometimes that’s the right time for us to write the check.

01:26 Obviously Greylock is a multi-stage venture capital firm, and I think founders might have the question here: you know, if you give me the seed funding, will follow-on and reserves come out of that same bucket? And what could this mean in terms of a longer-term relationship with Greylock? What’s the answer to that?

01:46 So the first thing I’d start with is, seeds for us are core investments. Right. So many firms look at them as options to then follow on. We look at seeds as investments we’re trying to make money on; we’re building a relationship for the long term to begin with. So I’d start with that. Then I’d say it is a third of our fund, so it is a big piece of our investing. And, you know, there are many instances where we then follow on and invest even more because our conviction continues or even grows. But the point of us doing seed is not just to follow on; it’s to make that investment.

02:24 How big is each deal? I mean, would you say that seed is the new Series A?

02:33 I think that… well, let’s see, the market data would tell us that round sizes overall have increased for the same level of progress. And I think that makes sense, right, the reason being that the market has become a lot smarter about the attractiveness of early-stage technology opportunities. And so great returns in tech venture capital over many years mean there’s more capital than ever, and people are savvier about software and Internet companies. But I’d say, you know, I think the nomenclature doesn’t matter so much. We think of it as being the first institutional partner to go to a set of founders.

03:12 The world is changing quickly. I mean, we’re still in the middle of a pandemic, and who would’ve known, you know, that working from home was going to be a thing 18 months ago. What are the trends that you are most excited about right now, that you’re doubling down on at the seed stage?

03:22 Yeah. So we invest across the technology spectrum, business and consumer. The one you just mentioned, in terms of just the sea change of the pandemic in terms of how we do our work together as one, I’m really excited about. But we’ve been investing in, let’s say, just this: there’s a shortage globally, because of the pandemic but even before, of human connection and intimacy, and people look for it online. And so we invest in companies like Discord and Common Room and Promotion that help people connect more online. So that’s one we’ll continue to invest in. And then of course we’re investing across all of your usual range of SaaS, social, data, A.I., etc., and then spending more and more time in fintech and crypto in particular.

04:10 Now what are the potential problems with seed stage? Is it that at a certain point, as the company develops, maybe they pivot, they change, and over time they could potentially ultimately compete with another one of your core portfolio companies? How do you manage that?

04:23 So it’s a good question, but it is also something that doesn’t only happen at the seed. And funnily enough, Greylock has been an investor in several companies that were, like, great companies post-pivot, right, so like first semester and Discord and Nextdoor after they decided to be what they are today. And so, you know, I’d start with the premise that our philosophy is that the company should do what’s best for the company. And, you know, our philosophy is to be fully behind companies and not to go invest in a bunch of competitors in a sector just because we like this sector. But if that were to happen, you know, we would just divide those interests within the firm and, like, make sure that there’s no information flow, and just address it in a reasonable way.

05:06 I’ve talked with many of your partners over the years about investing in more women. And I’m curious how you look at it as an opportunity to potentially, you know, spread the wealth a little bit across more women entrepreneurs, people of color, people who historically haven’t gotten a chance in Silicon Valley, and Silicon Valley hasn’t benefited from their ideas.

05:34 OK. So I’d say this is an issue that’s near and dear to my heart. We are working on it. Two of the last three founders I backed are women. One is a seed-stage founder. One of the founders I backed at the seed stage is Hispanic. But I would say, you know, one thing I want to make sure is clear: you want to back great founders from diverse backgrounds across the spectrum. And we wouldn’t, like, do it more in seed because seed isn’t important; it’s because it is important to us. Right. It’s just, across the portfolio, this is a priority.

From TechStartups

Source: https://techstartups.com/2021/09/22/greylock-partners-raises-500-million-invest-seed-stage-startups/

Greylock Partners raises $500 million to invest in seed-stage startups

Nickie Louise | Posted on September 22, 2021


Greylock Partners has raised $500 million to invest exclusively in seed-stage startups. The announcement comes a year after the firm raised $1 billion for its 16th flagship fund to invest in early- and growth-stage tech startups.

Guo and general partner Saam Motamedi said in an interview the fund is part of an expansion of a $1.1 billion fund, which we reported last year, to $1.6 billion, The Information reported. The funding is among the industry’s largest devoted to seed investments, which often represent a startup’s first outside capital.

The pool of funds will give the 56-year-old venture capital firm the ability to write large checks at “lean-in valuations” and emphasize its commitment to early-stage investing, said general partner Sarah Guo. In a thread post on Twitter, Greylock said, “We at @GreylockVC  are excited to announce we’ve raised $500M dedicated to seed investing. This is the industry’s largest pool of venture capital dedicated to backing founders at day one.”

Press Release from Greylock

More articles on Venture Capital on this Online Open Access Journal Include:

youngStartup Ventures “Where Innovation Meets Capital” – First Round of VC Firms Announced, August 4th – 6th, 2020.

Real Time Coverage @BIOConvention #BIO2019: Dealmakers’ Intentions: 2019 Market Outlook June 5 Philadelphia PA

Podcast Episodes by THE EUROPEAN VC

Real Time Coverage @BIOConvention #BIO2019: June 4 Morning Sessions; Global Biotech Investment & Public-Private Partnerships

37th Annual J.P. Morgan HEALTHCARE CONFERENCE: News at #JPM2019 for Jan. 8, 2019: Deals and Announcements

Tweet Collection by @pharma_BI and @AVIVA1950 and Re-Tweets for e-Proceedings 14th Annual BioPharma & Healthcare Summit, Friday, September 4, 2020, 8 AM EST to 3-30 PM EST – Virtual Edition

Read Full Post »

NCCN Shares Latest Expert Recommendations for Prostate Cancer in Spanish and Portuguese

Reporter: Stephen J. Williams, Ph.D.

Currently, many biomedical texts and US government agency guidelines are offered only in English, or in other languages only upon request. However, Spanish is spoken in a large number of countries worldwide, and medical texts in that language would address an under-served need. In addition, Portuguese is the main language of the largest country in South America, Brazil.

The LPBI Group and others have noticed this need for medical translation to other languages. Currently LPBI Group is translating their medical e-book offerings into Spanish (for more details see https://pharmaceuticalintelligence.com/vision/)

Below is an article on The National Comprehensive Cancer Network’s decision to offer their cancer treatment guidelines in Spanish and Portuguese.

Source: https://www.nccn.org/home/news/newsdetails?NewsId=2871

PLYMOUTH MEETING, PA [8 September, 2021] — The National Comprehensive Cancer Network® (NCCN®)—a nonprofit alliance of leading cancer centers in the United States—announces recently-updated versions of evidence- and expert consensus-based guidelines for treating prostate cancer, translated into Spanish and Portuguese. NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) feature frequently updated cancer treatment recommendations from multidisciplinary panels of experts across NCCN Member Institutions. Independent studies have repeatedly found that following these recommendations correlates with better outcomes and longer survival.

“Everyone with prostate cancer should have access to care that is based on current and reliable evidence,” said Robert W. Carlson, MD, Chief Executive Officer, NCCN. “These updated translations—along with all of our other translated and adapted resources—help us to define and advance high-quality, high-value, patient-centered cancer care globally, so patients everywhere can live better lives.”

Prostate cancer is the second most commonly occurring cancer in men, impacting more than a million people worldwide every year.[1] In 2020, the NCCN Guidelines® for Prostate Cancer were downloaded more than 200,000 times by people outside of the United States. Approximately 47 percent of registered users for NCCN.org are located outside the U.S., with Brazil, Spain, and Mexico among the top ten countries represented.

“NCCN Guidelines are incredibly helpful resources in the work we do to ensure cancer care across Latin America meets the highest standards,” said Diogo Bastos, MD, and Andrey Soares, MD, Chair and Scientific Director of the Genitourinary Group of The Latin American Cooperative Oncology Group (LACOG). The organization has worked with NCCN in the past to develop Latin American editions of the NCCN Guidelines for Breast Cancer, Colon Cancer, Non-Small Cell Lung Cancer, Prostate Cancer, Multiple Myeloma, and Rectal Cancer, and co-hosted a webinar on “Management of Prostate Cancer for Latin America” earlier this year. “We appreciate all of NCCN’s efforts to make sure these gold-standard recommendations are accessible to non-English speakers and applicable for varying circumstances.”

NCCN also publishes NCCN Guidelines for Patients®, containing the same treatment information in non-medical terms, intended for patients and caregivers. The NCCN Guidelines for Patients: Prostate Cancer were found to be among the most trustworthy sources of information online according to a recent international study. These patient guidelines have been divided into two books, covering early and advanced prostate cancer; both have been translated into Spanish and Portuguese as well.

NCCN collaborates with organizations across the globe on resources based on the NCCN Guidelines that account for local accessibility, consideration of metabolic differences in populations, and regional regulatory variation. They can be downloaded free-of-charge for non-commercial use at NCCN.org/global or via the Virtual Library of NCCN Guidelines App. Learn more and join the conversation with the hashtag #NCCNGlobal.


[1] Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global Cancer Statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin, in press. The online GLOBOCAN 2018 database is accessible at http://gco.iarc.fr/, as part of IARC’s Global Cancer Observatory.

About the National Comprehensive Cancer Network

The National Comprehensive Cancer Network® (NCCN®) is a not-for-profit alliance of leading cancer centers devoted to patient care, research, and education. NCCN is dedicated to improving and facilitating quality, effective, efficient, and accessible cancer care so patients can live better lives. The NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) provide transparent, evidence-based, expert consensus recommendations for cancer treatment, prevention, and supportive services; they are the recognized standard for clinical direction and policy in cancer management and the most thorough and frequently-updated clinical practice guidelines available in any area of medicine. The NCCN Guidelines for Patients® provide expert cancer treatment information to inform and empower patients and caregivers, through support from the NCCN Foundation®. NCCN also advances continuing education, global initiatives, policy, and research collaboration and publication in oncology. Visit NCCN.org for more information and follow NCCN on Facebook @NCCNorg, Instagram @NCCNorg, and Twitter @NCCN.

Please see LPBI Group’s efforts in medical text translation and Natural Language Processing of Medical Text at

Read Full Post »

Novartis uses a ‘dimmer switch’ medication to fine-tune gene therapy candidates

Reporter: Amandeep Kaur, BSc., MSc.

Using viral vectors, lipid nanoparticles, and other technologies, significant progress has been achieved in refining the delivery of gene treatments. However, modifications to the cargo itself are still needed to increase safety and efficacy by better controlling gene expression.

To that end, researchers at Children’s Hospital of Philadelphia (CHOP) have created a “dimmer switch” system that employs Novartis’ investigational Huntington’s disease medicine branaplam (LMI070) as a regulator to fine-tune the quantity of proteins generated from a gene therapy.

According to a new study published in Nature, the Xon system altered quantities of erythropoietin—which is used to treat anaemia associated with chronic renal disease—delivered to mice using viral vectors. The method has previously been licenced by Novartis, the maker of the Zolgensma gene therapy for spinal muscular atrophy.

The Xon system depends on a process known as “alternative splicing,” in which RNA is spliced to include or exclude specific exons of a gene, allowing the gene to code for multiple proteins. The team used branaplam, a small-molecule RNA-splicing modulator, for this platform. The medication was created to improve SMN2 gene splicing in order to cure spinal muscular atrophy. Novartis shifted its research to try the medication against Huntington’s disease after a trial failure.

A gene therapy’s payload remains dormant until oral branaplam is given, according to the Xon design. The medicine activates the expression of the therapy’s functional gene by causing it to splice in the desired way. Scientists from CHOP and the Novartis Institutes for BioMedical Research put the dimmer switch to the test in an Epo gene therapy delivered via adeno-associated viral vectors. The use of branaplam increased the mice’s blood Epo levels and raised hematocrit (the proportion of red blood cells in whole blood) by 60% to 70%, according to the researchers. The researchers gave the rodents branaplam again as their hematocrit decreased to baseline levels, and the therapy reinduced Epo to levels similar to those seen in the initial studies.

The researchers also demonstrated that the Xon system could be used to regulate progranulin expression, which is utilised to treat PGRN-deficient frontotemporal dementia and neuronal ceroid lipofuscinosis. The scientists emphasised that gene therapy requires a small treatment window to be both safe and effective.

In a statement, Beverly Davidson, Ph.D., the study’s senior author, said, “The dose of a medicine can define how high you want expression to be, and then the system can automatically ‘dim down’ at a pace corresponding to the half-life of the protein.”

“We may imagine scenarios in which a medication is used only once, such as to control the expression of foreign proteins required for gene editing, or only on a limited basis. Because the splicing modulators we examined are administered orally, compliance to control protein expression from viral vectors including Xon-based cassettes should be high.”

In gene-modifying medicines, scientists have tried a variety of approaches to alter gene expression. For example, methyl groups were utilised as a switch to turn on or off expression of genes in the gene-editing system CRISPR by a team of researchers from the Massachusetts Institute of Technology and the University of California, San Francisco.

Auxolytic, a biotech company founded by Stanford University academics, has described how knocking down a gene called UMPS could render T-cell therapies ineffective by depriving T cells of the nutrient uridine. Xon could also be tailored to work with cancer CAR-T cell therapy, according to the CHOP-Novartis researchers. The dimmer switch could help prevent cell depletion by halting CAR expression. Such a tuneable switch could also help CRISPR-based treatments by providing “a short burst” of production of CRISPR effector proteins to prevent undesirable off-target editing, the researchers said.

Source: https://www.fiercebiotech.com/research/novartis-fine-tunes-gene-therapy-a-huntington-s-disease-candidate-as-a-dimmer-switch?mkt_tok=Mjk0LU1RRi0wNTYAAAF-q1ives09mmSQhXDd_jhF0M11KBMt0K23Iru3ZMcZFf-vcFQwMMCxTOiWM-jHaEvtyGOM_ds_Cw6NuB9B0fr79a3Opgh32TjXaB-snz54d2xU_fw

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Gene Therapy could be a Boon to Alzheimer’s disease (AD): A first-in-human clinical trial proposed

Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/03/22/gene-therapy-could-be-a-boon-to-alzheimers-disease-ad-a-first-in-human-clinical-trial-proposed/

Top Industrialization Challenges of Gene Therapy Manufacturing

Guest Authors: Dr. Mark Szczypka and Clive Glover

https://pharmaceuticalintelligence.com/2021/03/29/top-industrialization-challenges-of-gene-therapy-manufacturing/

Dysregulation of ncRNAs in association with Neurodegenerative Disorders

Curator: Amandeep Kaur

https://pharmaceuticalintelligence.com/2021/01/11/dysregulation-of-ncrnas-in-association-with-neurodegenerative-disorders/

Cancer treatment using CRISPR-based Genome Editing System 

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2021/01/09/59906/

CRISPR-Cas9 and the Power of Butterfly Gene Editing

Reporter: Madison Davis

https://pharmaceuticalintelligence.com/2020/08/23/crispr-cas9-and-the-power-of-butterfly-gene-editing/

Gene Editing for Exon 51: Why CRISPR Snipping might be better than Exon Skipping for DMD

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/01/23/gene-editing-for-exon-51-why-crispr-snipping-might-be-better-than-exon-skipping-for-dmd/

Gene Editing: The Role of Oligonucleotide Chips

Curators: Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/01/07/gene-editing-the-role-of-oligonucleotide-chips/

Cause of Alzheimer’s Discovered: protein SIRT6 role in DNA repair process – low levels enable DNA damage accumulation

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2017/06/15/cause-of-alzheimers-discovered-protein-sirt6-role-in-dna-repair-process-low-levels-enable-dna-damage-accumulation/

Delineating a Role for CRISPR-Cas9 in Pharmaceutical Targeting

Author & Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/08/30/delineating-a-role-for-crispr-cas9-in-pharmaceutical-targeting/

Brain Science

Larry H Bernstein, MD, FCAP, Curator

https://pharmaceuticalintelligence.com/2015/11/03/brain-science/

Read Full Post »

Emergence of a new SARS-CoV-2 variant from GR clade with a novel S glycoprotein mutation V1230L in West Bengal, India

Authors: Rakesh Sarkar, Ritubrita Saha, Pratik Mallick, Ranjana Sharma, Amandeep Kaur, Shanta Dutta, Mamta Chawla-Sarkar

Reporter and Original Article Co-Author: Amandeep Kaur, B.Sc. , M.Sc.

Abstract
Since its inception in late 2019, SARS-CoV-2 has evolved, resulting in the emergence of various variants in different countries. These variants have spread worldwide, resulting in the devastating second wave of the COVID-19 pandemic in many countries, including India, since the beginning of 2021. To control this pandemic, continuous mutational surveillance and genomic epidemiology of circulating strains are very important. In this study, we performed mutational analysis of the protein coding genes of SARS-CoV-2 strains (n=2000) collected during January 2021 to March 2021. Our data revealed the emergence of a new variant in West Bengal, India, which is characterized by the presence of 11 co-existing mutations including D614G, P681H and V1230L in the S-glycoprotein. This new variant was identified in 70 out of 412 sequences submitted from West Bengal. Interestingly, among these 70 sequences, 16 sequences also harbored E484K in the S glycoprotein. Phylogenetic analysis revealed that strains of this new variant emerged from the GR clade (B.1.1) and formed a new cluster. We propose to name this variant GRL, or lineage B.1.1/S:V1230L, due to the presence of V1230L in the S glycoprotein along with GR clade-specific mutations. Co-occurrence of P681H, previously observed in the UK variant, and E484K, previously observed in the South African variant and the California variant, demonstrates the convergent evolution of SARS-CoV-2 mutations. V1230L, present within the transmembrane domain of the S2 subunit of the S glycoprotein, has not yet been reported from any other country. Substitution of valine with the more hydrophobic amino acid leucine at position 1230 of the transmembrane domain, which has a role in S protein binding to the viral envelope, could strengthen the interaction of the S protein with the viral envelope, increase the deposition of S protein in the viral envelope, and thus positively regulate virus infection. The P681H and E484K mutations have already been demonstrated to favor increased infectivity and immune evasion, respectively. Therefore, the new variant having D614G, P681H, V1230L and E484K is expected to have better infectivity, transmissibility and immune evasion characteristics, which may pose an additional threat, along with B.1.617, in the ongoing COVID-19 pandemic in India.

Reference: Sarkar, R. et al. (2021) Emergence of a new SARS-CoV-2 variant from GR clade with a novel S glycoprotein mutation V1230L in West Bengal, India. medRxiv. https://doi.org/10.1101/2021.05.24.21257705 ; https://www.medrxiv.org/content/10.1101/2021.05.24.21257705v1
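For readers curious about what the co-occurrence analysis described in the abstract looks like computationally, here is a minimal, illustrative Python sketch that tallies how often a defining set of S-glycoprotein mutations appears together across per-sequence mutation calls. The sequence IDs and calls are invented for illustration; this is not the authors' actual pipeline, which worked from thousands of submitted genomes.

```python
# Illustrative sketch only: count co-occurrence of a defining mutation set
# across per-sequence mutation calls (in practice, calls come from alignment
# of each genome to the Wuhan-Hu-1 reference).
from collections import Counter

sequences = {  # hypothetical per-sequence spike mutation calls
    "seq1": {"S:D614G", "S:P681H", "S:V1230L"},
    "seq2": {"S:D614G", "S:P681H", "S:V1230L", "S:E484K"},
    "seq3": {"S:D614G"},
}

defining_set = {"S:D614G", "S:P681H", "S:V1230L"}

counts = Counter()
for seq_id, muts in sequences.items():
    if defining_set <= muts:                  # all defining mutations present
        counts["defining set"] += 1
        if "S:E484K" in muts:                 # additional mutation of interest
            counts["defining set + E484K"] += 1

print(counts)  # e.g. Counter({'defining set': 2, 'defining set + E484K': 1})
```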

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur

https://pharmaceuticalintelligence.com/2021/04/13/fighting-chaos-with-care/

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/04/19/identification-of-novel-genes-in-human-that-fight-covid-19-infection/

Mechanism of Thrombosis with AstraZeneca and J & J Vaccines: Expert Opinion by Kate Chander Chiang & Ajay Gupta, MD

Reporter & Curator: Dr. Ajay Gupta, MD

https://pharmaceuticalintelligence.com/2021/04/14/mechanism-of-thrombosis-with-astrazeneca-and-j-j-vaccines-expert-opinion-by-kate-chander-chiang-ajay-gupta-md/

Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm which predicts the onset of type 2 diabetes up to five years in advance by utilizing routinely collected medical data. The researchers described their AI model as notable and distinctive because of its specific design, which performs assessments at the population level.

The first author Mathieu Ravaut, M.Sc. of the University of Toronto and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and tested on data from more than 236,000 patients. The data used to build the algorithm included each patient's medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic (ROC) curve of 80.26.

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”
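For readers unfamiliar with the metric quoted above, the sketch below shows how a test-set area under the ROC curve is computed for a binary onset-prediction model. It is a minimal stand-in using synthetic data and a generic scikit-learn classifier, not the model architecture or the administrative health data used in the study.

```python
# Minimal sketch, not the published model: generic classifier + held-out test
# set, just to show where a "test ROC AUC" number comes from.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                     # stand-in features (labs, demographics, ...)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.5).astype(int)  # stand-in outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# AUC is computed on patients the model never saw during training.
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test ROC AUC = {test_auc:.3f}")
```

An AUC around 0.80, as the study reports for its own cohort, means that a randomly chosen patient who later developed diabetes receives a higher risk score than a randomly chosen patient who did not about 80% of the time.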

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases and could help target specific cohorts that may face poor outcomes.

The research group stated: “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »

Multiple Major Scientific Journals Will Fully Adopt Open Access Under Plan S

Curator: Stephen J. Williams, PhD

More university library systems have been pressuring major scientific publishing houses to adopt an open access strategy in order to reduce the library systems’ budgetary burdens.  In fact, some major universities, such as the University of California system (together with other publicly funded universities in the state), Oxford University in the UK, and even MIT, have decided to become their own publishing houses in a concerted effort to fight back against soaring journal subscription costs, as well as the costs burdening individual scientists and laboratories (the charge to publish a single paper can run as high as $8,000 USD while the journal still retains all rights to distribution of the information).  Therefore, more and more universities, along with concerted efforts by the European Union and the US government, are mandating that scientific literature be published in an open access format.

The results of this pressure are now evident, as major journals like Nature, JBC, and others plan to go fully open access in 2021.  Below is a listing of news reports on some of these journals’ plans to adopt a full Open Access format.

 

Nature to join open-access Plan S, publisher says

09 APRIL 2020 UPDATE 14 APRIL 2020

Springer Nature says it commits to offering researchers a route to publishing open access in Nature and most Nature-branded journals from 2021.

Richard Van Noorden

After a change in the rules of the bold open-access (OA) initiative known as Plan S, publisher Springer Nature said on 8 April that many of its non-OA journals — including Nature — were now committed to joining the plan, pending discussion of further technical details.

This means that Nature and other Nature-branded journals that publish original research will now look to offer an immediate OA route after January 2021 to scientists who want it, or whose funders require it, a spokesperson says. (Nature is editorially independent of its publisher, Springer Nature.)

“We are delighted that Springer Nature is committed to transitioning its journals to full OA,” said Robert Kiley, head of open research at the London-based biomedical funder Wellcome, and the interim coordinator for Coalition S, a group of research funders that launched Plan S in 2018.

But Lisa Hinchliffe, a librarian at the University of Illinois at Urbana–Champaign, says the changed rules show that publishers have successfully pushed back against Plan S, softening its guidelines and expectations — in particular in the case of hybrid journals, which publish some content openly and keep other papers behind paywalls. “The coalition continues to take actions that rehabilitate hybrid journals into compliance rather than taking the hard line of unacceptability originally promulgated,” she says.

What is Plan S?

The goal of Plan S is to make scientific and scholarly works free to read as soon as they are published. So far, 17 national funders, mostly in Europe, have joined the initiative, as have the World Health Organization and two of the world’s largest private biomedical funders — the Bill & Melinda Gates Foundation and Wellcome. The European Commission will also implement an OA policy that is aligned with Plan S. Together, this covers around 7% of scientific articles worldwide, according to one estimate. A 2019 report published by the publishing-services firm Clarivate Analytics suggested that 35% of the research content published in Nature in 2017 acknowledged a Plan S funder (see ‘Plan S papers’).

PLAN S PAPERS

Journal | Total papers in 2017 | % acknowledging Plan S funder
Nature | 290 | 35%
Science | 235 | 31%
Proc. Natl Acad. Sci. USA | 639 | 20%

Source: The Plan S footprint: Implications for the scholarly publishing landscape (Institute for Scientific Information, 2019)

 

Source: https://www.nature.com/articles/d41586-020-01066-5

Opening ASBMB publications freely to all

 

Lila M. Gierasch, Editor-in-Chief, Journal of Biological Chemistry

Nicholas O. Davidson

Kerry-Anne Rye, Editors-in-Chief, Journal of Lipid Research and 

Alma L. Burlingame, Editor-in-Chief, Molecular and Cellular Proteomics

 

We are extremely excited to announce on behalf of the American Society for Biochemistry and Molecular Biology (ASBMB) that the Journal of Biological Chemistry (JBC), Molecular & Cellular Proteomics (MCP), and the Journal of Lipid Research (JLR) will be published as fully open-access journals beginning in January 2021. This is a landmark decision that will have huge impact for readers and authors. As many of you know, many researchers have called for journals to become open access to facilitate scientific progress, and many funding agencies across the globe are either already requiring or considering a requirement that all scientific publications based on research they support be published in open-access journals. The ASBMB journals have long supported open access, making the accepted author versions of manuscripts immediately and permanently available, allowing authors to opt in to the immediate open publication of the final version of their paper, and endorsing the goals of the larger open-access movement (1). However, we are no longer satisfied with these measures. To live up to our goals as a scientific society, we want to freely distribute the scientific advances published in JBC, MCP, and JLR as widely and quickly as possible to support the scientific community. How better can we facilitate the dissemination of new information than to make our scientific content freely open to all?

For ASBMB journals and others who have contemplated or made the transition to publishing all content open access, achieving this milestone generally requires new financial mechanisms. In the case of the ASBMB journals, the transition to open access is being made possible by a new partnership with Elsevier, whose established capabilities and economies of scale make the costs associated with open-access publication manageable for the ASBMB (2). However, we want to be clear: The ethos of ASBMB journals will not change as a consequence of this new alliance. The journals remain society journals: The journals are owned by the society, and all scientific oversight for the journals will remain with ASBMB and its chosen editors. Peer review will continue to be done by scientists reviewing the work of scientists, carried out by editorial board members and external referees on behalf of the ASBMB journal leadership. There will be no intervention in this process by the publisher.

Although we will be saying “goodbye” to many years of self-publishing (115 in the case of JBC), we are certain that we are taking this big step for all the right reasons. The goal for JBC, MCP, and JLR has always been and will remain to help scientists advance their work by rapidly and effectively disseminating their results to their colleagues and facilitating the discovery of new findings (1, 3), and open access is only one of many innovations and improvements in science publishing that could help the ASBMB journals achieve this goal. We have been held back from fully exploring these options because of the challenges of “keeping the trains running” with self-publication. In addition to allowing ASBMB to offer all the content in its journals to all readers freely and without barriers, the new partnership with Elsevier opens many doors for ASBMB publications, from new technology for manuscript handling and production, to facilitating reader discovery of content, to deploying powerful analytics to link content within and across publications, to new opportunities to improve our peer review mechanisms. We have all dreamed of implementing these innovations and enhancements (4, 5) but have not had the resources or infrastructure needed.

A critical aspect of moving to open access is how this decision impacts the cost to authors. Like most publishers that have made this transition, we have been extremely worried that achieving open-access publishing would place too big a financial burden on our authors. We are pleased to report the article-processing charges (APCs) to publish in ASBMB journals will be on the low end within the range of open-access fees: $2,000 for members and $2,500 for nonmembers. While slightly higher than the cost an author incurs now if the open-access option is not chosen, these APCs are lower than the current charges for open access on our existing platform.

References

1. Gierasch, L. M., Davidson, N. O., Rye, K.-A., and Burlingame, A. L. (2019) For the sake of science. J. Biol. Chem. 294, 2976

2. Gierasch, L. M. (2017) On the costs of scientific publishing. J. Biol. Chem. 292, 16395–16396

3. Gierasch, L. M. (2020) Faster publication advances your science: The three R’s. J. Biol. Chem. 295, 672

4. Gierasch, L. M. (2017) JBC is on a mission to facilitate scientific discovery. J. Biol. Chem. 292, 6853–6854

5. Gierasch, L. M. (2017) JBC’s New Year’s resolutions: Check them off! J. Biol. Chem. 292, 21705–21706

 

Source: https://www.jbc.org/content/295/22/7814.short?ssource=mfr&rss=1

 

Open access publishing under Plan S to start in 2021

BMJ 2019;365:l2382. doi: https://doi.org/10.1136/bmj.l2382 (Published 31 May 2019)

From 2021, all research funded by public or private grants should be published in open access journals, according to a group of funding agencies called cOAlition S.

The plan is the final version of a draft that was put to public consultation last year and attracted 344 responses from institutions, almost half of them from the UK. The responses were considered and some changes were made to the new system, called Plan S, a briefing at the Science Media Centre in London was told on 29 May.

The main change has been to delay implementation for a year, to 1 January 2021, to allow more time for those involved—researchers, funders, institutions, publishers, and repositories—to make the necessary changes, said John-Arne Røttingen, chief executive of the Research Council of Norway.

“All research contracts signed after that date should include the obligation to publish in an open access journal,” he said. […]

(Please note, in a huge bit of irony, this article is NOT Open Access and sits behind a paywall. Yes, an article about an announcement to go Open Access is not itself Open Access.)

Source: https://www.bmj.com/content/365/bmj.l2382.full

 

 

Plan S

From Wikipedia, the free encyclopedia


Plan S is an initiative for open-access science publishing launched in 2018[1][2] by “cOAlition S”,[3] a consortium of national research agencies and funders from twelve European countries. The plan requires scientists and researchers who benefit from state-funded research organisations and institutions to publish their work in open repositories or in journals that are available to all by 2021.[4] The “S” stands for “shock”.[5]

Principles of the plan[edit]

The plan is structured around ten principles.[3] The key principle states that by 2021, research funded by public or private grants must be published in open-access journals or platforms, or made immediately available in open access repositories without an embargo. The ten principles are:

  1. authors should retain copyright on their publications, which must be published under an open license such as Creative Commons;
  2. the members of the coalition should establish robust criteria and requirements for compliant open access journals and platforms;
  3. they should also provide incentives for the creation of compliant open access journals and platforms if they do not yet exist;
  4. publication fees should be covered by the funders or universities, not individual researchers;
  5. such publication fees should be standardized and capped;
  6. universities, research organizations, and libraries should align their policies and strategies;
  7. for books and monographs, the timeline may be extended beyond 2021;
  8. open archives and repositories are acknowledged for their importance;
  9. hybrid open-access journals are not compliant with the key principle;
  10. members of the coalition should monitor and sanction non-compliance.

Member organisations

The coalition behind Plan S comprises national research agencies and funders from twelve European countries, together with international member organisations and additional supporters.[14]

 

Other articles on Open Access on this Open Access Journal Include:

MIT, guided by open access principles, ends Elsevier negotiations, an act followed by other University Systems in the US and in Europe

 

Open Access e-Scientific Publishing: Elected among 2018 Nature’s 10 Top Influencers – ROBERT-JAN SMITS: A bureaucrat launched a drive to transform science publishing

 

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

 

Mozilla Science Lab Promotes Data Reproduction Through Open Access: Report from 9/10/2015 Online Meeting

 

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

 

The Fatal Self Distraction of the Academic Publishing Industry: The Solution of the Open Access Online Scientific Journals
PeerJ Model for Open Access Scientific Journal
“Open Access Publishing” is becoming the mainstream model: “Academic Publishing” has changed Irrevocably
Open-Access Publishing in Genomics


Read Full Post »

The Castleman Disease Research Network publishes Phase 1 Results of Drug Repurposing Database for COVID-19

Reporter: Stephen J. Williams, PhD.

 

From CNN at https://www.cnn.com/2020/06/27/health/coronavirus-treatment-fajgenbaum-drug-review-scn-wellness/index.html

Updated 8:17 AM ET, Sat June 27, 2020

(CNN) Every morning, Dr. David Fajgenbaum takes three life-saving pills. He wakes up his 21-month-old daughter Amelia to help feed her. He usually grabs some Greek yogurt to eat quickly before sitting down in his home office. Then he spends most of the next 14 hours leading dozens of fellow researchers and volunteers in a systematic review of all the drugs that physicians and researchers have used so far to treat Covid-19. His team has already pored over more than 8,000 papers on how to treat coronavirus patients.

The 35-year-old associate professor at the University of Pennsylvania Perelman School of Medicine leads the school’s Center for Cytokine Storm Treatment & Laboratory. For the last few years, he has dedicated his life to studying Castleman disease, a rare condition that nearly claimed his life. Against epic odds, he found a drug that saved his own life six years ago, by creating a collaborative method for organizing medical research that could be applicable to thousands of human diseases. But after seeing how the same types of flares of immune-signaling cells, called cytokine storms, kill both Castleman and Covid-19 patients alike, his lab has devoted nearly all of its resources to aiding doctors fighting the pandemic.

A global repository for Covid-19 treatment data

Researchers working with his lab have reviewed published data on more than 150 drugs that doctors around the world have used to treat nearly 50,000 patients diagnosed with Covid-19. They’ve made their analysis public in a database called the Covid-19 Registry of Off-label & New Agents (or CORONA for short).
It’s a central repository of all available data in scientific journals on all the therapies used so far to curb the pandemic. This information can help doctors treat patients and tell researchers how to build clinical trials. The team’s process resembles the coordination Fajgenbaum used as a medical student to discover that he could repurpose Sirolimus, an immunosuppressant drug approved for kidney transplant patients, to prevent his body from producing deadly flares of immune-signaling cells called cytokines. The 13 members of Fajgenbaum’s lab recruited dozens of other scientific colleagues to join their coronavirus effort. And what this group is finding has ramifications for scientists globally.
This effort by Dr. Fajgenbaum’s lab and the resultant collaborative effort shows the power and speed at which a coordinated open science effort can achieve goals. Below is the description of the phased efforts planned and completed from the CORONA website.

CORONA (COvid19 Registry of Off-label & New Agents)

Drug Repurposing for COVID-19

Our overarching vision:  A world where data on all treatments that have been used against COVID19 are maintained in a central repository and analyzed so that physicians currently treating COVID19 patients know what treatments are most likely to help their patients and so that clinical trials can be appropriately prioritized.

 

Phase 1: COMPLETED

Our team reviewed 2500+ papers & extracted data on over 9,000 COVID19 patients. We found 115 repurposed drugs that have been used to treat COVID19 patients and analyzed data on which ones seem most promising for clinical trials. This data is open source and can be used by physicians to treat patients and prioritize drugs for trials. The CDCN will keep this database updated as a resource for this global fight. Repurposed drugs give us the best chance to help COVID19 as quickly as possible! As disease hunters who have identified and repurposed drugs for Castleman disease, we’re applying our ChasingMyCure approach to COVID19.
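
To illustrate how a curated line list of this kind can be rolled up into a drug-level summary, here is a minimal sketch using pandas. The column names and figures are hypothetical and are not drawn from the CORONA dataset.

```python
# Illustrative sketch only: aggregating paper-level treatment extractions into a drug-level table.
import pandas as pd

records = pd.DataFrame({
    "paper_id":   ["A", "A", "B", "C", "C", "C"],
    "drug":       ["lopinavir/ritonavir", "hydroxychloroquine",
                   "lopinavir/ritonavir", "remdesivir",
                   "corticosteroid", "lopinavir/ritonavir"],
    "n_patients": [40, 12, 25, 8, 30, 15],          # invented counts
    "n_improved": [22, 5, 14, 6, 18, 9],            # invented outcomes
})

summary = (records.groupby("drug")
           .agg(papers=("paper_id", "nunique"),
                patients=("n_patients", "sum"),
                improved=("n_improved", "sum"))
           .assign(improvement_rate=lambda d: d["improved"] / d["patients"])
           .sort_values("patients", ascending=False))
print(summary)
```

A table like this (with many more fields and careful handling of study quality) is the kind of open resource that lets physicians and trialists see at a glance which repurposed drugs have the most supporting patient data.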

Read our systematic literature review published in Infectious Diseases and Therapy at the following link: Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review

From Fajgenbaum, D.C., Khor, J.S., Gorzewski, A. et al. Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review. Infect Dis Ther (2020). https://doi.org/10.1007/s40121-020-00303-8

The following is the abstract and a link to the metastudy.  This study was a systematic review of the literature with strict inclusion criteria.  Data were extracted from the included studies: a total of 9,152 patients were evaluated for COVID-19 treatment regimens, and clinical responses to those therapies were recorded.  The main insights from this study were as follows:

Key Summary Points

Why carry out this study?
  • Data on drugs that have been used to treat COVID-19 worldwide are currently spread throughout disparate publications.
  • We performed a systematic review of the literature to identify drugs that have been tried in COVID-19 patients and to explore clinically meaningful response time.
What was learned from the study?
  • We identified 115 uniquely referenced treatments administered to COVID-19 patients. Antivirals were the most frequently administered class; combination lopinavir/ritonavir was the most frequently used treatment.
  • This study presents the latest status of off-label and experimental treatments for COVID-19. Studies such as this are important for all diseases, especially those that do not currently have definitive evidence from randomized controlled trials or approved therapies.

Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review

Abstract

The emergence of SARS-CoV-2/2019 novel coronavirus (COVID-19) has created a global pandemic with no approved treatments or vaccines. Many treatments have already been administered to COVID-19 patients but have not been systematically evaluated. We performed a systematic literature review to identify all treatments reported to be administered to COVID-19 patients and to assess time to clinically meaningful response for treatments with sufficient data. We searched PubMed, BioRxiv, MedRxiv, and ChinaXiv for articles reporting treatments for COVID-19 patients published between 1 December 2019 and 27 March 2020. Data were analyzed descriptively. Of the 2706 articles identified, 155 studies met the inclusion criteria, comprising 9152 patients. The cohort was 45.4% female and 98.3% hospitalized, and mean (SD) age was 44.4 years (SD 21.0). The most frequently administered drug classes were antivirals, antibiotics, and corticosteroids, and of the 115 reported drugs, the most frequently administered was combination lopinavir/ritonavir, which was associated with a time to clinically meaningful response (complete symptom resolution or hospital discharge) of 11.7 (1.09) days. There were insufficient data to compare across treatments. Many treatments have been administered to the first 9152 reported cases of COVID-19. These data serve as the basis for an open-source registry of all reported treatments given to COVID-19 patients at www.CDCN.org/CORONA. Further work is needed to prioritize drugs for investigation in well-controlled clinical trials and treatment protocols.
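
As a rough illustration of the date-bounded literature search described in the abstract, below is a minimal sketch of a PubMed query using Biopython’s Entrez module. The query string, placeholder email, and date window are assumptions for illustration; the review also searched BioRxiv, MedRxiv, and ChinaXiv, which this sketch does not cover.

```python
# Illustrative sketch only: a date-bounded PubMed search via NCBI E-utilities (Biopython).
from Bio import Entrez

Entrez.email = "curator@example.org"  # NCBI asks for a contact email; placeholder value
handle = Entrez.esearch(
    db="pubmed",
    term='("COVID-19" OR "SARS-CoV-2" OR "2019-nCoV") AND (treatment OR therapy OR drug)',
    datetype="pdat", mindate="2019/12/01", maxdate="2020/03/27",
    retmax=200,
)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} matching records; first PMIDs: {record['IdList'][:5]}")
```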

Read the Press Release from PennMedicine at the following link: PennMedicine Press Release

Phase 2: Continue to update CORONA

Our team continues to work diligently to maintain an updated listing of all treatments reported to be used in COVID19 patients from papers in PubMed. We are also re-analyzing publicly available COVID19 single cell transcriptomic data alongside our iMCD data to search for novel insights and therapeutic targets.

You can visit the following link to access a database viewer built and managed by Matt Chadsey, owner of Nonlinear Ventures.

If you are a physician treating COVID19 patients, please visit the FDA’s CURE ID app to report de-identified information about drugs you’ve used to treat COVID19 in just a couple minutes.

For more information on COVID19 on this Open Access Journal please see our Coronavirus Portal at

https://pharmaceuticalintelligence.com/coronavirus-portal/

Read Full Post »

Live Notes, Real Time Conference Coverage AACR 2020 #AACR20: Tuesday June 23, 2020 Noon-2:45 Educational Sessions


Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using

#AACR20

@pharma_BI

@AACR

Register for FREE at https://www.aacr.org/

 

Presidential Address

Elaine R Mardis, William N Hait

DETAILS

Welcome and introduction

William N Hait

 

Improving diagnostic yield in pediatric cancer precision medicine

Elaine R Mardis
  • The advent of genomics has revolutionized how we diagnose and treat lung cancer
  • We currently need to understand the driver mutations and variants so that we can personalize therapy
  • PD-L1 and other checkpoint therapies have not really been used in pediatric cancers, even though CAR-T has been successful
  • The incidence rates and mortality rates of pediatric cancers are rising
  • A large-scale study of over 700 pediatric cancers shows cancers driven by epigenetic drivers or fusion proteins, highlighting the need for transcriptomics.  The study also demonstrated that we have underestimated germline mutations and hereditary factors.
  • They put together a database to nominate patients onto their IGM Cancer protocol, which involves genetic counseling and obtaining germline samples to determine hereditary factors.  RNA and protein are evaluated, as well as exome sequencing; RNA-Seq and the Archer Dx test are used to identify driver fusions
  • The PECAN curated database from St. Jude is used to determine driver mutations; overlap across multiple databases and knowledge bases is used to weed out false positives
  • They have used these studies to understand the immune infiltrate into recurrent cancers (CytoCure)
  • They found 40 germline cancer predisposition genes, 47 driver somatic fusion proteins, 81 potentially actionable targets, 106 CNVs, and 196 meaningful somatic driver mutations

 

 

Tuesday, June 23

12:00 PM – 12:30 PM EDT

Awards and Lectures

NCI Director’s Address

Norman E Sharpless, Elaine R Mardis

DETAILS

Introduction: Elaine Mardis

 

NCI Director Address: Norman E Sharpless
  • NCI is functioning well with respect to grant reviews, research, and general operations in spite of the COVID pandemic and the massive demonstrations, while also focusing on the disparities that occur in the cancer research field and in cancer care
  • There are ongoing efforts at NCI to make a positive difference on racial injustice and diversity in the cancer workforce, and for patients as well
  • A diverse workforce is needed across the cancer research and care spectrum
  • Data show that areas where clinicians are successful in putting African Americans on clinical trials are areas (geographic and site specific) where health disparities are narrowing
  • Grants through NCI’s new SeroNet for COVID-19 serologic testing are funded by two RFAs through NIAID (RFA-CA-30-038 and RFA-CA-20-039), which will close on July 22, 2020

 

Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Immunology, Tumor Biology, Experimental and Molecular Therapeutics, Molecular and Cellular Biology/Genetics

Tumor Immunology and Immunotherapy for Nonimmunologists: Innovation and Discovery in Immune-Oncology

This educational session will update cancer researchers and clinicians about the latest developments in the detailed understanding of the types and roles of immune cells in tumors. It will summarize current knowledge about the types of T cells, natural killer cells, B cells, and myeloid cells in tumors and discuss current knowledge about the roles these cells play in the antitumor immune response. The session will feature some of the most promising up-and-coming cancer immunologists who will inform about their latest strategies to harness the immune system to promote more effective therapies.

Judith A Varner, Yuliya Pylayeva-Gupta

 

Introduction

Judith A Varner
New techniques reveal critical roles of myeloid cells in tumor development and progression
  • Different types of cells, such as myeloid cells, are becoming targets for immune checkpoint approaches
  • In T cell-excluded or “desert” tumors, T cells are held at the periphery while myeloid cells can still infiltrate, so targeting macrophages might be effective in these immunologically T cell-naïve tumors; macrophages are the most abundant type of immune cell in tumors
  • CXCLs are potential targets
  • PI3K delta inhibitors
  • Reduce the infiltrate of suppressive myeloid cells such as macrophages
  • The open question is when to give myeloid-targeted versus T cell-targeted therapy
Judith A Varner
Novel strategies to harness T-cell biology for cancer therapy
Positive and negative roles of B cells in cancer
Yuliya Pylayeva-Gupta
New approaches in cancer immunotherapy: Programming bacteria to induce systemic antitumor immunity

 

 

Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Cancer Chemistry

Chemistry to the Clinic: Part 2: Irreversible Inhibitors as Potential Anticancer Agents

There are numerous examples of highly successful covalent drugs such as aspirin and penicillin that have been in use for a long period of time. Despite historical success, there was a period of reluctance among many to pursue covalent drugs based on concerns about toxicity. With advances in understanding features of a well-designed covalent drug, new techniques to discover and characterize covalent inhibitors, and clinical success of new covalent cancer drugs in recent years, there is renewed interest in covalent compounds. This session will provide a broad look at covalent probe compounds and drug development, including a historical perspective, examination of warheads and electrophilic amino acids, the role of chemoproteomics, and case studies.

Benjamin F Cravatt, Richard A. Ward, Sara J Buhrlage

 

Discovering and optimizing covalent small-molecule ligands by chemical proteomics

Benjamin F Cravatt
  • Multiple approaches are being investigated to find new covalent inhibitors such as: 1) cysteine reactivity mapping, 2) mapping cysteine ligandability, 3) and functional screening in phenotypic assays for electrophilic compounds
  • Using fluorescent activity probes in proteomic screens; these have broad usability across the proteome but can be specific
  • They screened quiescent versus stimulated T cells to determine reactive cysteines in a phenotypic screen and analyzed them by MS proteomics (cysteine reactivity profiling); this can quantitate 15,000 to 20,000 reactive cysteines
  • Isocitrate dehydrogenase 1 and adapter protein LCP-1 are two examples of changes in reactive cysteines they have seen using this method
  • They use scout molecules to target ligands or proteins with reactive cysteines
  • For phenotypic screens they first use a cytotoxic assay to screen out toxic compounds which just kill cells without causing T cell activation (like IL10 secretion)
  • INTERESTINGLY coupling these MS reactive cysteine screens with phenotypic screens you can find NONCANONICAL mechanisms of many of these target proteins (many of the compounds found targets which were not predicted or known)

Electrophilic warheads and nucleophilic amino acids: A chemical and computational perspective on covalent modifier

The covalent targeting of cysteine residues in drug discovery and its application to the discovery of Osimertinib

Richard A. Ward
  • Cysteine activation: the thiolate form of cysteine is a strong nucleophile
  • The thiolate form is preferred in a polar environment
  • Activation can be assisted by neighboring residues; pKa will have an effect on deprotonation
  • The pKas of cysteines vary across EGFR
  • Cysteines that are too reactive give toxicity, while those that are not reactive enough are ineffective

 

Accelerating drug discovery with lysine-targeted covalent probes

 

Tuesday, June 23

12:45 PM – 2:15 PM EDT

Virtual Educational Session

Molecular and Cellular Biology/Genetics

Virtual Educational Session

Tumor Biology, Immunology

Metabolism and Tumor Microenvironment

This Educational Session aims to guide discussion on the heterogeneous cells and metabolism in the tumor microenvironment. It is now clear that the diversity of cells in tumors each require distinct metabolic programs to survive and proliferate. Tumors, however, are genetically programmed for high rates of metabolism and can present a metabolically hostile environment in which nutrient competition and hypoxia can limit antitumor immunity.

Jeffrey C Rathmell, Lydia Lynch, Mara H Sherman, Greg M Delgoffe

 

T-cell metabolism and metabolic reprogramming antitumor immunity

Jeffrey C Rathmell

Introduction

Jeffrey C Rathmell

Metabolic functions of cancer-associated fibroblasts

Mara H Sherman

Tumor microenvironment metabolism and its effects on antitumor immunity and immunotherapeutic response

Greg M Delgoffe
  • Multiple metabolites and reactive oxygen species are present within the tumor microenvironment; is there heterogeneity within the TME metabolome that can predict a tumor’s immunosensitivity?
  • They took melanoma cells and looked at metabolism (glycolysis) using Seahorse: there was vast heterogeneity among melanoma tumor cells; some perform only oxphos and no glycolytic metabolism (inverse Warburg)
  • As they profiled whole tumors they could separate out the metabolism of each cell type within the tumor, compare T cells versus stromal CAFs or tumor cells, and characterize cells as indolent or metabolic
  • T cells from tumors with low glycolysis were functional, but T cells from highly glycolytic tumors were more indolent
  • When the glucose transporter is knocked down, the cells become more glycolytic
  • Patients whose tumors had high oxidative metabolism had low sensitivity to PD-L1 blockade
  • This result was shown in head and neck cancer as well
  • With metformin, a complex I inhibitor that is not as toxic as most mitochondrial oxphos inhibitors, T cells experience less hypoxia and can remodel the TME and stimulate the immune response
  • Metformin is now in clinical trials
  • T cells, though, seem metabolically restricted; T cells that infiltrate tumors have low mitochondrial oxidative phosphorylation
  • T cells from tumors have defective mitochondria or little respiratory capacity
  • They have some preliminary findings that metabolic inhibitors may help with CAR-T therapy

Obesity, lipids and suppression of anti-tumor immunity

Lydia Lynch
  • Hypothesis: obesity causes issues with antitumor immunity
  • There are fewer NK cells in obese people, and they produce less IFN-gamma
  • RNA-Seq on NOD mice: granzymes and perforins were at the top of the list of genes downregulated in obesity
  • The genes upregulated in obesity were involved in lipid metabolism
  • All were PPAR target genes
  • NK cells from obese patients take up palmitate, and this reduces their glycolysis, but OXPHOS is also reduced; they think increased free fatty acids basically overload the mitochondria
  • PPAR alpha/gamma activation mimics obesity

 

 

Tuesday, June 23

12:45 PM – 2:45 PM EDT

Virtual Educational Session

Clinical Research Excluding Trials

The Evolving Role of the Pathologist in Cancer Research

Long recognized for their role in cancer diagnosis and prognostication, pathologists are beginning to leverage a variety of digital imaging technologies and computational tools to improve both clinical practice and cancer research. Remarkably, the emergence of artificial intelligence (AI) and machine learning algorithms for analyzing pathology specimens is poised to not only augment the resolution and accuracy of clinical diagnosis, but also fundamentally transform the role of the pathologist in cancer science and precision oncology. This session will discuss what pathologists are currently able to achieve with these new technologies, present their challenges and barriers, and overview their future possibilities in cancer diagnosis and research. The session will also include discussions of what is practical and doable in the clinic for diagnostic and clinical oncology in comparison to technologies and approaches primarily utilized to accelerate cancer research.

 

Jorge S Reis-Filho, Thomas J Fuchs, David L Rimm, Jayanta Debnath

DETAILS

Tuesday, June 23

12:45 PM – 2:45 PM EDT

 

High-dimensional imaging technologies in cancer research

David L Rimm

  • Both old and new methods are in use: with cell counting, you first find the cells and then phenotype them; with quantification approaches such as AQUA, densitometry of the positive signal is compared against a threshold to determine the presence of a cell for counting (a minimal sketch of this threshold-and-count idea follows this list)
  • Hiplex versus multiplex imaging, where you have ten channels to measure by cycling the fluorophore on the antibody (can get up to 20-plex)
  • Hiplex can be coupled with mass spectrometry (imaging mass spectrometry, based on heavy-metal tags on mAbs)
  • However, it will still take a trained pathologist to define regions of interest or the desired field of view
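
A minimal sketch of the threshold-and-count idea noted above, using a synthetic image and scikit-image; real quantitative immunofluorescence pipelines (AQUA-style scoring, multiplexed phenotyping) are considerably richer.

```python
# Illustrative sketch only: threshold a positive signal, then count connected regions as cells.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(1)
image = rng.random((256, 256)) * 0.2          # dim background (synthetic)
image[40:60, 40:60] += 0.8                    # two bright "cells"
image[150:170, 180:200] += 0.8

mask = image > threshold_otsu(image)          # global intensity threshold
labeled = label(mask)                         # connected-component labeling
cells = [r for r in regionprops(labeled) if r.area >= 50]  # drop small specks
print(f"counted {len(cells)} positive cells")
```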

 

Introduction

Jayanta Debnath

Challenges and barriers of implementing AI tools for cancer diagnostics

Jorge S Reis-Filho

Implementing robust digital pathology workflows into clinical practice and cancer research

Jayanta Debnath

Invited Speaker

Thomas J Fuchs
  • Founder of a spinout of Memorial Sloan Kettering
  • Separates AI from purely computational/algorithmic approaches
  • Dealing with not just machines but integrating human intelligence
  • Making decisions for patients must involve human decision making as well
  • How do we get experts to make these decisions faster?
  • AI in pathology: what is difficult? Sandbox scenarios where machines are great; curated datasets; human decision-support systems or maps; or trying to predict nature
  • Four scenarios: 1) learning rules made by humans (human-to-human); 2) constrained nature; 3) unconstrained nature, like images or behavior; 4) predicting nature’s response to nature, or to itself
  • In the sandbox scenario the rules are set in stone and machines are great, as in chess playing
  • In the second scenario you can train a computer to predict what a human would predict
  • The third scenario is like driving cars
  • A system working on constrained nature or a constrained dataset will take a long time for the computer to get to a decision
  • The fourth category is a long-term data collection project
  • He is finding that it is still difficult to predict nature, so going from clinical finding to prognosis still does not have good predictability with AI alone; human involvement is needed
  • End-to-end partnering (EPL) is a new way in which humans can get more involved with the algorithm and assist with the problem of constrained data
  • An example of a workflow for pathology would be as follows, from Campanella et al. 2019 Nature Medicine: obtain digital images (they digitized a million slides), train on a massive dataset with high-throughput computing (which needed a lot of time and a big software development effort), and then train it using input from the best expert pathologists (nature-to-human, and unconstrained because no data curation was done); a toy sketch of this weakly supervised setup follows this list
  • This led to the first clinical-grade machine learning system (Camelyon16 was the challenge for detecting metastatic cells in lymph tissue; tested on 12,000 patients from 45 countries)
  • The first big hurdle was moving from manually annotated slides (which were a big bottleneck) to automatically extracting data from pathology reports
  • Now the problem is in prediction: how can we bridge the gap from predicting humans to predicting nature?
  • With an AI system, pathologists drastically improved their ability to detect very small lesions
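
A toy sketch of the weakly supervised, slide-level-label idea described in the notes above, using max-pooling multiple-instance learning in PyTorch. The feature dimension, scoring head, and toy “slides” are hypothetical simplifications; this is not the Campanella et al. system.

```python
# Illustrative sketch only: a slide is a bag of tiles; only the slide label is known,
# and the slide score is the maximum tile score (max-pooling multiple-instance learning).
import torch
import torch.nn as nn

class TileScorer(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, tile_features):           # (n_tiles, feat_dim) for one slide
        tile_logits = self.head(tile_features)  # one logit per tile
        return tile_logits.max()                # slide logit = most suspicious tile

model = TileScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Toy "slides": pre-extracted tile features (e.g., from a CNN backbone) plus slide labels
slides = [(torch.randn(200, 512), torch.tensor(1.0)),   # tumor-containing slide
          (torch.randn(150, 512), torch.tensor(0.0))]   # benign slide

for tiles, slide_label in slides:
    slide_logit = model(tiles)
    loss = loss_fn(slide_logit.unsqueeze(0), slide_label.unsqueeze(0))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```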

 

Virtual Educational Session

Epidemiology

Cancer Increases in Younger Populations: Where Are They Coming from?

Incidence rates of several cancers (e.g., colorectal, pancreatic, and breast cancers) are rising in younger populations, which contrasts with either declining or more slowly rising incidence in older populations. Early-onset cancers are also more aggressive and have different tumor characteristics than those in older populations. Evidence on risk factors and contributors to early-onset cancers is emerging. In this Educational Session, the trends and burden, potential causes, risk factors, and tumor characteristics of early-onset cancers will be covered. Presenters will focus on colorectal and breast cancer, which are among the most common causes of cancer deaths in younger people. Potential mechanisms of early-onset cancers and racial/ethnic differences will also be discussed.

Stacey A. Fedewa, Xavier Llor, Pepper Jo Schedin, Yin Cao

Cancers that are and are not increasing in younger populations

Stacey A. Fedewa

 

  • Early-onset cancers, pediatric cancers, and colon cancers are increasing in younger adults
  • Younger people are more likely to be uninsured, and these are their most productive years, so a cancer diagnosis is a devastating life event for a young adult. They face more financial hardship, and most (70%) of young adults with cancer have had financial difficulties.  It is especially hard for women, who are in their childbearing years, which adds further stress
  • Types of early-onset cancer vary by age as well as geographic location. For example, in the 20s thyroid cancer is more common, but in the 30s it is breast cancer.  Colorectal and testicular cancers are most common in the US.
  • SCC is decreasing but adenocarcinoma of the cervix is increasing in women in their 40s, potentially due to changing sexual behaviors
  • Breast cancer is increasing in younger women: it may be etiologically distinct (e.g., triple negative), with larger racial disparities in younger African American women
  • Increased obesity among younger people is becoming a factor in the increasing incidence of early-onset cancers

 

 

Other Articles on this Open Access  Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM

 

Read Full Post »

Crowdsourcing Difficult-to-Collect Epidemiological Data in Pandemics: Lessons from Ebola to the current COVID-19 Pandemic

 

Curator: Stephen J. Williams, Ph.D.

 

At the onset of the COVID-19 pandemic, epidemiological data from the origin of the Sars-Cov2 outbreak, notably from the Wuhan region in China, was sparse.  In fact, official individual patient data rarely become available early on in an outbreak, when that data is needed most. Epidemiological data was just emerging from China as countries like Italy, Spain, and the United States started to experience a rapid emergence of the outbreak in their respective countries.  China, made of 31 geographical provinces, is a vast and complex country, with both large urban and rural areas.

As a result of this geographical diversity and the differences in healthcare coverage across the country, epidemiological data collection can be challenging.  For instance, cancer incidence data for individual regions and for the whole country are difficult to calculate, as there are not many regional cancer data collection efforts; this contrasts with the cancer statistics collected in the United States, which are meticulously gathered by cancer registries in each region, state, and municipality.  Therefore, countries like China must depend on hospital records and autopsy reports in order to back-extrapolate cancer incidence data.  This is also the case in some developed countries like Italy, where cancer registries are administered by local governments and may not be as extensive (for example, in the Napoli region of Italy).

Population density China by province. Source https://www.unicef.cn/en/figure-13-population-density-province-2017

In areas where data collection may be challenging, epidemiologists are relying on alternative means of data collection, such as devices connected to the internet of things (including mobile devices); in some cases, social media is also becoming useful for obtaining health-related data.  One such effort to acquire pharmacovigilance data, patient engagement, and oral chemotherapeutic adherence data using the social media site Twitter has been discussed in an earlier post (see below):

Twitter is Becoming a Powerful Tool in Science and Medicine at https://pharmaceuticalintelligence.com/2014/11/06/twitter-is-becoming-a-powerful-tool-in-science-and-medicine/

Now epidemiologists are finding crowd-sourced data from social media and social networks useful in collecting COVID-19 related data in countries where health data collection efforts may be sub-optimal.  In a recent paper in The Lancet Digital Health [1], authors Kaiyuan Sun, Jenny Chen, and Cécile Viboud present data from the COVID-19 outbreak in China using information collected from social network sites as well as public news outlets, and find strong correlations with later-released government statistics, showing the usefulness of such social and crowd-sourcing strategies for collecting pertinent, time-sensitive data.  In particular, the authors’ aim was to investigate whether this strategy of data collection could reduce the time delays between infection and detection, isolation, and reporting of cases.

The paper is summarized below:

Kaiyuan Sun, PhD; Jenny Chen, BSc; Cécile Viboud, PhD (2020). Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. The Lancet Digital Health, Volume 2, Issue 4, E201–E208.

Summary

Background

As the outbreak of coronavirus disease 2019 (COVID-19) progresses, epidemiological data are needed to guide situational awareness and intervention strategies. Here we describe efforts to compile and disseminate epidemiological information on COVID-19 from news media and social networks.

Methods

In this population-level observational study, we searched DXY.cn, a health-care-oriented social network that is currently streaming news reports on COVID-19 from local and national Chinese health agencies. We compiled a list of individual patients with COVID-19 and daily province-level case counts between Jan 13 and Jan 31, 2020, in China. We also compiled a list of internationally exported cases of COVID-19 from global news media sources (Kyodo News, The Straits Times, and CNN), national governments, and health authorities. We assessed trends in the epidemiology of COVID-19 and studied the outbreak progression across China, assessing delays between symptom onset, seeking care at a hospital or clinic, and reporting, before and after Jan 18, 2020, as awareness of the outbreak increased. All data were made publicly available in real time.

Findings

We collected data for 507 patients with COVID-19 reported between Jan 13 and Jan 31, 2020, including 364 from mainland China and 143 from outside of China. 281 (55%) patients were male and the median age was 46 years (IQR 35–60). Few patients (13 [3%]) were younger than 15 years and the age profile of Chinese patients adjusted for baseline demographics confirmed a deficit of infections among children. Across the analysed period, delays between symptom onset and seeking care at a hospital or clinic were longer in Hubei province than in other provinces in mainland China and internationally. In mainland China, these delays decreased from 5 days before Jan 18, 2020, to 2 days thereafter until Jan 31, 2020 (p=0·0009). Although our sample captures only 507 (5·2%) of 9826 patients with COVID-19 reported by official sources during the analysed period, our data align with an official report published by Chinese authorities on Jan 28, 2020.

Interpretation

News reports and social media can help reconstruct the progression of an outbreak and provide detailed patient-level data in the context of a health emergency. The availability of a central physician-oriented social network facilitated the compilation of publicly available COVID-19 data in China. As the outbreak progresses, social media and news reports will probably capture a diminishing fraction of COVID-19 cases globally due to reporting fatigue and overwhelmed health-care systems. In the early stages of an outbreak, availability of public datasets is important to encourage analytical efforts by independent teams and provide robust evidence to guide interventions.

A Few notes on Methodology:

  • The authors used crowd-sourced reports from DXY.cn, a social network for Chinese physicians, health-care professionals, pharmacies and health-care facilities. This online platform provides real time coverage of the COVID-19 outbreak in China
  • More data was curated from news media, television and includes time-stamped information on COVID-19 cases
  • These reports are publicly available, de-identified patient data
  • No patient consent was needed and no ethics approval was required
  • Data were collected between January 20, 2020 and January 31, 2020
  • Sex, age, province of identification, travel history, and dates of symptom development were collected
  • Additional data was collected for other international sites of the pandemic including Cambodia, Canada, France, Germany, Hong Kong, India, Italy, Japan, Malaysia, Nepal, Russia, Singapore, UK, and USA
  • All patients in database had laboratory confirmation of infection

 

Results

  • Data on 507 patients were collected; 153 had visited Wuhan and 152 were residents of Wuhan
  • Reported cases were skewed toward males; however, the overall population curve in China is also skewed toward males
  • Most cases (26%) were from Beijing (an urban area), while an equal number came from rural areas combined (Shaanxi and Yunnan)
  • The age distribution of COVID cases was skewed toward older age groups, with a median age of 45; HOWEVER, there was surprisingly a notable number of cases in children less than 5 years of age
  • Outbreak progression based on the crowd-sourced patient line list was consistent with the data published by the China Center for Disease Control
  • The median reporting delay in the authors’ crowd-sourced data was 5 days (a toy sketch of this delay calculation follows this list)
  • Crowd-sourced data were able to detect the apparent rapid growth of newly reported cases during the collection period in several provinces outside of Hubei province, which is consistent with local government data

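A toy sketch of the onset-to-reporting delay calculation referenced in the list above, assuming a small invented line list; the published analysis compared such delays before and after January 18, 2020 across hundreds of crowd-sourced cases.

```python
# Illustrative sketch only: median onset-to-reporting delay before and after Jan 18, 2020.
import pandas as pd

line_list = pd.DataFrame({
    "symptom_onset": pd.to_datetime(["2020-01-10", "2020-01-14", "2020-01-20", "2020-01-24"]),
    "reported":      pd.to_datetime(["2020-01-16", "2020-01-19", "2020-01-22", "2020-01-26"]),
})
line_list["delay_days"] = (line_list["reported"] - line_list["symptom_onset"]).dt.days
line_list["period"] = line_list["symptom_onset"].apply(
    lambda d: "before Jan 18" if d < pd.Timestamp("2020-01-18") else "on/after Jan 18")

print(line_list.groupby("period")["delay_days"].median())
```
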
The following graphs show age distribution for China in 2017 and predicted for 2050.

projected age distribution China 2050. Source https://chinapower.csis.org/aging-problem/

The authors have previously used this curation of news methodology to analyze the Ebola outbreak[2].

A further use of the crowd-sourced database was the availability of travel histories for patients returning from Wuhan, together with their dates of symptom onset, allowing for estimation of incubation periods.
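
A minimal sketch of how travel histories bound the incubation period (a toy example with invented dates): if exposure is assumed to occur only while a patient was in Wuhan, symptom onset minus the departure date gives a lower bound and onset minus the arrival date gives an upper bound. Formal estimates (e.g., Backer et al. and Lauer et al., cited below) fit interval-censored distributions to such bounds.

```python
# Illustrative sketch only: per-patient incubation-period bounds from a Wuhan travel window.
import pandas as pd

travellers = pd.DataFrame({
    "arrived_wuhan": pd.to_datetime(["2020-01-05", "2020-01-10"]),
    "left_wuhan":    pd.to_datetime(["2020-01-15", "2020-01-18"]),
    "symptom_onset": pd.to_datetime(["2020-01-20", "2020-01-21"]),
})
travellers["incubation_min_days"] = (travellers["symptom_onset"] - travellers["left_wuhan"]).dt.days
travellers["incubation_max_days"] = (travellers["symptom_onset"] - travellers["arrived_wuhan"]).dt.days
print(travellers[["incubation_min_days", "incubation_max_days"]])
```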

The following published literature has also used these datasets:

Backer JA, Klinkenberg D, Wallinga J: Incubation period of 2019 novel coronavirus (2019-nCoV) infections among travellers from Wuhan, China, 20-28 January 2020. Euro surveillance : bulletin Europeen sur les maladies transmissibles = European communicable disease bulletin 2020, 25(5).

Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith HR, Azman AS, Reich NG, Lessler J: The Incubation Period of Coronavirus Disease 2019 (COVID-19) From Publicly Reported Confirmed Cases: Estimation and Application. Annals of internal medicine 2020, 172(9):577-582.

Li Q, Guan X, Wu P, Wang X, Zhou L, Tong Y, Ren R, Leung KSM, Lau EHY, Wong JY et al: Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus-Infected Pneumonia. The New England journal of medicine 2020, 382(13):1199-1207.

Dataset is available on the Laboratory for the Modeling of Biological and Socio-technical systems website of Northeastern University at https://www.mobs-lab.org/.

References

  1. Sun K, Chen J, Viboud C: Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. The Lancet Digital health 2020, 2(4):e201-e208.
  2. Cleaton JM, Viboud C, Simonsen L, Hurtado AM, Chowell G: Characterizing Ebola Transmission Patterns Based on Internet News Reports. Clinical infectious diseases : an official publication of the Infectious Diseases Society of America 2016, 62(1):24-31.

Read Full Post »

Older Posts »
