Archive for the ‘Open Access Journals’ Category

How to Create a Twitter Space for @pharma_BI for Live Broadcasts

Right now, Twitter Spaces are available on the Android and iOS operating systems ONLY.  To use Spaces on a PC desktop you must install an ANDROID EMULATOR, so it is best to set up your Twitter Space using the PHONE APP, not a desktop or laptop computer.  Also, even though there is the ability to record a Twitter Space, that recording cannot yet be embedded in WordPress the way a tweet (or chain of tweets) can.  However, you can download the recording (this takes a day or two) and convert it to an mpeg audio file using a program like Audacity, producing an audio format conducive to embedding in WordPress.
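For those who prefer the command line, the conversion step can also be scripted. A minimal sketch, assuming ffmpeg is installed as an alternative to Audacity; the file names are hypothetical examples:

```python
# Sketch: re-encode a downloaded Space recording into MP3 for WordPress.
# Assumes ffmpeg is on the PATH; file names below are placeholders.
import subprocess

def build_convert_cmd(src: str, dst: str, bitrate: str = "128k") -> list:
    """Build an ffmpeg command that re-encodes src into an MP3 at dst."""
    # -vn drops any video stream; -b:a sets the audio bitrate; -y overwrites dst.
    return ["ffmpeg", "-y", "-i", src, "-vn", "-b:a", bitrate, dst]

def convert(src: str, dst: str) -> None:
    """Run the conversion (requires ffmpeg to be installed)."""
    subprocess.run(build_convert_cmd(src, dst), check=True)

# Example (uncomment once ffmpeg is available):
# convert("space_recording.m4a", "space_recording.mp3")
```

The function only builds the command list, so the recipe can be inspected or logged before anything is executed.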

A while ago I published a post in which I linked to a Twitter Space I created for a class on Dissemination of Scientific Discoveries.  The post

“Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?”

can be seen at

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

 

This online discussion was tweeted out and received a fair number of impressions (60) as well as interactions (50).

About Twitter Spaces

 

Spaces is a way to have live audio conversations on Twitter. Anyone can join, listen, and speak in a Space on Twitter for iOS and Android. Currently, on the web you can only listen in a Space.

Quick links

How to use Spaces
Spaces FAQ
Spaces Feedback Community
Community Spaces

How to use Spaces

Instructions for:

How do you start a Space?

Step 1

The creator of a Space is the host. As a host on iOS, you can start a Space by long pressing on the Tweet Composer from your Home timeline and then selecting the Spaces icon.

You can also start a Space by selecting the Spaces tab on the bottom of your timeline.

Step 2

Spaces are public, so anyone can join as a listener, including people who don’t follow you. Listeners can be directly invited into a Space by DMing them a link to the Space, Tweeting out a link, or sharing a link elsewhere.

Step 3

Up to 13 people (including the host and 2 co-hosts) can speak in a Space at any given time. When creating a new Space, you will see options to Name your Space and Start your Space.

Step 4

To schedule a Space, select Schedule for later. Choose the date and time you’d like your Space to go live.

Step 5

Once the Space has started, the host can send requests to listeners to become co-hosts or speakers by selecting the people icon  and adding co-hosts or speakers, or selecting a person’s profile picture within a Space and adding them as a co-host or speaker. Listeners can request permission to speak from the host by selecting the Request icon below the microphone.

Step 6

When creating a Space, the host will join with their mic off and be the only speaker in the Space. When ready, select Start your Space.

Step 7

Allow mic access (speaking ability) to speakers by toggling Allow mic access to on.

Step 8

Get started chatting in your Space.

Step 9

As a host, make sure to Tweet out the link to your Space so other people can join. Select the  icon to Share via a Tweet.

 

Spaces FAQ

Where is Spaces available?

Anyone can join, listen, and speak in a Space on Twitter for iOS and Android. Currently, starting a Space on web is not possible, but anyone can join and listen in a Space.

Who can start a Space?

People on Twitter for iOS and Android can start a Space.

Who can see my Space?

For now, all Spaces are public like Tweets, which means they can be accessed by anyone. They will automatically appear at the top of your Home timeline, and each Space has a link that can be shared publicly. Since Spaces are publicly accessible by anyone, it may be possible for people to listen to a Space without being listed as a guest in the Space.

We make certain information about Spaces available through the Twitter Developer Platform, such as the title of a Space, the hosts and speakers, and whether it is scheduled, in progress, or complete. For a more detailed list of the information about Spaces we make available via the Twitter API, check out our Spaces endpoints documentation.
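As a rough illustration of what the Spaces endpoints expose, the sketch below builds a lookup request for a single Space. The Space ID and bearer token are placeholders, and the field list is an assumption based on the metadata described above (title, hosts, state); check the Spaces endpoints documentation for the authoritative parameters:

```python
# Sketch: construct a Twitter API v2 Spaces lookup request.
# The Space ID and bearer token are placeholders; the network call is left
# commented out so this sketch runs without credentials.
import urllib.parse

API_BASE = "https://api.twitter.com/2"

def build_space_lookup_url(space_id: str,
                           fields=("title", "host_ids", "state")) -> str:
    """Return the lookup URL for one Space with the requested space.fields."""
    query = urllib.parse.urlencode({"space.fields": ",".join(fields)})
    return f"{API_BASE}/spaces/{space_id}?{query}"

# To actually perform the request (requires a bearer token):
# import urllib.request
# req = urllib.request.Request(
#     build_space_lookup_url("<SPACE_ID>"),
#     headers={"Authorization": "Bearer <YOUR_TOKEN>"})
# print(urllib.request.urlopen(req).read())
```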

Can other people see my presence while I am listening or speaking in a Space?

Since all Spaces are public, your presence and activity in a Space is also public. If you are logged into your Twitter account when you are in a Space, you will be visible to everyone in the Space as well as to others, including people who follow you, people who peek into the Space without entering, and developers accessing information about the Space using the Twitter API.

If you are listening in a Space, your profile icon will appear with a purple pill at the top of your followers’ Home timelines. You have the option to change this in your settings.

Instructions for:

Manage who can see your Spaces listening activity

Step 1

On the left nav menu, select the more  icon and go to Settings and privacy.

Step 2

Under Settings, navigate to Privacy and safety.

Step 3

Under Your Twitter activity, go to Spaces.

Step 4

Choose if you want to Allow followers to see which Spaces you’re listening to by toggling this on or off.

Your followers will always see at the top of their Home timelines what Spaces you’re speaking in.

What does it mean that Spaces are public? Can anyone listen in a Space?

Spaces can be listened to by anyone on the Internet. This is part of a broader feature of Spaces that lets anyone listen to Spaces regardless of whether or not they are logged in to a Twitter account (or even have a Twitter account). Because of this, listener counts may not match the actual number of listeners, nor will the profile photos of all listeners necessarily be displayed in a Space.

How do I invite people to join a Space?

Invite people to join a Space by sending an invite via DM, Tweeting the link out to your Home timeline, or copying the invite link to send it out.

Who can join my Space?

For now, all Spaces are public and anyone can join any Space as a listener. If the listener has a user account, you can block their account. If you create a Space or are a speaker in a Space, your followers will see it at the top of their timeline.

Who can speak in my Space?

By default, your Space will always be set to Only people you invite to speak. You can also modify the Speaker permissions once your Space has been created. Select the  icon, then select Adjust settings to see the options for speaker permissions, which include Everyone, People you follow, and the default Only people you invite to speak. These permissions are only saved for this particular Space, so any Space you create in the future will use the default setting.

Once your Space has started, you can send requests to listeners to become speakers or co-hosts by selecting the  icon and adding speakers or selecting a person’s profile picture within a Space and adding them as a co-host or speaker. Listeners can request to speak from the host.

Hosts can also invite other people outside of the Space to speak via DM.

How does co-hosting work?

Up to 2 people can become co-hosts and speak in a Space in addition to the 11 speakers (including the primary host) at one time. Co-host status can be lost if the co-host leaves the Space. A co-host can remove their own co-host status to become a Listener again.

Hosts can transfer primary admin rights to another co-host. If the original host drops from the Space, the first co-host added will become the primary admin. The admin is responsible for promoting and facilitating a healthy conversation in the Space in line with the Twitter Rules.

Once a co-host is added to a Space, any accounts they’ve blocked on Twitter who are in the Space will be removed from the Space.

Can I schedule a Space?

Hosts can schedule a Space up to 30 days in advance and can have up to 10 scheduled Spaces at a time. Hosts can still create impromptu Spaces in the meantime, and those won’t count toward the maximum of 10 scheduled Spaces.

Before you create your Space, select the scheduler  icon and pick the date and time you’d like to schedule your Space to go live. As your scheduled start time approaches, you will receive push and in-app notifications reminding you to start your Space on time. If you don’t have notifications turned on, follow the in-app steps on About notifications on mobile devices to enable them for Spaces. Scheduled Spaces are public and people can set reminders to be notified when your scheduled Space begins.
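The scheduling limits described above (up to 30 days in advance, at most 10 scheduled Spaces) can be sketched as a simple validation rule. The function and names are illustrative, not part of any Twitter API:

```python
# Sketch of the scheduling limits: a Space may be scheduled up to 30 days in
# advance, and a host may hold at most 10 scheduled Spaces at a time.
from datetime import datetime, timedelta

MAX_ADVANCE = timedelta(days=30)
MAX_SCHEDULED = 10

def can_schedule(start: datetime, now: datetime, already_scheduled: int) -> bool:
    """Return True if a new Space may be scheduled for `start`."""
    if already_scheduled >= MAX_SCHEDULED:
        return False  # host already has the maximum number of scheduled Spaces
    # start must be in the future and no more than 30 days ahead
    return now < start <= now + MAX_ADVANCE
```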

How do I edit my scheduled Space(s)?

Follow the steps below to edit any of your scheduled Spaces.

Instructions for:

Manage your scheduled Spaces

Step 1

From your timeline, navigate to and long press on the Tweet Composer. Or, navigate to the Spaces Tab  at the bottom of your timeline.

Step 2

Select the Spaces  icon.

Step 3

To manage your scheduled Spaces, select the scheduler  icon at the top.

Step 4

You’ll see the Spaces that you have scheduled.

Step 5

Navigate to the more  icon of the Space you want to manage. You can edit, share, or cancel the Space.

If you are editing your Space, make sure to select “Save changes” after making edits.

How do I get notified about a scheduled Space?

Guests can sign up for reminder notifications from a scheduled Space card in a Tweet. When the host starts the scheduled Space, the interested guests get notified via push and in-app notifications.

Can I record a Space?

Hosts can record Spaces they create for replay. When creating a Space, toggle on Record Space.

While recording, a recording symbol will appear at the top to indicate that the Space is being recorded by the host. Once the Space ends, you will see how many people attended the Space along with a link to share out via a Tweet. Under Notifications, you can also View details to Tweet the recording. Under host settings, you will have the option to choose where to start your recording with Edit start time. This allows you to cut out any dead air time that might occur at the beginning of a Space.

If you choose to record your Space, once the live Space ends, your recording will be immediately and publicly available for anyone to listen to whenever they want. You can always make a recording no longer publicly available on Twitter by deleting it via the more icon on the recording itself. Unless you delete your recording, it will remain available for replay after the live Space has ended.* As with live Spaces, Twitter will retain audio copies for 30 days after they end to review for violations of the Twitter Rules. If a violation is found, Twitter may retain audio copies for up to 120 days in total. For more information on downloading Spaces, please see the FAQ below, “What happens after a Space ends and is the data retained anywhere?”

Co-hosts and speakers who enter a Space that is being recorded will see a recording symbol (REC). Listeners will also see the recording symbol, but they will not be visible in the recording.

Recordings will show the host, co-host(s), and speakers from the live Space.

*Note: Hosts on iOS 9.15+ and Android 9.46+ will be able to record Spaces that last indefinitely. For hosts on older app versions, recording will only be available for 30 days. For Spaces that are recorded indefinitely, Twitter will retain a copy for as long as the Space is replayable on Twitter, but for no less than 30 days after the live Space ended.

 

What is clipping?

Clipping is a new feature we’re currently testing and gradually rolling out that lets a limited group of hosts, speakers, and listeners capture 30 seconds of audio from any live or recorded Space and share it through a Tweet if the host has not disabled the clipping function. To start clipping a Space, follow the instructions below to capture the prior 30 seconds of audio from that Space. There is no limit to the number of clips that participants in a Space can create.
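The clip window arithmetic is simple: a clip captures the prior 30 seconds of audio, clamped at the start of the Space. A minimal sketch (illustrative only, not Twitter's implementation):

```python
# Sketch: compute the (start, end) window in seconds for a clip taken at a
# given playback position, capturing the prior 30 seconds of audio.
CLIP_SECONDS = 30

def clip_window(current_s: float) -> tuple:
    """Return (start, end) seconds for a clip taken at `current_s`.

    The window is clamped at 0 so a clip taken early in a Space simply
    captures everything from the start.
    """
    return (max(0.0, current_s - CLIP_SECONDS), current_s)
```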

When you enter the Space as a co-host or speaker, you will be informed that the Space is clippable through a tool tip notification above the clipping  icon.

Note: Currently, creating a clip is available only on iOS and Android, while playing a clip is available on all platforms to everyone.

Instructions for:

Host instructions: How to turn off clipping

 

When you start your Space, you’ll receive a notification about what a clip is and how to turn it off, as clipping is on by default. You can turn off clipping at any time. To turn it off, follow the instructions below.

Step 1

Navigate to the more  icon.

Step 2

Select Adjust settings .

Step 3

Under Clips, toggle Allow clips off.

Instructions for:

Host and speaker instructions: How to create a clipping

Step 1

In a recorded or live Space that is recorded, navigate to the clipping  icon. Please note that, for live Spaces, unless the clipping function is disabled, clips will be publicly available on your Twitter profile after your live Space has ended even though the Space itself will no longer be available.

Step 2

On the Create clip pop-up, go to Next.

Step 3

Preview the Tweet and add a comment if you’d like, similarly to a Quote Tweet.

Step 4

Select Tweet to post it to your timeline.

Why is my clip not displaying captions?

What controls do hosts have over existing clips?

What controls do clip creators have over clips they’ve created?

Other controls over clips: how to report, block, or mute

What controls do I have over my Space?

The host and co-host(s) of a Space have control over who can speak. They can mute any Speaker, but it is up to the individual to unmute themselves if they receive speaking privileges. Hosts and co-hosts can also remove, report, and block others in the Space.

Speakers and listeners can report and block others in the Space, or can report the Space. If you block a participant in the Space, you will also block that person’s account on Twitter. If the person you blocked joins as a listener, they will appear in the participant list with a Blocked label under their account name. If the person you blocked joins as a speaker, they will also appear in the participant list with a Blocked label under their account name and you will see an in-app notification stating, “An account you blocked has joined as a speaker.” If you are entering a Space that already has a blocked account as a speaker, you will also see a warning before joining the Space stating, “You have blocked 1 person who is speaking.”

If you are hosting or co-hosting a Space, people you’ve blocked can’t join and, if you’re added as a co-host during a Space, anyone in the Space who you blocked will be removed from the Space.

What are my responsibilities as a Host or Co-Host of a Space?

As a Host, you are responsible for promoting and supporting a healthy conversation in your Space and for using your tools to ensure that the Twitter Rules are followed. The following tools are available for you to use if a participant in the Space is being offensive or disruptive:

  • Revoke speaking privileges of other users if they are being offensive or disruptive to you or others
  • Block, remove or report the user.

Here are some guidelines to follow as a Host or Co-Host:

  • Always follow the Twitter Rules in the Space you host or co-host. This also applies to the title of your Space, which should not include abusive slurs, threats, or any other rule-violating content.
  • Do not encourage behavior or content that violates the Twitter Rules.
  • Do not abuse or misuse your hosting tools, such as by arbitrarily revoking speaking privileges or removing users, or use Spaces to carry out activities that break our rules, such as follow schemes.

How can I block someone in a Space?

How can I mute a speaker in a Space?

How can I see people in my Space?

Hosts, speakers, and listeners can select the  icon to see people in a Space. Since Spaces are publicly accessible by anyone, it may also be possible for an unknown number of logged-out people to listen to a Space’s audio without being listed as a guest in the Space.

How can I report a Space?

How can I report a person in a Space?

Can Twitter suspend my Space while it’s live?

How many people can speak in a Space?

How many people can listen in a Space?

 

What happens after a Space ends and is the data retained anywhere?

Hosts can choose to record a Space prior to starting it. Hosts may download copies of their recorded Spaces for as long as we have them by using the Your Twitter Data download tool.

For unrecorded Spaces, Twitter retains copies of the audio for 30 days after a Space ends to review for violations of the Twitter Rules. If a Space is found to contain a violation, we extend the time we maintain a copy by an additional 90 days (a total of 120 days after a Space ends) to allow people to appeal if they believe there was a mistake. Twitter also uses Spaces content and data for analytics and research to improve the service.

Links to Spaces that are shared out (e.g., via Tweet or DM) also contain some information about the Space, including the description, the identity of the hosts and others in the Space, as well as the Space’s current state (e.g., scheduled, live, or ended). We make this and other information about Spaces available through the Twitter Developer Platform. For a detailed list of the information about Spaces we make available, check out our Spaces endpoints documentation.

For full details on what data we retain, visit our Privacy Policy.

Who can end a Space?

Does Spaces work for accounts with protected Tweets?

Following the Twitter Rules in Spaces

 

Spaces Feedback Community

We’re opening up the conversation and turning it over to the people who are participating in Spaces. This Community is a dedicated place for us to connect with you on all things Spaces, whether it’s feedback around features, ideas for improvement, or any general thoughts.

Who can join?

Anyone on Spaces can join, whether you are a host, speaker, or listener.

How do I join the Community?

You can request to join the Twitter Spaces Feedback Community here. By requesting to join, you are agreeing to our Community rules.

Learn more about Communities on Twitter.

 

Community Spaces

As a Community admin or moderator, you can create and host a Space for your Community members to join.

Note:

Currently, creating Community Spaces is only available to some admins and moderators using the Twitter for iOS and Twitter for Android apps.

Instructions for:

Admins & moderators: How to create a Space

Step 1

Navigate to the Community landing page.

Step 2

Long press on the Tweet Composer  and select the Spaces  icon.

Step 3

Select Spaces and begin creating your Space by adding in a title, toggling on record Space (optional), and adding relevant topics.

Step 4

Invite admins, moderators, and other people to be a part of your Space.

Members: How to find a Community Space

If a Community Space is live, you will see the Spacebar populate at the top of your Home timeline. To enter the Space and begin listening, select the live Space in the Spacebar.

Community Spaces FAQ

What are Community Spaces?

Spaces Social Narrative


A social narrative is a simple story that describes social situations and social behaviors for accessibility.

Twitter Spaces allows me to join or host live audio-only conversations with anyone.

Joining a Space

  1. When I join a Twitter Space, that means I’ll be a listener. I can join any Space on Twitter, even those hosted by people I don’t know or follow.
  2. I can join a Space by selecting a profile photo with a purple, pulsing outline at the top of my timeline, selecting a link from someone’s Tweet, or a link in a Direct Message (DM).
  3. Once I’m in a Space, I can see the profile photos and names of some people in the Space, including myself.
  4. I can hear one or multiple people talking at the same time. If it’s too loud or overwhelming, I can turn down my volume.
  5. As a listener, I am not able to speak. If I want to say something, I can send a request to the host. The host might not approve my request though.
  6. If the host accepts my request, I will become a speaker. It may take a few seconds to connect my microphone, so I’ll have to wait.
  7. Now I can unmute myself and speak. Everyone in the Space will be able to hear me.
  8. When someone says something I want to react to, I can choose an emoji to show everyone how I feel. I will be able to see when other people react as well.
  9. I can leave the Space at any time. After I leave, or when the host ends the Space, I’ll go back to my timeline.

Hosting a Space

  1. When I start a Space, that means I’ll be the host. Anyone can join my Space, even people I don’t know and people I don’t follow.
  2. Once I start my space, it may take a few seconds to be connected, so I’ll have to wait.
  3. Now I’m in my Space and I can see my profile photo. If other logged-in people have joined, I will be able to see their profile photos, too.
  4. I will start out muted, which is what the microphone with a slash through it means. I can mute and unmute myself, and anyone in my Space, at any time.
  5. I can invite people to join my Space by sending them a Direct Message (DM), sharing the link in a Tweet, or copying the link and sharing it somewhere else, like in an email.
  6. Up to 10 other people can have speaking privileges in my Space at the same time, and I can choose who speaks and who doesn’t. People can also request to speak, and I can choose to approve their request or not.

 


Science Has A Systemic Problem, Not an Innovation Problem

Curator: Stephen J. Williams, Ph.D.

A recent email asking me to fill out a survey got me thinking about the malaise that scientists and industry professionals frequently bemoan: that innovation has been stymied for some reason, and that all sorts of convoluted processes must be altered to spur this mythical void of great new discoveries.  It got me thinking about our current state of science, what the perceived issue is, and whether this desert of innovation actually exists or is instead a more fundamental problem that we have created.

The email was from an NIH committee asking for opinions on reworking the grant review process; it arrived the same day someone complained to me about a shoddy and perplexing grant review they had received.

The email was sent out to multiple researchers who had been involved in NIH grant review on either side, as well as to those who had taken part in previous questionnaires and studies on grant review and bias.  It asked researchers to fill out a survey on the grant review process and on how best to change it to increase innovation of ideas as well as inclusivity.  In recent years there have been multiple such survey requests, along with multiple confusing procedural changes to grant format and content requirements, adding more administrative burden to scientists.

The email from the Center for Scientific Review (one of the divisions a grant goes to before review; they set up review study sections and decide which section a grant should be assigned to) was as follows:

Update on Simplifying Review Criteria: A Request for Information

https://www.csr.nih.gov/reviewmatters/2022/12/08/update-on-simplifying-review-criteria-a-request-for-information/

NIH has issued a request for information (RFI) seeking feedback on revising and simplifying the peer review framework for research project grant applications. The goal of this effort is to facilitate the mission of scientific peer review – identification of the strongest, highest-impact research. The proposed changes will allow peer reviewers to focus on scientific merit by evaluating 1) the scientific impact, research rigor, and feasibility of the proposed research without the distraction of administrative questions and 2) whether or not appropriate expertise and resources are available to conduct the research, thus mitigating the undue influence of the reputation of the institution or investigator.

Currently, applications for research project grants (RPGs, such as R01s, R03s, R15s, R21s, R34s) are evaluated based on five scored criteria: Significance, Investigators, Innovation, Approach, and Environment (derived from NIH peer review regulations 42 C.F.R. Part 52h.8; see Definitions of Criteria and Considerations for Research Project Grant Critiques for more detail) and a number of additional review criteria such as Human Subject Protections.

NIH gathered input from the community to identify potential revisions to the review framework. Given longstanding and often-heard concerns from diverse groups, CSR decided to form two working groups to the CSR Advisory Council—one on non-clinical trials and one on clinical trials. To inform these groups, CSR published a Review Matters blog, which was cross-posted on the Office of Extramural Research blog, Open Mike. The blog received more than 9,000 views by unique individuals and over 400 comments. Interim recommendations were presented to the CSR Advisory Council in a public forum (March 2020 video, slides; March 2021 video, slides). Final recommendations from the CSRAC (report) were considered by the major extramural committees of the NIH that included leadership from across NIH institutes and centers. Additional background information can be found here. This process produced many modifications and the final proposal presented below. Discussions are underway to incorporate consideration of a Plan for Enhancing Diverse Perspectives (PEDP) and rigorous review of clinical trials RPGs (~10% of RPGs are clinical trials) within the proposed framework.

Simplified Review Criteria

NIH proposes to reorganize the five review criteria into three factors, with Factors 1 and 2 receiving a numerical score. Reviewers will be instructed to consider all three factors (Factors 1, 2 and 3) in arriving at their Overall Impact Score (scored 1-9), reflecting the overall scientific and technical merit of the application.

  • Factor 1: Importance of the Research (Significance, Innovation), numerical score (1-9)
  • Factor 2: Rigor and Feasibility (Approach), numerical score (1-9)
  • Factor 3: Expertise and Resources (Investigator, Environment), assessed and considered in the Overall Impact Score, but not individually scored

Within Factor 3 (Expertise and Resources), Investigator and Environment will be assessed in the context of the research proposed. Investigator(s) will be rated as “fully capable” or “additional expertise/capability needed”. Environment will be rated as “appropriate” or “additional resources needed.” If a need for additional expertise or resources is identified, written justification must be provided. Detailed descriptions of the three factors can be found here.
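The proposed three-factor structure can be sketched as a small data model: Factors 1 and 2 carry 1-9 scores, Factor 3 is assessed but unscored, and written justification is required whenever additional expertise or resources are flagged. The field names are illustrative, not an NIH data format:

```python
# Sketch of the proposed simplified review criteria as a data structure.
# Field names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewFactors:
    importance: int          # Factor 1: Significance, Innovation (scored 1-9)
    rigor_feasibility: int   # Factor 2: Approach (scored 1-9)
    # Factor 3: assessed in context, not numerically scored.
    investigator: str        # "fully capable" or "additional expertise/capability needed"
    environment: str         # "appropriate" or "additional resources needed"
    justification: Optional[str] = None  # required if a need is flagged below

    def needs_justification(self) -> bool:
        """Written justification is required when Factor 3 flags a need."""
        return (self.investigator != "fully capable"
                or self.environment != "appropriate")
```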

Now, looking at the comments, some were very illuminating:

I strongly support streamlining the five current main review criteria into three, and the present five additional criteria into two. This will bring clarity to applicants and reduce the workload on both applicants and reviewers. Blinding reviewers to the applicants’ identities and institutions would be a helpful next step, and would do much to reduce the “rich-getting-richer” / “good ole girls and good ole boys” / “big science” elitism that plagues the present review system, wherein pedigree and connections often outweigh substance and creativity.

I support the proposed changes. The shift away from “innovation” will help reduce the tendency to create hype around a proposed research direction. The shift away from Investigator and Environment assessments will help reduce bias toward already funded investigators in large well-known institutions.

As a reviewer for 5 years, I believe that the proposed changes are a step in the right direction, refocusing the review on whether the science SHOULD be done and whether it CAN BE DONE WELL, while eliminating burdensome and unhelpful sections of review that are better handled administratively. I particularly believe that the de-emphasis of innovation (which typically focuses on technical innovation) will improve evaluation of the overall science, and de-emphasis of review of minor technical details will, if implemented correctly, reduce the “downward pull” on scores for approach. The above comments reference blinded reviews, but I did not see this in the proposed recommendations. I do not believe this is a good idea for several reasons: 1) Blinding of the applicant and institution is not likely feasible for many of the reasons others have described (e.g., self-referencing of prior work), 2) Blinding would eliminate the potential to review investigators’ biosketches and budget justifications, which are critically important in review, 3) Making review blinded would make determination of conflicts of interest harder to identify and avoid, 4) Evaluation of “Investigator and Environment” would be nearly impossible.

Most of the comments were in favor of the proposed changes; however, many admitted that the changes add confusion on top of the many administrative changes already made to the format and content of grant sections.

Being a Stephen Covey devotee, and having just listened to The 4 Disciplines of Execution, it became apparent that the issues hindering many great ideas from coming to fruition, especially in science, result from these systemic problems in the process, not from failures at the level of individual researchers or small companies trying to get their innovations funded or noticed.  In summary, Dr. Covey states that most issues related to the success of any initiative lie NOT in the strategic planning, but in the failure to adhere to a few EXECUTION principles.  Primary among these failures of strategic plans is a lack of accounting for what Dr. Covey calls the ‘whirlwind’, or those important but recurring tasks that take us away from achieving the wildly important goals.  In addition, failure to determine lead and lag measures of success hinders such plans.

In this case, the lag measure is INNOVATION.  It appears we have created such a whirlwind, and such a focus on lag measures, that we are incapable of translating great discoveries into INNOVATION.

In the following post, I will focus on how issues relating to Open Access publishing and the dissemination of scientific discovery may be costing us TIME to INNOVATION.  And it appears that there are systemic reasons why we are stuck in a rut, so to speak.

The first indication is from a paper published by Johan Chu and James Evans in 2021 in PNAS:

 

Slowed canonical progress in large fields of science

Chu JSG, Evans JA. Slowed canonical progress in large fields of science. Proc Natl Acad Sci U S A. 2021 Oct 12;118(41):e2021636118. doi: 10.1073/pnas.2021636118. PMID: 34607941; PMCID: PMC8522281

 

Abstract

In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.

So the summary of this paper is:

  • The authors examined 1.8 billion citations among 90 million papers across 241 subjects
  • They found that the growing corpus of papers does not lead to turnover of new ideas in a field, but rather to the ossification, or entrenchment, of canonical (older) ideas
  • This is mainly due to older papers being cited more frequently than new papers with new ideas, potentially because authors are trying to get their own papers cited more frequently for funding and exposure purposes
  • The authors suggest that “fundamental progress may be stymied if quantitative growth of scientific endeavors is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas”

The authors note that, in most cases, science policy reinforces this “more is better” philosophy, where metrics of publication productivity are either the number of publications or impact as measured by citation rankings. However, an analysis of citation changes occurring in large versus smaller fields makes it apparent that this process favors the older, more established papers and a recirculation of older canonical ideas.
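The citation dynamic the authors describe can be illustrated with a toy preferential-attachment simulation (a sketch of my own, not the authors' actual model; the seed canon size, citations per paper, and preferential weight are all illustrative assumptions):

```python
import random

def simulate_field(n_new_papers, cites_per_paper=10, pref_weight=0.9, seed=0):
    """Toy preferential-attachment model of citation accumulation.

    Start from a small seed "canon"; each new paper cites earlier papers,
    choosing proportionally to existing citation counts with probability
    pref_weight, and uniformly at random otherwise. Returns citation
    counts, oldest paper first.
    """
    rng = random.Random(seed)
    citations = [1] * 20  # a seed canon of 20 papers, one citation each
    for _ in range(n_new_papers):
        total = sum(citations)
        for _ in range(cites_per_paper):
            if rng.random() < pref_weight:
                # Preferential: pick a paper proportionally to its citation
                # count, so well-cited papers attract still more citations.
                r = rng.uniform(0, total)
                cum = 0
                for i, c in enumerate(citations):
                    cum += c
                    if r <= cum:
                        citations[i] += 1
                        break
            else:
                # Uniform: any paper, however obscure, gets a chance.
                citations[rng.randrange(len(citations))] += 1
        citations.append(0)  # the new paper enters uncited
    return citations

def canon_share_of_top(citations, canon_size=20, k=10):
    """Fraction of the k most-cited papers that come from the seed canon."""
    ranked = sorted(range(len(citations)), key=citations.__getitem__, reverse=True)
    return sum(1 for i in ranked[:k] if i < canon_size) / k

for n in (200, 2000):
    field = simulate_field(n_new_papers=n)
    print(f"{n} new papers: {canon_share_of_top(field):.0%} of the top 10 are seed-canon papers")
```

Under preferential attachment the seed canon keeps absorbing most new citations even as the field grows, which is the ossification pattern Chu and Evans report in real citation data.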

“Rather than resulting in faster turnover of field paradigms, the massive amounts of new publications entrenches the ideas of top-cited papers.”  New ideas are pushed down to the bottom of the citation list and potentially lost in the literature. The authors suggest that this problem will intensify as the “annual mass” of new publications in each field grows, especially in large fields. The issue is exacerbated by the deluge of new online ‘open access’ journals, in which authors tend to cite the already highly cited literature.

We may be at a critical juncture: if many papers are published in a short time, new ideas will not be considered as carefully as older ones. In addition, the proliferation of journals and the blurring of journal hierarchies due to online article-level access can exacerbate this problem.

As a counterpoint, the authors do note that even though many of molecular biology's most highly cited articles date from 1976, there has been a great deal of innovation since then; however, it may now take many more experiments and far more money to reach the citation levels those papers achieved, and hence an apparently lower scientific productivity.

This issue is seen in the field of economics as well:

Ellison, Glenn. “Is peer review in decline?” Economic Inquiry, vol. 49, no. 3, July 2011, pp. 635+. Gale Academic OneFile, link.gale.com/apps/doc/A261386330/AONE?u=temple_main&sid=bookmark-AONE&xid=f5891002. Accessed 12 Dec. 2022.

Abstract

Over the past decade, there has been a decline in the fraction of papers in top economics journals written by economists from the highest-ranked economics departments. This paper documents this fact and uses additional data on publications and citations to assess various potential explanations. Several observations are consistent with the hypothesis that the Internet improves the ability of high-profile authors to disseminate their research without going through the traditional peer-review process. (JEL A14, O30)

The facts part of this paper documents two main facts:

1. Economists in top-ranked departments now publish very few papers in top field journals. There is a marked decline in such publications between the early 1990s and early 2000s.

2. Comparing the early 2000s with the early 1990s, there is a decline in both the absolute number of papers and the share of papers in the top general interest journals written by Harvard economics department faculty.

Although the second fact just concerns one department, I see it as potentially important to understanding what is happening because it comes at a time when Harvard is widely regarded (I believe correctly) as having ascended to the top position in the profession.

The “decline-of-peer-review” theory I allude to in the title is that the necessity of going through the peer-review process has lessened for high-status authors: in the old days peer-reviewed journals were by far the most effective means of reaching readers, whereas with the growth of the Internet high-status authors can now post papers online and exploit their reputation to attract readers.

Many alternate explanations are possible. I focus on four theories: the decline-in-peer-review theory and three alternatives.

1. The trends could be a consequence of top-school authors’ being crowded out of the top journals by other researchers. Several such stories have an optimistic message, for example, there is more talent entering the profession, old pro-elite biases are being broken down, more schools are encouraging faculty to do cutting-edge research, and the Internet is enabling more cutting-edge research by breaking down informational barriers that had hampered researchers outside the top schools. (2)

2. The trends could be a consequence of the growth of revisions at economics journals discussed in Ellison (2002a, 2002b). In this more pessimistic theory, highly productive researchers must abandon some projects and/or seek out faster outlets to conserve the time now required to publish their most important works.

3. The trends could simply reflect that field journals have declined in quality in some relative sense and become a less attractive place to publish. This theory is meant to encompass also the rise of new journals, which is not obviously desirable or undesirable.

The majority of this paper is devoted to examining various data sources that provide additional details about how economics publishing has changed over the past decade. These are intended both to sharpen understanding of the facts to be explained and to provide tests of auxiliary predictions of the theories. Two main sources of information are used: data on publications and data on citations. The publication data include department-level counts of publications in various additional journals, an individual-level dataset containing records of publications in a subset of journals for thousands of economists, and a very small dataset containing complete data on a few authors’ publication records. The citation data include citations at the paper level for 9,000 published papers and less well-matched data that is used to construct measures of citations to authors’ unpublished works, to departments as a whole, and to various journals.

Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship

Josh Angrist, Pierre Azoulay, Glenn Ellison, Ryan Hill, Susan Feng Lu. Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship. Journal of Economic Literature, Vol. 58, No. 1, March 2020, pp. 3–52.

So if innovation is occurring but being buried under the mass of heavily cited older literature, do we see evidence of this in other fields, like medicine?

Why Isn’t Innovation Helping Reduce Health Care Costs?

 
 

National health care expenditures (NHEs) in the United States continue to grow at rates outpacing the broader economy: Inflation- and population-adjusted NHEs have increased 1.6 percent faster than the gross domestic product (GDP) between 1990 and 2018. US national health expenditure growth as a share of GDP far outpaces comparable nations in the Organization for Economic Cooperation and Development (17.2 versus 8.9 percent).
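A rough back-of-the-envelope sketch shows how such a gap compounds (assuming, for illustration, that the 1.6 percent figure is an annual percentage-point excess over GDP growth; the original analysis may define it differently):

```python
# If health spending grows 1.6% faster than GDP every year from 1990 to
# 2018, the health share of GDP multiplies by (1.016)^28.
years = 2018 - 1990
excess_growth = 0.016
multiplier = (1 + excess_growth) ** years
print(f"Over {years} years, the health share of GDP grows by a factor of {multiplier:.2f}")
```

Under that assumption, health care's share of the economy rises by roughly half over the period, consistent with the widening gap relative to other OECD nations cited above.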

Multiple recent analyses have proposed that growth in the prices and intensity of US health care services—rather than in utilization rates or demographic characteristics—is responsible for the disproportionate increases in NHEs relative to global counterparts. The consequences of ever-rising costs amid ubiquitous underinsurance in the US include price-induced deferral of care leading to excess morbidity relative to comparable nations.

These patterns exist despite a robust innovation ecosystem in US health care—implying that novel technologies, in isolation, are insufficient to bend the health care cost curve. Indeed, studies have documented that novel technologies directly increase expenditure growth.

Why is our prolific innovation ecosystem not helping reduce costs? The core issue relates to its apparent failure to enhance net productivity—the relative output generated per unit resource required. In this post, we decompose the concept of innovation to highlight situations in which inventions may not increase net productivity. We begin by describing how this issue has taken on increased urgency amid resource constraints magnified by the COVID-19 pandemic. In turn, we describe incentives for the pervasiveness of productivity-diminishing innovations. Finally, we provide recommendations to promote opportunities for low-cost innovation.

 

 

Net Productivity During The COVID-19 Pandemic

The issue of productivity-enhancing innovation is timely, as health care systems have been overwhelmed by COVID-19. Hospitals in Italy, New York City, and elsewhere have lacked adequate capital resources to care for patients with the disease, sufficient liquidity to invest in sorely needed resources, and enough staff to perform all of the necessary tasks.

The critical constraint in these settings is not technology: In fact, the most advanced technology required to routinely treat COVID-19—the mechanical ventilator—was invented nearly 100 years ago in response to polio (the so-called iron lung). Rather, the bottleneck relates to the total financial and human resources required to use the technology—the denominator of net productivity. The clinical implementation of ventilators has been illustrative: Health care workers are still required to operate ventilators on a nearly one-to-one basis, just like in the mid-twentieth century. 

High levels of resources required for implementation of health care technologies constrain the scalability of patient care—such as during respiratory disease outbreaks such as COVID-19. Thus, research to reduce health care costs is the same kind of research we urgently require to promote health care access for patients with COVID-19.

Types Of Innovation And Their Relationship To Expenditure Growth

The widespread use of novel medical technologies has been highlighted as a central driver of NHE growth in the US. We believe that the continued expansion of health care costs is largely the result of innovation that tends to have low productivity (exhibit 1). We argue that these archetypes—novel widgets tacked on to existing workflows to reinforce traditional care models—are exactly the wrong properties to reduce NHEs at the systemic level.

Exhibit 1: Relative productivity of innovation subtypes

Source: Authors’ analysis.

Content Versus Process Innovation

Content (also called technical) innovation refers to the creation of new widgets, such as biochemical agents, diagnostic tools, or therapeutic interventions. Contemporary examples of content innovation include specialty pharmaceuticals, molecular diagnostics, and advanced interventions and imaging.

These may be contrasted with process innovations, which address the organized sequences of activities that implement content. Classically, these include clinical pathways and protocols. They can address the delivery of care for acute conditions, such as central line infections, sepsis, or natural disasters. Alternatively, they can target chronic conditions through initiatives such as team-based management of hypertension and hospital-at-home models for geriatric care. Other processes include hiring staff, delegating labor, and supply chain management.

Performance-Enhancing Versus Cost-Reducing Innovation

Performance-enhancing innovations frequently create incremental outcome gains in diagnostic characteristics, such as sensitivity or specificity, or in therapeutic characteristics, such as biomarkers for disease status. Their performance gains often lead to higher prices compared to existing alternatives.  

Performance-enhancing innovations can be compared to “non-inferior” innovations capable of achieving outcomes approximating those of existing alternatives, but at reduced cost. Industries outside of medicine, such as the computing industry, have relied heavily on the ability to reduce costs while retaining performance.

In health care, though, this pattern of innovation is rare. Since passage of the 2010 “Biosimilars” Act aimed at stimulating non-inferior innovation and competition in therapeutics markets, only 17 agents have been approved, and only seven have made it to market. More than three-quarters of all drugs receiving new patents between 2005 and 2015 were “reissues,” meaning they had already been approved, and the new patent reflected changes to the previously approved formula. Meanwhile, the costs of approved drugs have increased over time, at rates between 4 percent and 7 percent annually.

Moreover, the preponderance of performance-enhancing diagnostic and therapeutic innovations tend to address narrow patient cohorts (such as rare diseases or cancer subtypes), with limited clear clinical utility in broader populations. For example, the recently approved eculizumab is a monoclonal antibody approved for paroxysmal nocturnal hemoglobinuria, which affects 1 in 10 million individuals. At the time of its launch, eculizumab was priced at more than $400,000 per year, making it the most expensive drug in modern history. For clinical populations with no available alternatives, drugs such as eculizumab may be cost-effective, pending society’s willingness to pay, and morally desirable, given a society’s values. But such drugs are certainly not cost-reducing.

Additive Versus Substitutive Innovation

Additive innovations are those that append to preexisting workflows, while substitutive innovations reconfigure preexisting workflows. In this way, additive innovations increase the use of precedent services, whereas substitutive innovations decrease precedent service use.

For example, previous analyses have found that novel imaging modalities are additive innovations, as they tend not to diminish use of preexisting modalities. Similarly, novel procedures tend to incompletely replace traditional procedures. In the case of therapeutics and devices, off-label uses in disease groups outside of the approved indication(s) can prompt innovation that is additive. This is especially true, given that off-label prescriptions classically occur after approved methods are exhausted.

Eculizumab once again provides an illustrative example. As of February 2019, the drug had been used for 39 indications (it had been approved for three of those, by that time), 69 percent of which lacked any form of evidence of real-world effectiveness. Meanwhile, the drug generated nearly $4 billion in sales in 2019. Again, these expenditures may be something for which society chooses to pay—but they are nonetheless additive, rather than substitutive.

Sustaining Versus Disruptive Innovation

Competitive market theory suggests that incumbents and disruptors innovate differently. Incumbents seek sustaining innovations capable of perpetuating their dominance, whereas disruptors pursue innovations capable of redefining traditional business models.

In health care, while disruptive innovations hold the potential to reduce overall health expenditures, often they run counter to the capabilities of market incumbents. For example, telemedicine can deliver care asynchronously, remotely, and virtually, but large-scale brick-and-mortar medical facilities invest enormous capital in the delivery of synchronous, in-house, in-person care (incentivized by facility fees).

The connection between incumbent business models and the innovation pipeline is particularly relevant given that 58 percent of total funding for biomedical research in the US is now derived from private entities, compared with 46 percent a decade prior. It follows that the growing influence of eminent private organizations may favor innovations supporting their market dominance—rather than innovations that are societally optimal.

Incentives And Repercussions Of High-Cost Innovation

Taken together, these observations suggest that innovation in health care is preferentially designed for revenue expansion rather than for cost reduction. While offering incremental improvements in patient outcomes, therefore creating theoretical value for society, these innovations rarely deliver incremental reductions in short- or long-term costs at the health system level.

For example, content-based, performance-enhancing, additive, sustaining innovations tend to add layers of complexity to the health care system—which in turn require additional administration to manage. The net result is employment growth in excess of outcome improvement, leading to productivity losses. This gap leads to continuously increasing overall expenditures in turn passed along to payers and consumers.

Nonetheless, high-cost innovations are incentivized across health care stakeholders (exhibit 2). From the supply side of innovation, for academic researchers, “breakthrough” and “groundbreaking” innovations constitute the basis for career advancement via funding and tenure. This is despite stakeholders’ frequent inability to generalize early successes to become cost-effective in the clinical setting. As previously discussed, the increasing influence of private entities in setting the medical research agenda is also likely to stimulate innovation benefitting single stakeholders rather than the system.

Exhibit 2: Incentives promoting low-value innovation

Source: Authors’ analysis adapted from Hofmann BM. Too much technology. BMJ. 2015 Feb 16.

From the demand side of innovation (providers and health systems), a combined allure (to provide “cutting-edge” patient care), imperative (to leave “no stone unturned” in patient care), and profit-motive (to amplify fee-for-service reimbursements) spur participation in a “technological arms-race.” The status quo thus remains as Clay Christensen has written: “Our major health care institutions…together overshoot the level of care actually needed or used by the vast majority of patients.”

Christensen’s observations have been validated during the COVID-19 epidemic, as treatment of the disease requires predominantly century-old technology. By continually adopting innovation that routinely overshoots the needs of most patients, layer by layer, health care institutions are accruing costs that quickly become the burden of society writ large.

Recommendations To Reduce The Costs Of Health Care Innovation

Henry Aaron wrote in 2002 that “…the forces that have driven up costs are, if anything, intensifying. The staggering fecundity of biomedical research is increasing…[and] always raises expenditures.” With NHEs spiraling ever-higher, urgency to “bend the cost curve” is mounting. Yet, since much biomedical innovation targets the “flat of the [productivity] curve,” alternative forms of innovation are necessary.

The shortcomings in net productivity revealed by the COVID-19 pandemic highlight the urgent need for redesign of health care delivery in this country, and reevaluation of the innovation needed to support it. Specifically, efforts supporting process redesign are critical to promote cost-reducing, substitutive innovations that can inaugurate new and disruptive business models.

Process redesign rarely involves novel gizmos, so much as rejiggering the wiring of, and connections between, existing gadgets. It targets operational changes capable of streamlining workflows, rather than technical advancements that complicate them. As described above, precisely these sorts of “frugal innovations” have led to productivity improvements yielding lower costs in other high-technology industries, such as the computing industry.

Shrank and colleagues recently estimated that nearly one-third of NHEs—almost $1 trillion—were due to preventable waste. Four of the six categories of waste enumerated by the authors—failure in care delivery, failure in care coordination, low-value care, and administrative complexity—represent ripe targets for process innovation, accounting for $610 billion in waste annually, according to Shrank.
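The arithmetic implied by those figures is simple but striking (a quick sketch using the rounded numbers quoted above):

```python
# Rounded figures from the Shrank et al. estimate quoted above.
total_waste_usd = 1.0e12         # ~$1 trillion, nearly one-third of NHEs
process_addressable_usd = 610e9  # the four categories ripe for process innovation
share = process_addressable_usd / total_waste_usd
print(f"Process innovation could target about {share:.0%} of estimated annual waste")
```

In other words, well over half of the estimated waste falls in categories that process innovation, rather than new content innovation, is positioned to address.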

Health systems adopting process redesign methods such as continuous improvement and value-based management have exhibited outcome enhancement and expense reduction simultaneously. Internal processes addressed have included supply chain reconfiguration, operational redesign, outlier reconciliation, and resource standardization.

Despite the potential of process innovation, focus on this area (often bundled into “health services” or “quality improvement” research) occupies only a minute fraction of wallet- or mind-share in the biomedical research landscape, accounting for 0.3 percent of research dollars in medicine. This may be due to a variety of barriers beyond minimal funding. One set of barriers is academic, relating to negative perceptions around rigor and a lack of outlets in which to publish quality improvement research. To achieve health care cost containment over the long term, this dimension of innovation must be destigmatized relative to more traditional manners of innovation by the funders and institutions determining the conditions of the research ecosystem.

Another set of barriers is financial: Innovations yielding cost reduction are less “reimbursable” than are innovations fashioned for revenue expansion. This is especially the case in a fee-for-service system where reimbursement is tethered to cost, which creates perverse incentives for health care institutions to overlook cost increases. However, institutions investing in low-cost innovation will be well-positioned in a rapidly approaching future of value-based care—in which the solvency of health care institutions will rely upon their ability to provide economically efficient care.

Innovating For Cost Control Necessitates Frugality Over Novelty

Restraining US NHEs represents a critical step toward health promotion. Innovation for innovation’s sake—that is, content-based, incrementally effective, additive, and sustaining—is unlikely to constrain continually expanding NHEs.

In contrast, process innovation offers opportunities to reduce costs while maintaining high standards of patient care. As COVID-19 stress-tests health care systems across the world, the importance of cost control and productivity amplification for patient care has become apparent.

As such, frugality, rather than novelty, may hold the key to health care cost containment. Redesigning the innovation agenda to stem the tide of ever-rising NHEs is an essential strategy to promote widespread access to care—as well as high-value preventive care—in this country. In the words of investors across Silicon Valley: cost-reducing innovation is no longer a “nice-to-have” but a “need-to-have” for the future of health and overall well-being in this country.

So Do We Need A New Way of Disseminating Scientific Information?  Can Curation Help?

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, seems to have somehow exacerbated the problem. We are still critically short on analysis!



Old Science 1.0 is still the backbone of all scientific discourse, built on the massive corpus of experimental and review literature. That literature was originally in analog format; we have since moved to more accessible digital, open-access formats for both publications and raw data. Science 1.0 had an organizing structure—the scientific method for organizing data and literature, with libraries and systems like the Dewey Decimal System as the indexers—while 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure, and 2.0 relied on an army, mostly of volunteers, who had little incentive to co-curate and organize the findings and the massive literature.



The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move from Science 2.0—where dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media—to a Science 3.0 format will look like. What will it involve, and which paradigms will be turned upside down?

We have discussed this in other posts such as

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

and

Curation Methodology – Digital Communication Technology to mitigate Published Information Explosion and Obsolescence in Medicine and Life Sciences

For years the pharmaceutical industry has toyed with the idea of creating innovation networks and innovation hubs.

It has been the main focus of entire conferences:

Tales from the Translational Frontier – Four Unique Approaches to Turning Novel Biology into Investable Innovations @BIOConvention #BIO2018

However, it still seems these strategies have not worked.

Is it because we did not have an execution plan? Or because we did not understand the lead measures for success?

Other Related Articles on this Open Access Scientific Journal Include:

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

Analysis of Utilizing LPBI Group’s Scientific Curation Platform as an Educational Tool: New Paradigm for Student Engagement

Global Alliance for Genomics and Health Issues Guidelines for Data Siloing and Sharing

Multiple Major Scientific Journals Will Fully Adopt Open Access Under Plan S

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

Read Full Post »

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I had put on two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was on the importance of scientific curation of findings within the realm of social media and Web 2.0; a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. And through this new media, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system sat atop a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way into this new media was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption, one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad sides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

 

Even PeerJ, in a column celebrating an anniversary of a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang up in full force BEFORE anyone thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology[edit]

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept[edit]

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception[edit]

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] However, the news website also states that “[the decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Other critics of Web3 see the concept as part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin in 2022 and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

Institutions that are looking for blockchains that can “produce utility and some intrinsic value over time” might consider some other smart-contract blockchains that have been driving the growth of decentralized finance and Web 3.0, the third generation of the Internet, according to Gardner.

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities, from NFT (nonfungible tokens) to DeFi,“ Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94 after DeSo was listed on Coinbase Pro on Monday, before it fell back to about $95, according to CoinGecko.
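As a quick arithmetic check of the reported move (illustrative only; the prices are the approximate figures quoted above):

```python
# A move from about $94 to about $164 is roughly a 74% surge.
old_price, new_price = 94, 164
pct_change = (new_price - old_price) / old_price * 100
print(f"{pct_change:.0f}%")  # 74%
```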

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application created by Al-Naji and his team on the DeSo blockchain, initially drew controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy, saying that DeSo now supports more than 200 social-media applications including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”

 

But before I get to the “selling monkeys to morons” quote,

I want to talk about

THE GOOD, THE BAD, AND THE UGLY

THE GOOD

My foray into Science 2.0, and pondering what the movement toward a Science 3.0 might look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who has created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/. He has also published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009.  As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 will look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm”, is on the arXiv preprint server (hosted by Cornell) at https://arxiv.org/abs/1301.2522

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format where data, papers, and ideas could be easily shared among networks of scientists.
As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals including Science, Nature and Scientific American endorsed this term and suggested that scientists move their discussions online.  However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is:
  1. good because it multiplies the efforts of many scientists, including experts, and adds to scientific discourse in ways unavailable in a 1.0 format
  2. good because online data sharing assists the process of discovery (evident with preprint servers, bio-curated databases, and GitHub projects)
  3. beneficial because open-access publishing gives free access to professional articles, and open access may become the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access” format which is not truly open access)
  4. good because sharing of unfinished works, critiques, and opinions creates visibility for scientists, who can receive credit for their expert commentary

Dr. Teif articulates a few concerns that Science 3.0 must address:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be an imperative in the dissemination of well-done, properly controlled scientific discovery.  As Science 2.0 relies on an army of scientific volunteers, the peer review process also involves an army of scientific experts who give their time to safeguard the credibility of science, by ensuring that findings are reliable and data is presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid increase of volumes of preprint server papers on Sars-COV2, that peer review is critical.  Many of these papers on such preprint servers were later either retracted or failed a stringent peer review process.

Now, many journals of the 1.0 format do not generally reward their peer reviewers beyond the self-credit that researchers list on their curricula vitae.  Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area many scientists lament about the open access movement: the burden of publication cost lies on the individual researcher).

An issue which is highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data in the biomedical literature.

B.  Open Access Publishing Model leads to biases and inequalities in the idea selection

The open-access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements”.  However, NOTHING could be further from the truth.  In advertising, the companies, not the consumer, pay for the ads.  In scientific open-access publishing, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings now falls on the individual researcher.  Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers.  Although many universities try to reimburse researchers for open-access publishing fees, the cost is then simply shifted to the institution, limiting the savings to either party.

However, this sets up a situation in which young researchers, who in general are not well funded, are struggling with the publication costs, and this sets up a bias or inequitable system which rewards the well funded older researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used.  In fact, they show roughly 1 comment per 100 views of a journal article on these systems.  In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse.  The article laments that commenting on journals should be as easy as commenting on other social sites, yet scientists are still not offering their comments or opinions.

In a personal experience,

a well-written commentary goes through editors, who usually vet a comment as if they were reviewing an original research article.  Thus many scientists, I believe, after fashioning a well-researched and referenced reply, never see it published if it is not in the editor’s interest.

Therefore anonymity is greatly needed, and its absence may be the reason scientific discourse is so limited on these types of Science 2.0 platforms.  Platforms that have had success in this arena include anonymous platforms like Wikipedia and certain closed LinkedIn professional groups, while more open platforms like Google Knowledge have been failures.
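The anonymization systems the paper calls for could, for instance, issue stable per-article pseudonyms. The sketch below is a hypothetical design, not any existing platform's implementation: a salted hash gives each commenter one consistent identity within a discussion thread while keeping identities unlinkable across articles. The ORCID and DOI strings are placeholders.

```python
import hashlib
import secrets

def pseudonym(user_id: str, article_id: str, server_salt: bytes) -> str:
    """Derive a stable per-article pseudonym: the same commenter keeps one
    identity within a thread, but identities cannot be linked across articles
    or traced back to the user without the server-side salt."""
    digest = hashlib.sha256(server_salt + article_id.encode() + user_id.encode())
    return "reviewer-" + digest.hexdigest()[:8]

salt = secrets.token_bytes(16)  # kept secret on the server
same_thread_a = pseudonym("orcid:0000-0002-1825-0097", "doi:10.1000/demo1", salt)
same_thread_b = pseudonym("orcid:0000-0002-1825-0097", "doi:10.1000/demo1", salt)
other_thread = pseudonym("orcid:0000-0002-1825-0097", "doi:10.1000/demo2", salt)
print(same_thread_a == same_thread_b, same_thread_a == other_thread)  # True False
```

A scheme like this would also let credit be assigned later: a commenter could voluntarily reveal the mapping to claim their contributions, addressing the credit problem discussed below.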

A great example on this platform was a very spirited conversation on LinkedIn on genomics, tumor heterogeneity and personalized medicine which we curated from the LinkedIn discussion (unfortunately LinkedIn has closed many groups) seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

 


 

In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized.  However, researchers would then not get any credit for their opinions or commentaries.

A major problem is how to take these intangible contributions and make them into tangible assets that would both promote discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

 

UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out and gave their thoughts on what Science 3.0 and the future of dissemination of science might look like, in light of this new Metaverse.  However, we have to define each of these terms in light of science, and not just treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair amount of impressions (60) as well as interactors (50).

For the recording, on both Twitter as well as in an audio format, please see below.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Set a reminder for my upcoming Space! <a href="https://t.co/7mOpScZfGN">https://t.co/7mOpScZfGN</a> <a href="https://twitter.com/Pharma_BI?ref_src=twsrc%5Etfw">@Pharma_BI</a> <a href="https://twitter.com/PSMTempleU?ref_src=twsrc%5Etfw">@PSMTempleU</a> <a href="https://twitter.com/hashtag/science3_0?src=hash&amp;ref_src=twsrc%5Etfw">#science3_0</a> <a href="https://twitter.com/science2_0?ref_src=twsrc%5Etfw">@science2_0</a></p>&mdash; Stephen J Williams (@StephenJWillia2) <a href="https://twitter.com/StephenJWillia2/status/1519776668176502792?ref_src=twsrc%5Etfw">April 28, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

 

 

To introduce this discussion, first some start-off material to frame the discourse.

 






The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and what paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. However, this literature was in analog format, and we have since moved to a more accessible, digital, open-access format for both publications and raw data. Science 1.0 had an organizing structure, like the Dewey decimal system and indexing, while 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 it relied on an army of mostly volunteers who did not have much in the way of incentivization to co-curate and organize the findings and the massive literature.

Each version of Science has its caveats: benefits as well as deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age along with 2.0 platforms seemed somehow to exacerbate this. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0, or Metaverse, so that similar issues do not get in the way: namely, we need to create an organizing structure (perhaps as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!

Are these new technologies the cure or is it just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new Metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The Following are some slides from representative Presentations
Other article of note on this topic on this Open Access Scientific Journal Include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 

Read Full Post »

Novartis uses a ‘dimmer switch’ medication to fine-tune gene therapy candidates

Reporter: Amandeep Kaur, BSc., MSc.

Using viral vectors, lipid nanoparticles, and other technologies, significant progress has been achieved in refining the delivery of gene treatments. However, modifications to the cargo itself are still needed to increase safety and efficacy by better controlling gene expression.

To that end, researchers at Children’s Hospital of Philadelphia (CHOP) have created a “dimmer switch” system that employs Novartis’ investigational Huntington’s disease medicine branaplam (LMI070) as a regulator to fine-tune the quantity of proteins generated from a gene therapy.

According to a new study published in Nature, the Xon system altered quantities of erythropoietin—which is used to treat anaemia associated with chronic renal disease—delivered to mice using viral vectors. The method has previously been licenced by Novartis, the maker of the Zolgensma gene therapy for spinal muscular atrophy.

The Xon system depends on a process known as “alternative splicing,” in which RNA is spliced to include or exclude specific exons of a gene, allowing the gene to code for multiple proteins. The team used branaplam, a small-molecule RNA-splicing modulator, for this platform. The medication was created to improve SMN2 gene splicing in order to cure spinal muscular atrophy. Novartis shifted its research to try the medication against Huntington’s disease after a trial failure.

A gene therapy’s payload remains dormant until oral branaplam is given, according to the Xon design. The medicine activates the expression of the therapy’s functional gene by causing it to splice in the desired way. Scientists from CHOP and the Novartis Institutes for BioMedical Research put the dimmer switch to the test in an Epo gene therapy delivered through adeno-associated viral vectors. The use of branaplam increased the mice’s blood Epo levels and raised hematocrit (the proportion of red blood cells in whole blood) by 60% to 70%, according to the researchers. As the hematocrit decreased back to baseline, the researchers dosed the rodents with branaplam again, which reinduced Epo to levels similar to those seen in the initial studies.
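Davidson's point (quoted below) that expression can "dim down" at a pace corresponding to the protein's half-life is simple first-order decay. The sketch below uses invented numbers, a 100-unit induction and an assumed 7-day half-life, neither taken from the study, just to show the arithmetic:

```python
import math

def protein_level(initial: float, half_life_days: float, t_days: float) -> float:
    """First-order decay after an induction pulse: expression 'dims down'
    at a pace set by the protein's half-life."""
    return initial * math.exp(-math.log(2) * t_days / half_life_days)

# Illustrative numbers only (not from the study): start at 100 arbitrary
# units, assume a 7-day half-life, and check two half-lives later.
level = protein_level(initial=100.0, half_life_days=7.0, t_days=14.0)
print(round(level, 1))  # 25.0
```

Under this model, re-dosing the splicing modulator once the level approaches baseline, as the researchers did with branaplam, restarts the curve from a fresh induction peak.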

The researchers also demonstrated that the Xon system could be used to regulate progranulin expression, which is utilised to treat PGRN-deficient frontotemporal dementia and neuronal ceroid lipofuscinosis. The scientists emphasised that gene therapy requires a small treatment window to be both safe and effective.

In a statement, Beverly Davidson, Ph.D., the study’s senior author, said, “The dose of a medicine can define how high you want expression to be, and then the system can automatically ‘dim down’ at a pace corresponding to the half-life of the protein.”

“We may imagine scenarios in which a medication is used only once, such as to control the expression of foreign proteins required for gene editing, or only on a limited basis. Because the splicing modulators we examined are administered orally, compliance to control protein expression from viral vectors including Xon-based cassettes should be high.”

In gene-modifying medicines, scientists have tried a variety of approaches to alter gene expression. For example, methyl groups were utilised as a switch to turn on or off expression of genes in the gene-editing system CRISPR by a team of researchers from the Massachusetts Institute of Technology and the University of California, San Francisco.

Auxolytic, a biotech company founded by Stanford University academics, has described how knocking down a gene called UMPS could render T-cell therapies inactive by depriving T cells of the nutrient uridine. Xon could also be tailored to work with cancer CAR-T cell therapy, according to the CHOP-Novartis researchers: the dimmer switch could help prevent cell depletion by halting CAR expression. Such a tuneable switch could also help CRISPR-based treatments by providing “a short burst” of production of CRISPR effector proteins to prevent undesirable off-target editing, the researchers said.

Source: https://www.fiercebiotech.com/research/novartis-fine-tunes-gene-therapy-a-huntington-s-disease-candidate-as-a-dimmer-switch?mkt_tok=Mjk0LU1RRi0wNTYAAAF-q1ives09mmSQhXDd_jhF0M11KBMt0K23Iru3ZMcZFf-vcFQwMMCxTOiWM-jHaEvtyGOM_ds_Cw6NuB9B0fr79a3Opgh32TjXaB-snz54d2xU_fw

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Gene Therapy could be a Boon to Alzheimer’s disease (AD): A first-in-human clinical trial proposed

Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/03/22/gene-therapy-could-be-a-boon-to-alzheimers-disease-ad-a-first-in-human-clinical-trial-proposed/

Top Industrialization Challenges of Gene Therapy Manufacturing

Guest Authors: Dr. Mark Szczypka and Clive Glover

https://pharmaceuticalintelligence.com/2021/03/29/top-industrialization-challenges-of-gene-therapy-manufacturing/

Dysregulation of ncRNAs in association with Neurodegenerative Disorders

Curator: Amandeep Kaur

https://pharmaceuticalintelligence.com/2021/01/11/dysregulation-of-ncrnas-in-association-with-neurodegenerative-disorders/

Cancer treatment using CRISPR-based Genome Editing System 

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2021/01/09/59906/

CRISPR-Cas9 and the Power of Butterfly Gene Editing

Reporter: Madison Davis

https://pharmaceuticalintelligence.com/2020/08/23/crispr-cas9-and-the-power-of-butterfly-gene-editing/

Gene Editing for Exon 51: Why CRISPR Snipping might be better than Exon Skipping for DMD

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/01/23/gene-editing-for-exon-51-why-crispr-snipping-might-be-better-than-exon-skipping-for-dmd/

Gene Editing: The Role of Oligonucleotide Chips

Curators: Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/01/07/gene-editing-the-role-of-oligonucleotide-chips/

Cause of Alzheimer’s Discovered: protein SIRT6 role in DNA repair process – low levels enable DNA damage accumulation

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2017/06/15/cause-of-alzheimers-discovered-protein-sirt6-role-in-dna-repair-process-low-levels-enable-dna-damage-accumulation/

Delineating a Role for CRISPR-Cas9 in Pharmaceutical Targeting

Author & Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/08/30/delineating-a-role-for-crispr-cas9-in-pharmaceutical-targeting/

Brain Science

Larry H Bernstein, MD, FCAP, Curator

https://pharmaceuticalintelligence.com/2015/11/03/brain-science/

Read Full Post »

Emergence of a new SARS-CoV-2 variant from GR clade with a novel S glycoprotein mutation V1230L in West Bengal, India

Authors: Rakesh Sarkar, Ritubrita Saha, Pratik Mallick, Ranjana Sharma, Amandeep Kaur, Shanta Dutta, Mamta Chawla-Sarkar

Reporter and Original Article Co-Author: Amandeep Kaur, B.Sc. , M.Sc.

Abstract
Since its inception in late 2019, SARS-CoV-2 has evolved resulting in emergence of various variants in different countries. These variants have spread worldwide resulting in devastating second wave of COVID-19 pandemic in many countries including India since the beginning of 2021. To control this pandemic continuous mutational surveillance and genomic epidemiology of circulating strains is very important. In this study, we performed mutational analysis of the protein coding genes of SARS-CoV-2 strains (n=2000) collected during January 2021 to March 2021. Our data revealed the emergence of a new variant in West Bengal, India, which is characterized by the presence of 11 co-existing mutations including D614G, P681H and V1230L in S-glycoprotein. This new variant was identified in 70 out of 412 sequences submitted from West Bengal. Interestingly, among these 70 sequences, 16 sequences also harbored E484K in the S glycoprotein. Phylogenetic analysis revealed strains of this new variant emerged from GR clade (B.1.1) and formed a new cluster. We propose to name this variant as GRL or lineage B.1.1/S:V1230L due to the presence of V1230L in S glycoprotein along with GR clade specific mutations. Co-occurrence of P681H, previously observed in UK variant, and E484K, previously observed in South African variant and California variant, demonstrates the convergent evolution of SARS-CoV-2 mutation. V1230L, present within the transmembrane domain of S2 subunit of S glycoprotein, has not yet been reported from any country. Substitution of valine with more hydrophobic amino acid leucine at position 1230 of the transmembrane domain, having role in S protein binding to the viral envelope, could strengthen the interaction of S protein with the viral envelope and also increase the deposition of S protein to the viral envelope, and thus positively regulate virus infection. 
P681H and E484K mutations have already been demonstrated to favor increased infectivity and immune evasion, respectively. Therefore, the new variant carrying D614G, P681H, V1230L and E484K is expected to have better infectivity, transmissibility and immune-evasion characteristics, which may pose an additional threat alongside B.1.617 in the ongoing COVID-19 pandemic in India.
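The proposed naming rule in the abstract can be paraphrased as a toy classifier. This is a simplification for illustration only; real lineage assignment relies on full phylogenetic analysis, not a mutation lookup:

```python
def classify_variant(mutations: set) -> str:
    """Toy rule mirroring the proposed naming: GR-clade strains carrying the
    novel V1230L substitution are labelled B.1.1/S:V1230L. (Real lineage
    assignment uses phylogenetics, not a simple subset check like this.)"""
    gr_defining = {"D614G", "P681H", "V1230L"}
    if gr_defining <= mutations:
        label = "B.1.1/S:V1230L"
        if "E484K" in mutations:
            label += " + E484K"  # the subset (16 of 70 sequences) with E484K
        return label
    return "other"

print(classify_variant({"D614G", "P681H", "V1230L", "E484K"}))  # B.1.1/S:V1230L + E484K
print(classify_variant({"D614G"}))                              # other
```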

Reference: Sarkar, R. et al. (2021) Emergence of a new SARS-CoV-2 variant from GR clade with a novel S glycoprotein mutation V1230L in West Bengal, India. medRxiv. https://doi.org/10.1101/2021.05.24.21257705; https://www.medrxiv.org/content/10.1101/2021.05.24.21257705v1

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur

https://pharmaceuticalintelligence.com/2021/04/13/fighting-chaos-with-care/

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/04/19/identification-of-novel-genes-in-human-that-fight-covid-19-infection/

Mechanism of Thrombosis with AstraZeneca and J & J Vaccines: Expert Opinion by Kate Chander Chiang & Ajay Gupta, MD

Reporter & Curator: Dr. Ajay Gupta, MD

https://pharmaceuticalintelligence.com/2021/04/14/mechanism-of-thrombosis-with-astrazeneca-and-j-j-vaccines-expert-opinion-by-kate-chander-chiang-ajay-gupta-md/

Read Full Post »

Multiple Major Scientific Journals Will Fully Adopt Open Access Under Plan S

Curator: Stephen J. Williams, PhD

More university library systems have been pressuring major scientific publishing houses to adopt an open-access strategy in order to reduce the libraries’ budgetary burdens.  In fact, some major universities, like the University of California system (along with other publicly funded universities in the state), Oxford University in the UK, and even MIT, have decided to become their own publishing houses in a concerted effort to fight back against soaring journal subscription costs, as well as the costs burdening individual scientists and laboratories (some of the charges to publish one paper can run as high as $8,000 USD while the journal still retains all distribution rights to the information).  Therefore, more and more universities, along with concerted efforts by the European Union and the US government, are mandating that scientific literature be published in an open-access format.

The results of this pressure are now evident, as major journals like Nature, JBC, and others plan to go fully open access in 2021.  Below is a listing of, and news reports on, some of these journals' plans to adopt a full open-access format.

 

Nature to join open-access Plan S, publisher says

09 April 2020; updated 14 April 2020

Springer Nature says it commits to offering researchers a route to publishing open access in Nature and most Nature-branded journals from 2021.

Richard Van Noorden

After a change in the rules of the bold open-access (OA) initiative known as Plan S, publisher Springer Nature said on 8 April that many of its non-OA journals — including Nature — were now committed to joining the plan, pending discussion of further technical details.

This means that Nature and other Nature-branded journals that publish original research will now look to offer an immediate OA route after January 2021 to scientists who want it, or whose funders require it, a spokesperson says. (Nature is editorially independent of its publisher, Springer Nature.)

“We are delighted that Springer Nature is committed to transitioning its journals to full OA,” said Robert Kiley, head of open research at the London-based biomedical funder Wellcome, and the interim coordinator for Coalition S, a group of research funders that launched Plan S in 2018.

But Lisa Hinchliffe, a librarian at the University of Illinois at Urbana–Champaign, says the changed rules show that publishers have successfully pushed back against Plan S, softening its guidelines and expectations — in particular in the case of hybrid journals, which publish some content openly and keep other papers behind paywalls. “The coalition continues to take actions that rehabilitate hybrid journals into compliance rather than taking the hard line of unacceptability originally promulgated,” she says.

What is Plan S?

The goal of Plan S is to make scientific and scholarly works free to read as soon as they are published. So far, 17 national funders, mostly in Europe, have joined the initiative, as have the World Health Organization and two of the world’s largest private biomedical funders — the Bill & Melinda Gates Foundation and Wellcome. The European Commission will also implement an OA policy that is aligned with Plan S. Together, this covers around 7% of scientific articles worldwide, according to one estimate. A 2019 report published by the publishing-services firm Clarivate Analytics suggested that 35% of the research content published in Nature in 2017 acknowledged a Plan S funder (see ‘Plan S papers’).

PLAN S PAPERS

Journal                        Total papers in 2017    % acknowledging Plan S funder
Nature                         290                     35%
Science                        235                     31%
Proc. Natl Acad. Sci. USA      639                     20%

Source: The Plan S footprint: Implications for the scholarly publishing landscape (Institute for Scientific Information, 2019)
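The percentages in the table above can be turned back into approximate paper counts. A minimal sketch in Python, using only the totals and percentages from the table (the helper function is illustrative, not from the report; integer math is used so the approximations are deterministic):

```python
# Totals and percentages from the table above (ISI, 2019).
plan_s_footprint = {
    "Nature": (290, 35),
    "Science": (235, 31),
    "Proc. Natl Acad. Sci. USA": (639, 20),
}

def papers_acknowledging(total: int, pct: int) -> int:
    """Approximate count of papers acknowledging a Plan S funder,
    from a journal's total and a whole-number percentage."""
    return total * pct // 100

for journal, (total, pct) in plan_s_footprint.items():
    n = papers_acknowledging(total, pct)
    print(f"{journal}: ~{n} of {total} papers ({pct}%)")
```

So roughly a hundred 2017 Nature papers would fall under Plan S funder mandates, consistent with the ~35% figure cited in the article.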

 

Source: https://www.nature.com/articles/d41586-020-01066-5

Opening ASBMB publications freely to all

 

Lila M. Gierasch, Editor-in-Chief, Journal of Biological Chemistry

Nicholas O. Davidson

Kerry-Anne Rye, Editors-in-Chief, Journal of Lipid Research and 

Alma L. Burlingame, Editor-in-Chief, Molecular and Cellular Proteomics

 

We are extremely excited to announce on behalf of the American Society for Biochemistry and Molecular Biology (ASBMB) that the Journal of Biological Chemistry (JBC), Molecular & Cellular Proteomics (MCP), and the Journal of Lipid Research (JLR) will be published as fully open-access journals beginning in January 2021. This is a landmark decision that will have huge impact for readers and authors. As many of you know, many researchers have called for journals to become open access to facilitate scientific progress, and many funding agencies across the globe are either already requiring or considering a requirement that all scientific publications based on research they support be published in open-access journals. The ASBMB journals have long supported open access, making the accepted author versions of manuscripts immediately and permanently available, allowing authors to opt in to the immediate open publication of the final version of their paper, and endorsing the goals of the larger open-access movement (1). However, we are no longer satisfied with these measures. To live up to our goals as a scientific society, we want to freely distribute the scientific advances published in JBC, MCP, and JLR as widely and quickly as possible to support the scientific community. How better can we facilitate the dissemination of new information than to make our scientific content freely open to all?

For ASBMB journals and others who have contemplated or made the transition to publishing all content open access, achieving this milestone generally requires new financial mechanisms. In the case of the ASBMB journals, the transition to open access is being made possible by a new partnership with Elsevier, whose established capabilities and economies of scale make the costs associated with open-access publication manageable for the ASBMB (2). However, we want to be clear: The ethos of ASBMB journals will not change as a consequence of this new alliance. The journals remain society journals: The journals are owned by the society, and all scientific oversight for the journals will remain with ASBMB and its chosen editors. Peer review will continue to be done by scientists reviewing the work of scientists, carried out by editorial board members and external referees on behalf of the ASBMB journal leadership. There will be no intervention in this process by the publisher.

Although we will be saying “goodbye” to many years of self-publishing (115 in the case of JBC), we are certain that we are taking this big step for all the right reasons. The goal for JBC, MCP, and JLR has always been and will remain to help scientists advance their work by rapidly and effectively disseminating their results to their colleagues and facilitating the discovery of new findings (1, 3), and open access is only one of many innovations and improvements in science publishing that could help the ASBMB journals achieve this goal. We have been held back from fully exploring these options because of the challenges of “keeping the trains running” with self-publication. In addition to allowing ASBMB to offer all the content in its journals to all readers freely and without barriers, the new partnership with Elsevier opens many doors for ASBMB publications, from new technology for manuscript handling and production, to facilitating reader discovery of content, to deploying powerful analytics to link content within and across publications, to new opportunities to improve our peer review mechanisms. We have all dreamed of implementing these innovations and enhancements (4, 5) but have not had the resources or infrastructure needed.

A critical aspect of moving to open access is how this decision impacts the cost to authors. Like most publishers that have made this transition, we have been extremely worried that achieving open-access publishing would place too big a financial burden on our authors. We are pleased to report that the article-processing charges (APCs) to publish in ASBMB journals will be on the low end within the range of open-access fees: $2,000 for members and $2,500 for nonmembers. While slightly higher than the cost an author incurs now if the open-access option is not chosen, these APCs are lower than the current charges for open access on our existing platform.

References

1. Gierasch, L. M., Davidson, N. O., Rye, K.-A., and Burlingame, A. L. (2019) For the sake of science. J. Biol. Chem. 294, 2976

2. Gierasch, L. M. (2017) On the costs of scientific publishing. J. Biol. Chem. 292, 16395–16396

3. Gierasch, L. M. (2020) Faster publication advances your science: The three R's. J. Biol. Chem. 295, 672

4. Gierasch, L. M. (2017) JBC is on a mission to facilitate scientific discovery. J. Biol. Chem. 292, 6853–6854

5. Gierasch, L. M. (2017) JBC's New Year's resolutions: Check them off! J. Biol. Chem. 292, 21705–21706

 

Source: https://www.jbc.org/content/295/22/7814.short?ssource=mfr&rss=1

 

Open access publishing under Plan S to start in 2021

BMJ

2019;365 doi: https://doi.org/10.1136/bmj.l2382 (Published 31 May 2019). Cite this as: BMJ 2019;365:l2382

From 2021, all research funded by public or private grants should be published in open access journals, according to a group of funding agencies called cOAlition S.1

The plan is the final version of a draft that was put to public consultation last year and attracted 344 responses from institutions, almost half of them from the UK.2 The responses have been considered and some changes made to the new system called Plan S, a briefing at the Science Media Centre in London was told on 29 May.

The main change has been to delay implementation for a year, to 1 January 2021, to allow more time for those involved—researchers, funders, institutions, publishers, and repositories—to make the necessary changes, said John-Arne Røttingen, chief executive of the Research Council of Norway.

“All research contracts signed after that date should include the obligation to publish in an open access journal,” he said. …

(Please note, in a huge bit of irony, this article is NOT open access and sits behind a paywall. Yes, an article about an announcement to go open access is not itself open access.)

Source: https://www.bmj.com/content/365/bmj.l2382.full

Plan S

From Wikipedia, the free encyclopedia


Plan S is an initiative for open-access science publishing launched in 2018[1][2] by “cOAlition S”,[3] a consortium of national research agencies and funders from twelve European countries. The plan requires scientists and researchers who benefit from state-funded research organisations and institutions to publish their work in open repositories or in journals that are available to all by 2021.[4] The “S” stands for “shock”.[5]

Principles of the plan

The plan is structured around ten principles.[3] The key principle states that by 2021, research funded by public or private grants must be published in open-access journals or platforms, or made immediately available in open access repositories without an embargo. The ten principles are:

  1. authors should retain copyright on their publications, which must be published under an open license such as Creative Commons;
  2. the members of the coalition should establish robust criteria and requirements for compliant open access journals and platforms;
  3. they should also provide incentives for the creation of compliant open access journals and platforms if they do not yet exist;
  4. publication fees should be covered by the funders or universities, not individual researchers;
  5. such publication fees should be standardized and capped;
  6. universities, research organizations, and libraries should align their policies and strategies;
  7. for books and monographs, the timeline may be extended beyond 2021;
  8. open archives and repositories are acknowledged for their importance;
  9. hybrid open-access journals are not compliant with the key principle;
  10. members of the coalition should monitor and sanction non-compliance.

Member organisations

Organisations in the coalition behind Plan S include:[14]

International organizations that are members:

Plan S is also supported by:

 

Other articles on Open Access on this Open Access Journal Include:

MIT, guided by open access principles, ends Elsevier negotiations, an act followed by other University Systems in the US and in Europe

 

Open Access e-Scientific Publishing: Elected among 2018 Nature’s 10 Top Influencers – ROBERT-JAN SMITS: A bureaucrat launched a drive to transform science publishing

 

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

 

Mozilla Science Lab Promotes Data Reproduction Through Open Access: Report from 9/10/2015 Online Meeting

 

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

 

The Fatal Self Distraction of the Academic Publishing Industry: The Solution of the Open Access Online Scientific Journals
PeerJ Model for Open Access Scientific Journal
“Open Access Publishing” is becoming the mainstream model: “Academic Publishing” has changed Irrevocably
Open-Access Publishing in Genomics


Read Full Post »

The Castleman Disease Research Network publishes Phase 1 Results of Drug Repurposing Database for COVID-19

Reporter: Stephen J. Williams, PhD.

 

From CNN at https://www.cnn.com/2020/06/27/health/coronavirus-treatment-fajgenbaum-drug-review-scn-wellness/index.html

Updated 8:17 AM ET, Sat June 27, 2020

(CNN) Every morning, Dr. David Fajgenbaum takes three life-saving pills. He wakes up his 21-month-old daughter Amelia to help feed her. He usually grabs some Greek yogurt to eat quickly before sitting down in his home office. Then he spends most of the next 14 hours leading dozens of fellow researchers and volunteers in a systematic review of all the drugs that physicians and researchers have used so far to treat Covid-19. His team has already pored over more than 8,000 papers on how to treat coronavirus patients.

The 35-year-old associate professor at the University of Pennsylvania Perelman School of Medicine leads the school’s Center for Cytokine Storm Treatment & Laboratory. For the last few years, he has dedicated his life to studying Castleman disease, a rare condition that nearly claimed his life. Against epic odds, he found a drug that saved his own life six years ago, by creating a collaborative method for organizing medical research that could be applicable to thousands of human diseases. But after seeing how the same types of flares of immune-signaling cells, called cytokine storms, kill both Castleman and Covid-19 patients alike, his lab has devoted nearly all of its resources to aiding doctors fighting the pandemic.

A global repository for Covid-19 treatment data

Researchers working with his lab have reviewed published data on more than 150 drugs that doctors around the world have used to treat nearly 50,000 patients diagnosed with Covid-19. They've made their analysis public in a database called the Covid-19 Registry of Off-label & New Agents (or CORONA for short).

It's a central repository of all available data in scientific journals on all the therapies used so far to curb the pandemic. This information can help doctors treat patients and tell researchers how to build clinical trials. The team's process resembles the coordinated approach Fajgenbaum used as a medical student to discover that he could repurpose Sirolimus, an immunosuppressant drug approved for kidney transplant patients, to prevent his body from producing deadly flares of immune-signaling cells called cytokines. The 13 members of Fajgenbaum's lab recruited dozens of other scientific colleagues to join their coronavirus effort. And what this group is finding has ramifications for scientists globally.

This work by Dr. Fajgenbaum's lab and the resulting collaboration show the power and speed with which a coordinated open-science effort can achieve its goals. Below is the description of the phased efforts, planned and completed, from the CORONA website.

CORONA (COvid19 Registry of Off-label & New Agents)

Drug Repurposing for COVID-19

Our overarching vision:  A world where data on all treatments that have been used against COVID19 are maintained in a central repository and analyzed so that physicians currently treating COVID19 patients know what treatments are most likely to help their patients and so that clinical trials can be appropriately prioritized.

 

Phase 1: COMPLETED

Our team reviewed 2500+ papers & extracted data on over 9,000 COVID19 patients. We found 115 repurposed drugs that have been used to treat COVID19 patients and analyzed data on which ones seem most promising for clinical trials. This data is open source and can be used by physicians to treat patients and prioritize drugs for trials. The CDCN will keep this database updated as a resource for this global fight. Repurposed drugs give us the best chance to help COVID19 as quickly as possible! As disease hunters who have identified and repurposed drugs for Castleman disease, we’re applying our ChasingMyCure approach to COVID19.

Read our systematic literature review published in Infectious Diseases and Therapy at the following link: Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review

From Fajgenbaum, D.C., Khor, J.S., Gorzewski, A. et al. Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review. Infect Dis Ther (2020). https://doi.org/10.1007/s40121-020-00303-8

The following is the abstract and link to the meta-study.  This study was a systematic review of the literature with strict inclusion criteria.  Data were curated from the published studies: a total of 9152 patients were evaluated for the treatment regimens used for COVID-19 complications, and the clinical responses to those therapies were recorded.  The main insights from this study were as follows:

Key Summary Points

Why carry out this study?
  • Data on drugs that have been used to treat COVID-19 worldwide are currently spread throughout disparate publications.
  • We performed a systematic review of the literature to identify drugs that have been tried in COVID-19 patients and to explore clinically meaningful response time.
What was learned from the study?
  • We identified 115 uniquely referenced treatments administered to COVID-19 patients. Antivirals were the most frequently administered class; combination lopinavir/ritonavir was the most frequently used treatment.
  • This study presents the latest status of off-label and experimental treatments for COVID-19. Studies such as this are important for all diseases, especially those that do not currently have definitive evidence from randomized controlled trials or approved therapies.

Treatments Administered to the First 9152 Reported Cases of COVID-19: A Systematic Review

Abstract

The emergence of SARS-CoV-2/2019 novel coronavirus (COVID-19) has created a global pandemic with no approved treatments or vaccines. Many treatments have already been administered to COVID-19 patients but have not been systematically evaluated. We performed a systematic literature review to identify all treatments reported to be administered to COVID-19 patients and to assess time to clinically meaningful response for treatments with sufficient data. We searched PubMed, BioRxiv, MedRxiv, and ChinaXiv for articles reporting treatments for COVID-19 patients published between 1 December 2019 and 27 March 2020. Data were analyzed descriptively. Of the 2706 articles identified, 155 studies met the inclusion criteria, comprising 9152 patients. The cohort was 45.4% female and 98.3% hospitalized, and mean (SD) age was 44.4 years (SD 21.0). The most frequently administered drug classes were antivirals, antibiotics, and corticosteroids, and of the 115 reported drugs, the most frequently administered was combination lopinavir/ritonavir, which was associated with a time to clinically meaningful response (complete symptom resolution or hospital discharge) of 11.7 (1.09) days. There were insufficient data to compare across treatments. Many treatments have been administered to the first 9152 reported cases of COVID-19. These data serve as the basis for an open-source registry of all reported treatments given to COVID-19 patients at www.CDCN.org/CORONA. Further work is needed to prioritize drugs for investigation in well-controlled clinical trials and treatment protocols.
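The descriptive analysis the abstract describes (tallying how often each treatment appears across curated reports and summarizing time to clinically meaningful response) can be sketched in a few lines of standard-library Python. This is an illustrative toy, not the CORONA code: the drug names echo the review, but the records and numbers are invented:

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical registry records in the spirit of the CORONA database:
# (drug administered, days to clinically meaningful response).
records = [
    ("lopinavir/ritonavir", 12), ("lopinavir/ritonavir", 11),
    ("lopinavir/ritonavir", 12), ("hydroxychloroquine", 10),
    ("remdesivir", 9), ("corticosteroids", 14),
]

# Most frequently administered treatment, as in the descriptive analysis.
drug_counts = Counter(drug for drug, _ in records)
top_drug, top_n = drug_counts.most_common(1)[0]

# Mean (SD) time to response for that treatment.
times = [days for drug, days in records if drug == top_drug]
print(f"{top_drug}: n={top_n}, mean {mean(times):.1f} (SD {stdev(times):.2f}) days")
```

With real data this kind of tally is what yields summary figures like "115 reported drugs" and "11.7 (1.09) days" for lopinavir/ritonavir; the review stresses that such descriptive statistics prioritize drugs for trials but cannot compare efficacy across treatments.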

Read the Press Release from PennMedicine at the following link: PennMedicine Press Release

Phase 2: Continue to update CORONA

Our team continues to work diligently to maintain an updated listing of all treatments reported to be used in COVID19 patients from papers in PubMed. We are also re-analyzing publicly available COVID19 single cell transcriptomic data alongside our iMCD data to search for novel insights and therapeutic targets.

You can visit the following link to access a database viewer built and managed by Matt Chadsey, owner of Nonlinear Ventures.

If you are a physician treating COVID19 patients, please visit the FDA’s CURE ID app to report de-identified information about drugs you’ve used to treat COVID19 in just a couple minutes.

For more information on COVID19 on this Open Access Journal please see our Coronavirus Portal at

https://pharmaceuticalintelligence.com/coronavirus-portal/

Read Full Post »

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 27, 2020 Minisymposium on Signaling in Cancer 11:45am-1:30 pm

Reporter: Stephen J. Williams, PhD.

SESSION VMS.MCB01.01 – Emerging Signaling Vulnerabilities in Cancer
April 27, 2020, 11:45 AM – 1:30 PM
Virtual Meeting: All Session Times Are U.S. EDT
DESCRIPTION

All session times are U.S. Eastern Daylight Time (EDT). Access to AACR Virtual Annual Meeting I sessions is free with registration. Register now at http://www.aacr.org/virtualam2020

Session Type

Virtual Minisymposium

Track(s)

Molecular and Cellular Biology/Genetics

16 Presentations
11:45 AM – 1:30 PM
– Chairperson

J. Silvio Gutkind. UCSD Moores Cancer Center, La Jolla, CA

11:45 AM – 1:30 PM
– Chairperson

  • in the '80s and '90s, signaling research focused on defects and also on oncogene addiction.  Now the field is switching to finding vulnerabilities in signaling cascades in cancer

Adrienne D. Cox. University of North Carolina at Chapel Hill, Chapel Hill, NC

11:45 AM – 11:55 AM
– Introduction

J. Silvio Gutkind. UCSD Moores Cancer Center, La Jolla, CA

11:55 AM – 12:05 PM
1085 – Interrogating the RAS interactome identifies EFR3A as a novel enhancer of RAS oncogenesis

Hema Adhikari, Walaa Kattan, John F. Hancock, Christopher M. Counter. Duke University, Durham, NC, University of Texas MD Anderson Cancer Center, Houston, TX

Abstract: Activating mutations in one of the three RAS genes (HRAS, NRAS, and KRAS) are detected in as much as a third of all human cancers. As oncogenic RAS mediates its tumorigenic signaling through protein-protein interactions primarily at the plasma membrane, we sought to document the protein networks engaged by each RAS isoform to identify new vulnerabilities for future therapeutic development. To this end, we determined interactomes of oncogenic HRAS, NRAS, and KRAS by BirA-mediated proximity labeling. This analysis identified roughly ** proteins shared among multiple interactomes, as well as a smaller subset unique to a single RAS oncoprotein. To identify those interactome components promoting RAS oncogenesis, we created and screened an sgRNA library targeting the interactomes for genes modifying oncogenic HRAS-, NRAS-, or KRAS-mediated transformation. This analysis identified the protein EFR3A as not only a common component of all three RAS interactomes, but one that, when inactivated, uniformly reduced the growth of cells transformed by any of the three RAS isoforms. EFR3A recruits a complex containing the druggable phosphatidylinositol (PtdIns) 4-kinase alpha (PI4KA) to the plasma membrane to generate the PtdIns species PI4P. We show that EFR3A sgRNA reduced multiple RAS effector signaling pathways, suggesting that EFR3A acts at the level of the oncoprotein itself. As lipids play a critical role in the membrane localization of RAS, we tested and found that EFR3A sgRNA reduced not only the occupancy of RAS at the plasma membrane, but also the nanoclustering necessary for signaling. Furthermore, the loss of oncogenic RAS signaling induced by EFR3A sgRNA was rescued by targeting PI4K to the plasma membrane. Taken together, these data support a model whereby EFR3A recruits PI4K to oncogenic RAS to promote plasma membrane localization and nanoclustering, and in turn, signaling and transformation.
To investigate the therapeutic potential of this new RAS enhancer, we show that EFR3A sgRNA reduced oncogenic KRAS signaling and transformed growth in a panel of pancreatic ductal adenocarcinoma (PDAC) cell lines. Encouraged by these results we are exploring whether genetically inactivating the kinase activity of PI4KA inhibits oncogenic signaling and transformation in PDAC cell lines. If true, pharmacologically targeting PI4K may hold promise as a way to enhance the anti-neoplastic activity of drugs targeting oncogenic RAS or its effectors.

@DukeU

@DukeMedSchool

@MDAndersonNews

  • different isoforms of ras mutations exist differentially in various tumor types e.g. nras vs kras
  • the C terminal end serve as hotspots of mutations and probably isoform specific functions
  • they determined the interactomes of nras and kras and determined how many candidates are ras specific
  • they overlayed results from proteomic and CRSPR screen; EFR3a was a potential target that stuck out
  • in TCGA data, patients with higher EFR3A expression had poorer prognosis
  • EFR3A promotes RAS signaling and is required for RAS-driven tumor growth (in RAS-addicted tumors?)
  • EFR3A promotes clustering of oncogenic RAS at the plasma membrane

 

12:05 PM – 12:10 PM
– Discussion

12:10 PM – 12:20 PM
1086 – Downstream kinase signaling is dictated by specific KRAS mutations; Konstantin Budagyan, Jonathan Chernoff. Drexel University College of Medicine, Philadelphia, PA, Fox Chase Cancer Center, Philadelphia, PA @FoxChaseCancer

Abstract: Oncogenic KRAS mutations are common in colorectal cancer (CRC), found in ~50% of tumors, and are associated with poor prognosis and resistance to therapy. There is substantial diversity of KRAS alleles observed in CRC. Importantly, emerging clinical and experimental analysis of relatively common KRAS mutations at amino acids G12, G13, A146, and Q61 suggest that each mutation differently influences the clinical properties of a disease and response to therapy. For example, KRAS G12 mutations confer resistance to EGFR-targeted therapy, while G13D mutations do not. Although there is clinical evidence to suggest biological differences between mutant KRAS alleles, it is not yet known what drives these differences and whether they can be exploited for allele-specific therapy. We hypothesized that different KRAS mutants elicit variable alterations in downstream signaling pathways. To investigate this hypothesis, we created a novel system by which we can model KRAS mutants in isogenic mouse colon epithelial cell lines. To generate the cell lines, we developed an assay using fluorescent co-selection for CRISPR-driven genome editing. This assay involves simultaneous introduction of single-guide RNAs (sgRNAs) to two different endogenous loci resulting in double-editing events. We first introduced Cas9 and blue fluorescent protein (BFP) into mouse colon epithelial cell line containing heterozygous KRAS G12D mutation. We then used sgRNAs targeting BFP and the mutant G12D KRAS allele along with homology-directed repair (HDR) templates for a GFP gene and a KRAS mutant allele of our choice. Cells that successfully undergo HDR are GFP-positive and contain the desired KRAS mutation. Therefore, selection for GFP-positive cells allows us to identify those with phenotypically silent KRAS edits. Ultimately, this method allows us to toggle between different mutant alleles while preserving the wild-type allele, all in an isogenic background. 
Using this method, we have generated cell lines with endogenous heterozygous KRAS mutations commonly seen in CRC (G12D, G12V, G12C, G12R, G13D). In order to elucidate cellular signaling pathway differences between the KRAS mutants, we screened the mutated cell lines using a small-molecule library of ~160 protein kinase inhibitors. We found that there are mutation-specific differences in drug sensitivity profiles. These observations suggest that KRAS mutants drive specific cellular signaling pathways, and that further exploration of these pathways may prove to be valuable for identification of novel therapeutic opportunities in CRC.
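A screen like the one described, run across isogenic KRAS-mutant lines, yields one sensitivity profile per inhibitor; flagging allele-selective compounds is then a simple filter over those profiles. A hedged sketch (the alleles come from the abstract; the inhibitor names, viability values, and thresholds are invented for illustration):

```python
# Hypothetical viability readout (fraction of control growth) for each
# inhibitor across isogenic KRAS-mutant lines. Lower = more growth inhibition.
screen = {
    "PKC-inhibitor-A":  {"G12D": 0.35, "G12V": 0.80, "G13D": 0.40},
    "CDK4-inhibitor-B": {"G12D": 0.45, "G12V": 0.50, "G13D": 0.42},
    "MEK-inhibitor-C":  {"G12D": 0.85, "G12V": 0.30, "G13D": 0.88},
}

def allele_selective_hits(screen, threshold=0.5, spread=0.3):
    """Inhibitors that suppress growth (< threshold) in at least one allele
    while largely sparing another (max - min > spread): i.e., compounds with
    mutation-specific sensitivity profiles rather than uniform toxicity."""
    hits = []
    for drug, profile in screen.items():
        lo, hi = min(profile.values()), max(profile.values())
        if lo < threshold and hi - lo > spread:
            hits.append(drug)
    return hits

print(allele_selective_hits(screen))
```

In this toy data, the uniformly active CDK4 inhibitor is excluded while the two allele-selective compounds are flagged; the real analysis would additionally need replicate-aware statistics before calling a hit.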

  • fluorescent co-selection of KRAS edits by CRISPR screen in a colorectal cancer line; a cell that is competent to undergo HDR can take up multiple KRAS edits in combination
  • target only the mutant allele while leaving the wild type intact
  • the KRAS editing event was done in an APC +/- mouse cell line
  • this enabled a screen for kinase inhibitors that decreased tumor growth in isogenic cell lines; PKC alpha and beta 1 inhibitors, as well as CDK4 inhibitors, inhibited cell growth
  • questions about heterogeneity in KRAS clones: they looked at off-target guides and their effects in screens, then used the top two clones without off-target edits; question about 3D culture: they have not done that; question about dependency on AKT activity: perhaps the G12E has different downstream effectors

 

12:20 PM – 12:25 PM
– Discussion

12:25 PM – 12:35 PM
1087 – NF1 regulates the RAS-related GTPases, RRAS and RRAS2, independent of RAS activity; Jillian M. Silva, Lizzeth Canche, Frank McCormick. University of California, San Francisco, San Francisco, CA @UCSFMedicine

Abstract: Neurofibromin, which is encoded by the neurofibromatosis type 1 (NF1) gene, is a tumor suppressor that acts as a RAS-GTPase activating protein (RAS-GAP) to stimulate the intrinsic GTPase activity of RAS as well as the closely related RAS subfamily members, RRAS, RRAS2, and MRAS. This results in the conversion of the active GTP-bound form of RAS into the inactive GDP-bound state, leading to the downregulation of several RAS downstream effector pathways, most notably MAPK signaling. While the region of NF1 that regulates RAS activity represents only a small fraction of the entire protein, a large extent of the NF1 structural domains and their corresponding mechanistic functions remain uncharacterized, despite the fact that there is a high frequency of NF1 mutations in several different types of cancer. Thus, we wanted to elucidate the underlying biochemical and signaling functions of NF1 that are unrelated to the regulation of RAS and how loss of these functions contributes to the pathogenesis of cancer. To accomplish this objective, we used CRISPR-Cas9 methods to knock out NF1 in an isogenic “RASless” MEF model system, which is devoid of the major oncogenic RAS isoforms (HRAS, KRAS, and NRAS) and reconstituted with the KRAS4b wild-type or mutant KRASG12C or KRASG12D isoform. Loss of NF1 led to elevated RAS-GTP levels; however, this increase was not as profound as the levels in KRAS-mutated cells, nor did it provide a proliferative advantage. Although ablation of NF1 resulted in sustained activation of MAPK signaling, it also, unexpectedly, resulted in a robust increase in AKT phosphorylation compared to KRAS-mutated cells. Surprisingly, loss of NF1 in KRAS4b wild-type and KRAS-mutated cells potently suppressed the RAS-related GTPases, RRAS and RRAS2, with modest effects on MRAS, at both the transcript and protein levels.
A Clariom™D transcriptome microarray analysis revealed a significant downregulation in the NF-κB target genes, insulin-like growth factor binding protein 2 (IGFBP2), argininosuccinate synthetase 1 (ASS1), and DUSP1, in both the NF1 knockout KRAS4b wild-type and KRAS-mutated cells. Moreover, NF1Null melanoma cells also displayed a potent suppression of RRAS and RRAS2 as well as these NF-κB transcription factors. Since RRAS and RRAS2 both contain the same NF-κB transcription factor binding sites, we hypothesize that IGFBP2, ASS1, and/or DUSP1 may contribute to the NF1-mediated regulation of these RAS-related GTPases. More importantly, this study provides the first evidence of at least one novel RAS-independent function of NF1 to regulate the RAS-related subfamily members, RRAS and RRAS2, in a manner exclusive of its RAS-GTPase activity and this may provide insight into new potential biomarkers and molecular targets for treating patients with mutations in NF1.
  • NF1 and SPRED work together to signal from the RTK cKIT through RAS
  • NF1 knockout cells had higher KRAS-GTP levels and increased cell proliferation
  • NF1 -/- or SPRED loss increased ERK phosphorylation, with some increase in AKT activity, compared to parental cells
  • they used isogenic cell lines devoid of all RAS isoforms, then reconstituted with specific wild-type or mutant RAS
  • NF1 and SPRED knockout both reduce RRAS expression, in an AKT-independent manner
  • NF1/SPRED knockout cells have almost no IGFBP2 protein expression (and SNAIL), so this may affect EMT
  • this effect is independent of NF1’s RAS-GAP activity (noncanonical)

12:35 PM – 12:40 PM
– Discussion

12:40 PM – 12:50 PM
1088 – Elucidating the regulation of delayed-early gene targets of sustained MAPK signaling; Kali J. Dale, Martin McMahon. University of Utah, Salt Lake City, UT, Huntsman Cancer Institute, Salt Lake City, UT

Abstract: RAS and its downstream effector, BRAF, are commonly mutated proto-oncogenes in many types of human cancer. Mutationally activated RAS or BRAF signals through the MEK→ERK MAP kinase (MAPK) pathway to regulate key cancer cell hallmarks such as cell division cycle progression, reduced programmed cell death, and enhanced cell motility. Amongst the RAS/RAF-regulated genes are those encoding integrins, alpha-beta heterodimeric transmembrane proteins that regulate cell adhesion to the extracellular matrix. Altered integrin expression has been linked to the acquisition of more aggressive behavior by melanoma, lung, and breast cancer cells, leading to diminished survival of cancer patients. We have previously documented the ability of the RAS-activated MAPK pathway to induce the expression of ITGB3, encoding integrin β3, in several different cell types. RAS/RAF-mediated induction of ITGB3 mRNA requires sustained, high-level activation of RAF→MEK→ERK signaling mediated by oncogene activation and is classified as “delayed-early,” in that it is sensitive to the protein synthesis inhibitor cycloheximide. However, to date, the regulatory mechanisms that allow for induction of ITGB3 downstream of sustained, high-level activation of MAPK signaling remain obscure. We have identified over 300 DEGs, including those encoding additional cell surface proteins, that display regulatory characteristics similar to ITGB3. We use integrin β3 as a model to test our hypothesis that delayed-early genes (DEGs) are regulated by a different mechanism than the canonical regulation of immediate-early genes. There are three regions in the chromatin upstream of ITGB3 that become more accessible during RAF activation. We are relating the chromatin changes seen during RAF activation to active enhancer histone marks. To identify the genes essential to this regulatory process, we are employing a genome-wide CRISPR knockout screen. 
The work presented from this abstract will help elucidate the regulatory properties of oncogenic progression in BRAF mutated cancers that could lead to the identification of biomarkers.

12:50 PM – 12:55 PM
– Discussion

12:55 PM – 1:05 PM
1090 – Regulation of PTEN translation by PI3K signaling maintains pathway homeostasis

Radha Mukherjee, Kiran Gireesan Vanaja, Jacob A. Boyer, Juan Qiu, Xiaoping Chen, Elisa De Stanchina, Sarat Chandarlapaty, Andre Levchenko, Neal Rosen. Memorial Sloan Kettering Cancer Center, New York, NY; Yale University, West Haven, CT. @sloan_kettering

Abstract: The PI3K pathway is a key regulator of metabolism, cell proliferation, and migration, and some of its components (e.g. PIK3CA and PTEN) are frequently altered in cancer by genetic events that deregulate its output. However, PI3K signaling is not usually the primary driver of these tumors, and inhibitors of components of the pathway have only modest antitumor effects. We now show that both physiologic and oncogenic activation of PI3K signaling, by growth factors and by an activating hotspot PIK3CA mutation respectively, cause an increase in the expression of the lipid phosphatase PTEN, thus limiting the duration of the signal and the output of the pathway in tumors. Pharmacologic and physiologic inhibition of the pathway, by HER2/PI3K/AKT/mTOR inhibitors and nutrient starvation respectively, reduce PTEN, thus buffering the effects of inhibition and contributing to the rebound in pathway activity that occurs in tumors. This regulation is found to be a feature of multiple cancer types, non-cancer cell lines, and PDX models, highlighting its role as a key conserved feedback loop within the PI3K signaling network, both in vitro and in vivo. Regulation of expression is due to mTOR/4EBP1-dependent control of PTEN translation and is lost when 4EBP1 is knocked out. Translational regulation of PTEN is therefore a major homeostatic regulator of physiologic PI3K signaling and plays a role in reducing both the output of oncogenic mutants that deregulate the pathway and the antitumor activity of PI3K pathway inhibitors.

  • mTOR can be a potent regulator of PTEN translation, and is therefore a major consideration when developing PI3K pathway inhibitors

1:05 PM – 1:10 PM
– Discussion

1:10 PM – 1:20 PM
1091 – BI-3406 and BI 1701963: Potent and selective SOS1::KRAS inhibitors induce regressions in combination with MEK inhibitors or irinotecan

Daniel Gerlach, Michael Gmachl, Juergen Ramharter, Jessica Teh, Szu-Chin Fu, Francesca Trapani, Dirk Kessler, Klaus Rumpel, Dana-Adriana Botesteanu, Peter Ettmayer, Heribert Arnhof, Thomas Gerstberger, Christiane Kofink, Tobias Wunberg, Christopher P. Vellano, Timothy P. Heffernan, Joseph R. Marszalek, Mark Pearson, Darryl B. McConnell, Norbert Kraut, Marco H. Hofmann. Boehringer Ingelheim RCV GmbH & Co KG, Vienna, Austria; The University of Texas MD Anderson Cancer Center, Houston, TX

  • there is a rationale for developing an SOS1 (GEF) inhibitor; BI-3406 shows better PK and PD as a candidate
  • the cell lines most sensitive to the inhibitor carry KRAS mutations; NRAS- or BRAF-mutant lines are not sensitive
  • KRAS mutation defines sensitivity, so they created KRAS-mutant isogenic cell lines
  • found it best to co-inhibit SOS1 and MEK, as plasticity was observed with SOS1 inhibition alone
  • the dual combination showed enhanced efficacy over monotherapy in NSCLC and pancreatic models
  • SOS1 inhibition plus irinotecan enhances DNA double-strand breaks, preferentially in tumor cells with no increased DNA damage in normal stroma
  • these SOS1 inhibitors had broad activity against KRAS-mutant models
  • a phase 1 trial started in 2019

@Boehringer

1:20 PM – 1:25 PM
– Discussion

1:25 PM – 1:30 PM
– Closing Remarks

Adrienne D. Cox. University of North Carolina at Chapel Hill, Chapel Hill, NC

Follow on Twitter at:

@pharma_BI

@AACR

@GenomeInstitute

@CureCancerNow

@UCLAJCCC

#AACR20

#AACR2020

#curecancernow

#pharmanews

Read Full Post »

In Data Science, A Pioneer Practitioner’s Portfolio of Algorithm-based Decision Support Systems for Operations Management in Several Industrial Verticals: Analytics Designer, Aviva Lev-Ari, PhD, RN

An overview of Data Science as a discipline is presented in

Data Science & Analytics: What do Data Scientists Do in 2020 and a Pioneer Practitioner’s Portfolio of Algorithm-based Decision Support Systems for Operations Management in Several Industrial Verticals

 

Against this landscape of IT, the Internet, analytics, statistics, Big Data, data science, and artificial intelligence, I aim to tell the story of my own pioneering work in data science: algorithm-based decision support systems designed for organizations in several sectors of the US economy:


  • Startups:
  1. TimeØ Group – The leader in Digital Marketplaces Design
  2. Concept Five Technologies, Inc. – Commercialization of DoD funded technologies
  3. MDSS, Inc. – SAAS in Analytical Services
  4. LPBI Group – Pharmaceutical & Media
  • Top Tier Management Consulting: SRI International, Monitor Group;
  • OEM: Amdahl Corporation;
  • Top 6th System Integrator: Perot System Corporation;
  • FFRDC: MITRE Corporation.
  • Publishing industry: Director of Research at McGraw-Hill/CTB.
  • Northeastern University: Researcher on Cardiovascular Pharmacotherapy at Bouve College of Health Sciences (independent research guided by a Professor of Pharmacology)

Type of institutions:

  • For-Profit corporations: Amdahl Corp, PSC, McGraw-Hill
  • For-Profit Top Tier Consulting: Monitor Company, now Deloitte
  • Not-for-Profit Top Tier Consulting: SRI International
  • FFRDC: MITRE
  • Pharmaceutical & Media Startup in eScientific Publishing: LPBI Group:
  1. Developers of a Curation methodology for e-Articles [N = 5,700],
  2. Developers of electronic Tables of Contents for e-Books in Medicine [N = 16, https://lnkd.in/ekWGNqA], and
  3. Developers of methodologies for real-time press coverage and production of e-Proceedings of biotech conferences [N = 70].

 

Autobiographical Annotations: Tribute to My Professors

 

Pioneering implementations of analytics to business decision making: contributions to domain knowledge conceptualization, research design, methodology development, data modeling and statistical data analysis: Aviva Lev-Ari, UCB, PhD’83; HUJI MA’76

https://pharmaceuticalintelligence.com/2018/05/28/pioneering-implementations-of-analytics-to-business-decision-making-contributions-to-domain-knowledge-conceptualization-research-design-methodology-development-data-modeling-and-statistical-data-a/

Recollections of Years at UC, Berkeley, Part 1 and Part 2

  • Recollections: Part 1 – My days at Berkeley, 9/1978 – 12/1983 – About my doctoral advisor, Allan Pred, other professors and other peers

https://pharmaceuticalintelligence.com/2018/03/15/recollections-my-days-at-berkeley-9-1978-12-1983-about-my-doctoral-advisor-allan-pred-other-professors-and-other-peer/

  • Recollections: Part 2 – “While Rolling” is preceded by “While Enrolling” Autobiographical Alumna Recollections of Berkeley – Aviva Lev-Ari, PhD’83

https://pharmaceuticalintelligence.com/2018/05/24/recollections-part-2-while-rolling-is-preceded-by-while-enrolling-autobiographical-alumna-recollections-of-berkeley-aviva-lev-ari-phd83/

Accomplishments

The Digital Age Gave Rise to New Definitions – New Benchmarks were born on the World Wide Web for the Intangible Asset of Firm’s Reputation: Pay a Premium for buying e-Reputation

For @AVIVA1950, Founder, LPBI Group @pharma_BI: Twitter Analytics [Engagement Rate, Link Clicks, Retweets, Likes, Replies] & Tweet Highlights [Tweets, Impressions, Profile Visits, Mentions, New Followers] https://analytics.twitter.com/user/AVIVA1950/tweets
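The engagement-rate metric in that dashboard is simply total engagements (link clicks, retweets, likes, replies) divided by impressions. A minimal sketch of the calculation; the counts below are invented for illustration, not figures from this account:

```python
def engagement_rate(engagements: int, impressions: int) -> float:
    """Twitter-style engagement rate: total engagements / impressions."""
    if impressions == 0:
        return 0.0
    return engagements / impressions

# Hypothetical tweet: 60 impressions; 5 likes + 3 retweets + 2 link clicks + 2 replies
total_engagements = 5 + 3 + 2 + 2
print(f"{engagement_rate(total_engagements, 60):.1%}")  # prints "20.0%"
```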

Thriving at the Survival Calls during Careers in the Digital Age – An AGE like no Other, also known as, DIGITAL

Professional Self Re-Invention: From Academia to Industry – Opportunities for PhDs in the Business Sector of the Economy

Reflections on a Four-phase Career: Aviva Lev-Ari, PhD, RN, March 2018

Was prepared for publication in American Friends of the Hebrew University (AFHU), May 2018 Newsletter, Hebrew University’s HUJI Alumni Spotlight Section.

Aviva Lev-Ari’s profile was up on 5/3/2018 on AFHU website under the Alumni Spotlight at https://www.afhu.org/

On 5/11/2018, Excerpts were Published in AFHU e-news.

https://us10.campaign-archive.com/?u=5c25136c60d4dfc4d3bb36eee&id=757c5c3aae&e=d09d2b8d72

https://www.afhu.org/2018/05/03/aviva-lev-ari/

 

Read Full Post »

Funding Research by Lottery?: How Lucky Do You Feel After Submitting a Grant

Reporter: Stephen J. Williams, Ph.D.

A recent article in Nature, “Science Funders Gamble on Grant Lotteries,” discusses an odd twist to the anxiety most researchers feel after submitting grants to an agency.  Now, along with the hours of fretting over details and verbiage in a grant application, it appears that not only great science, but the luck of the draw, may be necessary to get your work funded.  The article, by David Adam, discusses the funding strategy of the Health Research Council of New Zealand, which since 2015 has awarded some grants through random selection.  Although limited in scope and size (mainly these grants are for highly speculative, potentially transformative research, and awards are usually less than NZ$150,000), the program was meant to encourage applicants to submit riskier ideas than are usually submitted in traditional peer-reviewed grants.

Random chance will create more openness to ideas that are not in the mainstream

–  Margit Osterloh, economist at University of Zurich

Margit also mentions that many mid-ranking applications which are never funded could benefit from such a lottery system.

The Swiss National Science Foundation (SNSF) is also experimenting with random selection.  The Health Research Council states that the process is not entirely random: a computer selects projects using a random number generator, and a panel decides whether each is a reasonably good, well-written application.
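The screened-lottery idea described above can be sketched in a few lines. This is an illustrative toy, not the Health Research Council's actual procedure; the function name and the data shape are invented for the example:

```python
import random

def lottery_award(applications, budget_slots, seed=None):
    """Screened grant lottery: only applications that pass a panel's
    quality screen enter the pool; winners are then drawn at random
    until the budgeted number of awards is filled."""
    rng = random.Random(seed)  # seedable for reproducibility
    eligible = [a for a in applications if a["panel_approved"]]
    rng.shuffle(eligible)
    return eligible[:budget_slots]

apps = [
    {"id": "A", "panel_approved": True},
    {"id": "B", "panel_approved": False},  # screened out, never drawn
    {"id": "C", "panel_approved": True},
    {"id": "D", "panel_approved": True},
]
winners = lottery_award(apps, budget_slots=2, seed=42)
print([w["id"] for w in winners])
```

The point of the sketch: luck decides among the eligible, but a badly written application never enters the draw.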

Some researchers feel this random process could help eliminate bias that can be baked into the traditional peer review process.  However, many feel that peer-review panels are a necessary and rigorous step in the granting process, identifying the applications with the best chances of success based on the rigor of the proposed science.

However, Osterloh feels that the lottery idea produces a humbling effect. As she said:

If you know you have got a grant or a publication which is selected partly randomly, then you will know very well you are not the king of the Universe

Humility in science: a refreshing idea.  However, a lottery does not mean that scientists need not prepare a careful and well-written application; applications ranked very low would not enter the lottery.  Still, if one feels lucky, the obscene hours spent worrying over each sentence, or altering preliminary-data figures at the 11th hour before submission, might become a thing of the past.

Of course if you are a lucky person.

 

 

Read Full Post »
