
How to Create a Twitter Space for @pharma_BI for Live Broadcasts

Right now, Twitter Spaces are available on the Android and iOS operating systems ONLY.  To use Spaces on a desktop PC you must install an ANDROID EMULATOR, so it is best to set up your Twitter Space using the PHONE APP, not a desktop or laptop computer.  Also, even though there is now the ability to record a Twitter Space, that recording cannot yet be embedded in WordPress as easily as a tweet (or chain of tweets) can.  However, you can download the recording (this takes a day or two) and use a program like Audacity to convert it to MPEG or another audio format conducive to WordPress.
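For the conversion step, a command-line tool like ffmpeg can do the same job as Audacity in a single step. Below is a minimal Python sketch that builds the ffmpeg command; the filenames are hypothetical, and ffmpeg itself must be installed separately.

```python
import shlex

def ffmpeg_convert_cmd(src: str, dest: str, bitrate: str = "192k") -> str:
    """Build an ffmpeg command that drops any video stream (-vn) and
    re-encodes a downloaded Space recording into an audio file (e.g.,
    MP3) that WordPress can embed."""
    args = ["ffmpeg", "-i", src, "-vn", "-b:a", bitrate, dest]
    return " ".join(shlex.quote(a) for a in args)

# Hypothetical filenames; paste the printed command into a terminal.
print(ffmpeg_convert_cmd("twitter_space_recording.aac", "space_for_wordpress.mp3"))
```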

A while ago I published a post linking to a Twitter Space I created for a class on Dissemination of Scientific Discoveries.  The post

“Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?”

can be seen at

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

This online discussion was tweeted out and received a fair number of impressions (60) as well as interactions (50).

About Twitter Spaces

Spaces is a way to have live audio conversations on Twitter. Anyone can join, listen, and speak in a Space on Twitter for iOS and Android. Currently, on web you can only listen in a Space.

Quick links

How to use Spaces
Spaces FAQ
Spaces Feedback Community
Community Spaces

How to use Spaces

How do you start a Space?

Step 1

The creator of a Space is the host. As a host on iOS, you can start a Space by long-pressing the Tweet Composer from your Home timeline and then selecting the Spaces icon.

You can also start a Space by selecting the Spaces tab on the bottom of your timeline.

Step 2

Spaces are public, so anyone can join as a listener, including people who don’t follow you. Listeners can be directly invited into a Space by DMing them a link to the Space, Tweeting out a link, or sharing a link elsewhere.

Step 3

Up to 13 people (including the host and 2 co-hosts) can speak in a Space at any given time. When creating a new Space, you will see options to Name your Space and Start your Space.

Step 4

To schedule a Space, select Schedule for later. Choose the date and time you’d like your Space to go live.

Step 5

Once the Space has started, the host can send requests to listeners to become co-hosts or speakers by selecting the people icon  and adding co-hosts or speakers, or selecting a person’s profile picture within a Space and adding them as a co-host or speaker. Listeners can request permission to speak from the host by selecting the Request icon below the microphone.

Step 6

When creating a Space, the host will join with their mic off and be the only speaker in the Space. When ready, select Start your Space.

Step 7

Allow mic access (speaking ability) to speakers by toggling Allow mic access to on.

Step 8

Get started chatting in your Space.

Step 9

As a host, make sure to Tweet out the link to your Space so other people can join. Select the share icon to Share via a Tweet.

Spaces FAQ

Where is Spaces available?

Anyone can join, listen, and speak in a Space on Twitter for iOS and Android. Currently, starting a Space on web is not possible, but anyone can join and listen in a Space.

Who can start a Space?

People on Twitter for iOS and Android can start a Space.

Who can see my Space?

For now, all Spaces are public like Tweets, which means they can be accessed by anyone. They will automatically appear at the top of your Home timeline, and each Space has a link that can be shared publicly. Since Spaces are publicly accessible by anyone, it may be possible for people to listen to a Space without being listed as a guest in the Space.

We make certain information about Spaces available through the Twitter Developer Platform, such as the title of a Space, the hosts and speakers, and whether it is scheduled, in progress, or complete. For a more detailed list of the information about Spaces we make available via the Twitter API, check out our Spaces endpoints documentation.
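As a sketch of how that public metadata can be pulled programmatically (this assumes the Twitter API v2 Spaces lookup endpoint described in that documentation, and the Space ID shown is hypothetical), the request URL can be built like this; an OAuth 2.0 Bearer token is required to actually call it:

```python
import urllib.parse

API_BASE = "https://api.twitter.com/2"

def space_lookup_url(space_id: str) -> str:
    """Build the URL for a Spaces lookup request, asking for the
    public fields mentioned above: title, hosts, and state."""
    query = urllib.parse.urlencode(
        {"space.fields": "title,host_ids,state,scheduled_start"}
    )
    return f"{API_BASE}/spaces/{space_id}?{query}"

# Hypothetical Space ID, for illustration only.
print(space_lookup_url("1DXxyRYNejbKM"))
```

Called with an `Authorization: Bearer <token>` header, the endpoint returns JSON whose `data` object carries those fields.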

Can other people see my presence while I am listening or speaking in a Space?

Since all Spaces are public, your presence and activity in a Space is also public. If you are logged into your Twitter account when you are in a Space, you will be visible to everyone in the Space as well as to others, including people who follow you, people who peek into the Space without entering, and developers accessing information about the Space using the Twitter API.

If you are listening in a Space, your profile icon will appear with a purple pill at the top of your followers’ Home timelines. You have the option to change this in your settings.

Manage who can see your Spaces listening activity

Step 1

On the left nav menu, select the more icon and go to Settings and privacy.

Step 2

Under Settings, navigate to Privacy and safety.

Step 3

Under Your Twitter activity, go to Spaces.

Step 4

Choose if you want to Allow followers to see which Spaces you’re listening to by toggling this on or off.

Your followers will always see at the top of their Home timelines what Spaces you’re speaking in.

What does it mean that Spaces are public? Can anyone listen in a Space?

Spaces can be listened to by anyone on the Internet. This is part of a broader feature of Spaces that lets anyone listen to Spaces regardless of whether or not they are logged in to a Twitter account (or even have a Twitter account). Because of this, listener counts may not match the actual number of listeners, nor will the profile photos of all listeners necessarily be displayed in a Space.

How do I invite people to join a Space?

Invite people to join a Space by sending an invite via DM, Tweeting the link out to your Home timeline, or copying the invite link to send it out.

Who can join my Space?

For now, all Spaces are public and anyone can join any Space as a listener. If the listener has a user account, you can block their account. If you create a Space or are a speaker in a Space, your followers will see it at the top of their timeline.

Who can speak in my Space?

By default, your Space will always be set to Only people you invite to speak. You can also modify the Speaker permissions once your Space has been created. Select the more icon, then select Adjust settings to see the options for speaker permissions, which include Everyone, People you follow, and the default Only people you invite to speak. These permissions are only saved for this particular Space, so any Space you create in the future will use the default setting.

Once your Space has started, you can send requests to listeners to become speakers or co-hosts by selecting the people icon and adding speakers, or by selecting a person’s profile picture within a Space and adding them as a co-host or speaker. Listeners can request to speak from the host.

Hosts can also invite other people outside of the Space to speak via DM.

How does co-hosting work?

Up to 2 people can become co-hosts and speak in a Space in addition to the 11 speakers (including the primary host) at one time. Co-host status can be lost if the co-host leaves the Space. A co-host can remove their own co-host status to become a Listener again.

Hosts can transfer primary admin rights to another co-host. If the original host drops from the Space, the first co-host added will become the primary admin. The admin is responsible for promoting and facilitating a healthy conversation in the Space in line with the Twitter Rules.

Once a co-host is added to a Space, any accounts they’ve blocked on Twitter who are in the Space will be removed from the Space.

Can I schedule a Space?

Hosts can schedule a Space up to 30 days in advance and can have up to 10 scheduled Spaces at a time. Hosts can still create impromptu Spaces in the meantime, and those won’t count toward the maximum of 10 scheduled Spaces.

Before you create your Space, select the scheduler  icon and pick the date and time you’d like to schedule your Space to go live. As your scheduled start time approaches, you will receive push and in-app notifications reminding you to start your Space on time. If you don’t have notifications turned on, follow the in-app steps on About notifications on mobile devices to enable them for Spaces. Scheduled Spaces are public and people can set reminders to be notified when your scheduled Space begins.

How do I edit my scheduled Space(s)?

Follow the steps below to edit any of your scheduled Spaces.

Manage your scheduled Spaces

Step 1

From your timeline, navigate to and long press on the Tweet Composer. Or, navigate to the Spaces tab at the bottom of your timeline.

Step 2

Select the Spaces  icon.

Step 3

To manage your scheduled Spaces, select the scheduler  icon at the top.

Step 4

You’ll see the Spaces that you have scheduled.

Step 5

Navigate to the more  icon of the Space you want to manage. You can edit, share, or cancel the Space.

If you are editing your Space, make sure to select “Save changes” after making edits.

How do I get notified about a scheduled Space?

Guests can sign up for reminder notifications from a scheduled Space card in a Tweet. When the host starts the scheduled Space, the interested guests get notified via push and in-app notifications.

Can I record a Space?

Hosts can record Spaces they create for replay. When creating a Space, toggle on Record Space.

While recording, a recording symbol will appear at the top to indicate that the Space is being recorded by the host. Once the Space ends, you will see how many people attended the Space along with a link to share out via a Tweet. Under Notifications, you can also View details to Tweet the recording. Under host settings, you will have the option to choose where to start your recording with Edit start time. This allows you to cut out any dead air time that might occur at the beginning of a Space.

If you choose to record your Space, once the live Space ends, your recording will be immediately and publicly available for anyone to listen to whenever they want. You can always make a recording no longer publicly available on Twitter by deleting it via the more icon on the recording itself. Unless you delete your recording, it will remain available for replay after the live Space has ended.* As with live Spaces, Twitter will retain audio copies for 30 days after they end to review for violations of the Twitter Rules. If a violation is found, Twitter may retain audio copies for up to 120 days in total. For more information on downloading Spaces, please see the FAQ below, “What happens after a Space ends and is the data retained anywhere?”

Co-hosts and speakers who enter a Space that is being recorded will see a recording symbol (REC). Listeners will also see the recording symbol, but they will not be visible in the recording.

Recordings will show the host, co-host(s), and speakers from the live Space.

*Note: Hosts on iOS 9.15+ and Android 9.46+ will be able to record Spaces that last indefinitely. For hosts on older app versions, recording will only be available for 30 days. For Spaces that are recorded indefinitely, Twitter will retain a copy for as long as the Space is replayable on Twitter, but for no less than 30 days after the live Space ended.

What is clipping?

Clipping is a new feature we’re currently testing and gradually rolling out that lets a limited group of hosts, speakers, and listeners capture 30 seconds of audio from any live or recorded Space and share it through a Tweet if the host has not disabled the clipping function. To start clipping a Space, follow the instructions below to capture the prior 30 seconds of audio from that Space. There is no limit to the number of clips that participants in a Space can create.

When you enter the Space as a co-host or speaker, you will be informed that the Space is clippable through a tool tip notification above the clipping  icon.

Note: Currently, creating a clip is available only on iOS and Android, while playing a clip is available on all platforms to everyone.

Host instructions: How to turn off clipping

When you start your Space, you’ll receive a notification about what a clip is and how to turn it off, as clipping is on by default. You can turn off clipping at any time. To turn it off, follow the instructions below.

Step 1

Navigate to the more  icon.

Step 2

Select Adjust settings .

Step 3

Under Clips, toggle Allow clips off.

Host and speaker instructions: How to create a clipping

Step 1

In a live Space that is being recorded, or in a recorded Space, navigate to the clipping icon. Please note that, for live Spaces, unless the clipping function is disabled, clips will be publicly available on your Twitter profile after your live Space has ended even though the Space itself will no longer be available.

Step 2

On the Create clip pop-up, go to Next.

Step 3

Preview the Tweet and add a comment if you’d like, similarly to a Quote Tweet.

Step 4

Select Tweet to post it to your timeline.

Why is my clip not displaying captions?

What controls do hosts have over existing clips?

What controls do clip creators have over clips they’ve created?

Other controls over clips: how to report, block, or mute

What controls do I have over my Space?

The host and co-host(s) of a Space have control over who can speak. They can mute any Speaker, but it is up to the individual to unmute themselves if they receive speaking privileges. Hosts and co-hosts can also remove, report, and block others in the Space.

Speakers and listeners can report and block others in the Space, or can report the Space. If you block a participant in the Space, you will also block that person’s account on Twitter. If the person you blocked joins as a listener, they will appear in the participant list with a Blocked label under their account name. If the person you blocked joins as a speaker, they will also appear in the participant list with a Blocked label under their account name and you will see an in-app notification stating, “An account you blocked has joined as a speaker.” If you are entering a Space that already has a blocked account as a speaker, you will also see a warning before joining the Space stating, “You have blocked 1 person who is speaking.”

If you are hosting or co-hosting a Space, people you’ve blocked can’t join and, if you’re added as a co-host during a Space, anyone in the Space who you blocked will be removed from the Space.

What are my responsibilities as a Host or Co-Host of a Space?

As a Host, you are responsible for promoting and supporting a healthy conversation in your Space and for using your tools to ensure that the Twitter Rules are followed. The following tools are available to you if a participant in the Space is being offensive or disruptive:

  • Revoke the speaking privileges of users who are being offensive or disruptive to you or others.
  • Block, remove, or report the user.

Here are some guidelines to follow as a Host or Co-Host:

  • Always follow the Twitter Rules in the Space you host or co-host. This also applies to the title of your Space, which should not include abusive slurs, threats, or any other rule-violating content.
  • Do not encourage behavior or content that violates the Twitter Rules.
  • Do not abuse or misuse your hosting tools, such as by arbitrarily revoking speaking privileges or removing users, or use Spaces to carry out activities that break our rules, such as follow schemes.

How can I block someone in a Space?

How can I mute a speaker in a Space?

How can I see people in my Space?

Hosts, speakers, and listeners can select the people icon to see people in a Space. Since Spaces are publicly accessible by anyone, it may also be possible for an unknown number of logged-out people to listen to a Space’s audio without being listed as guests in the Space.

How can I report a Space?

How can I report a person in a Space?

Can Twitter suspend my Space while it’s live?

How many people can speak in a Space?

How many people can listen in a Space?

What happens after a Space ends and is the data retained anywhere?

Hosts can choose to record a Space prior to starting it. Hosts may download copies of their recorded Spaces for as long as we have them by using the Your Twitter Data download tool.

Whether or not a Space is recorded, Twitter retains copies of Space audio for 30 days after a Space ends to review for violations of the Twitter Rules. If a Space is found to contain a violation, we extend the time we maintain a copy for an additional 90 days (a total of 120 days after a Space ends) to allow people to appeal if they believe there was a mistake. Twitter also uses Spaces content and data for analytics and research to improve the service.

Links to Spaces that are shared out (e.g., via Tweet or DM) also contain some information about the Space, including the description, the identity of the hosts and others in the Space, as well as the Space’s current state (e.g., scheduled, live, or ended). We make this and other information about Spaces available through the Twitter Developer Platform. For a detailed list of the information about Spaces we make available, check out our Spaces endpoints documentation.

For full details on what data we retain, visit our Privacy Policy.

Who can end a Space?

Does Spaces work for accounts with protected Tweets?

Following the Twitter Rules in Spaces

Spaces Feedback Community

We’re opening up the conversation and turning it over to the people who are participating in Spaces. This Community is a dedicated place for us to connect with you on all things Spaces, whether it’s feedback around features, ideas for improvement, or any general thoughts.

Who can join?

Anyone on Spaces can join, whether you are a host, speaker, or listener.

How do I join the Community?

You can request to join the Twitter Spaces Feedback Community here. By requesting to join, you are agreeing to our Community rules.

Learn more about Communities on Twitter.

Community Spaces

As a Community admin or moderator, you can create and host a Space for your Community members to join.

Note:

Currently, creating Community Spaces is only available to some admins and moderators using the Twitter for iOS and Twitter for Android apps.

Admins & moderators: How to create a Space

Step 1

Navigate to the Community landing page.

Step 2

Long press on the Tweet Composer  and select the Spaces  icon.

Step 3

Select Spaces and begin creating your Space by adding in a title, toggling on record Space (optional), and adding relevant topics.

Step 4

Invite admins, moderators, and other people to be a part of your Space.

Members: How to find a Community Space

If a Community Space is live, you will see the Spacebar populate at the top of your Home timeline. To enter the Space and begin listening, select the live Space in the Spacebar.

Community Spaces FAQ

What are Community Spaces?

Spaces Social Narrative


A social narrative is a simple story that describes social situations and social behaviors for accessibility.

Twitter Spaces allows me to join or host live audio-only conversations with anyone.

Joining a Space

  1. When I join a Twitter Space, that means I’ll be a listener. I can join any Space on Twitter, even those hosted by people I don’t know or follow.
  2. I can join a Space by selecting a profile photo with a purple, pulsing outline at the top of my timeline, selecting a link from someone’s Tweet, or a link in a Direct Message (DM).
  3. Once I’m in a Space, I can see the profile photos and names of some people in the Space, including myself.
  4. I can hear one or multiple people talking at the same time. If it’s too loud or overwhelming, I can turn down my volume.
  5. As a listener, I am not able to speak. If I want to say something, I can send a request to the host. The host might not approve my request though.
  6. If the host accepts my request, I will become a speaker. It may take a few seconds to connect my microphone, so I’ll have to wait.
  7. Now I can unmute myself and speak. Everyone in the Space will be able to hear me.
  8. When someone says something I want to react to, I can choose an emoji to show everyone how I feel. I will be able to see when other people react as well.
  9. I can leave the Space at any time. After I leave, or when the host ends the Space, I’ll go back to my timeline.

Hosting a Space

  1. When I start a Space, that means I’ll be the host. Anyone can join my Space, even people I don’t know and people I don’t follow.
  2. Once I start my Space, it may take a few seconds to be connected, so I’ll have to wait.
  3. Now I’m in my Space and I can see my profile photo. If other logged-in people have joined, I will be able to see their profile photos, too.
  4. I will start out muted, which is what the microphone with a slash through it means. I can mute and unmute myself, and anyone in my Space, at any time.
  5. I can invite people to join my Space by sending them a Direct Message (DM), sharing the link in a Tweet, or copying the link and sharing it somewhere else, like in an email.
  6. Up to 10 other people can have speaking privileges in my Space at the same time, and I can choose who speaks and who doesn’t. People can also request to speak, and I can choose whether or not to approve their request.


Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I put up two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was about the importance of scientific curation of findings within the realm of social media and Web 2.0, a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborate to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. Through this new media, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

This system lay atop the platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous body of research papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way into this new media was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad sides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

Even PeerJ, in their column celebrating a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the costs and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang into full force BEFORE anyone thought of a way to defray the costs.

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin in 2022 and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one and layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities, from NFTs (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that and, insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94, after Deso was listed at Coinbase Pro on Monday, before it fell to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application that was created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”
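DeSo’s actual pricing mechanics are not described in the article, but the economics Al-Naji describes, buying a creator’s coin early and profiting as the creator grows, is commonly implemented with a “bonding curve,” where the coin price rises with circulating supply. A hypothetical Python sketch (the curve shape and constants are invented for illustration and do not reflect DeSo’s real rules):

```python
# Hypothetical sketch of a "creator coin" priced on a bonding curve: the more
# coins in circulation, the higher the price, so early supporters who buy low
# can profit if the creator becomes popular.  Curve and constants are invented.

def coin_price(supply, k=0.003):
    """Price of the next coin grows quadratically with circulating supply."""
    return k * supply ** 2

def buy_cost(current_supply, n_coins, k=0.003):
    """Total cost to mint n_coins, summing the curve coin by coin."""
    return sum(coin_price(current_supply + i, k) for i in range(n_coins))

# An early fan buys 10 coins cheaply...
early_cost = buy_cost(current_supply=0, n_coins=10)
# ...later, after 1,000 coins exist, the same 10 coins cost far more.
late_cost = buy_cost(current_supply=1000, n_coins=10)
print(early_cost < late_cost)  # True
```

The design choice behind such curves is that the contract itself acts as the market maker: no order book is needed, because the price is a deterministic function of supply.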

Al-Naji responded to the controversy by saying that DeSo now supports more than 200 social-media applications, including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”

 

But before I get to the “selling monkeys to morons” quote,

I want to talk about

THE GOOD, THE BAD, AND THE UGLY

THE GOOD

My foray into Science 2.0, and pondering what the movement to a Science 3.0 might look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome and has created a worldwide group of scientists who discuss chromatin and gene regulation in a journal-club format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, and he has published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009.  As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm,” is on the Cornell preprint server (arXiv) at https://arxiv.org/abs/1301.2522

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format where data, papers, and ideas could be easily shared among networks of scientists.
As Dr. Teif states, the term “Science 2.0” was coined back in 2009, and several influential journals including Science, Nature, and Scientific American endorsed the term and encouraged scientists to move their discussions online.  However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is:
  1. good because it multiplies the efforts of many scientists, including experts, and adds to scientific discourse unavailable in a 1.0 format
  2. that online data sharing is good because it assists the process of discovery (evident with preprint servers, bio-curated databases, and GitHub projects)
  3. that open-access publishing is beneficial because of free access to professional articles, and open access may be the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access” format which is not truly open access)
  4. that sharing of unfinished works, critiques, or opinions is good because it creates visibility for scientists, who can receive credit for their expert commentary

Dr. Teif articulates a few concerns about Science 3.0:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be imperative in the dissemination of well-done, properly controlled scientific discovery.  As Science 2.0 relies on an army of scientific volunteers, the peer review process also involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data are presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical.  Many of these preprint-server papers were later either retracted or failed a stringent peer review process.

Many journals of the 1.0 format do not generally reward their peer reviewers other than the self-credit that researchers list on their curricula vitae.  Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area many scientists lament about the open access movement, where the burden of publication cost lies on the individual researcher).

One issue highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.

B.  Open Access Publishing Model leads to biases and inequalities in the idea selection

The open access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements.”  However, nothing could be further from the truth.  In advertising, the publishers claim, the companies, not the consumers, pay for the ads.  In scientific open access publishing, although the consumers (libraries) do not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now put on the individual researcher.  Some of these publishing costs can be as high as $4000 USD per article, which is very high for most researchers.  Many universities will reimburse these publisher fees for their researchers, but this still leaves the cost with the institution and the individual researcher, limiting the savings to either.

However, this sets up a situation in which young researchers, who in general are not well funded, are struggling with the publication costs, and this sets up a bias or inequitable system which rewards the well funded older researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used.  In fact, such systems see only about one comment per 100 views of a journal article.  In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse.  The article laments that commenting on journal articles should be as easy as commenting on other social sites, yet scientists are still not offering their comments or opinions.

In my personal experience, a well-written commentary goes through editors who often reject a comment as if they were rejecting an original research article.  Thus, I believe, many scientists who fashion a well-researched and referenced reply never see it published if it is not in the editor’s interest.

Therefore anonymity is greatly needed, and its lack may be the reason scientific discourse is so limited on these types of Science 2.0 platforms.  Platforms that have had success in this arena include anonymous ones like Wikipedia and certain closed LinkedIn professional groups, while more open platforms like Google Knowledge have been failures.
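As a purely hypothetical sketch of what such an anonymization system could look like, commenters could be given per-article pseudonyms derived from a keyed hash: stable within one discussion thread, but unlinkable across articles without the server’s secret keys. All names, keys, and parameters below are invented for illustration:

```python
# Illustrative sketch of per-article reviewer pseudonyms via HMAC: the same
# reviewer gets a stable name within one article's discussion, but names
# cannot be linked across articles without the server's secret keys.
# All identities and keys here are hypothetical.

import hmac
import hashlib

def pseudonym(reviewer_id: str, article_key: bytes) -> str:
    # Keyed hash of the reviewer's identity, truncated for readability.
    digest = hmac.new(article_key, reviewer_id.encode(), hashlib.sha256)
    return "reviewer-" + digest.hexdigest()[:8]

key_article_a = b"secret-key-for-article-A"
key_article_b = b"secret-key-for-article-B"

# Same reviewer: stable within an article, different across articles.
print(pseudonym("alice@uni.edu", key_article_a) ==
      pseudonym("alice@uni.edu", key_article_a))   # True
print(pseudonym("alice@uni.edu", key_article_a) ==
      pseudonym("alice@uni.edu", key_article_b))
```

A scheme like this would let a thread’s participants build a consistent back-and-forth while still protecting junior researchers from retaliation, one possible middle ground between full anonymity and attributed commentary.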

A great example on this platform was a very spirited conversation on LinkedIn on genomics, tumor heterogeneity and personalized medicine which we curated from the LinkedIn discussion (unfortunately LinkedIn has closed many groups) seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

 


 

In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized.  However, researchers then do not get any credit for their opinions or commentaries.

A major problem is how to turn these intangible contributions into tangible assets that would both promote discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

 

UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out, who gave their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like in light of this new metaverse.  However, we have to define each of these terms in light of science, and not just treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair amount of impressions (60) as well as interactions (50).

For the recording, on both Twitter and in audio format, please see below.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Set a reminder for my upcoming Space! <a href="https://t.co/7mOpScZfGN">https://t.co/7mOpScZfGN</a> <a href="https://twitter.com/Pharma_BI?ref_src=twsrc%5Etfw">@Pharma_BI</a> <a href="https://twitter.com/PSMTempleU?ref_src=twsrc%5Etfw">@PSMTempleU</a> <a href="https://twitter.com/hashtag/science3_0?src=hash&amp;ref_src=twsrc%5Etfw">#science3_0</a> <a href="https://twitter.com/science2_0?ref_src=twsrc%5Etfw">@science2_0</a></p>&mdash; Stephen J Williams (@StephenJWillia2) <a href="https://twitter.com/StephenJWillia2/status/1519776668176502792?ref_src=twsrc%5Etfw">April 28, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

 

 

To introduce this discussion, first a bit of start-off material to frame this discourse.

The Internet and the web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and which paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive body of experimental and review literature. However, this literature was in analog format, and we have moved to more accessible digital and open access formats for both publications and raw data. Just as 1.0 had a structure, like the Dewey decimal system and indexing, 2.0 made science more accessible and easier to search through newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers. In 2.0 this relied on an army of mostly volunteers who did not have much incentivization to co-curate and organize the findings and the massive literature.

Each version of science has its caveats: benefits as well as deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, seemed somehow to exacerbate this. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0, or metaverse, so that similar issues do not get in the way: namely, we need to create an organizing structure (maybe as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!

Are these new technologies the cure, or are they just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations.

Other article of note on this topic on this Open Access Scientific Journal Include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 

Read Full Post »

Multiple Major Scientific Journals Will Fully Adopt Open Access Under Plan S

Curator: Stephen J. Williams, PhD

More university library systems have been pressuring major scientific publishing houses to adopt open access strategies in order to reduce the libraries’ budgetary burdens.  In fact, some major universities, such as the University of California system (and other publicly funded universities in that state), Oxford University in the UK, and even MIT, have decided to become their own publishing houses in a concerted effort to fight back against soaring journal subscription costs, as well as the costs burdening individual scientists and laboratories (charges to publish one paper can run as high as $8,000 USD while the journal still retains all the rights to distribute the information).  Therefore more and more universities, as well as concerted efforts by the European Union and the US government, are mandating that scientific literature be published in an open access format.

The results of this pressure are evident now as major journals like Nature, JBC, and others plan to go fully open access in 2021.  Below is a listing of news reports on some of these journals’ plans to undertake a full open access format.

 

Nature to join open-access Plan S, publisher says

09 APRIL 2020 UPDATE 14 APRIL 2020

Springer Nature says it commits to offering researchers a route to publishing open access in Nature and most Nature-branded journals from 2021.

Richard Van Noorden

After a change in the rules of the bold open-access (OA) initiative known as Plan S, publisher Springer Nature said on 8 April that many of its non-OA journals — including Nature — were now committed to joining the plan, pending discussion of further technical details.

This means that Nature and other Nature-branded journals that publish original research will now look to offer an immediate OA route after January 2021 to scientists who want it, or whose funders require it, a spokesperson says. (Nature is editorially independent of its publisher, Springer Nature.)

“We are delighted that Springer Nature is committed to transitioning its journals to full OA,” said Robert Kiley, head of open research at the London-based biomedical funder Wellcome, and the interim coordinator for Coalition S, a group of research funders that launched Plan S in 2018.

But Lisa Hinchliffe, a librarian at the University of Illinois at Urbana–Champaign, says the changed rules show that publishers have successfully pushed back against Plan S, softening its guidelines and expectations — in particular in the case of hybrid journals, which publish some content openly and keep other papers behind paywalls. “The coalition continues to take actions that rehabilitate hybrid journals into compliance rather than taking the hard line of unacceptability originally promulgated,” she says.

 

 

 

 

What is Plan S?

The goal of Plan S is to make scientific and scholarly works free to read as soon as they are published. So far, 17 national funders, mostly in Europe, have joined the initiative, as have the World Health Organization and two of the world’s largest private biomedical funders — the Bill & Melinda Gates Foundation and Wellcome. The European Commission will also implement an OA policy that is aligned with Plan S. Together, this covers around 7% of scientific articles worldwide, according to one estimate. A 2019 report published by the publishing-services firm Clarivate Analytics suggested that 35% of the research content published in Nature in 2017 acknowledged a Plan S funder (see ‘Plan S papers’).

PLAN S PAPERS

Journal | Total papers in 2017 | % acknowledging Plan S funder
Nature | 290 | 35%
Science | 235 | 31%
Proc. Natl Acad. Sci. USA | 639 | 20%

Source: The Plan S footprint: Implications for the scholarly publishing landscape (Institute for Scientific Information, 2019)

 

Source: https://www.nature.com/articles/d41586-020-01066-5

Opening ASBMB publications freely to all

 

Lila M. Gierasch, Editor-in-Chief, Journal of Biological Chemistry

Nicholas O. Davidson

Kerry-Anne Rye, Editors-in-Chief, Journal of Lipid Research and 

Alma L. Burlingame, Editor-in-Chief, Molecular and Cellular Proteomics

 

We are extremely excited to announce on behalf of the American Society for Biochemistry and Molecular Biology (ASBMB) that the Journal of Biological Chemistry (JBC), Molecular & Cellular Proteomics (MCP), and the Journal of Lipid Research (JLR) will be published as fully open-access journals beginning in January 2021. This is a landmark decision that will have huge impact for readers and authors. As many of you know, many researchers have called for journals to become open access to facilitate scientific progress, and many funding agencies across the globe are either already requiring or considering a requirement that all scientific publications based on research they support be published in open-access journals. The ASBMB journals have long supported open access, making the accepted author versions of manuscripts immediately and permanently available, allowing authors to opt in to the immediate open publication of the final version of their paper, and endorsing the goals of the larger open-access movement (1). However, we are no longer satisfied with these measures. To live up to our goals as a scientific society, we want to freely distribute the scientific advances published in JBC, MCP, and JLR as widely and quickly as possible to support the scientific community. How better can we facilitate the dissemination of new information than to make our scientific content freely open to all?

For ASBMB journals and others who have contemplated or made the transition to publishing all content open access, achieving this milestone generally requires new financial mechanisms. In the case of the ASBMB journals, the transition to open access is being made possible by a new partnership with Elsevier, whose established capabilities and economies of scale make the costs associated with open-access publication manageable for the ASBMB (2). However, we want to be clear: The ethos of ASBMB journals will not change as a consequence of this new alliance. The journals remain society journals: The journals are owned by the society, and all scientific oversight for the journals will remain with ASBMB and its chosen editors. Peer review will continue to be done by scientists reviewing the work of scientists, carried out by editorial board members and external referees on behalf of the ASBMB journal leadership. There will be no intervention in this process by the publisher.

Although we will be saying “goodbye” to many years of self-publishing (115 years in the case of JBC), we are certain that we are taking this big step for all the right reasons. The goal for JBC, MCP, and JLR has always been and will remain to help scientists advance their work by rapidly and effectively disseminating their results to their colleagues and facilitating the discovery of new findings (1, 3), and open access is only one of many innovations and improvements in science publishing that could help the ASBMB journals achieve this goal. We have been held back from fully exploring these options because of the challenges of “keeping the trains running” with self-publication. In addition to allowing ASBMB to offer all the content in its journals to all readers freely and without barriers, the new partnership with Elsevier opens many doors for ASBMB publications, from new technology for manuscript handling and production, to facilitating reader discovery of content, to deploying powerful analytics to link content within and across publications, to new opportunities to improve our peer review mechanisms. We have all dreamed of implementing these innovations and enhancements (4, 5) but have not had the resources or infrastructure needed.

A critical aspect of moving to open access is how this decision impacts the cost to authors. Like most publishers that have made this transition, we have been extremely worried that achieving open-access publishing would place too big a financial burden on our authors. We are pleased to report the article-processing charges (APCs) to publish in ASBMB journals will be on the low end within the range of open-access fees: $2,000 for members and $2,500 for nonmembers. While slightly higher than the cost an author incurs now if the open-access option is not chosen, these APCs are lower than the current charges for open access on our existing platform.

References

1. Gierasch, L. M., Davidson, N. O., Rye, K.-A., and Burlingame, A. L. (2019) For the sake of science. J. Biol. Chem. 294, 2976

2. Gierasch, L. M. (2017) On the costs of scientific publishing. J. Biol. Chem. 292, 16395–16396

3. Gierasch, L. M. (2020) Faster publication advances your science: The three R’s. J. Biol. Chem. 295, 672

4. Gierasch, L. M. (2017) JBC is on a mission to facilitate scientific discovery. J. Biol. Chem. 292, 6853–6854

5. Gierasch, L. M. (2017) JBC’s New Year’s resolutions: Check them off! J. Biol. Chem. 292, 21705–21706

 

Source: https://www.jbc.org/content/295/22/7814.short?ssource=mfr&rss=1

 

Open access publishing under Plan S to start in 2021

BMJ

2019; 365 doi: https://doi.org/10.1136/bmj.l2382 (Published 31 May 2019)Cite this as: BMJ 2019;365:l2382

From 2021, all research funded by public or private grants should be published in open access journals, according to a group of funding agencies called cOAlition S.1

The plan is the final version of a draft that was put to public consultation last year and attracted 344 responses from institutions, almost half of them from the UK.2 The responses have been considered and some changes made to the new system called Plan S, a briefing at the Science Media Centre in London was told on 29 May.

The main change has been to delay implementation for a year, to 1 January 2021, to allow more time for those involved—researchers, funders, institutions, publishers, and repositories—to make the necessary changes, said John-Arne Røttingen, chief executive of the Research Council of Norway.

“All research contracts signed after that date should include the obligation to publish in an open access journal,” he said. T……

(Please Note in a huge bit of irony this article is NOT Open Access and behind a paywall…. Yes an article about an announcement to go Open Access is not Open Access)

Source: https://www.bmj.com/content/365/bmj.l2382.full

 

 

Plan S

From Wikipedia, the free encyclopedia


Plan S is an initiative for open-access science publishing launched in 2018[1][2] by “cOAlition S”,[3] a consortium of national research agencies and funders from twelve European countries. The plan requires scientists and researchers who benefit from state-funded research organisations and institutions to publish their work in open repositories or in journals that are available to all by 2021.[4] The “S” stands for “shock”.[5]

Principles of the plan

The plan is structured around ten principles.[3] The key principle states that by 2021, research funded by public or private grants must be published in open-access journals or platforms, or made immediately available in open access repositories without an embargo. The ten principles are:

  1. authors should retain copyright on their publications, which must be published under an open license such as Creative Commons;
  2. the members of the coalition should establish robust criteria and requirements for compliant open access journals and platforms;
  3. they should also provide incentives for the creation of compliant open access journals and platforms if they do not yet exist;
  4. publication fees should be covered by the funders or universities, not individual researchers;
  5. such publication fees should be standardized and capped;
  6. universities, research organizations, and libraries should align their policies and strategies;
  7. for books and monographs, the timeline may be extended beyond 2021;
  8. open archives and repositories are acknowledged for their importance;
  9. hybrid open-access journals are not compliant with the key principle;
  10. members of the coalition should monitor and sanction non-compliance.

 

Other articles on Open Access on this Open Access Journal Include:

MIT, guided by open access principles, ends Elsevier negotiations, an act followed by other University Systems in the US and in Europe

 

Open Access e-Scientific Publishing: Elected among 2018 Nature’s 10 Top Influencers – ROBERT-JAN SMITS: A bureaucrat launched a drive to transform science publishing

 

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

 

Mozilla Science Lab Promotes Data Reproduction Through Open Access: Report from 9/10/2015 Online Meeting

 

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

 

The Fatal Self Distraction of the Academic Publishing Industry: The Solution of the Open Access Online Scientific Journals
PeerJ Model for Open Access Scientific Journal
“Open Access Publishing” is becoming the mainstream model: “Academic Publishing” has changed Irrevocably
Open-Access Publishing in Genomics

Read Full Post »

Is It Time for the Virtual Scientific Conference?: Coronavirus, Travel Restrictions, Conferences Cancelled

Curator: Stephen J. Williams, PhD.

UPDATED 3/12/2020

To many of us scientists, presenting at and attending scientific meetings, especially international conferences, is a crucial way of disseminating and learning about new trends and cutting-edge findings in our respective fields.  Large international meetings, such as the cancer-focused AACR meeting (held in the spring), AAAS, and ASCO, not only highlight the past year’s great discoveries but are usually the first place breakthroughs are made known to the scientific and medical communities as well as the public.  In addition, these conferences allow scientists to learn about some of the newest technologies crucial for their work in vendor exhibitions.

During the coronavirus pandemic, multiple business trips, conferences, and even university study-abroad programs are being cancelled, and these cancellations are now hitting the 2020 spring and potentially summer scientific and medical conferences.  Indeed, one such conference, hosted by Biogen in Massachusetts, was identified as an event where some attendees tested positive for the virus, and as a result other attendees are being asked to self-quarantine.

Today I received two emails about conference cancellations, one from Experimental Biology in California and another from The Cancer Letter, highlighting other conferences, including the National Comprehensive Cancer Network (NCCN) meeting, which had been canceled.

 

Experimental Biology - San Diego 2020 - April 4-7

Dear Stephen,

After thoughtful deliberations, the leaders of the Experimental Biology host societies have made the difficult but necessary decision to cancel Experimental Biology (EB) 2020 set to take place April 4–7 in San Diego, California. We know how much EB means to everyone, and we did not make this decision in haste. The health and safety of our members, attendees, their students, our staff, partners and our communities are our top priority.

As we have previously communicated via email, on experimentalbiology.org and elsewhere, EB leadership has been closely monitoring the spread of COVID-19 (coronavirus disease). Based on the latest guidance from public health officials, the travel bans implemented by different institutions and the state of emergency declared in California less than 48 hours ago, it became clear to us that canceling was the right course of action.

We thank you and the entire EB community for understanding the extreme difficulty of this decision and for your commitment to the success of this conference – from the thousands of attendees to the presenters, exhibitors and sponsors who shared their time, expertise, collaboration and leadership. We deeply appreciate your contributions to this community.

What Happens Next?

Everyone who has registered to attend the meeting will receive a full registration refund within the next 45 days. Once your registration cancellation is processed, you will receive confirmation in a separate email. You do not need to contact anyone at EB or your host society to initiate the process. Despite the cancellation of the meeting, we are pleased to tell you that we will publish abstracts in the April 2020 issue of The FASEB Journal as originally planned. Please remember to cancel any personal arrangements you’ve made, such as travel and housing reservations. 

We ask for patience as we evaluate our next steps, and we will alert you as additional information becomes available. Please see our FAQs for details.

And in The Cancer Letter

Coronavirus vs. oncology: Meeting cancellations, travel restrictions, fears about drug supply chain

By Alexandria Carolan

NOTE: An earlier version of this story was published March 4 on the web and was updated March 6 to include information about restricted travel for employees of cancer centers, meeting cancellations, potential disruptions to the drug supply chain, and funds allocated by U.S. Congress for combating the coronavirus.

Further updates will be posted as the story develops.

Forecasts of the inevitable spread of coronavirus can be difficult to ignore, especially at a time when many of us are making travel plans for this spring’s big cancer meetings.

The decision was made all the more difficult earlier this week, as cancer centers and at least one biotechnology company—Amgen—implemented travel bans that are expected to last through the end of March and beyond. The Cancer Letter was able to confirm such travel bans at Fred Hutchinson Cancer Research Center, MD Anderson Cancer Center, and Dana-Farber Cancer Institute.

Meetings are getting cancelled in all fields, including oncology:

The National Comprehensive Cancer Network March 5 postponed its 2020 annual conference of about 1,500 attendees March 19-22 in Orlando, citing precautions against coronavirus.

“The health and safety of our attendees and the patients they take care of is our number one concern,” said Robert W. Carlson, chief executive officer of NCCN. “This was an incredibly difficult and disappointing decision to have to make. However, our conference attendees work to save the lives of immunocompromised people every day. Some of them are cancer survivors themselves, particularly at our patient advocacy pavilion. It’s our responsibility, in an abundance of caution, to safeguard them from any potential exposure to COVID-19.”

UPDATED 3/12/2020

And today the AACR canceled its 2020 Annual Meeting (https://www.aacr.org/meeting/aacr-annual-meeting-2020/coronavirus-information/)

The American Association for Cancer Research (AACR) Board of Directors has made the difficult decision, after careful consideration and comprehensive evaluation of currently available information related to the novel coronavirus (COVID-19) outbreak, to terminate the AACR Annual Meeting 2020, originally scheduled for April 24-29 in San Diego, California. A rescheduled meeting is being planned for later this year.

The AACR has been closely monitoring the rapidly increasing domestic and worldwide developments during the last several weeks related to COVID-19. This evidence-based decision was made after a thorough review and discussion of all factors impacting the Annual Meeting, including the U.S. government’s enforcement of restrictions on international travelers to enter the U.S.; the imposition of travel restrictions issued by U.S. government agencies, cancer centers, academic institutions, and pharmaceutical and biotech companies; and the counsel of infectious disease experts. It is clear that all of these elements significantly affect the ability of delegates, speakers, presenters of proffered papers, and exhibitors to participate fully in the Annual Meeting.

The health, safety, and security of all Annual Meeting attendees and the patients and communities they serve are the AACR’s highest priorities. While we believe that the decision to postpone the meeting is absolutely the correct one to safeguard our meeting participants from further potential exposure to the coronavirus, we also understand that this is a disappointing one for our stakeholders. There had been a great deal of excitement about the meeting, which was expected to be the largest ever AACR Annual Meeting, with more than 7,400 proffered papers, a projected total of 24,000 delegates from 80 countries and more than 500 exhibitors. We recognize that the presentation of new data, exchange of information, and opportunities for collaboration offered by the AACR Annual Meeting are highly valued by the entire cancer research community, and we are investigating options for rescheduling the Annual Meeting in the near future.

We thank all of our stakeholders for their patience and support at this time. Additional information regarding hotel reservation cancellations, registration refunds, and meeting logistics is available on the FAQ page on the AACR website. We will announce the dates and location of the rescheduled AACR Annual Meeting 2020 as soon as they are confirmed. Our heartfelt sympathies go out to everyone impacted by this global health crisis.

However, according to both Dr. Fauci and Dr. Scott Gottlieb (former FDA commissioner), the outbreak may revisit the US and the world in the fall (see https://www.cnbc.com/2020/03/04/were-losing-valuable-time-ex-fda-chief-says-of-coronavirus-spread.html); therefore these meetings may be cancelled for the whole year.

Is It Time For the Virtual (Real-Time) Conference?

Readers of this Online Access Journal are familiar with our ongoing commitment to open science and believe that forming networks of scientific experts in various fields using a social strategy is pertinent to enhancing the speed, reproducibility and novelty of important future scientific/medical discoveries.  Some of these ideas are highlighted in the following articles found on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

Twitter is Becoming a Powerful Tool in Science and Medicine

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

Reconstructed Science Communication for Open Access Online Scientific Curation

In addition, we understand the importance of communicating the latest scientific/medical discoveries in an open and rapid format, accessible over the social media platforms.  To this effect we have developed a methodology for real time conference coverage

see  Press and Conference Coverage

at  https://pharmaceuticalintelligence.com/press-coverage/

AND

The Process of Real Time Coverage using Social Media

at https://pharmaceuticalintelligence.com/press-coverage/part-one-the-process-of-real-time-coverage-using-social-media/

Using these strategies we are able to communicate, in real time, analysis of conference coverage for a multitude of conferences.

Have technology and social media platforms now enabled us to rapidly communicate seminal discoveries on more open-access platforms, and are scientists today amenable to virtual meetings, including presenting abstracts on a real-time online platform?

Some of the Twitter analytics we have curated from such meetings show that conference attendees are rapidly adopting these social platforms to share meeting notes with their peers and colleagues.

Statistical Analysis of Tweet Feeds from the 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

Word Associations of Twitter Discussions for 10th Annual Personalized Medicine Conference at the Harvard Medical School, November 12-13, 2014

Comparative Analysis of the Level of Engagement for Four Twitter Accounts: @KDNuggets (Big Data) @GilPress @Forbes @pharma_BI @AVIVA1950

Twitter Analytics on the Inside 3DPrinting Conference #I3DPConf

 

Other Twitter analyses of Conferences Covered by LPBI in Real Time have produced a similar conclusion: That conference attendees are very engaged over social media networks to discuss, share, and gain new insights into material presented at these conferences, especially international conferences.
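Engagement summaries of the kind reported in the analyses above can be produced with a short script. Below is a minimal sketch, assuming tweets collected under a conference hashtag have already been exported into simple records with a user handle and retweet/like counts; the field names, `engagement_summary` function, and sample data are hypothetical illustrations, not the actual LPBI methodology.

```python
from collections import Counter

def engagement_summary(tweets):
    """Summarize a list of tweet records collected under a meeting hashtag.

    Each record is a dict with 'user', 'retweets', and 'likes' keys --
    a simplified stand-in for the fields in a real Twitter export.
    """
    tweets_per_user = Counter(t["user"] for t in tweets)
    total_engagements = sum(t["retweets"] + t["likes"] for t in tweets)
    return {
        "tweet_count": len(tweets),
        "unique_users": len(tweets_per_user),
        "top_contributors": tweets_per_user.most_common(3),
        "avg_engagements_per_tweet": total_engagements / len(tweets),
    }

# Hypothetical sample of tweets from a conference hashtag
sample = [
    {"user": "@pharma_BI", "retweets": 4, "likes": 7},
    {"user": "@pharma_BI", "retweets": 2, "likes": 3},
    {"user": "@AVIVA1950", "retweets": 1, "likes": 5},
]
print(engagement_summary(sample))
```

Summaries like this (tweet volume, unique participants, top contributors) are what make it possible to compare attendee engagement across conferences.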

And although attracting international conferences is lucrative for many cities, the loss in revenue to organizations, as well as the loss of intellectual capital, is equally great.

Maybe there is room for this type of conference in the future, attended by a far larger audience than is currently possible. And perhaps the #openscience movement, such as @MozillaScience, can collaborate with hackathons to produce the platforms for such an online movement of scientific conferences as a Plan B.

Other articles on Real Time Conference Coverage in the Online Open Access Journal Include:

Innovations in electronic Scientific Publishing (eSP): Case Studies in Marketing eContent, Curation Methodology, Categories of Research Functions, Interdisciplinary conceptual innovations by Cross Section of Categories, Exposure to Frontiers of Science by Real Time Press coverage of Scientific Conferences

Real Time Coverage and eProceedings of Presentations on 11/16 – 11/17, 2016, The 12th Annual Personalized Medicine Conference, HARVARD MEDICAL SCHOOL, Joseph B. Martin Conference Center, 77 Avenue Louis Pasteur, Boston

Tweets by @pharma_BI and by @AVIVA1950: Real Time Coverage and eProceedings of The 11th Annual Personalized Medicine Conference, November 18-19, 2015, Harvard Medical School

REAL TIME Cancer Conference Coverage: A Novel Methodology for Authentic Reporting on Presentations and Discussions launched via Twitter.com @ The 2nd ANNUAL Sachs Cancer Bio Partnering & Investment Forum in Drug Development, 19th March 2014 • New York Academy of Sciences • USA

Search Results for ‘Real Time Conference’

Read Full Post »

Reporter: Stephen J. Williams, PhD @StephenJWillia2

Science and technology bring tremendous value to society in years of life and quality of life, yet the public often perceives science as difficult, irrelevant or even threatening. Moreover, the inspirational and moving stories of scientists and innovators working around the world are often hidden or misrepresented in popular culture. Whose responsibility is it to communicate science and engage the public in supporting the scientific enterprise? Can everyone be a Champion of Science and what are the solutions to enlist and engage more champions of science across generations and geographies? How do we work together to enhance transparency, accessibility and relevance of science for everyone, everywhere? Can science become more inclusive and engage hearts and not only minds?

Join this exciting session as Johnson & Johnson announces the winners of the Champions of Science – BioGENEius Storytelling Challenge and brings together other key stakeholders in a discussion about the importance of engaging the public to fall in love with science all over again.

Sponsored by: Johnson & Johnson Innovation

Seema: We need to solve the problem of the lack of trust in scientists.  Some winners of JNJ's achievement program went on to become Nobel Laureates.  Arthur Horwich and Franz-Ulrich Hartl won the Janssen Award for discovering how proteins refold, including the role of protein chaperones.  Many diseases, such as neurodegenerative diseases, occur because of protein misfolding.
Seema: There is great science going on in Africa, and JNJ wanted to showcase it. They awarded four individuals with a storytelling award (Emily).
Dr. Horwich: Got interested in science early on.  Worked on N-terminal mitochondrial signal peptides, then became interested in how proteins fold, unfold, and refold, a question studied since the 1950s.  Over many years of work he changed the thinking about how proteins are processed within cells.
Emily Wang:  Parents and schoolteachers prodded her curiosity in biology. The impact of day to day work of scientists is arduous but the little things can lead to advances that may help people.  If passionate and have a great mentor then can get a foot in the door.  Worked at Stanford in the lab.
Dr. Mukherjee: He likes to cure diseases; he is a physician first, scientist second, and writer third, but he doesn't separate these.  In older times scientists wrote in order to think, and that is still true today. How we visualize the world, or use our hands, is similar.  He takes the term translational research very seriously: can you say in one sentence how this will help patients in three years?
There are multitude ways of love for science.
Dr. Pinela: Loved storytelling but also loved asking bigger questions. Moved from Colombia to the US; loved the freedom and the government funding situation at that time.  Training and mentorship are a very big aspect of innovation, as mentors led her to entrepreneurship.  We need to use technology to disrupt and innovate.
Nsikan:  A lot of mentors nurture curiosity, and people like to see themselves in that story of curiosity.  That is how he frames the PBS science videos: a study on engagement found that people want a moral and a science identity (an inner nerd in all of us, i.e., something to spark their interest).  The feedback when they focus on this has been positive.

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

Read Full Post »

How Will FDA’s new precisionFDA Science 2.0 Collaboration Platform Protect Data? Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

How Will FDA’s new precisionFDA Science 2.0 Collaboration Platform Protect Data?

Reporter: Stephen J. Williams, Ph.D.

As reported in MassDevice.com

FDA launches precisionFDA to harness the power of scientific collaboration

FDA VoiceBy: Taha A. Kass-Hout, M.D., M.S. and Elaine Johanson

Imagine a world where doctors have at their fingertips the information that allows them to individualize a diagnosis, treatment or even a cure for a person based on their genes. That’s what President Obama envisioned when he announced his Precision Medicine Initiative earlier this year. Today, with the launch of FDA’s precisionFDA web platform, we’re a step closer to achieving that vision.

PrecisionFDA is an online, cloud-based portal that will allow scientists from industry, academia, government and other partners to come together to foster innovation and develop the science behind a method of “reading” DNA known as next-generation sequencing (or NGS). Next Generation Sequencing allows scientists to compile a vast amount of data on a person’s exact order or sequence of DNA. Recognizing that each person’s DNA is slightly different, scientists can look for meaningful differences in DNA that can be used to suggest a person’s risk of disease, possible response to treatment and assess their current state of health. Ultimately, what we learn about these differences could be used to design a treatment tailored to a specific individual.

The precisionFDA platform is a part of this larger effort and through its use we want to help scientists work toward the most accurate and meaningful discoveries. precisionFDA users will have access to a number of important tools to help them do this. These tools include reference genomes, such as “Genome in a Bottle,” a reference sample of DNA for validating human genome sequences developed by the National Institute of Standards and Technology. Users will also be able to compare their results to previously validated reference results as well as share their results with other users, track changes and obtain feedback.

Over the coming months we will engage users in improving the usability, openness and transparency of precisionFDA. One way we’ll achieve that is by placing the code for the precisionFDA portal on the world’s largest open source software repository, GitHub, so the community can further enhance precisionFDA’s features. Through such collaboration we hope to improve the quality and accuracy of genomic tests – work that will ultimately benefit patients.

precisionFDA leverages our experience establishing openFDA, an online community that provides easy access to our public datasets. Since its launch in 2014, openFDA has already resulted in many novel ways to use, integrate and analyze FDA safety information. We’re confident that employing such a collaborative approach to DNA data will yield important advances in our understanding of this fast-growing scientific field, information that will ultimately be used to develop new diagnostics, treatments and even cures for patients.

Taha A. Kass-Hout, M.D., M.S., is FDA’s Chief Health Informatics Officer and Director of FDA’s Office of Health Informatics. Elaine Johanson is the precisionFDA Project Manager.

 

The opinions expressed in this blog post are the author’s only and do not necessarily reflect those of MassDevice.com or its employees.
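The core benchmarking step precisionFDA offers — comparing a user's variant calls against a validated reference set such as NIST's Genome in a Bottle — reduces, at its simplest, to a set comparison. The sketch below is a deliberately simplified illustration, not precisionFDA's actual pipeline: variants are modeled as plain tuples, with no variant normalization or confident-region handling, and `benchmark_calls` and the sample call sets are hypothetical.

```python
def benchmark_calls(test_calls, reference_calls):
    """Compare a test variant call set against a validated reference set.

    Variants are simplified to (chromosome, position, ref, alt) tuples;
    real benchmarking tools also handle representation differences,
    confident regions, and genotype matching.
    """
    test, ref = set(test_calls), set(reference_calls)
    tp = len(test & ref)   # calls confirmed by the reference
    fp = len(test - ref)   # calls absent from the reference
    fn = len(ref - test)   # reference variants that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"precision": precision, "recall": recall,
            "true_positives": tp, "false_positives": fp,
            "false_negatives": fn}

# Hypothetical call sets
reference = {("chr1", 1000, "A", "G"), ("chr1", 2000, "C", "T"),
             ("chr2", 500, "G", "A")}
calls = {("chr1", 1000, "A", "G"), ("chr2", 500, "G", "A"),
         ("chr3", 42, "T", "C")}
print(benchmark_calls(calls, reference))
```

Precision and recall against a trusted reference are exactly the kind of accuracy metrics such a platform lets users compute and share.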

So What Are the Other Successes With Such Open Science 2.0 Collaborative Networks?

The following post highlights examples of these Open Scientific Networks. As long as

  • transparency
  • equal contributions (lack of hierarchy)

exist, these networks can flourish and add interesting discourse.  Scientists already rely on these networks to collaborate and share; however, resistance from certain members of an “elite” can still exist.  Social media platforms are now democratizing this new Science 2.0 effort.  In addition, the efforts of multiple biocurators (who mainly work for the love of science) have organized the plethora of genomic, proteomic, and literature data to provide ease of access and analysis.

Science and Curation: The New Practice of Web 2.0

Curation: an Essential Practice to Manage “Open Science”

The web 2.0 gave birth to new practices motivated by the will to have broader and faster cooperation in a more free and transparent environment. We have entered the era of an “open” movement: “open data”, “open software”, etc. In science, expressions like “open access” (to scientific publications and research results) and “open science” are used more and more often.

Curation and Scientific and Technical Culture: Creating Hybrid Networks

Another area, where there are most likely fewer barriers, is scientific and technical culture. This broad term involves different actors such as associations, companies, universities’ communication departments, CCSTI (French centers for scientific, technical and industrial culture), journalists, etc. A number of these actors do not limit their work to popularizing the scientific data; they also consider they have an authentic mission of “culturing” science. The curation practice thus offers a better organization and visibility to the information. The sought-after benefits will be different from one actor to the next.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

  • Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

Given the aforementioned problems of:

  I. the complex and rapid deluge of scientific information

  II. the need for a collaborative, open environment to produce transformative innovation

  III. the need for alternative ways to disseminate scientific findings

CURATION MAY OFFER SOLUTIONS

  I. Curation exists beyond the review: curation decreases the time needed to assess current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation

  II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms

  III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in Examples below)

Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor, which has created a need for more context-driven scientific search and discourse.

However, another issue would be individual bias if these networks are closed, and protocols need to be devised to reduce bias from individual investigators and clinicians.  This is where CONSENSUS built from OPEN ACCESS DISCOURSE would be beneficial, as discussed in the following post:

Risk of Bias in Translational Science

As per the article

Risk of bias in translational medicine may take one of three forms:

  1. a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),
  2. a systematic defect of design that leads to estimates of experimental and control groups, and of effect sizes that substantially deviate from true values (e.g., information bias), and
  3. a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).

This post highlights many important points related to bias, but in summary, methodologies and protocols can be devised to eliminate such bias.  Risk of bias can seriously adulterate the internal and the external validity of a clinical study and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

  • Information dissemination to all stakeholders is key to increase their health literacy in order to ensure their full participation
  • threats to internal and external validity represent specific aspects of systematic errors (i.e., bias) in design, methodology and analysis

So what about the safety and privacy of Data?

A while back I did a post and some interviews on how doctors in developing countries are using social networks to communicate with patients, either over established networks like Facebook or more private in-house networks.  In addition, these doctor-patient relationships in developing countries are remote, using the smartphone to communicate with rural patients who don’t have ready access to their physicians.

Located in the post Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

I discuss some of these problems in the following paragraph and associated posts below:

Mobile Health Applications on Rise in Developing World: Worldwide Opportunity

According to International Telecommunication Union (ITU) statistics, world-wide mobile phone use has expanded tremendously in the past 5 years, reaching almost 6 billion subscriptions. By the end of this year it is estimated that over 95% of the world’s population will have access to mobile phones/devices, including smartphones.

This presents a tremendous and cost-effective opportunity in developing countries, and especially rural areas, for physicians to reach patients using mHealth platforms.

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

In summary, although there are restrictions here in the US governing what information can be disseminated over social media networks, developing countries appear to have defined their regulations differently, as they are more dependent on these types of social networks given the difficulties in patient–physician access.

Therefore the question will be Who Will Protect The Data?

For some interesting discourse please see the following post

Atul Butte Talks on Big Data, Open Data and Clinical Trials

 

Read Full Post »

 

Yay! Bloomberg View Seems to Be On the Side of the Lowly Scientist!

 

Reporter: Stephen J. Williams, Ph.D.

Justin Fox at BloombergView has just published an article near and dear to the hearts of all #openaccess scientists and those of us at @Pharma_BI and @MozillaScience who feel strongly about #openscience, #opendata, and the movement to make scientific discourse freely accessible.

His article, “Academic Publishing Can’t Remain Such a Great Business,” discusses the history of academic publishing and how consolidation of smaller publishers into large scientific publishing houses has produced a monopoly-like environment in which prices for journal subscriptions keep rising. He also discusses how the open access movement is challenging this model and may one day replace the big publishing houses.

A few tidbits from his article:

Publishers of academic journals have a great thing going. They generally don’t pay for the articles they publish, or for the primary editing and peer reviewing essential to preparing them for publication (they do fork over some money for copy editing). Most of this gratis labor is performed by employees of academic institutions. Those institutions, along with government agencies and foundations, also fund all the research that these journal articles are based upon.

Yet the journal publishers are able to get authors to sign over copyright to this content, and sell it in the form of subscriptions to university libraries. Most journals are now delivered in electronic form, which you think would cut the cost, but no, the price has been going up and up:

 

This isn’t just inflation at work: in 1994, journal subscriptions accounted for 51 percent of all library spending on information resources. In 2012 it was 69 percent.

Who exactly is getting that money? The largest academic publisher is Elsevier, which is also the biggest, most profitable division of RELX, the Anglo-Dutch company that was known until February as Reed Elsevier.

 

RELX reports results in British pounds; I converted to dollars in part because the biggest piece of the company’s revenue comes from the U.S. And yes, those are pretty great operating-profit margins: 33 percent in 2014, 39 percent in 2013. The next biggest academic publisher is Springer Nature, which is closely held (by German publisher Holtzbrinck and U.K. private-equity firm BC Partners) but reportedly has annual revenue of about $1.75 billion. Other biggies that are part of publicly traded companies include Wiley-Blackwell, a division of John Wiley & Sons; Wolters Kluwer Health, a division of Wolters Kluwer; and Taylor & Francis, a division of Informa.

And gives a brief history of academic publishing:

The history here is that most early scholarly journals were the work of nonprofit scientific societies. The goal was to disseminate research as widely as possible, not to make money — a key reason why nobody involved got paid. After World War II, the explosion in both the production of and demand for academic research outstripped the capabilities of the scientific societies, and commercial publishers stepped into the breach. At a time when journals had to be printed and shipped all over the world, this made perfect sense.

Once it became possible to effortlessly copy and disseminate digital files, though, the economics changed. For many content producers, digital copying is a threat to their livelihoods. As Peter Suber, the director of Harvard University’s Office for Scholarly Communication, puts it in his wonderful little book, “Open Access”:

And while NIH Tried To Force These Houses To Accept Open Access:

About a decade ago, the universities and funding agencies began fighting back. The National Institutes of Health in the U.S., the world’s biggest funder of medical research, began requiring in 2008 that all recipients of its grants submit electronic versions of their final peer-reviewed manuscripts when they are accepted for publication in journals, to be posted a year later on the NIH’s open-access PubMed depository. Publishers grumbled, but didn’t want to turn down the articles.

Big publishers, meanwhile, are making $ by either charging as much as they can or focusing on new customers and services

For the big publishers, meanwhile, the choice is between positioning themselves for the open-access future or maximizing current returns. In its most recent annual report, RELX leans toward the latter while nodding toward the former:

Over the past 15 years alternative payment models for the dissemination of research such as “author-pays” or “author’s funder-pays” have emerged. While it is expected that paid subscription will remain the primary distribution model, Elsevier has long invested in alternative business models to address the needs of customers and researchers.

Elsevier’s extra services can add new avenues of revenue

https://www.elsevier.com/social-sciences/business-and-management

https://www.elsevier.com/rd-solutions

but they may be seeing the light on OpenAccess (possibly due to online advocacy, an army of scientific curators and online scientific communities):

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

SAME SCIENTIFIC IMPACT: Scientific Publishing – Open Journals vs. Subscription-based

e-Recognition via Friction-free Collaboration over the Internet: “Open Access to Curation of Scientific Research”

Indeed, we recently put up an interesting authored paper, “A Patient’s Perspective: On Open Heart Surgery from Diagnosis and Intervention to Recovery” (free of charge), letting the science community freely peruse and comment. It was generally well received by both author and community as a nice way to share academic discourse without the enormous fees, especially for opinion papers for which a rigorous peer review may not be necessary.

But it was very nice to see a major news outlet like Bloomberg View understand the lowly scientist’s aggravations.

Thanks Bloomberg!

Read Full Post »

Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.


Last month’s issue of Science carried an article by Jia You, “DARPA Sets Out to Automate Research” [1], that gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms that read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms that critically read journal articles, much as scientists do, to determine what and how the information contributes to the knowledge base.

As a proof of concept DARPA is attempting to model Ras-mutation driven cancers using previously published literature in three main steps:

  1. Natural Language Processing: Machines read literature on cancer pathways and convert information to computational semantics and meaning

One team is focused on extracting details on experimental procedures, mining certain phraseology to determine a paper’s worth (for example, phrases like ‘we suggest’ or ‘suggests a role in’ might be considered weak evidence, whereas ‘we prove’ or ‘provide evidence’ might mark an article as worthwhile to curate). Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Computational Modeling: Integrate each piece of knowledge into a computational model to represent the Ras pathway in oncogenesis.
  3. Hypothesis Generation: Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
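To make the cue-phrase idea from step 1 concrete, here is a minimal sketch in Python. The phrase lists and function name are purely illustrative assumptions, not DARPA's actual system:

```python
# Hypothetical cue lists, inspired by the "we suggest" vs. "we prove" example above.
WEAK_CUES = ["we suggest", "suggests a role in", "may indicate", "could be involved"]
STRONG_CUES = ["we prove", "provide evidence", "we demonstrate", "confirms that"]

def assertion_strength(sentence: str) -> str:
    """Classify a sentence as a weak or strong assertion based on cue phrases."""
    s = sentence.lower()
    if any(cue in s for cue in STRONG_CUES):
        return "strong"
    if any(cue in s for cue in WEAK_CUES):
        return "weak"
    return "neutral"

print(assertion_strength("We provide evidence that KRAS activates RAF."))   # strong
print(assertion_strength("This result suggests a role in proliferation."))  # weak
```

A real system would of course need to handle negation, hedging scope, and sentence parsing, which is exactly what the computational-linguistics team is tasked with.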

The Human No Longer Needed? Not So Fast, My Friend!

The DARPA research teams are encountering several problems, namely:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily “require causal inference” as other fields do

Verification

Notice this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and computational biology projects. As with high-throughput microarray screens, verification is needed, usually by conducting qPCR or validating interesting genes in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally validate certain points (protein interactions or modification events, signaling events) in order to validate their computer model.

Text-Mining and Curation Strategies

The Big Mechanism Project is starting very small, which reflects some of the challenges of scale in this project. Researchers were given only six-paragraph-long passages and a rudimentary model of the Ras pathway in cancer, and then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately this strategy could be fraught with issues that frequently occur in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), and that the massive amounts of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”, and also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the relevant information from these six paragraphs relative to the given model. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].
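The success rates quoted above are essentially recall measurements against a gold standard. A toy sketch of how such a figure is computed, with made-up pathway “facts” standing in for curated statements:

```python
def extraction_recall(extracted: set, gold_standard: set) -> float:
    """Fraction of gold-standard facts recovered by the extractor (recall)."""
    if not gold_standard:
        return 0.0
    return len(extracted & gold_standard) / len(gold_standard)

# Illustrative facts only; a real gold standard would be curator-annotated statements.
gold = {"RAS->RAF", "RAF->MEK", "MEK->ERK", "ERK->MYC", "EGFR->RAS"}
found = {"RAS->RAF", "MEK->ERK"}
print(extraction_recall(found, gold))  # 0.4, i.e. a 40% success rate
```

Comparing such recall numbers only makes sense when the gold standards are built the same way, which is one reason manual-versus-automated comparisons in biocuration are contentious.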

Incomplete Knowledge Base

In an earlier posting (actually was a press release for our first e-book) I had discussed the problem with the “data deluge” we are experiencing in scientific literature as well as the plethora of ‘omics experimental data which needs to be curated.

Tackling the problem of scientific and medical information overload


Figure. The number of papers listed in PubMed (disregarding reviews) during ten-year periods has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into ‘silos,’ have contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”

Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is appearing to be an outdated model. In addition, most medical searches are dependent on keywords, hence adding more complexity to the investigator in finding the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real-time, on the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy, having expertise in a diverse and cross-integrative number of medical topics to provide the in-depth understanding of the current research and challenges in each field as well as providing a more conceptual-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0 with curators referred to as the “researcher 2.0”. This curation offers better organization and visibility to the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, and noted for the article, “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer omics studies and presented some graphics on bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below)

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.


This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data.

Deriving Causal Inference

Ras is one of the best-studied and characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role.”

Even though some may think the project wouldn’t be able to tackle other mechanisms, such as those involving epigenetic factors, UCLA’s expert in causality Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to him, one must first:

1) articulate assumptions

2) define the research question in counterfactual terms

Then it is possible to design an inference system using calculus that tells the investigator what they need to measure.
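Pearl's recipe can be made concrete with the back-door adjustment formula, a standard identity from his causal calculus (not something specific to the Big Mechanism project). Once the assumptions are articulated as a causal graph and a covariate set $Z$ satisfies the back-door criterion, the effect of an intervention is identified from purely observational quantities:

```latex
P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z)
```

The right-hand side contains only quantities an investigator can observe, which is exactly the sense in which the calculus tells the investigator what needs to be measured.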

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism Project may be in correcting for the variables among studies, in essence building a model system that may not rely on fully controlled conditions. Dr. Peter Spirtes from Carnegie Mellon University in Pittsburgh, PA is developing a project called the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.

In summary, such projects and algorithms will provide investigators with the what, and possibly the how, of what should be measured.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database: the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database: the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database: the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site on Artificial Intelligence, Curation Methodology, and the Philosophy of Science include:

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?

Read Full Post »

Twitter is Becoming a Powerful Tool in Science and Medicine

 Curator: Stephen J. Williams, Ph.D.

Updated 4/2016

Life-cycle of Science 2

A recent Science article (Who are the science stars of Twitter?; Sept. 19, 2014) reported on the top 50 scientists followed on Twitter. However, the article tended to focus on the use of Twitter as a means to develop popularity, a sort of “Science Kardashian” as they coined it. The writers at Science developed a “Kardashian Index” (K-Index) to determine a scientist’s following and popularity on Twitter.

Now, as much buzz as Kim Kardashian or Perez Hilton get on social media, their purpose is solely entertainment and publicity; the Science article sort of fell flat in that it focused mainly on the use of Twitter as a metric for promotional or public-outreach purposes. A notable scientist was mentioned in the article as using his Twitter feed to gauge the receptiveness of his presentations. In addition, relying on Twitter for effective public discourse of science is problematic because:

  • Twitter feeds are rapidly updated and older feeds quickly get buried within the “Twittersphere” = LIMITED EXPOSURE TIMEFRAME
  • Short feeds may not provide access to appropriate and understandable scientific information (The Science Communication Trap), which is explained in The Art of Communicating Science: traps, tips and tasks for the modern-day scientist: “The challenge of clearly communicating the intended scientific message to the public is not insurmountable but requires an understanding of what works and what does not work.” – from Heidi Roop, G. Martinez-Mendez and K. Mills

However, as highlighted below, Twitter and other social media platforms are being used in creative ways to enhance research, medical, and bio-investment collaboration, beyond a simple news feed. The power of Twitter can be attributed to a few simple features:

  1. Ability to organize – through use of the hashtag (#) and handle (@), Twitter assists in the very important task of organizing, indexing, and ANNOTATING content and conversations. A great article, Why the Hashtag is Probably the Most Powerful Tool on Twitter by Vanessa Doctor, explains how hashtags and # search may be as popular as standard web-browser search. Thorough annotation is crucial for any curation process, usually in the form of database tags or keywords. The use of # and @ allows curators to quickly find, index, and relate disparate databases to link annotated information together. The discipline of scientific curation requires annotation to assist in the digital preservation, organization, indexing, and access of data and scientific & medical literature. For a description of scientific curation methodologies please see the following links:

Please read the following articles on CURATION

The Methodology of Curation for Scientific Research Findings

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

Science and Curation: The New Practice of Web 2.0
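The organizing role of # and @ described above can be sketched as simple annotation extraction. This is a toy example; real Twitter entity parsing handles many more edge cases:

```python
import re

# Naive patterns for Twitter annotation tokens; \w covers letters, digits, underscore.
HASHTAG = re.compile(r"#\w+")
HANDLE = re.compile(r"@\w+")

def annotate(tweet: str) -> dict:
    """Pull out the hashtags and handles that serve as index/annotation terms."""
    return {"hashtags": HASHTAG.findall(tweet),
            "handles": HANDLE.findall(tweet)}

tweet = "Great talk on Ras signaling at #AACR2015 via @pharma_BI #cancer"
print(annotate(tweet))
# {'hashtags': ['#AACR2015', '#cancer'], 'handles': ['@pharma_BI']}
```

Tokens extracted this way can then be used as database keys to relate tweets, posts, and curated articles to one another, which is the indexing function the post describes.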

  2. Information Analytics

Multiple analytic software packages have been made available to analyze information surrounding Twitter feeds, including feeds from #chat channels one can set up to cover a meeting, product launch, etc. Some of these tools include:

Twitter Analytics – measures metrics surrounding Tweets including retweets, impressions, engagement, follow rate, …

Twitter Analytics – Hashtags.org – determines the most impactful # for your Tweets. For example, meeting coverage of bio-investment conferences or startup presentations using #startup generates automatic retweeting by the Startup tweetbot @StartupTweetSF.
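Engagement rate, the headline metric in Twitter Analytics, is simply engagements divided by impressions. A minimal sketch (the numbers are made up for illustration):

```python
def engagement_rate(engagements: int, impressions: int) -> float:
    """Engagement rate as Twitter Analytics defines it: engagements / impressions."""
    return engagements / impressions if impressions else 0.0

# e.g. 45 engagements on a tweet seen 1,500 times
print(round(engagement_rate(45, 1500), 3))  # 0.03, i.e. a 3% engagement rate
```

Tracking this ratio per hashtag or per conference session is one way to move beyond raw tweet counts toward the quality-of-engagement question raised later in this post.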

 

  3. Tweet Sentiment Analytics
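A bare-bones sketch of how lexicon-based tweet sentiment scoring works; the word lists here are illustrative, and production sentiment tools use far richer lexicons and machine-learned models:

```python
# Tiny illustrative sentiment lexicons.
POSITIVE = {"great", "excellent", "effective", "promising"}
NEGATIVE = {"adverse", "poor", "toxic", "disappointing"}

def sentiment_score(tweet: str) -> int:
    """Naive lexicon score: +1 per positive word, -1 per negative word."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("promising results and great talk"))  # 2
print(sentiment_score("poor tolerability and adverse effects"))  # -2
```

Aggregating such scores over a #meeting hashtag gives a rough read on how a session or product announcement was received.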

Examples of Twitter Use

A. Scientific Meeting Coverage

In a paper entitled Twitter Use at a Family Medicine Conference: Analyzing #STFM13, authors Ranit Mishori, MD, Brendan Levy, MD, and Benjamin Donvan analyzed the public tweets from the 2013 Society of Teachers of Family Medicine (STFM) conference bearing the meeting-specific hashtag #STFM13. Thirteen percent of conference attendees (181 users) used #STFM13 to share their thoughts on the meeting (1,818 total tweets), showing a desire for social media interaction at conferences but suggesting growth potential in this area. As we have also seen, the heaviest volume of conference tweets originated from a small number of Twitter users; however, most tweets were related to session content.

However, as the authors note, although it is easy to measure common metrics such as number of tweets and retweets, determining quality of engagement from tweets would be important for gauging the value of Twitter-based social-media coverage of medical conferences.
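The concentration of tweet volume in a few accounts, noted above, is easy to quantify as the share of all tweets contributed by the top-k users. A sketch with toy data (the author handles and counts are invented):

```python
from collections import Counter

def top_user_share(tweet_authors: list, k: int = 3) -> float:
    """Fraction of all tweets produced by the k most active users."""
    counts = Counter(tweet_authors)
    top = sum(n for _, n in counts.most_common(k))
    return top / len(tweet_authors)

# One author per tweet; here two users account for half the conference traffic.
authors = ["a", "a", "a", "b", "b", "c", "d", "e", "f", "g"]
print(top_user_share(authors, k=2))  # 0.5
```

A high value for small k is the pattern the STFM13 analysis observed: a handful of attendees generate most of the meeting's Twitter coverage.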

The authors compared their results with similar analytics generated by the HealthCare Hashtag Project, a project and database of medically-related hashtag use, coordinated and maintained by the company Symplur. Symplur’s database includes medical and scientific conference Twitter coverage but also Twitter usage related to patient care. In this case the database was used to compare meeting tweets and hashtag use with the 2012 STFM conference.

These are some of the published journal articles that have employed Symplur (www.symplur.com) data in their research of Twitter usage in medical conferences.

B. Twitter Usage for Patient Care and Engagement

Although patients’ desire to use and interact with their physicians over social media is increasing, along with the number of health-related social media platforms and applications, there are certain obstacles to patient-health provider social media interaction, including the lack of a regulatory framework as well as database and security issues. Some of the successes and issues of social media and healthcare are discussed in the post Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

However, there is also a concern whether social media truly engages the patient and improves patient education. In a study of Twitter communications by breast cancer patients (Tweeting about breast cancer), the authors noticed that tweeting was a singular event. The majority of tweets did not promote any specific preventive behavior. The authors concluded that “Twitter is being used mostly as a one-way communication tool.” (Using Twitter for breast cancer prevention: an analysis of breast cancer awareness month. Thackeray R, Burton SH, Giraud-Carrier C, Rollins S, Draper CR. BMC Cancer. 2013;13:508).

In addition a new poll by Harris Interactive and HealthDay shows one third of patients want some mobile interaction with their physicians.

Some papers cited in Symplur’s HealthCare Hashtag Project database on patient use of Twitter include:

C. Twitter Use in Pharmacovigilance to Monitor Adverse Events

Pharmacovigilance is the systematic detection, reporting, collecting, and monitoring of adverse events pre- and post-market of a therapeutic intervention (drug, device, modality, e.g.). In a Cutting Edge Information study, 56% of pharma companies’ databases serve as an adverse event channel, and more companies are turning to social media to track adverse events (in Pharmacovigilance Teams Turn to Technology for Adverse Event Reporting Needs). In addition there have been many reports (see Digital Drug Safety Surveillance: Monitoring Pharmaceutical Products in Twitter) that show patients are frequently tweeting about their adverse events.

There have been concerns with using Twitter and social media to monitor for adverse events. For example, the FDA funded a study in which a team of researchers from Harvard Medical School and other academic centers examined more than 60,000 tweets, of which 4,401 were manually categorized as resembling adverse events and compared with the FDA pharmacovigilance databases. Problems associated with such a social media strategy were the inability to obtain extra, needed information from patients and difficulty in separating relevant Tweets from irrelevant chatter. The UK has launched a similar program called WEB-RADR to determine if monitoring #drug_reaction could be useful for monitoring adverse events. Many researchers have found the adverse-event-related tweets “noisy” due to varied language, but noted that many people do understand some principles of causation, including when an adverse event subsides after discontinuing the drug.

However Dr. Clark Freifeld, Ph.D., from Boston University and founder of the startup Epidemico, feels his company has the algorithms that can separate out the true adverse events from the junk. According to their web site, their algorithm has high accuracy when compared to the FDA database. Dr. Freifeld admits that Twitter use for pharmacovigilance purposes is probably a starting point for further follow-up, as each patient needs to fill out the four-page forms required for data entry into the FDA database.
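A minimal sketch of the kind of keyword filter that flags candidate adverse-event tweets for human follow-up. The drug names and symptom lexicon here are hypothetical, and Epidemico's actual algorithms are far more sophisticated than simple co-occurrence:

```python
import re

# Illustrative symptom lexicon; real pharmacovigilance systems use far richer
# vocabularies (MedDRA terms, misspellings, slang) plus machine learning.
ADVERSE_TERMS = re.compile(r"\b(rash|nausea|headache|dizziness|vomiting)\b", re.I)
DRUG_NAMES = re.compile(r"\b(drugx|drugy)\b", re.I)  # hypothetical product names

def possible_adverse_event(tweet: str) -> bool:
    """Flag tweets mentioning both a monitored drug and a symptom term."""
    return bool(ADVERSE_TERMS.search(tweet)) and bool(DRUG_NAMES.search(tweet))

print(possible_adverse_event("Started DrugX yesterday and now this headache won't quit"))  # True
print(possible_adverse_event("DrugX is working great for me"))  # False
```

As Dr. Freifeld notes, such a filter is only a starting point: flagged tweets still require follow-up with the patient and formal entry into the FDA database.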

D. Use of Twitter in Big Data Analytics

Published on Aug 28, 2012

http://blogs.ischool.berkeley.edu/i29…

Course: Information 290. Analyzing Big Data with Twitter
School of Information
UC Berkeley

Lecture 1: August 23, 2012

Course description:
How to store, process, analyze and make sense of Big Data is of increasing interest and importance to technology companies, a wide range of industries, and academic institutions. In this course, UC Berkeley professors and Twitter engineers will lecture on the most cutting-edge algorithms and software tools for data analytics as applied to Twitter microblog data. Topics will include applied natural language processing algorithms such as sentiment analysis, large scale anomaly detection, real-time search, information diffusion and outbreak detection, trend detection in social streams, recommendation algorithms, and advanced frameworks for distributed computing. Social science perspectives on analyzing social media will also be covered.

This is a hands-on project course in which students are expected to form teams to complete intensive programming and analytics projects using the real-world example of Twitter data and code bases. Engineers from Twitter will help advise student projects, and students will have the option of presenting their final project presentations to an audience of engineers at the headquarters of Twitter in San Francisco (in addition to on campus). Project topics include building on existing infrastructure tools, building Twitter apps, and analyzing Twitter data. Access to data will be provided.

Other posts on this site on the USE OF SOCIAL MEDIA AND TWITTER IN HEALTHCARE and Conference Coverage include:

Methodology for Conference Coverage using Social Media: 2014 MassBio Annual Meeting 4/3 – 4/4 2014, Royal Sonesta Hotel, Cambridge, MA

Strategy for Event Joint Promotion: 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

REAL TIME Cancer Conference Coverage: A Novel Methodology for Authentic Reporting on Presentations and Discussions launched via Twitter.com @ The 2nd ANNUAL Sachs Cancer Bio Partnering & Investment Forum in Drug Development, 19th March 2014 • New York Academy of Sciences • USA

PCCI’s 7th Annual Roundtable “Crowdfunding for Life Sciences: A Bridge Over Troubled Waters?” May 12 2014 Embassy Suites Hotel, Chesterbrook PA 6:00-9:30 PM

CRISPR-Cas9 Discovery and Development of Programmable Genome Engineering – Gabbay Award Lectures in Biotechnology and Medicine – Hosted by Rosenstiel Basic Medical Sciences Research Center, 10/27/14 3:30PM Brandeis University, Gerstenzang 121

Tweeting on 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

http://pharmaceuticalintelligence.com/press-coverage/

Statistical Analysis of Tweet Feeds from the 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

1st Pitch Life Science- Philadelphia- What VCs Really Think of your Pitch

What VCs Think about Your Pitch? Panel Summary of 1st Pitch Life Science Philly

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

Read Full Post »

Track 9 Pharmaceutical R&D Informatics: Collaboration, Data Science and Biologics @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

April 30, 2014

 

Big Data and Data Science in R&D and Translational Research

10:50 Chairperson’s Remarks

Ralph Haffner, Local Area Head, Research Informatics, F. Hoffmann-La Roche AG

11:00 Can Data Science Save Pharmaceutical R&D?

Jason M. Johnson, Ph.D., Associate Vice President,

Scientific Informatics & Early Development and Discovery Sciences IT, Merck

Although both premises – that the viability of pharmaceutical R&D is mortally threatened and that modern “data science” is a relevant superhero – are suspect, it is clear that R&D productivity is progressively declining and many areas of R&D suboptimally use data in decision-making. We will discuss some barriers to our overdue information revolution, and our strategy for overcoming them.

11:30 Enabling Data Science in Externalized Pharmaceutical R&D

Sándor Szalma, Ph.D., Head, External Innovation, R&D IT,

Janssen Research & Development, LLC

Pharmaceutical companies have historically been involved in many external partnerships. With the recent proliferation of hosted solutions and the availability of cost-effective, massive high-performance computing resources, there is an opportunity and a requirement now to enable collaborative data science. We discuss our experience in implementing robust solutions and pre-competitive approaches to further these goals.

12:00 pm Co-Presentation: Collaborative Waveform Analytics: How New Approaches in Machine Learning and Enterprise Analytics will Extend Expert Knowledge and Improve Safety Assessment

  • Tim Carruthers, CEO, Neural ID
  • Scott Weiss, Director, Product Strategy, IDBS

Neural ID’s Intelligent Waveform Service (IWS) delivers the only enterprise biosignal analysis solution combining machine learning with human expertise. A collaborative platform supporting all phases of research and development, IWS addresses a significant unmet need, delivering scalable analytics and a single interoperable data format to transform productivity in life sciences. By enabling analysis from BioBook (IDBS) to original biosignals, IWS enables users of BioBook to evaluate cardio safety assessment across the R&D lifecycle.

12:15 Building a Life Sciences Data Lake: A Useful Approach to Big Data

Ben Szekely, Director & Founding Engineer,

Cambridge Semantics

The promise of Big Data is in its ability to give us technology that can cope with overwhelming volume and variety of information that pervades R&D informatics. But the challenges are in practical use of disconnected and poorly described data. We will discuss: Linking Big Data from diverse sources for easy understanding and reuse; Building R&D informatics applications on top of a Life Sciences Data Lake; and Applications of a Data Lake in Pharma.

12:40 Luncheon Presentation I: Chemical Data Visualization in Spotfire

Matthew Stahl, Ph.D., Senior Vice President,

OpenEye Scientific Software

Spotfire deftly facilitates the analysis and interrogation of data sets. Domain specific data, such as chemistry, presents a set of challenges that general data analysis tools have difficulty addressing directly. Fortunately, Spotfire is an extensible platform that can be augmented with domain specific abilities. Spotfire has been augmented to naturally handle cheminformatics and chemical data visualization through the integration of OpenEye toolkits. The OpenEye chemistry extensions for Spotfire will be presented.

1:10 Luncheon Presentation II 

1:50 Chairperson’s Remarks

Yuriy Gankin, Ph.D., Co. Founder and CSO, GGA Software Services

1:55 Enable Translational Science by Integrating Data across the R&D Organization

Christian Gossens, Ph.D., Global Head, pRED Development Informatics Team,

pRED Informatics, F. Hoffmann-La Roche Ltd.

Multi-national pharmaceutical companies face an amazingly complex information management environment. The presentation will show that a systematic system landscaping approach is an effective tool to build a sustainable integrated data environment. Data integration is not mainly about technology, but the use and implementation of it.

2:25 The Role of Collaboration in Enabling Great Science in the Digital Age: The BARD Data Science Case Study

Andrea DeSouza, Director, Informatics & Data Analysis, Broad Institute

BARD (BioAssay Research Database) is a new, public web portal that uses a standard representation and common language for organizing chemical biology data. In this talk, I describe how data professionals and scientists collaborated to develop BARD, organize the NIH Molecular Libraries Program data, and create a new standard for bioassay data exchange.

May 1, 2014

BIG DATA AND DATA SCIENCE IN R&D AND TRANSLATIONAL RESEARCH

10:30 Chairperson’s Opening Remarks

John Koch, Director, Scientific Information Architecture & Search, Merck

10:35 The Role of a Data Scientist in Drug Discovery and Development

Anastasia (Khoury) Christianson, Ph.D., Head, Translational R&D IT, Bristol-Myers Squibb

A major challenge in drug discovery and development is finding all the relevant data, information, and knowledge to ensure informed, evidence-based decisions in drug projects, including meaningful correlations between preclinical observations and clinical outcomes. This presentation will describe where and how data scientists can support pharma R&D.

11:05 Designing and Building a Data Sciences Capability to Support R&D and Corporate Big Data Needs

Shoibal Datta, Ph.D., Director, Data Sciences, Biogen Idec

To achieve Biogen Idec’s strategic goals, we have built a cross-disciplinary team to focus on key areas of interest and the required capabilities. To provide a reusable set of IT services, we have broken down our platform to focus on the ingestion, digestion, extraction, and analysis of data. In this presentation, we will outline how we brought focus and prioritization to our data sciences needs, our data sciences architecture, lessons learned, and our future direction.

11:35 Data Experts: Improving Translational Drug-Development Efficiency (Sponsored)

Jamie MacPherson, Ph.D., Consultant, Tessella

We report on a novel approach to translational informatics support: embedding “Data Experts” within drug-project teams. Data Experts combine first-line informatics support with business analysis. They help teams exploit data sources that are diverse in type, scale, and quality; analyse user requirements; and prototype potential software solutions. We then explore scaling this approach from a single drug-development team to all teams.

 

Read Full Post »
