
Posts Tagged ‘Reproducibility’


Mozilla Science Lab Promotes Data Reproduction Through Open Access: Report from 9/10/2015 Online Meeting

Reporter: Stephen J. Williams, Ph.D.

Mozilla Inc. is developing a platform where scientists can discuss frameworks for sharing scientific data and tackle the problems of scientific reproducibility in an Open Access manner. According to their blog:

https://blog.mozilla.org/blog/2013/06/14/5992/

We’re excited to announce the launch of the Mozilla Science Lab, a new initiative that will help researchers around the world use the open web to shape science’s future.

Scientists created the web — but the open web still hasn’t transformed scientific practice to the same extent we’ve seen in other areas like media, education and business. For all of the incredible discoveries of the last century, science is still largely rooted in the “analog” age. Credit systems in science are still largely based around “papers,” for example, and as a result researchers are often discouraged from sharing, learning, reusing, and adopting the type of open and collaborative learning that the web makes possible.

The Science Lab will foster dialog between the open web community and researchers to tackle this challenge. Together they’ll share ideas, tools, and best practices for using next-generation web solutions to solve real problems in science, and explore ways to make research more agile and collaborative.

On their blog they highlight various projects related to promoting Open Access for scientific data.

On September 10, 2015, Mozilla Science Lab held its scheduled meeting on scientific data reproducibility. The meeting was free, held online, and covered on Etherpad and social media. The Twitter hashtag for updates and meeting discussion is #mozscience (https://twitter.com/search?q=%23mozscience).

Open Access Meeting Announcement on Twitter

https://twitter.com/MozillaScience/status/641642491532283904

Mozilla Science Lab @MozillaScience

Join @khinsen @abbycabs + @EvoMRI tmrw (11AM ET) to hear about replication, publishing + #openscience. Details: https://etherpad.mozilla.org/sciencelab-calls-sep10-2015 …

AGENDA:

  • Mozilla Science Lab Updates
  • Staff welcomes and thank yous:
  • Welcoming Zannah Marsh, our first Instructional Designer
  • Workshopping the “Working Open” guide:
    • Discussion of Future foundation and GitHub projects
    • Discussion of submission for open science project funding
  • Contributorship Badges Pilot – an update! – Abby Cabunoc Mayes – @abbycabs
  • Will be live on GigaScience September 17th!
  • Where you can jump in: https://github.com/mozillascience/paperbadger/issues/17
  • Questions regarding coding projects – Abby will coordinate efforts on coding into their codebase
  • The journal will publish, authors and reviewers get a badge, and their efforts and comments will appear on GigaScience: GigaScience will give credit for your reviews, supporting an Open Science discussion

Roadmap for

  • Fellows review is in full swing!
  • MozFest update:
  • Miss the submission deadline? You can still apply to join our Open Research Accelerator and join us for the event (PLUS get a DOI for your submission and 1:1 help)

A discussion by Konrad Hinsen (@khinsen) on ReScience, a journal focused on scientific replication, was presented:

  • ReScience – a new journal for replications – Konrad Hinsen @khinsen
  • ReScience is dedicated to publishing replications of previously published computational studies, along with all the code required to replicate the results.
  • ReScience lives entirely on GitHub. Submissions take the form of a Git repository, and review takes place in the open through GitHub issues. This also means that ReScience is free for everyone (authors, readers, reviewers, editors… well, I said everyone, right?), as long as GitHub is willing to host it.
  • ReScience was launched just a few days ago and is evolving quickly. To stay up to date, follow @ReScienceEds on Twitter. If you want to volunteer as a reviewer, please contact the editorial board.

The ReScience Journal: Reproducible Science is Good. Replicated Science is Better.

ReScience is a peer-reviewed journal that targets computational research and encourages the explicit reproduction of already published research, promoting new, open-source implementations in order to ensure the original research is reproducible. To achieve this goal, the whole editing chain is radically different from that of any traditional scientific journal: ReScience lives on GitHub, where each new implementation is made available together with comments, explanations, and tests. Each submission takes the form of a pull request that is publicly reviewed and tested in order to guarantee that any researcher can re-use it. If you have ever reproduced a computational result from the literature, ReScience is the perfect place to publish the new implementation. – The Editorial Board

Notes from his talk:

– reviewers must be able to replicate the paper’s results as written, according to the experimental methods

– All authors on ReScience need to be on GitHub

– MATLAB replications are not accepted (ReScience requires open-source implementations); replication can be purely computational

  • Research Ideas and Outcomes Journal – Daniel Mietchen @EvoMRI
    • Postdoc at the Natural History Museum, London, doing data mining; it is a huge waste that some 90% of research proposals never get used, so this journal allows for publishing proposals
    • Learned how to write proposals himself by finding a proposal published online under Open Access
    • Reviewing system based on online review, as in Google Docs, where people view and comment
    • Growing editorial and advisory board; venturing into new subject areas like the humanities, economics, and biological research, trying to link diverse areas under SOCIAL IMPACT labeling
    • BIG question: how to get scientists to publish their proposals, especially to improve the efficiency of collaboration and reduce duplicated effort, as well as to encourage reagent sharing
    • Crowdfunding platform used as a post-publication funding mechanism; still in the works
    • They need a lot of help on the editorial board, so if you have a PhD, PLEASE JOIN
  • Website:
  • Background:
  • Science article:
  • Some key features:
  • for publishing all steps of the research cycle, from proposals (funded and not yet funded) onwards
  • maps submissions to societal challenges
  • focus on post-publication peer review; pre-submission endorsement; all reviews public
  • lets authors choose which publishing services they want, e.g. whether they’d like journal-mediated peer review
  • collaborative WYSIWYG authoring and publishing platform based on JATS XML

A brief discussion of upcoming events on @MozillaScience

Meetings are held on the second Thursday of each month.

Additional plugins, coding, and new publishing formats are available at https://www.mozillascience.org/

Other related articles on OPEN ACCESS publishing published in this Open Access Online Scientific Journal include the following:

Archives of Medicine (AOM) to Publish from “Leaders in Pharmaceutical Business Intelligence (LPBI)” Open Access On-Line Scientific Journal http://pharmaceuticalintelligence.com

Annual Growth in NIH Clicks: 32% Open Access Online Scientific Journal http://pharmaceuticalintelligence.com

Collaborations and Open Access Innovations – CHI, BioIT World, 4/29 – 5/1/2014, Seaport World Trade Center, Boston

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

Reconstructed Science Communication for Open Access Online Scientific Curation

The Fatal Self Distraction of the Academic Publishing Industry: The Solution of the Open Access Online Scientific Journals

 




Echocardiogram Quantification: Quest for Reproducibility and Dependability

Reporter: Aviva Lev-Ari, PhD, RN

How can echo quantification become more reproducible and dependable?

 Senior Director, Global Cardiology at Innovations in Cardiology

Innovations in Cardiology

a subgroup of Innovations In Health on LinkedIn.com

Echocardiography encompasses an array of clinically important tasks including quantifying cardiac chamber size, ventricular mass and function.

Based on your experience, how can echo quantification become more reproducible and dependable?

Comments made by Group members:

 

Yoni Tirosh

CEO at M.I. Medical Incentive Ltd.

Hello Ivan,
I’ve sent you a personal message regarding an innovative development related to Echo use.

 

Tim Zepick

Office Manager; Technical Director, Ultrasound at Line Medical

It’s impossible.

There are large variations in the quality of ultrasound systems. There are also large variations in the skill levels of operators, and those skill levels change over time. But the most detrimental factor is that the human body is a dynamic and unique system. Some subjects are technically difficult, and LV function is basically impossible to assess, even with an excellent US system. Measurement of LVPWiD is a guess on these patients. Then there are subjects on whom you can obtain excellent images at intercostal spaces #2, #3, and #4; the anatomy is bisected at a different angle at each approach and might yield three different measurements. The same can be said for a lot of the 2-D length measurements. I can probably make your RA five centimeters wide, if I try.

That said, doing about 5,000 studies will get you pretty good at recognizing your limitations, the need to remeasure erroneous data, common failings of ultrasound physics, and other sources of error.

Thankfully, inexperienced techs get a good education on spotting and evaluating the “exciting” stuff, because an eye for the nuanced stuff takes time to develop.

 

Wayne Peterson

Product Manager

Ivan,
As a former Philips employee, hello. My clinical skills included cardiac ultrasound. In response to your question, a software program with edge resolution enhancement and auto analysis would be amazing. It would remove user variability.

 

Tony Gallagher

Clinical Coordinator of Cardiology and Cardio-Pulmonary Rehabilitation at Floyd Medical Center

I agree with Tim that it is not possible. In deference to Wayne, edge recognition software would help. But the variety of equipment, the range of skill levels, and even people’s varied vision prevent 100% agreement.

Even at the larger conferences, when you attend the “read with the experts” courses, you see that they tend to disagree looking at the same images.

Unless equipment, education, and criteria for performing studies get standardized, it’s not going to happen.

 

Alberto Gomez

Research Assistant at King’s College London

Reproducibility and reduction (i.e., not complete removal) of variability could be achieved in several ways, for example: multi-view imaging to remove view dependency of edge definition and occlusion; angle-independent flow quantification using 3D color Doppler; image fusion and compounding with tracked probes; and simultaneous (or quasi-simultaneous) multi-probe systems. All of these are engineering and research challenges, but we will get there. How long it takes depends largely on how willing manufacturers are to open up to research institutions, and how willing research institutions are to share and exploit results.

 

Aviva Lev-Ari, PhD, RN

Cardiovascular Original Research: Curation Methodology Designer at Leaders in Pharmaceutical Business Intelligence

Please visit us
http://pharmaceuticalintelligence.com

On the right-hand side, under Categories, please click on Medical Imaging, or in the search box type
Justin Pearlman, MD, PhD, FACC

Thank you
Aviva Lev-Ari, PhD, RN

 

Clifford Thornton

Echocardiography Technician at CapitalHealth

You’re a very brave man Ivan, asking the holy grail question of echocardiography! I’ve been doing echoes at prominent institutions for 10 years, been registered in echo since 2006, have performed probably more than 6,000 adult echoes, teach echo to technicians and sometimes residents, attended echo related conferences and read the latest Dr. Feigenbaum and Dr. Otto textbooks – so I can speak to this topic.

Here’s the deal. From what I understand, the best echo has to offer as far as EF quantification goes (which I think is usually the focus) is 3D volume quantification. The problem is that, primarily due to reimbursement issues, this technology has seen very slow adoption and application, and therefore low availability. A great application of this technology would be evaluating a patient for possible LVAD placement/treatment or heart transplant.

Given this, what most technicians are left with is 2-dimensional echo. There are many ways we can measure ejection fraction with 2D echo, including direct 2D measurement (measuring the left ventricular internal dimension at end-diastole (LVIDd) and the left ventricular internal dimension at end-systole (LVIDs)). Most modern echo systems have very specific packages that enable fairly easy measurements of these aspects of the heart, clearly labeled. A technician can also make a similar measurement using M-Mode. The Achilles’ heel of M-Mode-based measurement of EF is that the heart in the picture (from the parasternal long-axis window/view – PLAX) must be on-axis. If the picture is off-axis, direct 2D measurements will be more accurate than M-Mode. Side note: this all goes out the window with a poorly trained or lazy echo tech who has little to no idea what they’re doing – and unfortunately there are too many of these out there (more on this later). So, as far as 2D left ventricular dimension measurements of EF go, direct 2D (on-screen) measurements are preferable.
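To make the linear-dimension approach concrete, here is a minimal sketch of estimating EF from LVIDd and LVIDs. The comment does not name a specific formula, so this uses the Teichholz correction of the cubed formula as one common convention; the dimensions are made-up example values:

```python
def teichholz_volume(d_cm):
    """Estimate LV volume (mL) from one internal dimension (cm) using
    the Teichholz correction of the cubed formula: V = 7.0/(2.4+D) * D^3."""
    return (7.0 / (2.4 + d_cm)) * d_cm ** 3

def ejection_fraction(lvid_d_cm, lvid_s_cm):
    """EF = (EDV - ESV) / EDV, from end-diastolic and end-systolic dimensions."""
    edv = teichholz_volume(lvid_d_cm)  # end-diastolic volume
    esv = teichholz_volume(lvid_s_cm)  # end-systolic volume
    return (edv - esv) / edv

# Hypothetical on-axis measurements: LVIDd = 5.0 cm, LVIDs = 3.2 cm
ef = ejection_fraction(5.0, 3.2)
print(f"EF = {ef:.0%}")  # prints "EF = 65%"
```

The caveat from the comment still applies: if the view is off-axis, the dimensions themselves are wrong, and no formula recovers that.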

According to a Wake Forest cardiology conference I attended several years ago in Orlando, “The Beat Goes On,” the best or most accurate measurement of EF with 2D echo (assuming there’s a good, well-trained, knowledgeable, and hard-working echo tech performing the test) is Simpson’s biplane method/quantification. This measurement is based on the volume of blood in the heart at end-diastole and end-systole (see the pattern). Once this is calculated, a technician can then calculate the stroke volume (which most packages calculate automatically once the proper measurements are entered) and the cardiac output (stroke volume (SV) × heart rate (HR)). As a reference, the heart wants to push around 5 liters of blood per minute to sustain life and normal body function. Of course this can greatly increase with exercise or decrease slightly with rest, just as respiratory function can.
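The stroke volume and cardiac output arithmetic described above can be sketched directly. The volumes and heart rate below are hypothetical, chosen to land near the roughly 5 L/min resting figure mentioned:

```python
def stroke_volume(edv_ml, esv_ml):
    """Stroke volume (mL) = end-diastolic volume minus end-systolic volume."""
    return edv_ml - esv_ml

def cardiac_output(sv_ml, hr_bpm):
    """Cardiac output (L/min) = stroke volume (SV) x heart rate (HR)."""
    return sv_ml * hr_bpm / 1000.0

sv = stroke_volume(120.0, 50.0)  # hypothetical EDV/ESV in mL
co = cardiac_output(sv, 70)      # hypothetical resting heart rate, 70 bpm
print(f"SV = {sv:.0f} mL, CO = {co:.1f} L/min")  # prints "SV = 70 mL, CO = 4.9 L/min"
```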

The way the biplane Simpson’s method is performed is that a technician calculates the following from both the apical 4-chamber view and the apical 2-chamber view (again, most modern systems have these measurements built into the package and labeled, and they can also be exported directly to the preliminary report through DICOM specs):
1. Left ventricular volume at End-Diastole (LVVED) – A4C

2. Left ventricular volume at End-Systole (LVVES) – A4C

3. Left ventricular volume at End-Diastole (LVVED) – A2C

4. Left ventricular volume at End-Systole (LVVES) – A2C

Yes, you can imagine this is very time-consuming. The measurements can be done after the scan is complete; however, once the patient leaves, you cannot go back and adjust your view if you think your picture is foreshortened or off-axis.
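For reference, the volume that the biplane Simpson’s (“method of discs”) package computes from those tracings can be sketched as below. The disc formula V = (pi/4) * sum(a_i * b_i) * (L/N) is the standard one; the uniform diameter lists are a made-up sanity check, not real border tracings:

```python
import math

def biplane_simpson_volume(a4c_diams_cm, a2c_diams_cm, length_cm):
    """Biplane method of discs: the LV is sliced into N discs along its long
    axis; each disc is an ellipse whose two diameters come from the apical
    4-chamber and apical 2-chamber tracings.
    V = (pi/4) * sum(a_i * b_i) * (L / N), in mL for cm inputs."""
    if len(a4c_diams_cm) != len(a2c_diams_cm):
        raise ValueError("both views must contribute the same number of discs")
    n = len(a4c_diams_cm)
    disc_height = length_cm / n
    return (math.pi / 4.0) * sum(a * b for a, b in zip(a4c_diams_cm, a2c_diams_cm)) * disc_height

# Sanity check with a cylinder: 20 discs, every diameter 4 cm, length 8 cm
# -> pi * r^2 * L = pi * 4 * 8, about 100.5 mL
v = biplane_simpson_volume([4.0] * 20, [4.0] * 20, 8.0)
print(f"V = {v:.1f} mL")  # prints "V = 100.5 mL"
```

EF then follows as (EDV − ESV) / EDV from the end-diastolic and end-systolic volumes.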

Please see very relevant document to this topic from the American Society of Echocardiography, Committee Recommendations:
http://files.asecho.org/files/ChamberQuantification.pdf

 

Clifford Thornton

Echocardiography Technician at CapitalHealth

cont’d–
The bottom line is that quantification in echo, particularly in calculating EF, depends on the situation. Simpson’s method is not performed routinely in most labs because if the EF visually looks normal (around 55%–70%) from the long-axis, short-axis, apical 4-chamber, and apical 2-chamber views, then there’s usually not a huge need for it; little additional benefit. I try to do it as much as possible because I like to do as good an echo as I can, and it’s good practice and a little fun when you have very clear, great-quality pictures (ironically, these are the people whose EF you know is probably normal the minute they walk through your door!).

There are many tools and techniques one can employ to optimize their 2D/Simpson’s EF measurements. Here are a few:
* Get the patient into the proper position (left lateral decubitus) — I use a wedge to keep them on their left side and keep their head well supported with a rolled-up pillow or rolled-up blankets

* Use the proper echo settings/frequency. Use the penetration setting if you have to, but if the patient images well, use the best resolution setting you can without sacrificing endocardial border definition — otherwise you’re defeating the purpose

* Use proper breathing techniques (I find from the parasternal window it’s best to have the patient inhale, exhale all the way, and then hold their breath for loop acquisition, and best to have them inhale and hold for apical acquisitions – but just play around with it until you get the picture you want)

* Keep the picture on axis and avoid foreshortening — this is very key for the Simpson’s method of discs

Now, you’ve tried all this, you’re sweating, your hand and shoulder are about to fall off, you see stars or angels or both and the patient and their family think you are completely clueless and think you’re torturing their Wife/Husband/Daughter/Friend/etc. and you’re wondering if you’ll have a job tomorrow. So what do you do?

Definity Echo contrast (Perflutren Lipid Microsphere) – http://www.definityimaging.com/ – you say? Yes, possibly. You need to A. Get the patient’s consent (although this is beginning to change) B. Establish IV access for the injection of the Definity solution C. Activate the Definity and use it within a certain period D. Utilize it correctly.

Basically, Definity contrast is little gas bubbles that reflect the ultrasound beams very strongly and allow for a stronger, clearer, better-resolution image. The heart walls/endocardial borders are one color and the contrast is the other (the contrast is usually the white, milky substance you see inside the left ventricle while it’s filling and contracting). Most people think it’s pretty “cool” when they see it, and it can make a dramatic difference in how you visually estimate or calculate the EF. As I mentioned, Simpson’s method (the preferred 2D EF calculation method) is highly dependent on operator skill and effort, and hence picture quality – and Definity contrast can greatly enhance picture quality. Last week I had a patient where you could barely see any endocardial border without Definity, and visually estimating his EF would have been a total shot in the dark. Well, we administered Definity, and I’m not lying, it was still a tough scan, but once the Definity was injected and began to appear in the right ventricle and then the left, I could see immediately that his EF was completely normal (55–60%). This was important to assess clinically because the patient was in the CCU at the time, s/p CABG. Judging whether the EF is normal or not can play a big part in clinical decision making for other conditions.

Ironically getting an accurate EF has to do more with having the right technician perform your test than it has to do with technology or anything else. And unfortunately there’s no lack of pitfalls there.

 

Reza Mehzad, MD, MPH

Mercy Heart Institute

Full automation of 3D echocardiography volume assessment AND safe contrast agents that can be used with all echo studies.


