Thursday, May 14, 2015

AAAS Forum Take #2

Another perspective on the AAAS Forum, from Matthew Facciani:

I have provided scientific testimony and met with some of my local legislators, but I had never had any formal exposure to science policy. I was excited to attend the AAAS Science & Technology Policy Forum and learn more about how scientists can influence policy. The information I absorbed at the conference was overwhelming but incredibly stimulating. Some of the lectures covered budget cuts and the discouraging barriers to shaping science policy, yet the atmosphere was decidedly optimistic and focused on how we can create positive change.

One of my favorite aspects of the conference was the discussion of how to communicate science effectively to non-scientists. Before we can even talk about funding, the general public needs to understand how science works and why basic science is so important. For example, science never proves anything with 100% certainty, so it can sound weak when politicians say only that science “suggests” rather than “proves.” One creative way around this problem is to use comparisons. Instead of saying “science suggests GMOs are safe,” we could say “scientists are as sure that GMOs are safe as they are that smoking is bad for your health.” The conference was full of effective tactics like these, and I left with a sense of confidence that we can collectively make a difference in science policy.


Matthew Facciani is a sociology PhD student at The University of South Carolina. He is also a gender equality activist and science communicator. Learn more at www.matthewfacciani.com, and follow him at @MatthewFacciani.

Monday, May 11, 2015

2015 AAAS Science and Technology Policy Forum Summary


I recently had the opportunity to attend the 2015 AAAS Science and Technology Policy Forum in Washington, D.C. This annual meeting brings together a range of academics and professionals to discuss the broad S&T policy landscape. Below are some of my takeaways from the meeting. I hope to have additional comments from other National Science Policy Group members up soon.

By Chris Yarosh

The talks and panels at the Forum covered a huge range of topics, from the federal budget and the appropriations outlook to manufacturing policy and, of course, shrimp treadmills. My reading of the themes uniting this gamut is just that: my opinion, and it should be taken as such. That said, the threads I picked up on in many of the talks can be summarized by three C’s: cooperation, communication, and citizenship.

First up, cooperation. Although sequestration’s most jarring impacts have faded, AAAS’s budget guru Matthew Hourihan warns that fiscal year 2016 could see a return of…let’s call it enhanced frugality. Such cuts would fall disproportionately on social science, clean energy, and geoscience programs. With the possibility of more cuts to come, many speakers suggested that increased cooperation between entities could maximize value. This means more partnership between science agencies and private organizations, as mentioned by White House Office of Science and Technology Policy Director John Holdren, and between federal agencies and state and local governments, as highlighted by NSF Director France Córdova. Cooperation across directorates and agencies will also be a major focus of big interdisciplinary science and of efforts to improve STEM education. Whatever the form, the name of the game will be recognizing fiscal limitations and fostering cooperation to make the most of what is available.

The next “C” is communication. Dr. Córdova made a point of listing communication among the top challenges facing the NSF, and talks given by Drs. Patricia Brennan (of duck penis fame) and David Scholnick (the aforementioned shrimp) reinforced the scale of this challenge. As these two researchers reminded us so clearly, information on the Web and in the media can easily be misconstrued for political or other purposes in the absence of the correct scientific context. To combat this, many speakers made it clear that basic science researchers must engage a wider audience, including elected officials, or risk having our research misrepresented, distorted, or deemed unnecessary. As Dr. Brennan said, it is important to remind the public that while not every basic research project develops into something applied, “every application derives from basic science.”

The last “C” is citizenship. Several of the speakers discussed the culture of science and the interconnections between scientists and non-scientists. I think these presentations collectively described what I’ll call good science citizenship. For one, good science citizenship means that scientists will increasingly need to recognize our role in the wider innovation ecosystem if major new programs are ever going to move forward. For example, a panel on new initiatives in biomedical research focused on 21st Century Cures and President Obama’s Precision Medicine Initiative. Both of these proposals are massive undertakings: the former will involve the NIH and FDA collaborating to speed the development and introduction of new drugs to the market, while the latter will require buy-in from a spectrum of stakeholders including funders, patient groups, bioethicists, and civil liberties organizations. Scientists are critical to these endeavors, obviously, but we will need to work seamlessly across disciplines and with other stakeholders to ensure the data collected from these programs are interpreted and applied responsibly.

Good science citizenship will also require critical evaluation of the scientific enterprise and the separation of the scientific process from scientific values, a duality discussed during the William D. Carey lecture given by Dr. William Press. This means that scientists must actively protect the integrity of the research enterprise by supporting all branches of science, including the social sciences (a topic highlighted throughout the event), and by rigorously weeding out misconduct and fraud. Scientists must also do a better job of making our rationalist approach work with different value systems, recognizing that people will need to come together to address major challenges like climate change. Part of this will be communicating better with the public, but part of it will also be learning how different value systems influence judgment of complicated scientific issues (the subject of another great panel on Public Opinion and Policy Making). Good science citizenship, cultivated through professionalism and respectful engagement of non-scientists, will ultimately be critical to maintaining broad support for science in the U.S.

Thursday, April 30, 2015

Asking for a Small Piece of the Nation’s Pie

By Rosalind Mott, PhD


This article was originally published in the Penn Biomed Postdoctoral Council Newsletter (Spring 2015).

Historically, the NIH has received straightforward bipartisan support; in particular, the doubling of the NIH budget from FY98 to FY03 led to rapid growth in university-based research. Unfortunately, since 2003 inflation has slowly eaten away at the gains from that doubling (Figure 1), and aside from a brief boost in 2009 from the American Recovery and Reinvestment Act (ARRA), there has been little recovery. Making matters worse, Congress now has an abysmal record of moving policy through as partisan fighting dominates the Hill.

Fig 1: The slow erosion of the NIH budget over the past decade
(figure adapted from: http://fas.org/sgp/crs/misc/R43341.pdf)
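
The arithmetic behind that erosion is simple compounding: a budget held flat in nominal dollars loses purchasing power every year that research costs rise. The sketch below illustrates the mechanism only; the starting figure and inflation rate are hypothetical placeholders, not actual NIH appropriations (see Figure 1 and the CRS report for the real numbers).

```python
# Illustration of how inflation erodes a flat nominal budget.
# The starting budget and inflation rate are hypothetical placeholders,
# NOT actual NIH appropriations data.

nominal_budget = 30.0   # billions of dollars, held flat (hypothetical)
inflation_rate = 0.03   # assumed 3% annual growth in research costs

for year in range(2003, 2016):
    elapsed = year - 2003
    real_value = nominal_budget / (1 + inflation_rate) ** elapsed
    print(f"{year}: ${nominal_budget:.1f}B nominal = ${real_value:.1f}B in 2003 dollars")
```

Even at that modest assumed inflation rate, a flat budget loses roughly 30 percent of its purchasing power over a dozen years.
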
Currently, support directed to the NIH amounts to a mere 0.79% of federal discretionary spending. The bulk of this funding goes directly to extramural research, providing salaries for over 300,000 scientists across 2,500 universities. Because the majority of biomedical researchers rely on government funding, it behooves these unique constituents to rally for sustainable support from Congress. Along with other scientists across the country who are becoming more politically involved, the Penn Science Policy Group arranged a Congressional Visit Day (CVD) in which a small group of postdoctoral researchers and graduate students visited Capitol Hill on March 18th to remind the House and Senate that scientific research is a cornerstone of the US economy and to alert them to the impact of this funding erosion on young researchers.

Led by postdocs Shaun O’Brien and Caleph Wilson, the group partnered with the National Science Policy Group (NSPG), a coalition of young scientists from across the nation, to make over 60 visits to Congressional staff. NSPG leaders from other parts of the country, Alison Leaf (UCSF) and Sam Brinton (Third Way, Washington, DC), arranged for a productive experience in which newcomers to the Hill were trained for their meetings. The Science Coalition (TSC) provided advice on how to communicate effectively with politicians: keep the message clear and simple, provide evidence of how science benefits society and the economy, and tell personal stories of how budget cuts are affecting your research. TSC also stressed that face-to-face meetings with Congress are the most effective way to communicate our needs as scientists. With President Obama’s FY16 budget request announced in February, the House and Senate are in the midst of appropriations season, so there was no better time to remind them of just how important this funding mechanism is.

Meetings with the offices of Pennsylvania senators Pat Toomey and Bob Casey and representatives Glenn Thompson and Chaka Fattah were key goals, but the group also reached out to offices from the states where the young scientists were born and raised, everywhere from Delaware to California. Each meeting was fifteen to twenty minutes of rapid discussion of the importance of federally funded basic research. By the end of the day, it was clear that bipartisan support for the NIH exists at the government’s core, but the hotly debated question of how to fund the system has stalled its growth.

Shaun O’Brien recaps a disappointing experience making a basic request of Senator Toomey’s office. Sen. Toomey has slowly shifted his stance to be more supportive of the NIH, so meeting with his office was an important step in reaching Republicans:

We mentioned the "Dear Colleague" letter by Sen. Bob Casey (D-PA) and Sen. Richard Burr (R-NC) that is asking budget appropriators to "give strong financial support for the NIH in the FY2016 budget". Sen. Toomey didn't sign onto it last year, especially as that letter asked for an increase in NIH funding to $31-32 billion and would have violated the sequester caps, which Sen. Toomey paints as a necessary evil to keep Washington spending in check. I asked the staffer for his thoughts on this year's letter, especially as it has no specific dollar figure and Sen. Toomey has stated his support for basic science research. The staffer said he would pass it along to Sen. Toomey and let him know about this letter.

Unfortunately, three weeks later, Sen. Toomey missed an opportunity to show his "newfound" support for science research as he declined to sign a letter that essentially supports the mission of the NIH.  I plan to call his office and see if I can get an explanation for why he failed to support this letter, especially as I thought it wouldn't have any political liability for him to sign.

Working with Congressman Chaka Fattah balanced the disappointment from Toomey with a spark of optimism. Rep. Fattah, a strong science supporter and member of the House Appropriations Committee, encourages scientists to use Twitter (tweet @chakafattah) to keep him posted on recent success stories and breakthroughs; these bits of information are useful tools for arguing the importance of basic research to other politicians.

Keeping those lines of communication strong is the most valuable role we can play away from the lab. Walking through the Russell Senate Office Building, a glimpse of John McCain waiting for the elevator made the day feel surreal, far removed from the normalcy of another day at the bench. The reality, though, is that our future as productive scientists depends heavily on public opinion and, in turn, government support. Outreach to the public and to politicians is a duty shared by all scientists, whether through trips to the Hill or simple dinner conversations with our non-scientist friends.


Participants represented their professional societies and/or the National Science Policy Group, independent of their university affiliations. Support for the training and experience was provided by both the American Academy of Arts & Sciences (Cambridge, MA) and the American Association for the Advancement of Science (AAAS; Washington, DC).

Friday, April 3, 2015

Dr. Sarah Cavanaugh discusses biomedical research in her talk, "Homo sapiens: the ideal animal model"

Biology and preclinical medicine rely heavily upon research in animal models such as rodents, dogs, and chimps. But how translatable are the findings from these animal models to humans? And what alternative systems are being developed to provide more applicable results while reducing the number of research animals?
Image courtesy of PCRM


Last Thursday, PSPG invited Dr. Sarah Cavanaugh from the Physicians Committee for Responsible Medicine to discuss these issues. In her talk, entitled “Homo sapiens: the ideal animal model,” she emphasized that we are not particularly good at translating results from animal models to human patients. Data from the FDA indicate that 90% of drugs that perform well in animal studies fail when tested in clinical trials. It may seem obvious, but it is important to point out that the biology of mice is not identical to human biology. Published studies have demonstrated important differences in the pathology of inflammation, diabetes, cancer, Alzheimer’s disease, and heart disease.

All scientists understand that model systems have limitations, yet those systems have played an integral role in shaping our understanding of biology. But is it possible to avoid experimental models entirely and just study human biology?

The ethics of studying biology in people are different from those of studying biology in animals. The “do no harm” code of medical ethics dictates that we cannot perform experiments that have no conceivable benefit for the patient, so unnecessarily invasive procedures cannot be undertaken just to obtain data. This limitation restricts the amount of information we can obtain about human biology compared to animal biology. Nevertheless, medical researchers do uncover important findings from human populations. Dr. Cavanaugh points out that studies of risk factors (both genetic and environmental) and biomarkers are important for understanding diseases, and that non-invasive brain imaging has increased our understanding of neurodegenerative diseases like Alzheimer’s.

Yet these are all correlative measures. They show that factor X correlates with a higher risk of a certain disease. But in order to develop effective therapies, we need to understand cause and effect relationships - in other words, the mechanism. To uncover mechanisms researchers need to be able to perturb the system and measure physiological changes or observe how a disease progresses. Performing these studies in humans is often hard, impossible, or unethical. For that reason, researchers turn to model systems in order to properly control experimental variables to understand biological mechanisms. We have learned a great deal about biology from animal models, but moving forward, can we develop models that better reflect human biology and pathology?

Using human post-mortem samples and stem cell lines is one way to avoid species differences between animals and humans, but studying isolated cells in culture does not reflect the complex systems-level biology of a living organism. To tackle this problem, researchers have started designing ways to model 3D human organs in vitro, such as the brain-on-a-chip system. Researchers also have envisioned using chips to model a functioning body using 10 interconnected tissues representing organs such as the heart, lungs, skin, kidneys, and liver.
Image from: http://nanoscience.ucf.edu/hickman/bodyonachip.php

Dr. Cavanaugh explained that toxicology is currently a field where chip-based screening shows promise. It makes sense that organs-on-a-chip technology could be useful for screening drug compounds before testing in animals. Chip-screening could filter out many molecules with toxic effects, thus reducing the number of compounds that are tested in animals before being investigated clinically.

A major counterpoint raised during the discussion was whether replacing animal models with human organs on a chip simply swaps one imperfect, contrived model for another. Every model has limitations, so outside of directly testing therapeutics in humans, it is unlikely that we will ever create a system that perfectly reflects the biological response in patients. The question then becomes: which models are more accurate? While ample data show the limitations of animal models, very little evidence is available showing that animal-free alternatives perform better than existing animal models. Dr. Cavanaugh argues, however, that there is an opportunity to develop these models instead of continuing to pursue research in flawed animal models. “I don’t advocate that we end all animal research right now, rather that we invest in finding alternatives to replace the use of animals with technologies that are more relevant to human biology.”

This topic can ignite a passionate debate within the medical research community. Animal models are the status quo in research, and they are the gatekeepers in bench-to-bedside translation of scientific discoveries into therapeutics. In the absence of any shift in ethical standards for research, replacing animal models with alternatives will require mountains of strong data demonstrating better predictive performance. The incentives exist, though. Drug companies spend roughly $2.6 billion to gain market approval for a new prescription drug. Taking a drug into human trials and watching it fail is a huge waste of money. If researchers could develop new models for testing drugs that were more reliable than animal models at predicting efficacy in humans, it’s safe to say that Big Pharma would be interested. Very interested.


-Mike Allegrezza

"Wistar rat" by Janet Stephens via Wikimedia Commons 

Tuesday, January 6, 2015

Publish or Perish: an old system adapting to the digital era

By Annie Chen and Michael Allegrezza

When scientific publishing was developed in the 19th century, it was designed to overcome barriers that prevented scientists from disseminating their research findings efficiently. It was not feasible for individual scientists to arrange for typesetting, peer review, printing, and shipping of their results to every researcher in their field. As payment for these services, researchers transferred the exclusive copyright for their material to the publisher, who would then charge subscribers access fees. To limit the printing costs of this system, journals published only the articles with the most significant findings. Now, nearly 200 years later, we have computers, word processors, and the Internet. Information sharing has become easier than ever before, and it is nearly instantaneous. But the prevailing model of subscription-based publishing remains tethered to its pre-digital origins, and for the most part publishers have used the Internet within this model rather than as a tool to create a new and better system for sharing research.

Figure 1. Trend lines show an annual increase of 6.7% for serials expenditures vs. 2.9% for the Consumer Price Index over the period 1986-2010, relative to 1986 prices.
In theory, digitization should have decreased the cost of communicating science (authors can perform many of the typesetting functions themselves, articles can be uploaded online instead of printed and shipped, etc.). In practice, however, the price of journals has actually increased. Statistics from the Association of Research Libraries show that spending on serials increased 6.7% per year between 1986 and 2011, while inflation as measured by the US Consumer Price Index rose only 2.9% per year over the same period (Figure 1).1 Shawn Martin, a Penn Scholarly Communication Librarian, explained, “Penn pays at least twice for one article, but can pay up to 7 or more times for the same content,” in the process of hiring researchers to create the content, buying subscriptions from journals, and paying for reuse rights. To be fair, the transition from print to digital media has been costly for publishers, who have had to invest in infrastructure for digital availability while still producing print journals. Many publishers argue that while journal prices may have increased, the price per reader has actually decreased because far more readers can now access articles online.
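
That gap compounds dramatically over a quarter century. Here is a minimal sketch of the arithmetic, using only the two growth rates quoted above (a rough illustration, not the libraries’ actual expenditure data):

```python
# Compound the two annual growth rates quoted above (6.7% for serials,
# 2.9% for the CPI) over 1986-2010 to see how far they diverge.

years = 2010 - 1986  # 24 years

serials_index = 1.067 ** years  # serials expenditures, indexed to 1986 = 1.0
cpi_index = 1.029 ** years      # Consumer Price Index, indexed to 1986 = 1.0

print(f"Serials expenditures: {serials_index:.1f}x their 1986 level")  # ~4.7x
print(f"Consumer Price Index: {cpi_index:.1f}x its 1986 level")        # ~2.0x
print(f"Increase in real terms: {serials_index / cpi_index:.1f}x")     # ~2.4x
```

On those rates alone, serials spending ends up roughly 4.7 times its 1986 level while general prices roughly double, so libraries are paying well over twice as much in real terms.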

Regardless of whether the increase in journal prices was justified, a new model for academic publishing emerged in opposition in the 1990s: open access (OA). There are two routes to open access: Gold OA, in which the publisher makes the article freely accessible, and Green OA, in which the author self-archives the work. A few years ago, Laakso et al. conducted a quantitative analysis of the annual publication volumes of direct OA journals from 1993 to 2009 and found that the development of open access could be described in three phases: Pioneering (1993-1999), Innovation (2000-2004), and Consolidation (2005-2009).2 During the pioneering years, year-to-year growth in open access articles and journals was high, but the total numbers were still relatively small. OA publishing then bloomed considerably, growing from 19,500 articles and 740 journals in 2000 to 191,850 articles and 4,769 journals in 2009. During the innovation years, new business models emerged. For example, BioMed Central, later purchased by Springer in 2008, introduced the author-side publication charge. In 2004, some subscription-based journals began using a hybrid model, such as Springer’s Open Choice program, which gave authors the option of paying a fee to make their article openly available. During the consolidation phase, year-to-year growth in articles slowed from previous years but remained high, at about 20%.
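
For a sense of scale, those endpoints imply a steep compound annual growth rate across the whole 2000-2009 span. The sketch below is a back-of-the-envelope calculation from the two figures quoted above, not a number reported by Laakso et al.:

```python
# Back-of-the-envelope compound annual growth rate (CAGR) implied by the
# 2000 and 2009 open access counts quoted above.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

span = 2009 - 2000  # 9 years

print(f"OA articles: ~{cagr(19_500, 191_850, span):.0%} per year")  # roughly 29%
print(f"OA journals: ~{cagr(740, 4_769, span):.0%} per year")       # roughly 23%
```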

The introduction of open access journals has sparked fierce and passionate debate among scientists. Proponents of open access believe scientific research should be available to everyone, anywhere in the world. Currently, subscription fees prevent many people from accessing the information they need. With open access, students and professors in low- and middle-income countries, health care professionals in resource-limited settings, and the general public would gain access to essential resources. For instance, Elizabeth Lowenthal, MD, at the Penn Center for AIDS Research, recently published a paper in PLoS One analyzing variables that influence adherence to antiretroviral drugs in HIV+ adolescents living in Botswana. She chose to publish open access because “the article will be of most direct use to clinicians working in Botswana and I wanted to make sure that it would be easy for them to access it.” Open access also provides re-use rights and may facilitate a more rapid exchange of ideas and greater interaction among scientists to generate new scientific information.

However, there may also be downsides to increased access. Open access may increase the number of articles that people have to sift through to find important studies.3 Furthermore, people who do not know how to read scientific papers critically may be misled by articles with falsified data or flawed experiments. While such papers often get retracted later, they can undermine the public’s confidence in scientists and medicine. Wakefield’s (retracted) article linking vaccines to autism, for example, may have contributed to the rise of the anti-vaccine movement in the US.4 In addition, many open access journals require authors to pay a fee to offset the cost of publication, and some operators have taken advantage of this payment system to make a profit through predatory journals (a list of predatory OA publishers can be found here: http://scholarlyoa.com/publishers/). Still, the expansion of open access from 1993 to the present suggests that it can be a sustainable alternative to the traditional subscription-based model of academic publishing.

In addition to facilitating access to scientific articles, the Internet has created opportunities to improve the peer review process. Peer review was designed to evaluate the technical merit of a paper and to select papers that make significant contributions to a field. Scientists who support the traditional publishing model argue that peer review at some open access journals may not be as rigorous, and that this could lead to the emergence of a “Wild West” in academic publishing. Last year, reporter John Bohannon of Science magazine sent a flawed paper to 304 open access journals; of the 255 journals that responded, 157 accepted the paper, suggesting little or no peer review at these journals.5 However, even high-impact journals publish papers with flawed experiments.6 Michael Eisen, co-founder of PLoS, wrote, “While they pocket our billions, with elegant sleight of hand, they get us to ignore the fact that crappy papers routinely get into high-profile journals simply because they deal with sexy topics…. Every time they publish because it is sexy, and not because it is right, science is distorted. It distorts research. It distorts funding. And it often distorts public policy.”7 Nature, for example, published two articles last year about acid-bath stem cell induction that were later retracted due to data manipulation. However, according to Randy Schekman, editor-in-chief of eLife, “these papers will generate thousands of citations for Nature, so they will profit from those papers even if they are retracted.”8

With digital communication, peer review for a manuscript could shift from a rigid gate controlled by three or four people, who might not even be active scientists, into a more dynamic, transparent, and ongoing process with feedback from thousands of scientists. Various platforms with these capabilities already exist, including ResearchGate9 and PubMed Commons.10 Some open access journals are using different strategies to address these issues in peer review. eLife, for example, employs a fast, streamlined peer review process to decrease the time from submission to publication while maintaining high-quality science. PLoS One, one of the journals published by the Public Library of Science, judges articles on technical merit alone, not on novelty.

We polled a few scientists at Penn who had recently published articles for their thoughts on open access and peer review. Most did not experience a difference in the peer review process at an open access journal compared with a non-open access journal. The exception was eLife, where reviewers’ comments were prompt and the communication between reviewers and editors was “a step in the right direction,” according to Amita Sehgal, PhD. To improve the peer review process, some suggested a blind process to help eliminate potential bias toward well-known labs or against lesser-known labs.

The digital revolution is changing the culture of academic publishing, albeit slowly. In 2009, the NIH updated its Public Access Policy to require that any published research funded by NIH grants be made available on PubMed Central within 12 months of publication.11 Just last month, the publisher Macmillan announced that all research papers in Nature and its sister journals will be made free to access online in a read-only format that can be annotated but not copied, printed, or downloaded. However, only journal subscribers and some media outlets will be able to share links to the free full-text, read-only versions.12 Critics such as Michael Eisen13 and John Wilbanks14 have labeled this change a public relations ploy that appeals to demands for openness without actually increasing access. It will be interesting to see whether other publishers follow this trend.

Scientific communication has yet to reap the full benefits in efficiency made possible by the Internet. The current system is still less than ideal at furthering ideas and research with minimal waste of resources. But this generation of young researchers is more optimistic and may revolutionize scientific publishing as we know it. “I think [open access is] the future for all scientific publications,” says Bo Li, a postdoc at Penn. “I hope all research articles will be freely accessible to everyone in the world.”

A companion opinion article by Penn PhD student Brian S. Cole can be found here.

This article appeared in the Penn Science Policy January 2015 newsletter



Annie Chen
Michael Allegrezza

1. Ware M, Mabe M. (2012) The STM report. http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf
2. Laakso M, Welling P, Bukvova H, et al. (2011) The development of open access journal publishing from 1993 to 2009. PLoS ONE.
3. Hannay T. (2014) Stop the deluge of scientific research. The Guardian: Higher Education Network Blog. http://www.theguardian.com/higher-education-network/blog/2014/aug/05/why-we-should-publish-less-scientific-research
4. Wakefield AJ, Murch SH, Anthony A, Linnell, et al. (1998) Ileal lymphoid nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children [retracted]. Lancet. 351:637-41.
5. Bohannon J. (2013) Who’s afraid of peer review? Science. 342(6154):60-65. doi: 10.1126/science.342.6154.60
6. Wolfe-Simon F, Switzer Blum J, Kulp TR, et al. (2011) A bacterium that can grow by using arsenic instead of phosphorus. Science. 332(6034):1163-6. doi: 10.1126/science.1197258
7. Eisen M. (2013) I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals. it is NOT junk. http://www.michaeleisen.org/blog/?p=1439
8. (2014) Episode 12. The eLife podcast. http://elifesciences.org/podcast/episode12
9. ResearchGate. http://www.researchgate.net/
10. PubMed Commons. http://www.ncbi.nlm.nih.gov/pubmedcommons/
11. NIH Public Access Policy Details. http://publicaccess.nih.gov/policy.htm
12. Baynes G, Hulme L, MacDonald S. (2014) Articles on nature.com to be made widely available to read and share to support collaborative research. Nature. http://www.nature.com/press_releases/share-nature-content.html
13. Eisen M. (2014) Is Nature’s “free to view” a magnanimous gesture or a cynical ploy? it is NOT junk. http://www.michaeleisen.org/blog/?p=1668
14. Van Noorden R. (2014) Nature promotes read-only sharing by subscribers. Nature. http://www.nature.com/news/nature-promotes-read-only-sharing-by-subscribers-1.16460