Concerned About Net Neutrality? Contact Congress

I’m sure you have all heard that the FCC is not taking (or looking at) any public comments regarding net neutrality. However, you can still voice your opinion by contacting your members of Congress. It is particularly important to contact them if your member of Congress sits on one of the committees that oversee the FCC. (Follow the links highlighted below, click “About,” and find out whether your state and member of Congress are listed.)

(I am re-posting what I was sent from the MLA/AAHSL Legislative Task Force)

Through the ALA Washington Office we have learned that last week, Federal Communications Commission (FCC) Chairman Pai announced plans to dismantle network neutrality protections approved by the FCC in 2015 and affirmed by the federal appeals court in 2016. The new draft order is scheduled to be voted on by the five FCC commissioners on December 14th.

Why Net Neutrality Matters:

MLA and AAHSL support the net neutrality protections approved by the Federal Communications Commission in 2015 and affirmed by the federal appeals court in 2016. Net neutrality is critical to libraries, library patrons and the public.

Health sciences libraries require an open internet to provide

  • unencumbered access to the National Library of Medicine’s (NLM) almost 300 databases that support health care, education, and research; and
  • Internet access to images that support telemedicine.

The public requires Internet access without restrictions and barriers to access consumer health information.

Libraries depend on the principles of net neutrality which allow them to create and make available on their websites content that supports educational opportunities online worldwide and to provide access to datasets to promote research and collaboration.

What You Can Do (before December 13th):

As the ALA Washington Office reports, right now the FCC is not accepting public comments (that may come later), but strong disapproval from members of Congress (especially those who serve on committees with oversight of the FCC) could force a pause in the December 14 vote that would derail net neutrality. Make your voice heard now by emailing your members of Congress (www.House.gov and www.Senate.gov) to support net neutrality protections.

Links below are provided to the following House and Senate Committees and Subcommittees with jurisdiction over the FCC.

For talking points see:

“MLA/AAHSL Comments to the Federal Communications Commission re: Restoring Internet Freedom,” Docket 17-108 (July 14, 2017). Feel free to personalize your letter by addressing the impact of this potential rollback on your libraries and users.

Here is another good (non-medical) video explaining why net neutrality is important: https://trib.al/cfpx4Oa

Systematic Review Search Strategy Development: (Very Nearly) A Thing of the Past?

A guest post by Rachel Pinotti, MLIS, AHIP

Recently, a faculty member sent me a copy of a June 2017 editorial published in Annals of Internal Medicine entitled Computer-Aided Systematic Review Screening Comes of Age along with the article which it accompanied.  The editorial argues, in short, that machine learning algorithms generate superior results to human-designed search strategies.  It asks (and answers), “Is it time to abandon the dogma that no stone be left unturned when conducting literature searches for systematic reviews? We believe so, because it has a deleterious effect on the number and timeliness of updates and, ultimately, patient health.” (Hemens & Iorio, 2017)

As a librarian who conducts, consults on, and teaches systematic review searching, I found this unleashed a flood of thoughts and questions.

On a philosophical level, the authors’ thesis raised a real tension that I feel with regard to so many topics I teach: the tension between teaching students the way things are now vs. the way they very likely will be in the near-to-medium-term future. As of now, I don’t think GLMnet and GBM, the machine-learning algorithms utilized in the original article that the editorial accompanies (Shekelle, Shetty, Newberry, Maglione, & Motala, 2017), are widely used for systematic review searching, but they quite possibly may be in three to seven years’ time (or less). Are students better off learning to design and execute comprehensive search strategies, a skill that will serve them in the immediate term and perhaps a few years hence, or learning how to use GLMnet and GBM, tools that may come into wide use a few years from now? The answer is probably that they are best off learning both. Unfortunately, I don’t know of anyone within my institution who could teach the current cohort of students these new tools. (Maybe such people exist and I’m not familiar with them, maybe they don’t exist, or maybe they exist but exercise their skills exclusively for research, not teaching purposes.)

Even once these tools come into wide use, I wonder if teaching students to design and execute comprehensive search strategies is a bit like teaching them long division: not something they are likely to use frequently, or maybe ever, in their day-to-day work, but you need to learn long division to understand the concept of division, so you understand what is happening when you type 48756/38 into a calculator (or enter your initial search terms into a machine-learning search tool).

On a practical level, a big concern with machine learning algorithms is whether they can effectively handle multiple information sources and grey literature. Shekelle et al. indicate, “Although initial results were encouraging, these methods required fully indexed PubMed citations.” The algorithms could likely be adapted for Embase and other databases, though this might require permission from database providers. Grey literature (conference abstracts, theses, etc.) often does not have complete abstracts and, almost by definition, is not fully indexed. Excluding grey literature from a systematic review or meta-analysis introduces a real risk that publication bias will produce a biased result, as documented by McAuley, Pham, Tugwell, and Moher (2000) and others.

I’ve always felt that some of the best practices recommended in systematic review searching, such as the recommendation to use both index terms and keywords, are redundant and unhelpful. So I don’t doubt (and on the contrary am quite receptive to) the argument that current best practices in systematic review searching favor sensitivity far too greatly over precision. Some filters (e.g., language) are much more effective than others (e.g., age, gender). I absolutely think that filters, especially Clinical Queries, are useful when you need to find the few most relevant articles on a topic, but I do not believe that they are yet able to unearth all the studies that would have a bearing on a topic, as suggested by Hemens.

The editorial asserts, “Simple Boolean filters applied when searching increased precision 50%…while identifying 95% of the studies…” (Hemens & Iorio, 2017), with an underlying, unstated assumption that these results would hold true across fields and topics. Thinking of this in evidence-based medicine terms, these articles present case-series-level evidence for the effectiveness of machine learning search algorithms. Before advocating that these tools be widely adopted as the standard of practice, I would like to see larger-scale studies (perhaps crossover studies in which the same review is conducted using both evidence found through machine learning and evidence found through traditional search methods) indicating that machine learning search tools are at least as effective as traditional search methods.
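To make those two numbers concrete, here is a quick sketch of how sensitivity and precision are computed for a screening filter. The counts are hypothetical, not figures from the studies discussed:

```python
def sensitivity(true_positives, false_negatives):
    """Proportion of all relevant studies that the search actually finds."""
    return true_positives / (true_positives + false_negatives)

def precision(true_positives, false_positives):
    """Proportion of retrieved records that are actually relevant."""
    return true_positives / (true_positives + false_positives)

# Hypothetical counts: 100 relevant studies exist; a filtered search
# retrieves 1,000 records, 95 of which are relevant.
tp, fn, fp = 95, 5, 905
print(f"Sensitivity: {sensitivity(tp, fn):.0%}")   # 95%
print(f"Precision:   {precision(tp, fp):.1%}")     # 9.5%
```

The trade-off the editorial describes is visible here: you can push sensitivity toward 100% by retrieving more records, but precision drops, and human screeners pay the price.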

The ideal outcome would be getting to a point where machine learning is used to effectively search traditionally published material, allowing reviewers to focus their energy on searching grey literature. However, until these methods have been validated and can be widely used and taught, current best practices remain just that: the best methods for unearthing all evidence that may have a bearing on a given research topic.

Rachel Pinotti, MLIS, AHIP
Assistant Library Director, Education & Research Services
Levy Library, Icahn School of Medicine at Mount Sinai
Box 1102 – One Gustave L. Levy Pl.
New York, NY 10029-6574

Email: rachel[dot]pinotti[atsign]mssm[dot]edu
Phone: available via MLA members list

Follow us on Twitter @Levy_Library to learn about Levy Library events and initiatives.

References:

Hemens, B. J., & Iorio, A. (2017). Computer-aided systematic review screening comes of age. Annals of Internal Medicine, 167(3), 210-211. doi:10.7326/M17-1295

McAuley, L., Pham, B., Tugwell, P., & Moher, D. (2000). Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? The Lancet, 356(9237), 1228-1231. doi:10.1016/S0140-6736(00)02786-0

Shekelle, P. G., Shetty, K., Newberry, S., Maglione, M., & Motala, A. (2017). Machine learning versus standard techniques for updating searches for systematic reviews: A diagnostic accuracy study. Annals of Internal Medicine, 167(3), 213-215. doi:10.7326/L17-0124

PubMed’s Backdoor Makes Me Question Quality

OK, I am dancing in my seat saying, “I told you there is a problem with PMC as the backdoor to PubMed.” I know it is gloating, but librarians get so little to gloat about, so forgive me.

Back in 2011 I wrote the post Backdoor Method to Getting Articles into PubMed: Is Indexing So Important? At the time I was more concerned about the findability and lack of indexing of PMC articles that found their way into PubMed. It wasn’t until a few years later, with the popularity of Beall’s List, that I began to think about the quality of the PMC articles now in PubMed. In a discussion on the Medlib-l listserv regarding Beall’s List, I mentioned the PMC backdoor again. As I said, there are “some researchers who see no distinction between PMC submitted journal articles from non-indexed journals and ones that are indexed in MEDLINE. To them it is in PubMed and that is good enough.”

I am so happy that Kent Anderson in the Scholarly Kitchen wrote the post “A Confusion of Journals – What is PubMed Now?” His post, along with articles and communications published in the peer-reviewed journals Neuroscience, The Lancet, and Archives of Physical Medicine and Rehabilitation, mentions the ever-growing problem of predatory journals within the profession. Neuroscience and The Lancet specifically mention the problem within PubMed.

Unfortunately, many doctors and researchers believe what Jocalyn Clark and Richard Smith write in the editorial piece in The BMJ, “Articles in predatory journals, although publicly available through internet searches, are not indexed in reputable library systems. The articles are not discoverable through standard searches, and experienced readers and systematic reviewers will be wary of citing anything from these journals. The research is thus lost. ”

That is a partially true statement. The articles in PubMed Central are NOT indexed in MEDLINE; HOWEVER, they are totally discoverable through standard searches in PubMed. The “research” from a predatory journal is not lost. It is very discoverable in PubMed.

Personally, I think this is a mess. NLM should not have given its repository the name PubMed Central; the name alone is confusing and muddies the waters. But that ship has sailed. NLM needs to put some serious authority control on PMC quickly. Not only do articles in predatory journals water down what is already indexed in MEDLINE and within PubMed, but they could also create significant problems for the treatment of patients and for researchers. Fake news seems to be the phrase of the day recently. More than ever, people need reliable resources where they can find credible information. Allowing articles from predatory journals into PubMed through PMC creates a credibility problem for PubMed. It is basically like seeing Gwyneth Paltrow’s Goop medical advice published on WebMD.

I really hope Kent’s post and the other researchers’ articles get the attention they deserve. I hope they lead others to look at and question the quality of articles allowed into PubMed by virtue of PMC. Then perhaps NLM will look at quality control methods for PMC. If they don’t, they run the risk of ruining PubMed. We already use another (pay) database when we must assess the quality of articles.

Unsustainable Costs of Library Resources

Sometimes I feel like medical librarians have been talking to brick walls.  Either that, or we are talking to bobble heads who don’t really listen to us but nod their heads in agreement.

I get a weekly email summarizing the healthcare industry. It is broken into local and national information, and it is often an interesting quick read. Today I read the article “US medical expenditures on the rise, except for primary and home health.” The largest expenditures were attributed to prescribed medications, specialty physicians, visits to the emergency department, and inpatient hospitalizations. While that was interesting, what really caught my eye were the sidebar links to the top 40 articles of the past 6 months.


Besides the one article about Trump’s budget, the first nine articles listed were all about hospitals losing money or going bankrupt.

While Medicaid enrollment has increased, its reimbursement is significantly less than that of private insurers. Then you have the increased costs of providing care. “In 2016, Cleveland Clinic’s expenses totaled $7.3 billion, up 19 percent from $6.1 billion in the year prior. The increase was largely attributable to growth in pharmaceutical, labor and supplies costs, which climbed 23 percent, 19 percent and 13 percent, respectively, year over year.” The Cleveland Clinic is not alone. Nationally, hospitals’ operating margins have shrunk due to smaller reimbursement, regulatory uncertainty, and new alternative payment models.

Yet medical library resource vendors operate business as usual, increasing the costs of their products to clients (libraries) that are viewed as expensive cost centers in an industry that is losing money. When librarians complain to the vendors known for price gouging, the vendors’ answer is to have the hospital shift the cost of the product out of the library budget and into IT, operations, or another department with more money. It doesn’t take a financial analyst to know THAT ISN’T A SUSTAINABLE SOLUTION!!! The hospital still pays for that product at a rate far above inflation and far above its reimbursement.

Librarians have been telling vendors for years that their large price increases are unsustainable. You could say we are complicit because we suck it up and pay for the price increases by finding the money through cutting book budgets, dropping other products, etc. However, we are between a rock, a hard place, and an abyss. The doctors need the price-gouging resources so they can practice and treat patients. The vendors justify their price gouging by saying they actually “save the institution much more” because the doctors use it. Hospital administrators end up cutting budgets overall to get a handle on increasing costs.

It isn’t a pretty picture, and it will get worse. Are the vendors getting the same healthcare industry news I am getting? Are they just putting their heads in the sand? Is it like the home loan industry, where the banks knew something was amiss but rode that horse until it died a nationally crippling death?

Just wondering…..


Research Impact Part 2: Whole New System

This is a 3 part blog series.
Research Impact Part 1: Moving Away from Tracking Authors’ Articles

I am not going to mention the company we went with. The primary reason is that I am trying to write this as broadly as possible so that it applies to anybody considering this type of endeavor, not the nitty gritty of a specific piece of software. While there is always room for improvement, I am happy with what we chose, and I am very happy with the support we have received since implementing it. If you are interested in learning more about the specific products we chose, email me and I will answer those questions.

As I mentioned in Part 1, there are a lot of products out there: Converis, InCites, Profiles, Pure, Plum, etc. After looking at several products, we ended up choosing two products from the same vendor. The two products allowed us to upload HR data so that articles would be automatically sorted and indexed by author AND department, and they also included article-level metrics that were more informative than just the journal impact factor.

There were a few major features that our system had to have. I recommend creating your list of requirements BEFORE you start contacting vendors, because it is easy to get caught up in all of the cool things their products can do, which may or may not be compatible with your needs. For example, you don’t want to get excited about a new dishwasher with a wash cycle that gets your dishes so clean it could wash the white off of them, when that model only comes with a large handle that blocks your silverware drawer, making it necessary to always open the dishwasher before opening the drawer (or to completely redesign your kitchen). So have your must-haves ahead of time.

Our must haves:

Automation – That sounds obvious, but some systems are more automated than others. All require some human intervention even after implementation. Think about how much time you want to (or can) spend on the system once it is all up and running.

Institutional organization structure – It must be able to organize published articles from all of our employees by department and institute. (An institute has several departments within it.) This was a requirement because Administration wants to know the authorial output of each department and institute at annual performance review time. So we need to be able to click on Urology and see the papers written by people in Urology. Do you need to track secondary appointments? Be careful; that can be a long, dark rabbit hole to go down.

Impact – While almost all of the products we looked at had some type of article impact number or indicator, we needed to ask Administration which one THEY wanted and felt the most comfortable with. This is VERY important. There are about as many methods of measurement as there are digits in pi. Our Administration is very traditional, so we had to look at products that used well-established metrics that have been around for many years and that our people were familiar with.

Things we didn’t need:

Repository system – Currently our institution has no interest in hosting a repository of the papers written by our authors. Obviously this could change, and if it does, it will require a fresh look at things.

Author submission – Authors are not reliable providers of the citations they publish; we had 20+ years of experience with this. Some authors don’t have the time to upload anything. Some authors add citations to their weight-loss article in Ladies’ Home Journal. Other authors have citations that have said “in press” for 5 years. Your data out is only as good as the data you put in, and we needed tight control over the data, so we didn’t want author submission. If it was a feature, it had to be something that could be turned off.

ORCID – That sounds odd; we actually need ORCID. Everybody needs ORCID. But until there is a mandate requiring authors to provide their ORCID iD upon publication, ORCID will just be something “nice to have.” Even in a heavy research and publishing institution, ORCID is still something of a novelty. We did not want something that was overly built on ORCID.

Panacea systems – Many of these products track everything under the sun: grants, funding, etc. There are systems that track the entire research life cycle, from the sparkle in a researcher’s eye to the mature cited paper and everything in between. Like many institutions, we have various systems (some homegrown) that track a lot of the things the “all in one” systems track. Unless you have buy-in across the institution to change every part of the research IT process, an all-in-one system may be overkill.

Lessons we learned:

HR or organizational data is messy – Unfortunately this is not unique to us. I have heard from people at several large institutions that this type of data is often not clean. What do I mean by that? Assuming HR will let you have a dump of all employees (they are often very reluctant to do this), you might discover that there is missing or duplicate information. You might find out that several people’s secondary appointment is an entire hospital (not a department). You might find that the HR data doesn’t include graduate students. We had to piece together our data from several institutional systems, and we created a Python script to strip, clean, and organize the data into the format that the vendor used.
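Every institution's cleanup script will look different, but as an illustration, a minimal sketch of the kind of stripping and de-duplication involved might look like this (the column names are hypothetical, not our actual HR schema):

```python
import csv
from io import StringIO

def clean_hr_rows(raw_csv):
    """De-duplicate employees and drop rows missing an ID or department.

    Assumes hypothetical columns: employee_id, name, department.
    """
    seen, cleaned = set(), []
    for row in csv.DictReader(StringIO(raw_csv)):
        emp_id = row["employee_id"].strip()
        dept = row["department"].strip()
        if not emp_id or not dept:   # incomplete record
            continue
        if emp_id in seen:           # duplicate record
            continue
        seen.add(emp_id)
        cleaned.append({"employee_id": emp_id,
                        "name": row["name"].strip(),
                        "department": dept})
    return cleaned

raw = """employee_id,name,department
001,Jane Smith,Urology
001,Jane Smith,Urology
002,Bob Jones,
003,Ann Lee,Cardiology
"""
print(clean_hr_rows(raw))  # keeps one Jane Smith row and the Ann Lee row
```

In practice the messy part is deciding what to do with the dropped rows: a row missing a department usually needs a human to chase down the right value, not silent deletion.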

***
Updated Paragraph 5/19/17

Regarding HR uploads… Think very carefully about whether you want your entire HR list of employees added into the system. That could be a lot of unnecessary data. Do you need or want people in environmental services, security, IT, etc. in your system? Do you only want doctors and researchers? What about nurses, PAs, medical students, residents, and allied health staff who publish? You need to sit down and figure out whom you want to track and how you are going to get that list of people.

*****

Comparisons – You have to be very mindful if you use one of these systems to compare your institution with another. If your administration is competitive and likes to see how it is ranked in its disciplines or overall, they are going to ask you to use the product to compare themselves against their peers. Most of the products we looked at could compare different institutions, disciplines, and people. But you must do this carefully. For example: you cannot compare a large research hospital system with a university hospital system. Even though they are peers, the university system includes many more researchers and disciplines, which can skew the results. And while you can compare disciplines or subjects, you cannot compare departments. One institution’s cardiovascular department may include pediatric cardiology while another’s may not.


Future of Biomedical Publishing

A medical librarian friend of mine agreed to answer questions for a week on NEJM Resident 360. It involved some future casting, and she emailed the medical librarian listserv to pick our brains. I sent her a few crystal-ball predictions; she thought they were good and said I should post them on the blog to further the discussion.

So, here is the question: What does the future of delivering medical literature and the latest research hold? <https://resident360.nejm.org/posts/6339>

Here are my thoughts:

  • We are going to see more movement in the area of open textbooks. Open access journals have started paving the way, and now, with more institutions really looking into curbing the costs of textbooks, you are going to see medical schools and hospitals go in that direction once the larger universities really start committing to the idea.
  • There are going to be some big changes to peer review and publishing editorial boards to make data, information, etc. more transparent. Currently we are living in a world that questions established medical facts as false. Part of the problem is that there wasn’t enough vetting, or ability to vet information, which allowed questionable, conflicted, or fake articles to be published. These questionable articles hurt the entire profession and cause people to distrust good information. It took over 10 years for Andrew Wakefield’s article to be officially retracted. We need to ask ourselves: would the autism-vs-vaccines controversy have become as big as it did if the data had been published immediately?
  • Reproducible data is getting more and more important. With NIH’s data sharing requirements and the increase in data repositories, the ability to reproduce research based on the data is extremely important. However, a recent Nature survey (http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970) found that 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own. Yet we must be able to sort the false leads from the latest discoveries.
  • Access will be more integrated. Currently you can do a search in PubMed and links to the full text are available, along with similar articles and citing articles. Electronic health records can integrate information resources such as UpToDate into the medical record. I think as we move forward the literature will be more “on demand” and more integrated into other resources.
  • We will see more medical literature delivered via social media in the next few years. The real growth is in customized, on-demand information retrieval. I can see something like Amazon’s Alexa or Google Home interfacing with medical journals’ tables of contents and articles to give you the latest updates, or syncing with your device or car so you can listen to an article while commuting: something like a BrowZine for the Echo or Home.

What do you see in your crystal ball?

Research Impact Part 1: Moving Away from Tracking Authors’ Articles

I have been toying with this post for quite a while, trying to think of a good way to present the information without it being too long. Well, the only way to do it is to break it into parts. I will link all of the parts together once I have finished writing and posting them.

Part 1: Moving Away from Tracking Authors’ Articles

Since before I was a medical librarian, my library has tracked every article, book, and book chapter that somebody within the institution authored. It started as a published list, then evolved into a database kept in citation management software.

In the beginning the work consisted of finding citations in PubMed, but over the years it grew to include other databases. Basically, the librarian in charge of finding the citations had MANY saved searches on PubMed, Scopus, Web of Science, etc. that looked for the institution’s name in the author address field. She would then download the citations, verify the authors, and add the name(s) of the department(s) that the institutional author(s) belonged to as a keyword field in the citation management software.
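Saved searches like these can also be scripted against NCBI's E-utilities API. As a rough sketch (the institution name variants here are hypothetical, and a real script would also page through results and respect NCBI's rate limits):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def affiliation_query(name_variants):
    """OR together institution name variants against PubMed's
    Affiliation search field ([ad])."""
    return " OR ".join(f'"{v}"[ad]' for v in name_variants)

# Hypothetical name variants; a real list would be much longer.
variants = ["Example Clinic", "Example Clinic Foundation"]
params = {"db": "pubmed",
          "term": affiliation_query(variants),
          "retmode": "json",
          "retmax": 100}
url = f"{EUTILS}?{urlencode(params)}"
print(url)  # fetch this URL (e.g., with urllib.request) to get matching PMIDs
```

The returned PMIDs still need the human steps described above: verifying which authors actually belong to the institution and assigning departments.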

Books and book chapters were always a booger. There is no PubMed for books, so finding those relied on a lot of web searching, notifications from our book suppliers, and word from the institutional authors themselves. That information was also uploaded into the citation database.

However, this practice was unsustainable for many reasons.

There are over 1,800 variations of my institution’s name in PubMed. From what I understand, PubMed does no authority work for institutions; whatever the author writes is what is used. This is a HUGE problem if you are searching for all of the citations written by people in your institution.

In 2015 we had over 43,000 employees, of which 3,200 were staff physicians, 10,965 were nurses, and over 1,500 were research personnel in labs. That’s a lot of citations to find. While the saved searches were automated, the rest of the process needed to be too. As the hospital system grew, finding, verifying, indexing (adding the department names), uploading citations, and maintenance became a full-time job for one librarian and part of the duties of 3-4 other people.

At some point during the years of compiling a list of all the articles, books, and book chapters our authors wrote, administration decided to try to rank the citations by comparing the departments’ lists of published articles. Because we were still hand-coding departments and loading the citations into a static reference database (like EndNote, RefMan, or RefWorks), there was no way to add a continually changing variable like h-index, impact factor, or other metrics. So they used the imprecise method of having somebody sort all of each department’s articles by the impact factor of the journal each was published in. (Yes, that sound you hear is librarian teeth gnashing.)
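To see why that sort is so crude, it helps to realize it amounts to nothing more than this (the journal names and impact factor values are hypothetical):

```python
# Hypothetical journal impact factors and a department's citation list.
impact_factors = {"NEJM": 72.4, "J Urol": 5.6, "Urol Pract": 1.1}

articles = [
    {"title": "Outcomes study", "journal": "J Urol"},
    {"title": "Case report", "journal": "Urol Pract"},
    {"title": "Landmark trial", "journal": "NEJM"},
]

# Each article simply inherits its journal's impact factor; articles
# in journals without a known impact factor sink to the bottom.
ranked = sorted(articles,
                key=lambda a: impact_factors.get(a["journal"], 0.0),
                reverse=True)
for a in ranked:
    print(impact_factors.get(a["journal"], 0.0), a["title"])
```

Nothing about the article itself (its citations, its actual influence) enters the ranking, which is exactly the weakness article-level metrics were later brought in to fix.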

As you can imagine, this always presented issues, specifically for disciplines whose top journals don’t have huge impact factors like NEJM or JAMA. Yet we were limited by our retrieval and storage capabilities and by administration’s (understandable) demand to quantify quality.

Something had to give, and it did. Our entire database was housed in RefMan. (Hey, RefMan was state of the art when we started down this path.) RefMan was no longer supported by its maker as of December 2016. We couldn’t migrate the data over to EndNote for two major reasons. One, all of the indexing we did to make sure we could sort people by department was done in the notes field, and we used other fields in RefMan for other “notes” and purposes. This was all done by a cataloger, so there was really good consistency, but the notes field and other fields did not map well between RefMan and EndNote, so there would have been EXTENSIVE cleanup for 20 years’ worth of data (not the best use of time or resources). Two, migrating everything to EndNote still did not solve our metrics problem: assigning a value to the published articles, which administration wanted. This forced our hand to make major changes: automating the entire citation collection procedure, including article-level metrics within the database, and making it more sustainable as our institution continues to grow.

Through our investigations we discovered several products out there: Converis, InCites, Profiles, Pure, Plum; the list is large (note: I don’t agree with everything listed on Wikipedia, but it gives you an idea). We ended up choosing two products from the same vendor. The two products allowed us to upload HR data so that articles would be automatically sorted and indexed by author AND department, and they also included article-level metrics that were more informative than just the journal impact factor.

Migrating was not an easy task. Part 2 will talk about the migration and the things we learned (and are still learning), and I think Part 3 will talk a bit about the cultural shift of moving away from a cumulative list of publications toward a list of publications’ impact. Stay tuned.


How Librarians Can Help Healthcare Professionals

I recently wrote a blog post for NEJM Resident 360 (NEJM subscription required) about how residents can better utilize librarian services. How to Take Advantage of Your Medical Librarian details a few of the common ways librarians can help doctors during their residency program and beyond. As a medical librarian, I know there are a lot of other things we can do for residents and other healthcare professionals. There are medical librarians who are offering different types of services, reaching out to provide information in creative ways, and doing things beyond the walls of the library that help our healthcare professionals in ways I have never dreamed.

So this post is sort of a “shared” post. I would like any medical librarian to comment below, tweet, or email* me the ways you help your healthcare professionals. Healthcare professionals can be anyone (doctor, student, nurse, researcher, social worker, pastoral care, hospital administration, etc.) who works with biomedical information, patients or families of patients, or who helps fellow healthcare professionals in their jobs.

I will kick things off:

  • Create online journal club portals for nurses, enabling them to get CEs
  • Acquire spiritual & religious resources from other libraries to help pastoral care
  • Track every article the institution’s researchers and authors publish in a journal with an impact factor, and provide those statistics, citation analysis, and collaboration impact to individuals and departments within the institution.
  • Help create treatment and care guidelines within the institution and with national associations.

Don’t leave me hanging… contact me and I will add your ideas to the bullet list.  If you have online documentation (research, a website, an article), give me the URL and I will link to it within the bullet point.
*email
krafty[atsign]kraftylibrarian[dotcom]
If you are a member of MLA use my email contact in the MLA directory

 

What is Real?

I have always been a scifi junkie, even before I knew that was a genre term.  I can remember as a grade school kid checking out all of the books about ghosts, the Loch Ness monster, Bigfoot, and the Bermuda Triangle.  I remember being disappointed when I had read everything on those topics in my public library.  As I got older I branched out into aliens and conspiracy theories.  When the X-Files came out, it was as if my public library bookshelf had added new titles and gone on TV.

All of those books, movies, and TV shows dealt with what was real and what was fake.  Is there really a Bigfoot, or is somebody walking around with really big fake feet?  Are there really ghosts, or is it a shadow, a lens flare, or an active imagination?  The question of what is real is always at the forefront.  You have the absolute believers, who I think, if they saw me early in the morning yelling at my kids to get ready for school, would suspect demonic possession (I’m scary in so many ways in the morning).  And you have the complete skeptics, who can’t explain, yet discount, a mother who had a bad feeling about her son — who, unknown to her at the time, was in a car accident in another city.

The question of what is real and fake has gone mainstream.  As I mentioned in my previous post, Masters of Illusion, you have more questionable types of “organizations” producing journals and holding conferences.  As CTVNews reported, you have questionable companies buying up reputable scientific journals.  For example, OMICS recently purchased several respected Canadian medical journals.  This is a cause for concern because the U.S. Federal Trade Commission filed a lawsuit against OMICS, “alleging that the company is ‘deceiving academics and researchers about the nature of its publications’ and falsely claiming that its journals follow rigorous peer-review protocols.”

We live in a Wag the Dog world, where technology and communication have made it difficult to tell the difference between real news and research and fabricated versions of both.  The mudslinging and fact-twisting of previous Presidential elections seem quaint compared to the outright fake news about both candidates that flooded people’s dashboards and “news streams.”  We have people questioning reputable news organizations and claiming they are either fake news agencies or report on fake news.

How does the Presidential election fake news mess impact the problem of fake scientific research and publishing?  It doesn’t directly, but it illustrates how fake information can easily be taken for real, and how real information can then be called fake by the opposition.  Take the example of vaccines causing autism.  There is not one reputable study showing that vaccines cause autism.  The whole debacle was caused by a researcher who faked his research (funded by solicitors seeking evidence against vaccines) and published in The Lancet, a reputable journal, claiming a link between vaccines and autism.  The Lancet has since retracted the article.  So you have faked research that was believed to be real.  Now that it has been proven fake, you still have people who believe the fake research, question the real research, and call the real research fake — so strongly that some call the discredited researcher a scapegoat.

The lines between real and fake research and real and fake news have become entangled, making me wonder how this can be fixed.  Jeffrey Beall’s list of predatory journals has disappeared.  Inside Higher Ed quotes a spokesperson for CU Denver (Beall’s employer) saying that “Beall made a ‘personal decision’ to take down the website.”  There is much speculation online as to why he made this personal decision.  Reputable publishers do their best to sniff out fake research.  However, when reputable publishers publish questionable journal issues funded by drug companies, and reputable publishers are bought by questionable companies, it paints a picture of an industry that has problems policing itself.  Who is left to determine what is real and what is fake?

Which leaves me to end this post with a quote from one of my favorite scifi movies, The Matrix.

Boy: Do not try and bend the spoon. That’s impossible. Instead only try to realize the truth.
Neo: What truth?
Boy: There is no spoon.

 

Masters of Illusion

One of my favorite scenes from the Simpsons is the one where bartender Moe sets up a fake upscale-looking entrance to his bar to try to attract more customers.  After entering the bar, an upscale customer says, “Hey, this isn’t faux dive. This is a dive,” to which Moe responds, “You’re a long way from home, yuppie-boy. I’ll start a tab.”

With just a few word changes, the same idea could be expressed about fake academic journals.  This has been a topic of discussion in the library world for the last few years.  The New York Times has an interesting and more mainstream article addressing the issue of fake journals and fake academic conferences.

In the article “Fake Academe, Looking Much Like the Real Thing,” Kevin Carey describes how he was contacted by phone about attending a conference in Philadelphia a mere 20 minutes after he entered his information on a website.  Carey also goes on to describe how many of these real-sounding “associations” can have shady if not outright illegal dealings and offer little to no academic rigor for paper submissions.

Unfortunately, we live in a time when what is a faux dive and what is a real dive is getting harder and harder to determine.  Lots of people have fallen prey to fake news by re-posting the stories on their social media accounts.  People need to do more investigating to determine legitimate sources of information (news or academic).  However, we also live in a time when people often feel too rushed to do that.  Everything must be done NOW!  Wait for an article to come via ILL?  Nope, just find another one that is available online instead.

Unlike falling prey to fake news on social media, falling prey to fake “scholarly” associations, publications, and the like might cost a researcher more time and money than if they had slowed down and investigated.  I know there are librarians who actively help their researchers avoid questionable publishers.  My guess, though, is that for every researcher a librarian helps, there is another who falls victim.  Hopefully more mainstream stories like this one will alert others to do a little more digging.