Research Impact Part 1: Moving Away from Tracking Authors’ Articles

I have been toying with this post for quite a while, trying to think of a good way to present the information without it being too long.  Well, the only way to do it is to break it into parts.  I will link all of the parts together once I have finished writing and posting them.

Part 1: Moving Away from Tracking Authors’ Articles

Before I was a medical librarian, my library had been tracking every article, book, and book chapter that somebody within the institution authored.  It started as a published list, then evolved into a database maintained in citation management software.

In the beginning it started with finding citations in PubMed, but over the years it grew to include citations from other databases.  Basically, the librarian in charge of finding the citations had MANY saved searches on PubMed, Scopus, Web of Science, etc. that looked for the institution’s name in the author address field.  She would then download the citations, verify the authors, and add the name(s) of the department(s) that the institutional author(s) belonged to in a keyword field in the citation management software.
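For PubMed, that kind of affiliation search can be approximated with NCBI's E-utilities.  This is only a sketch: the institution names are made-up placeholders, and a real search would need far more affiliation variants.

```python
from urllib.parse import urlencode

# Hypothetical affiliation variants -- placeholders only.  A real
# institution can have hundreds or thousands of spellings in PubMed.
variants = ["Example Hospital", "Example Hospital Foundation", "EH Main Campus"]

# OR the variants together using PubMed's [ad] (affiliation) field tag.
term = " OR ".join(f'"{v}"[ad]' for v in variants)

# Build an ESearch request URL against the NCBI E-utilities endpoint.
params = urlencode({"db": "pubmed", "term": term, "retmode": "json", "retmax": 200})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
print(term)
```

Even fully scripted, this only gets you candidate citations; the author verification and department indexing described above still had to be done by hand.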

Books and book chapters were always a booger.  There is no PubMed for books, so finding those relied on a lot of web searching, notifications from our book suppliers, and on the institutional authors themselves.  That information was also uploaded into the citation database.

However, this practice was unsustainable for many reasons.

There are over 1,800 variations of my institution’s name in PubMed.  From what I understand, PubMed does no authority control for institutions; whatever the author writes is what is used.  This is a HUGE problem if you are searching for all of the citations written by people in your institution.

In 2015 we had over 43,000 employees, of which 3,200 were staff physicians, 10,965 were nurses, and over 1,500 were research personnel in labs.  That’s a lot of citations to find.  While the saved searches were automated, the rest of the process needed to be, too.  As the hospital system grew, finding, verifying, indexing (adding the department names), uploading citations, and maintenance became a full-time job for one librarian and part of the duties for 3-4 other people.

At some point during the years of compiling a list of all the articles, books, and book chapters our authors wrote, administration decided to try to rank the citations by comparing the departments’ lists of published articles.  Because we were still hand coding departments and loading the citations into a static reference database (like EndNote, RefMan, or RefWorks), there was no way to add a continually changing variable like the h-index, impact factor, or other metrics.  So they settled on the imprecise method of having somebody sort all of a department’s articles by the impact factor of the journal each was published in.  (Yes, that sound you hear is librarian teeth gnashing.)
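In spirit, that ranking amounted to nothing more than the following (the journal names and impact factor values are made-up placeholders, not real data):

```python
# Hypothetical journal-level impact factors -- placeholder values only.
impact_factors = {"Journal A": 2.1, "Journal B": 54.4, "Journal C": 0.8}

# Hypothetical department publication list.
articles = [
    {"title": "Case report", "journal": "Journal C"},
    {"title": "Major trial", "journal": "Journal B"},
    {"title": "Cohort study", "journal": "Journal A"},
]

# Sort by the journal's impact factor, highest first.  Note that this
# says nothing about the impact of the individual article itself.
ranked = sorted(articles, key=lambda a: impact_factors[a["journal"]], reverse=True)
print([a["title"] for a in ranked])
```

The problem is baked into the key function: every article inherits its journal’s number, so a weak paper in a big-name journal outranks an influential paper in a small specialty journal.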

As you can imagine, this always presented issues, specifically for disciplines whose top journals don’t have huge impact factors like those of NEJM or JAMA.  Yet we were limited by our retrieval and storage capabilities and by administration’s (understandable) demand to quantify quality.

Something had to give, and it did.  Our entire database was housed in RefMan.  (Hey, RefMan was state of the art when we started down this path.)  RefMan was no longer supported by its maker as of December 2016, and we couldn’t migrate the data over to EndNote for two major reasons.  One, all of the indexing we did to make sure we could sort people by department was done in the notes field, and we used other fields in RefMan for other “notes” and purposes.  This was all done by a cataloger, so there was really good consistency, but the notes field and other fields did not map well between RefMan and EndNote, so there would have been EXTENSIVE cleanup for 20 years’ worth of data (not the best use of time or resources).  Two, migrating everything to EndNote still did not solve our metrics problem: assigning a value to the published articles, which administration wanted.  This forced our hand to make major changes: automating the entire citation collection procedure, including article-level metrics within the database, and making it more sustainable as our institution continues to grow.

Through our investigations we discovered several products out there: Converis, InCites, Profiles, Pure, Plum; the list is large (note: I don’t agree with everything on the Wikipedia list, but it gives you an idea).  We ended up choosing two products from the same vendor.  The two products allowed us to upload HR data so that articles would be automatically sorted and indexed by author AND department, and they also included article-level metrics that were more informative than just the journal impact factor.

Migrating to this was not an easy task.  Part 2 will talk about the migration and the things we learned (and are still learning), and I think Part 3 will talk a bit about the cultural shift of moving away from a cumulative list of publications to a list of publications’ impact.  Stay tuned.

 
