Citations and Their Limitations

Reflection on the Citation Exercise.
I did the citation exercise for two authors, Kenneth Frampton and Keith Eggener. While the former is a well-known name in the academic conversation of modern architectural history, the latter is equally renowned, yet many of his works are published only in architectural magazines. The following are my reports on both authors:
Author: Keith Eggener
Work: Garden of El Pedregal (a book)
Total number of citations: 4 (two are self-citations; the remaining two are book reviews)
Total number of publications by the author: 9
h-index: 3
Average citations per item: 3.11
Sum of times cited (without self-citations): 25
Author: Kenneth Frampton
Work: Towards a Critical Regionalism: Six Points for an Architecture of Resistance
Total number of citations: 2 (one in an architectural education journal, the other in a Romance studies journal, Hispanofila)
Total number of publications by the author: 38
h-index: 10
Average citations per item: 9.05
Sum of times cited (without self-citations): 323
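For context, both of these summary statistics can be recomputed from a list of per-publication citation counts. Below is a minimal sketch in Python; the sample counts are hypothetical and are not either author's actual data.

def h_index(citations):
    # Largest h such that h publications have at least h citations each.
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def average_citations_per_item(citations):
    # Total citations divided by the number of publications.
    return sum(citations) / len(citations) if citations else 0.0

sample = [50, 30, 12, 10, 9, 7, 4, 2, 1, 0]   # hypothetical per-paper counts
print(h_index(sample))                         # 6
print(average_citations_per_item(sample))      # 12.5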
In studying these two data sets, I realized the shortcomings of a data collection platform that is selective and gathers data only from journals. It looks only across academic fields and at how these works are used within academic circles. This is of particular interest because if these numbers are used in evaluations for hiring and tenure, judgments are made solely on the basis of an academic's influence on academia. They do not account for the influence research may have on other professional fields. For example, an article on pedagogy may not be cited often, yet it could well be applied to the practice of pedagogy. Such influences go unaccounted for. Additionally, I am concerned about the highly limited circle of influence this data set covers, limited in the sense of region. The data set acknowledges the academic circles of the USA and Europe; comparable data may not be available for countries like India, Nigeria, or Peru. The absence of such data would make it difficult for scholars outside those regions to attain tenure.

Reflection on Readings:
In the article The Mismeasure of Science: Citation Analysis, we see how citations may not always account for the references and influences upon a work. In my own research, I am guilty of this mismeasure. How do I account for information which:
1) I have gathered over the years as a native of the region. Those sources of information and influence often lack empirical support, due to which they may not be cited.
2) Comes from methods like surveys and interviews, which are often not the primary data collection methodology in the field of History of Art and Architecture and are not accounted for in citations.
These are a few among the many citation difficulties I, as a scholar, find myself juggling. Within this frame, where citation is an ‘ongoing process’, how do we account for the success of a scholar and evaluate them for hiring? In my estimation, looking only at citations would not be helpful.

Why have certain articles not been cited so many times?

(1) Linda Nochlin, “Why Have There Been No Great Women Artists?” ARTnews, January 1971, 194–204.

What is the total number of citations?  30

What can you learn about the number of citations to this article per year since it was published?  From what I can understand, the number of citations started to increase in 2015, and 2019 had the highest number of citations, with a total of 8.

What can you learn about who cites this article?  What are their disciplinary identifications? The majority of the citing articles were published in journals of art history, history, art and culture, museum and curatorial studies, or feminist and gender studies. It was also cited in publications in literature, theater, and design, or in area-specific fields such as Japanese studies or Latin American studies. One citation was from an article in a journal of Information Science (Ciencias de la información in Spanish; I am not sure that is the correct translation). All of the citing articles were written in English, but a few were published in journals with Spanish or German titles.

(2) Nochlin, L

What is the total number of publications? 46

What is the H-index? 3

What are the average citations per item? 0.5

Which of these numbers would you prefer to have used in evaluations for hiring and tenure?  Why?

My first idea is that both numbers could offer information for the evaluation but should not be taken as single or central parameters. Evaluating a candidate should be a more holistic process that takes into account many other aspects of the person’s profile, such as age or time working in the field. As we have seen in the readings, the reasons why an article or an author gets cited are many and not always linked to the quality of the content. But if one of the institution’s main values or objectives is to increase its faculty’s productivity statistics, based on publications in and impact on high-profile journals, then it would make sense to use these numbers and to prioritize the candidates who bring the statistics up. But if the institution has other interests, such as an orientation toward teaching or reaching out to the broad community beyond academia, then the citation indexes are not very relevant.

Is this kind of analysis appropriate for all academic fields? Why or why not?

No, the nature and structure of each field are different. From the readings, I understood that the Web of Science only includes certain types of “top” journals and prioritizes articles written in English. To mention one example, for academics working in region-specific fields, it seems logical to write in the languages of those regions, and not necessarily always or mostly in English.

Quantifying Worth

Wood, J. W., Milner, G. R., Harpending, H. C., Weiss, K. M., Cohen, M. N., Eisenberg, L. E., … & Katzenberg, M. A. (1992). The osteological paradox: Problems of inferring prehistoric health from skeletal samples [and comments and reply]. Current Anthropology, 33(4), 343-370.

1a. Total number of citations? 8,226 (Excluding self-citations)

1b. What can you learn about the number of citations to this article per year since it was published?

The paper was relatively well cited until the year 2000, when the number of citations began to increase steadily each year, indicating that the article may be part of the core literature of a field. Alternatively, it could also reflect general trends in academia, like the increasing number of scholars within academic disciplines or the increasing number of publications produced by authors, both of which have grown dramatically in this digital age.

1c. What can you learn about who cited this article? What are their disciplinary identifications?

Of the 5,167 citing articles, the majority were categorized under anthropology or a related subfield. Interestingly, paleopathology, the field from which the paper originated, is not listed as a category in this report.

2a. Total number of publications? 225

2b. What is the H-Index? 38

2c. Average citations per item? 20.08

2d. Which of these numbers would you prefer to have used in evaluations for hiring and tenure? Why?

You would have to consider whom you are comparing for hiring and tenure; an individual just coming out of a postdoc may have low scores for all of these numbers due to the newness of their publications. The paper I chose is part of the foundational literature of a field, yet it was cited fewer than ten times per year in the first couple of years after publication. An average number of publications per year would at least measure some form of consistent productivity, but these are all poor measures for hiring and tenure unless your goal is departmental prestige.

2e. Is this kind of analysis appropriate for all academic fields? Why or Why not?

From my first answer, I obviously have some issues with these measures. This style disproportionately benefits students in lab- and project-based programs where a lead professor gives students smaller research projects to work and publish on as contributions to an overall larger project, ensuring early publication and continued citations as long as people are still publishing on the head PI’s project. These measures hurt people coming from small programs without large research projects, people coming from academic fields with independently led (and often slow) research, and people whose research requires long periods of data collection in the field, which is often slowed down by politics and bureaucracy (see ecology, etc.).

Useful? Not really

Rice, Louise. “Urban VIII, the Archangel Michael, and a Forgotten Project for the Apse Altar of St Peter’s.” The Burlington Magazine 134, no. 1072 (1992): 428-34. Accessed February 10, 2020. www.jstor.org/stable/885200

Step 1)

What is the total number of citations? 6

What can you learn about the number of citations to this article per year since it was published? Two years had 2 citations (2001, 2017) and two years had 1 citation (1997, 1998)… but I don’t understand where the years are coming from, because they don’t correspond to the publication dates of the articles that are citing it.

What can you learn about who cites this article?  What are their disciplinary identifications? 4 art historians cite the article, 1 Milton scholar (maybe? I wasn’t able to actually find the citation.)

Step 2)

What is the total number of publications? 7

What is the H-index? 2

What are the average citations per item? 1.86

Which of these numbers would you prefer to have used in evaluations for hiring and tenure?  Why? If I’m Louise Rice, the number of publications. If I’m looking for impact on scholarship, probably the H-index. None of these numbers seems particularly useful for hiring or tenure, because they don’t actually provide a real representation of anything in this case.

Is this kind of analysis appropriate for all academic fields? Why or why not? I had a really difficult time even finding any of the articles that I thought were influential in my field in this database (this was the 7th article I tried), so I don’t feel that it is providing me with an accurate representation of the impact of the scholarship on my field. Part of the reason for this is that a lot of art historical scholarship is published in monographs or in edited volumes rather than in the journals that would appear in this database. I imagine this is true for a lot of humanities scholarship.

“[…] queries must always be in English.”

Hello All,

I’m sorry, I didn’t realize we were supposed to publish our experiences with Web of Science on here… Mine is as follows…

Green, James N. “The Emergence of the Brazilian Gay Liberation Movement, 1977-1981.” Latin American Perspectives 21, no. 1 (1994): 38-55.

What is the total number of citations?

10

What can you learn about the number of citations to this article per year since it was published?

There is a spike in 2017 for some reason… ?

What can you learn about who cites this article?  What are their disciplinary identifications?

Mostly historians or political scientists outside of the US. Yet all of the references are in English. Why?


AUTHOR: GREEN, JN

What is the total number of publications?

38

What is the H-index?

3

What are the average citations per item?

1.24

Which of these numbers would you prefer to have used in evaluations for hiring and tenure?  Why?

I suppose you would want to highlight the times other scholars cite your works because it speaks to how others value your publications in relation to their own work. I’m not sure I see the value in the h-Index.

Is this kind of analysis appropriate for all academic fields? Why or why not?

No. If only a few individuals are interested in the same subject matter you are studying, then the raw number of citations will be lower. Basing success on how many times others cite your work would create a positive-only feedback loop wherein researchers publish only material they think others will want to cite. Additionally, this site only references articles, which are important to the field of history, but most academic historians are judged on their monographs.

Why does the site only include English-language research? While most academic journals are monolingual, many (especially in international and specialized area-study fields) are multilingual and publish articles in various languages.

The Limitations of Citation Analysis

John Tipton

Jonathan M. Weiner, “Radical Historians and the Crisis in American History, 1959-1980,” Journal of American History 76 (1989).

What is the total number of citations?
1.

What can you learn about the number of citations to this article per year since it was published?
This article may have been perceived to be more relevant in 2019, as that was the only year in which it was cited.

What can you learn about who cites this article? What are their disciplinary identifications?
This article was cited by James Barrett in the article “Making and Unmaking the Working Class: E.P. Thompson and the ‘New Labor History’ in the United States,” Historical Reflections/Réflexions Historiques, vol. 41, issue 1.
James Barrett, before his retirement, was a prominent labor historian at the University of Illinois, and his article dealt primarily with the legacy of E.P. Thompson’s The Making of the English Working Class. He likely cited Weiner because Weiner’s article dealt extensively with the rise of the New Left and History from Below, in which E.P. Thompson played a prominent role.

What is the total number of publications?
1.

What is the H-index?
The H index is 2.

What are the average citations per item?
1.

Which of these numbers would you prefer to have used in evaluations for hiring and tenure? Why?
I would rather have the H-index taken into account for hiring and tenure, as it would indicate that my work had a wider reach and was of greater significance to my field.

Is this kind of analysis appropriate for all academic fields? Why or why not?
I do not believe that this kind of analysis is appropriate for all academic fields. It is heavily geared toward the sciences, to the point that when I attempted to refine my search, the only search term applicable to history was “interdisciplinary humanities.” Furthermore, the author’s full name is “Jonathan M. Weiner.” After performing the basic search, in which five articles were furnished, only one was written by this author; all of the others were written by either “Jesse Weiner” or “JF Weiner.” Furthermore, upon searching a more recent article, Web of Science indicated that it had been cited 80 times. The accuracy of Web of Science’s citation index and analysis may therefore be limited by the age of the article. If this is the case, significant articles that were instrumental in the development of a field, but have not been applied within the recent past, may be inaccurately represented.
Given the limitations of Web of Science’s citation analysis when applied to the humanities, as well as the imprecisions and constraints of its citation search and analysis, I would be reluctant to employ it when evaluating the significance of a body of work within my field. Furthermore, as far as I was able to tell, citation analysis does not indicate to what degree such works are being employed to support someone’s argument. It is entirely possible that an author’s work is being rebutted while being appropriately cited, and depending solely on citation analysis would give a skewed perspective concerning the current significance of an individual’s argument or methodology within their respective field.

Boris Eikhenbaum’s “How Gogol’s Overcoat Was Made”

I chose to look at citations of Boris Eikhenbaum’s “How Gogol’s Overcoat Was Made,” as it is used in many courses and articles to discuss Gogol’s narrative technique of skaz, from the Russian verb skazat’ (to tell), which aims to form a written language that mirrors oral storytelling. Because the work was originally written in Russian, two instances of it appeared: “How Gogol’s Overcoat IS Made” and “How Gogol’s Overcoat WAS Made.” The first is translated incorrectly, as the verb “to make” is in the past tense in the original title; however, both reference the same work, so I included them both in the citation search. Overall, the total number of citations was 16, across 6 articles, with the work being cited 1.23 times per year. The English translation of the work was first made available in the 1960s, so it was surprising to see so few citations returned, as I have seen it cited in a lot of articles; however, this could be due to specific translations being cited and delineated as separate entities.

Unsurprisingly, everyone who cited the article was a Slavist. Boris Eikhenbaum was a prominent Russian Formalist, and this work is uniformly used to teach Gogol’s prose; moreover, when writing on Gogol, it is very common to discuss the mimetic, speech-like qualities of his works, which Eikhenbaum’s work foregrounds. When I performed the basic search to see all of Eikhenbaum’s works, I was a little confused, as the citation search had returned many more of his works. There might be some distinction I’m missing between the basic and citation searches (are the citations drawn from works cited in works available on Web of Science, while the basic search returns only works by Eikhenbaum that are actually in the database?). The basic search returned 7 of Eikhenbaum’s works, with an H-index of 1 and an average citation of 0.29 per item. I’m unsure which number would be better to report, as neither looks very appealing for evaluations or tenure.

When working on this analysis, I saw that it can have utility for certain fields; however, for my field of Slavic Languages and Literatures, the transliterations of authors’ names and the translations of works’ titles made it harder to narrow the search down, as one person can have many different names in the database. For example, I originally wanted to look at Yuri Lotman’s work in Russian Formalism; however, his name appeared as Iuri Lotman, Iurii Lotman, Juri Lotman, Yuri Lotman, etc., so finding a way to aggregate the citations was a little more difficult (a sketch of one workaround follows below). Overall, though, I think this is a useful tool for seeing when works are cited and by whom, and if I had looked at influential computer science articles from the U.S., the search might have been more streamlined.
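As a sketch of one workaround, the Python snippet below aggregates per-name citation counts under a single canonical spelling using a hand-maintained map of transliteration variants. Web of Science offers no such feature, so both the variant map and the counts here are assumptions for illustration only.

from collections import defaultdict

# Hand-maintained map of transliteration variants to one canonical form.
VARIANTS = {
    "Iuri Lotman": "Yuri Lotman",
    "Iurii Lotman": "Yuri Lotman",
    "Juri Lotman": "Yuri Lotman",
}

def aggregate_citations(records):
    # Sum citation counts across all recorded spellings of each author.
    totals = defaultdict(int)
    for name, count in records:
        totals[VARIANTS.get(name, name)] += count
    return dict(totals)

records = [("Iurii Lotman", 12), ("Juri Lotman", 5), ("Yuri Lotman", 9)]  # hypothetical
print(aggregate_citations(records))   # {'Yuri Lotman': 26}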

Metrics and Citations

Epstein, Richard A. “Caste and the Civil Rights Laws: From Jim Crow to Same-Sex Marriages.” Michigan Law Review 92, no. 8 (1994): 2456-478.


http://apps.webofknowledge.com.pitt.idm.oclc.org/CitationReport.do?product=UA&search_mode=CitationReport&SID=7EEgg7EEbMxjPWfsX3T&page=1&cr_pqid=39&viewType=summary


Total number of citations:

191

What can you learn about the number of citations to this article per year since it was published?

The article was published in 1995, and the highest number of citations occurred in 1997. The number dipped to 15 in 1998. By 2000, the article was cited only eight times, and in 2015 it received only two citations. As such, the results suggest that an article is most likely to be cited shortly after its publication (within about five years), often when the article is most ground-breaking. However, there was a rejuvenated spike from 2016 to 2018. Are there social and political factors that influence research? Can we, as researchers, gauge and synthesize intersections between political and social events and the research they influence? 2016 to 2018 marked the beginning of the current presidential administration. Did this rejuvenate interest in an article pertaining to Jim Crow and inequality in the United States? This data can therefore potentially reveal the ways in which, and why, the citation of research ebbs and flows.

What can you learn about who cites this article? What are their disciplinary identifications?

Analyzing who is citing the article, and for what purpose, reveals which disciplines are engaging with the text and which fields are finding the research most integral to their own studies. In this instance, the article is heavily utilized by legal scholars.

What is the total number of publications?

11

What is the H-index?

6

What are the average citations per item?

17.36

Which of these numbers would you prefer to have used in evaluations for hiring and tenure? Why?

Neither. But unfortunately, that does not answer the question. Depending on what metrics a department is interested in, the H-index can be a powerful tool for showcasing the influence a researcher has in high-impact journals. This can be useful in determining the relevance and timeliness of one’s scholarship. If a department is seeking influential leaders within the field, the H-index can be useful. The H-index, however, cannot compare professors 1:1; there are external factors, tied to specific journals and fields, that influence one’s H-index. In short, the index can be an intriguing metric that credits one’s influential scholarship, but I do not believe an H-index can be used to discredit the influence of one’s work.
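To make the 1:1 comparison problem concrete, here is a small hypothetical illustration in Python: two authors with very different citation totals can share the same H-index. The counts are invented for illustration, not drawn from any real record.

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

author_a = [100, 90, 80, 3, 3]   # hypothetical: three heavily cited papers, 276 citations in total
author_b = [4, 4, 4, 3, 3]       # hypothetical: uniformly modest papers, 18 citations in total
print(h_index(author_a))   # 3
print(h_index(author_b))   # 3, despite a fifteen-fold difference in total citations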

Is this kind of analysis appropriate for all academic fields? Why or why not?

It is not. The H-index, for instance, does not take into consideration the varying numbers of citations across different fields. Will articles in fields that cite less be viewed as less impactful? Put simply, can an H-index really be used to compare articles from one department with articles from an entirely different discipline that has different modes of access and standards? Likewise, the reasons for citing (the average citations per item category) can be influenced by external factors that have little to do with the quality of one’s research.

J. L. Schrader, “George Grey Barnard: The Cloisters and The Abbaye”

Schrader, J. L. “George Grey Barnard: The Cloisters and The Abbaye.” The Metropolitan Museum of Art Bulletin 37, no. 1 (Summer 1979): 3–52.

Cited Reference Search

Total number of citations: 3

Borland, Jennifer and Martha Easton. “Integrated Pasts: Glencairn Museum and Hammond Castle.” Gesta 57, no. 1 (Spring 2018): 95–118.

Chong, Alan. “The Gothic Experience: Re-creating History in American Museums.” Journal of the History of Collections 27, no. 3 (2015): 481–491.

Maxwell, Robert. “Accounting for Taste: American Collectors and Twelfth-Century French Sculpture.” Journal of the History of Collections 27, no. 3 (2015): 389–400.

What can you learn about the number of citations to this article per year since it was published?

The number of citations to this article seems restricted to citations in articles, rather than including citations in books, essays in edited volumes, and other formats, such as exhibition catalogues. For example, the article I selected for this exercise is cited in Elizabeth Bradford Smith, “George Grey Barnard: Artist/Collector/Dealer/Curator,” in Medieval Art in America: Patterns of Collecting 1800–1940 (University Park: Palmer Museum of Art, 1996): 133–142, an essay in an exhibition catalogue published seventeen years after the Schrader article; yet the Smith essay did not appear as citing the Schrader article in my cited reference search. This restriction of publication formats would seem to limit the usefulness of such a search for my field of study, the history of art and architecture.

The cited reference index provided three search results, each to the same article by Schrader, with slightly different information for the cited author, issue, and page of the article, and each of the three results led to a different citing article. Two of the citing articles were published in 2015 in the same special issue of the Journal of the History of Collections, and the third citing article was published in 2018. That Schrader’s article from 1979 is cited in these three recent articles indicates that Schrader’s work is still of interest to scholars writing on similar material, or at least that Schrader’s work is referred to in discussions of the state of the literature.

What can you learn about who cites this article? What are their disciplinary identifications?

The four authors who cite Schrader’s article are art historians, medievalists, and museum professionals with interests in medievalism, the history of collections, and antiquarianism.

Basic Search

I had difficulty with the basic search. The author of the journal article I selected, J. L. Schrader, did not appear in the Basic Search results of any variation of “J. L. Schrader” that I tried. I did, however, find three articles by J. L. Schrader using the Author Search, though the articles were included in the algorithmically generated author record of Jordyn Lee Schrader, who is apparently affiliated with the Department of Biomedical Engineering at the University of Delaware. The information below is gathered from the results of my Author Search.

Total number of publications: 3

Schrader, J. L. “A Medieval Bestiary.” The Metropolitan Museum of Art Bulletin New Series 44, no. 1, A Medieval Bestiary (Summer 1986): 1, 12–55.

Schrader, J. L. “George Grey Barnard: The Cloisters and The Abbaye.” The Metropolitan Museum of Art Bulletin 37, no. 1 (Summer 1979): 3–52.

Schrader, J. L. “Antique and Early Christian Sources for the Riha and Stuma Patens.” Gesta 18, no. 1, Papers Related to Objects in the Exhibition “Age of Spirituality,” The Metropolitan Museum of Art (November 1977–February 1978) (1979): 147–156.

What is the H-index?

N/A: an H-index was not provided for the J. L. Schrader for whom I was searching.

What are the average citations per item?

0.33

Which of these numbers would you prefer to have used in evaluations for hiring and tenure? Why?

For the reasons discussed previously regarding the limited usefulness of these searches for my field of study, I think that using these numbers in evaluations for hiring and tenure would be misleading, at least in the case of the article I chose for this exercise. J. L. Schrader was a curator at The Cloisters, The Metropolitan Museum of Art’s branch museum of medieval art, and would have written or contributed to various exhibition catalogues and essays on the museum’s permanent collection, none of which were included in the search results, which seems to be due to the restriction of searchable publications to articles.

Is this kind of analysis appropriate for all academic fields? Why or why not?

The restriction of the Web of Science’s searchable publications to articles is a considerable shortcoming, in my view, that would prevent me from searching with confidence. In addition, in the Cited Reference Search, users are not able to click on the name of the cited author to view their authored articles, which I assume is in part why we were asked to perform a search for the author of our chosen article using the Web of Science’s Basic Search. For example, JSTOR has a feature that allows users to easily view all of the search results authored by and related to a given author by clicking on the author’s name, which generated six results for J. L. Schrader. Given my criticisms of the Web of Science for my field based on my searches, I don’t see myself returning to the Web of Science as a research tool.

Bartholomae’s “Inventing the University”

Bartholomae, David. “Inventing the University.” When a Writer Can’t Write: Studies in Writer’s Block and Other Composing Process Problems. Ed. Mike Rose. New York: Guilford, 1985. 134-65.

What can you learn about the number of citations to this article per year since it was published? As is evidenced in the graph, the number of times this article is referenced shoots up in the early 2010s and remains high through the twenty-teens. While its popularity seems to be waning in the last few years, it is still much more cited than in the first ten years after its publication. I posit this is because the field of rhetoric and composition has come into its own in recent years. There has been more interest in defining the boundaries of the field, and perhaps seminal pieces like this help with that work.

What can you learn about who cites this article?  What are their disciplinary identifications? The majority are from English, composition, writing, or rhetoric. However, I am seeing linguistics, TESOL (quite a few L2 publications, actually), and education scholars as well. I see a few from library science too. The big hitters tend to be College English and other NCTE publications associated with writing studies.

Which of these numbers would you prefer to have used in evaluations for hiring and tenure?  Why? I think I’d prefer the second number, because it provides a more holistic view of a body of research and a trajectory of academic communication than just one article does. It is very interesting, however, to note that it took quite a while for this to spike. Could it be because of digitization and access to the information? Or did it take that long for folks to find Bartholomae’s work important? I don’t know. Tenure usually takes 6-10 years, though, and if Bartholomae were assessed by his initial numbers, he would not look nearly as impressive as he does now. This makes me think about the tenure process itself as problematic when considering “impact” and the like.

Is this kind of analysis appropriate for all academic fields? Why or why not? I think this is complicated. While metrics are needed for promotion decisions, these metrics only show who is citing your work, and there is bias in citations: people of color, for example, have historically been under-cited, and women are cited less than men. Finally, I wonder how things like creative projects, DH projects, or other non-traditionally published kinds of scholarship can be accounted for here. In my field, the journal Kairos is an example of this non-traditional kind of scholarly work. I see that some of our readings deal with this for the week, so I look forward to seeing what they say about digital humanities and bibliometrics.

A final question I have (and I should know the answer to this, given my library background) is how this compares or contrasts with the citation search in Google Scholar. The numbers for Bartholomae were different there than in Web of Science, so what is being indexed in each? Insofar as I’m aware, WoS veers more toward the social and hard sciences, no? Does this affect the numbers (it clearly does), and what does that say about our relying on such numbers, especially for humanists? Furthermore, for outward-facing humanities, or scholarship that might be reported in non-academic settings, how does that fit into the mix? Being cited by journalists, for example, is a feat in and of itself, but it would not show up in these kinds of bibliometric ratings.