
The pressure to publish in elite “A journals” has become the single most powerful force shaping academic careers. How did this happen? And what are the real consequences – both positive and negative – of turning research evaluation into a tournament of hits and goals?
Curated by Business Science Daily — peer-reviewed sources, human-verified.
About Our Curation Process
Business Science Daily curates academic research in business and economics. Each featured study is selected from reputable peer‑reviewed journals (e.g., from Elsevier or Sage), institutional repositories, or working‑paper series (e.g., NBER, SSRN).
Articles are carefully summarized to ensure clarity and accuracy, with direct citations or links to original sources. Our process emphasizes transparency, academic integrity, and accessibility for a broader audience.
Learn more in our Editorial Standards & AI Policy.
This post breaks down the Academy of Management Perspectives article that diagnoses the “an A is an A” phenomenon.
It explains why performance management systems and accountability pressures drove business schools toward journal lists. It also covers the seven positive outcomes that make the system so sticky, and the nine negative consequences – from questionable research practices (p‑hacking, HARKing) to the homogenisation of knowledge and loss of researcher care.
Finally, the article’s forward‑looking recommendations show how to build a more balanced, multi‑stakeholder evaluation system without throwing out the benefits of clarity and fairness.
https://doi.org/10.5465/amp.2017.0193
“An A is an A” – the new bottom line in management research. Counting publications in elite “A journals” has become the dominant metric for hiring, tenure, and rewards. This article examines why this phenomenon took hold, its positive and negative consequences, and how to build more balanced evaluation systems.
The new bottom line for valuing academic research
In sports, “a win is a win” – how the game was won matters less than the fact that it was won. In management research, we have reached a similar point: “an A is an A.” The increased pressure to publish in elite (“A”) journals means that faculty evaluation committees now focus almost exclusively on the count of such publications, treating all A‑journal articles as equivalent badges of success, regardless of their content, originality, or practical relevance.
A journals are elite outlets labelled A, A+, A*, top, premier, or even A++ and A**. The lists vary across institutions but share one feature: publishing in a listed journal signals high research value. The “an A is an A” dictum reduces scholarship to a simple count – a “hit” or a “goal” – and treats any article in such a journal as intrinsically valuable, irrespective of its actual contribution.
Business schools worldwide – in Asia, Europe, North and South America – have adopted this logic. As Aguinis et al. note, “Faculty recruiting committees and promotion and tenure panels readily discuss how many A’s a candidate has published and how many A’s are needed for a favorable decision, while conversations about the distinctive intellectual value of a publication are often secondary to its categorical membership in journals.”
This tournament‑style competition (Connelly et al., 2014) has turned academic publishing into a zero‑sum game: individual faculty compete for scarce pages in a few A journals; departments and schools compete for rankings. Publication in A journals has become the universal currency for intellectual status, job placement, tenure, salary, and research funds.
Why the “an A is an A” phenomenon took hold
Two powerful mechanisms drive the new bottom line: performance management systems and research accountability pressures inside business schools.
Performance management systems
University administrators overseeing diverse departments needed a common, verifiable measure of research quality. Journal lists replaced subjective evaluations with “common, intersubjective, verifiable standards, independent of human individuality” (Kula, 1986). Although originally intended as a loose framework, these lists became reified – taken for granted as the measure of research value. Business schools created “quanta” (Power, 2004) to distribute rewards in a systemic, fair, and conflict‑free manner.
Research accountability
Since the Gordon‑Howell report (1959), business schools have increasingly adopted a social‑science paradigm to gain legitimacy. Today, declining government funding, intensified rankings competition, faculty shortages, and entrenched research values push schools to quantify research outputs and link them to financial outcomes. Pay‑for‑article bonuses, summer support, teaching‑load reductions, and base‑salary decisions are all tied to A‑journal counts.
As Aguinis et al. summarise: “The new bottom line to measure the value of research follows naturally from the practices used by business schools … to make the process of evaluating research more standardized, transparent, and fair. It is also the consequence of increasing pressures on business schools and universities to become more accountable.”
Positive consequences – why the system persists
Despite widespread criticism, the A‑journal counting system has undeniable benefits. Aguinis et al. list seven positive outcomes (see Table 1 in the original paper), including:
- Standardisation: A‑journal lists provide a common yardstick, reducing the burden of evaluating each study’s unique merit.
- Transparency and fairness: Faculty know exactly what is required for tenure and promotion; even unsuccessful candidates tend to accept decisions based on clear rules (“points scored after the buzzer do not count”).
- Protection for junior faculty: Department chairs or senior colleagues with outdated research skills cannot easily dismiss a junior scholar’s work if it appears in an A journal.
- Clear developmental goals: Doctoral students and junior researchers receive unambiguous targets, which can enhance performance (Locke & Latham, 2002).
- Self‑selection: Scholars who reject the A‑journal game can purposefully choose schools that value broader criteria.
- Exemplars of rigour: A journals signal the level of theorising, methodology, and reporting expected, raising the overall quality floor.
Negative consequences – the dark side of counting A’s
The excessive focus on A‑journal publications has produced a long list of unintended harms – to research methods, knowledge generation, researcher motivation, and the credibility of management scholarship.
Questionable research practices (QRPs)
Pressure to produce A’s encourages p‑hacking, HARKing (hypothesising after results are known), selective reporting, outlier manipulation, and lack of transparency. Aguinis et al. cite evidence that such practices are rampant and directly linked to the “publish or perish” culture in elite journals.
Loss of researcher care and intrinsic motivation
When the locus of control shifts from the researcher to external gatekeepers, scholarship becomes extrinsically driven. Scholars face a forced choice between research they truly care about and research that will be accepted in A journals.
Homogenisation of knowledge
A journals favour hypothetico‑deductive methods, large datasets, and incremental theoretical contributions. Inductive, abductive, and risky exploratory work is marginalised. This reduces the variety and innovation needed for the field to grow.
Excessive co‑authorship and “publication communes”
To hedge against low acceptance rates, researchers form large teams and sometimes exchange sham co‑authorships to pad résumés. This dilutes accountability and genuine collaboration.
Neglect of practice and other stakeholders
A‑journal publication prioritises theory contribution over practical implications. Research is done primarily for other researchers, not for managers, students, or society – a troubling trend for professional business schools.
Maximising positive and minimising negative outcomes
Aguinis et al. argue that the current situation is unsustainable, but radical change is unlikely. Instead, they offer concrete, actionable recommendations for performance management system design, research performance measures, and researcher development.
1. Broaden performance management systems
Business schools should base evaluation on strategic choices, not just A‑journal counts. Involve a broader set of stakeholders (corporations, government, students, media) to assess research value using criteria such as actionability, pedagogical usefulness, and broad interest. For example, require a balanced portfolio: conceptual, empirical, and practitioner‑targeted articles.
2. Use multiple, continuous measures of research quality
Replace the dichotomous “A vs not‑A” with richer indicators:
- Citation context – not just counts but how and why a work is cited (Golden‑Biddle et al., 2006).
- Altmetrics – media coverage, social media mentions, textbook citations, and practitioner blog references.
- Risk‑of‑bias assessments – checklists similar to the Cochrane Collaboration’s, used to evaluate methodological transparency.
- Nondichotomous journal lists – e.g., the Chartered ABS guide with five quality levels, allowing weighted rather than binary scoring (see the sketch below).
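To make the arithmetic concrete, here is a minimal sketch of dichotomous counting versus weighted scoring over a five‑level list, written in Python. The level weights, the altmetric bonus, and the example portfolio are hypothetical illustrations, not values from the article.

```python
# Hypothetical illustration: "A vs not-A" counting versus weighted scoring
# over a five-level journal list (mirroring the Chartered ABS guide's
# 1, 2, 3, 4, 4* ratings). All weights below are invented for the example.

LEVEL_WEIGHTS = {"4*": 5.0, "4": 4.0, "3": 3.0, "2": 2.0, "1": 1.0}

def a_count(portfolio):
    """Dichotomous rule: only top-level articles count, each equally."""
    return sum(1 for pub in portfolio if pub["level"] in ("4*", "4"))

def weighted_score(portfolio, altmetric_bonus=0.1):
    """Continuous rule: every article contributes, scaled by journal level
    plus a small (hypothetical) bonus per altmetric mention."""
    return sum(
        LEVEL_WEIGHTS[pub["level"]] + altmetric_bonus * pub.get("mentions", 0)
        for pub in portfolio
    )

portfolio = [
    {"title": "Risky inductive study", "level": "2", "mentions": 40},
    {"title": "Incremental A-journal piece", "level": "4", "mentions": 2},
    {"title": "Practitioner-targeted article", "level": "1", "mentions": 75},
]

print(a_count(portfolio))         # 1    -> two of three articles are invisible
print(weighted_score(portfolio))  # 18.7 -> all three articles contribute
```

Under the dichotomous rule, the inductive study and the practitioner article contribute nothing; under the continuous rule, every output counts in proportion to its level and reach – exactly the shift away from “A vs not‑A” that this recommendation describes.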
3. Ask candidates to identify their 3‑5 most important works
Promotion and tenure committees should read and evaluate a small set of publications for originality, quality, and actual or potential impact – not just count them.
4. Sign DORA (San Francisco Declaration on Research Assessment)
The Academy of Management could signal commitment to improving research evaluation by joining the 1,800+ organisations that have signed DORA, which calls for eliminating journal‑based metrics in favour of article‑level assessment.
5. Invest in research skills training
To reduce QRPs, researchers need better training in both deductive methods and inductive/abductive inquiry. CARMA webcasts, Academy methods workshops, and best‑practice checklists (e.g., for handling outliers, control variables, transparency) are practical tools.
Aguinis et al. conclude: “The realization of the dominance of this new bottom line for valuing academic research provides a foundation for moving management research beyond A‑journal structures.” The goal is not to eliminate journal lists but to use them as one element in a comprehensive, strategic, and multi‑stakeholder performance management system. With advances in machine learning, automated text analysis, and alternative metrics, the feasibility of richer evaluation is increasing.
Ultimately, “an A is an A” is a powerful but dangerous simplification. It brings efficiency and fairness but at the cost of questionable practices, narrowed knowledge, and lost intrinsic meaning. By broadening the criteria, using multiple measures, and investing in researcher development, the management field can preserve the benefits while mitigating the harms.
Full reference & acknowledgements
Aguinis, H., Cummings, C., Ramani, R. S., & Cummings, T. G. (2020). “An A is an A”: The new bottom line for valuing academic research. Academy of Management Perspectives, 34(1), 135–154. https://doi.org/10.5465/amp.2017.0193
Key related works: Adler & Harzing (2009) – journal rankings; Bedeian et al. (2010) – “cardinal sins”; Honig et al. (2018) – scientific misconduct; Shapiro & Kirkman (2018) – relevance of business school research; DORA (San Francisco Declaration on Research Assessment).
This interactive summary faithfully synthesises the original article’s arguments, evidence, and policy recommendations. All direct quotes are attributed to Aguinis et al. (2020). Designed for educational and knowledge‑dissemination purposes.