Dozens of tools exist for testing causality. Yet none has the clarity provided by randomization and double-blind research designs. As such, experiments are central to the growth and pruning of any theory.
For decades, experiments have been employed in organization science. The earliest in my recollection (and also my all-around favorite) was the legendary Cyert, March, and Starbuck (1961), an experiment that showed how organizations self-regulate through the rules and roles imposed on their members. A finding so great that Piezunka and Schilke (2023) built new theory around it.
However, not all experiments are so strong. A few years into the replication crisis in psychology, experiments continue to be a fundamental tool for studying the causal links that connect our theories.
As such, the general uptick in experimental work has been refreshing. Multiple paradigms have been introduced and replicated in the past decades (NK models, Target the Two, Origami, etc.). However, not all is great.
Let me use one paper as an example of an endemic problem. Billinger, Srikanth, Stieglitz, and Schuhmacher (2021) is a great paper. It is not only a replication with extension of the 2013 paper by three of the authors; it is a continuation of the study of complexity we inherited from biology (i.e., Kauffman's NK models). There is a problem, though.
How long do you think it took reviewers to accept this paper? By this I do not mean from the time it was first sent to any journal, rewritten, and resubmitted to a different journal. No, I mean from the date it was first sent to SMJ to the date it was printed.
Seven years. That is a long time. A human goes from birth to primary school in the same time. I went from being an experimental physicist to faculty in a business school in less. That is the time it took reviewers to believe in the soundness of the paper we are discussing.
This is not an indictment of Billinger and coauthors. It is not a case of Janteloven. It is a case of the system working as designed. Experiments are somewhat uncommon; the 62 years since Cyert, March, and Starbuck's paper in Management Science is too little time for management scholars to learn how to review experiments.
Normally, this would not be a problem. What are 62 years, in the end? Not enough time to review 10 papers, that's what!
But let's be serious. We know experiments are not perfect. There is a replication crisis, and the only way to create real knowledge from experiments is by replicating them. And the only way we can replicate is if journals publish experiments fast.
Schilke, Levine, Kacperczyk, and Zucker tried to address the paucity of experiments with their Special Issue at Organization Science. According to the journal, they accepted 21 papers during the almost four years it took the papers to get through the review process. Some are even replications of foundational papers (e.g., Silva et al. 2022). This is great!
However, this is not enough. If it takes four years for a paper to come out, it will take many more for a replication to support or refute it. Decades will pass until our knowledge is verified. And in reality, much more is needed to make experiments a central tool in the belt of organization science.
Case: Journal of Organization Science Experiments
Let's think about what's missing.
Rapidity is the obvious one. The more experiments are published, the shorter our papers need to be, as common practices become standardized in the pages of the journal. As this happens, the barriers to entry will lower and new fields of organization science will be opened up to experimentation.
Replicability comes second. As it becomes clear that a replication will be published and used by our peers, more and more will come along. If a paper takes a year or two from data collection to publication, many of us will engage in trying out new paradigms and retesting our prior contributions. Even more so if we agree to publish experiments carried out under pre-registrations that meet the criteria of the journal.
Resilience is the last building block. Experiments are abstractions of reality. Yet sometimes we might abstract too much or too little. As such, the journal should have articles aimed at spotlighting new paradigms (e.g., NK, n-armed bandit, Target the Two) and distributing these openly. This will increase speed and make replications easier, but more importantly it will create a set of best practices that push organization science forward.
If we are serious for a second, we do not need a new journal. Yet I firmly believe in the need for a separate route for publishing organization science experiments. This is starting to happen; see JOM Scientific Reports. I am sure some journal will pick up on this need.
If I am honest, the reason I write this post is that I would so enjoy having a paper at JOSE. Imagine the citation:
Arrieta JP. (2027). Who am I? JOSE. 1(1), 1-13.
It would be glorious! Can we do this? Please?