White, Brown, Silicon: A Natural Progression in Capability Building
- Jose Arrieta

Organizations employ simple rules, heuristics, and routines to adapt to change. These so-called dynamic capabilities are simplified heuristics that help organizations sense and seize opportunities and reconfigure their resources to create adequate solutions (Teece, 2007). In Winter's (2003) formulation, dynamic capabilities are higher-order routines, that is, routines for adapting ad hoc solutions to the conditions of the problem. However, as one creates higher- and higher-order routines, the content of these dynamic capabilities decreases. In uncertain, fast-paced environments, organizations that keep building higher-order routines are left with simple rules for managing change (Bingham & Eisenhardt, 2011).
Another way of managing change is, paradoxically, to assume that it does not happen. This assumption is widely used in machine learning, where large data repositories are collected and carefully curated to enable prediction (Aggarwal et al., 2018). These solutions are not dynamic capabilities but stable ones. Dynamic capabilities sense, seize, and reconfigure solutions to adapt to the problem at hand (Teece, 2007). Stable capabilities match problems to known solutions by digging into large knowledge repositories and computing an adequate answer. Both should be able to solve problems across broad sets of conditions, but they differ drastically in how they find those solutions.
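To make the contrast concrete, here is a minimal sketch of a stable capability as pure retrieval: a new problem is matched to the closest previously solved one in a curated repository. The feature vectors, repository entries, and distance metric are illustrative assumptions, not anything proposed in the text.

```python
from math import dist

# Repository of annotated (problem features, stored solution) pairs.
# Both the features and the solutions are made-up examples.
repository = [
    ((0.9, 0.1, 0.3), "reuse supplier contract template A"),
    ((0.2, 0.8, 0.5), "route the request to the pricing team"),
    ((0.4, 0.4, 0.9), "apply demand-forecast model v2"),
]

def match_problem(problem_features):
    """Return the stored solution whose problem is closest to the new one."""
    _, solution = min(
        ((dist(problem_features, features), solution)
         for features, solution in repository),
        key=lambda pair: pair[0],
    )
    return solution

# A new problem that resembles the first stored one is matched to its solution.
print(match_problem((0.85, 0.2, 0.35)))  # -> "reuse supplier contract template A"
```

The point of the sketch is only that the capability never adapts anything; it assumes the repository already covers the problem space and simply looks the answer up.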
To incorporate machine learning into organizations and build a "stable capability," we start, as Winter (2003) did, with ad hoc problem-solving. The key difference is that after solving a problem, we store and annotate the solution. Over time, as more solutions are created, the organization builds an in-house repository that allows it to mix and match problems and solutions; in the jargon of Argote and Epple (1990), the organization learns. Managers can speed up the learning process by offshoring the division responsible for finding solutions or, if data privacy constraints allow, by outsourcing the process. As time goes by, the organization might prefer to crowdsource the work. To save costs even further, it might allow an active learning algorithm to control data annotation and, ultimately, to organize how the problems are solved.
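As a rough illustration of that last step, the sketch below lets a simple active-learning-style rule decide which unsolved problems get annotated next, steering scarce annotation effort toward the problems the repository knows least about. The data, the distance-based uncertainty proxy, the annotation budget, and the annotate() stub are all assumptions made for illustration.

```python
from math import dist

# Already-annotated (problem features, solution) pairs and problems still unsolved.
annotated = [((0.9, 0.1), "solution A"), ((0.2, 0.8), "solution B")]
unlabeled = [(0.88, 0.12), (0.5, 0.5), (0.15, 0.75)]

def uncertainty(problem):
    """Distance to the closest known problem: the farther, the less familiar."""
    return min(dist(problem, features) for features, _ in annotated)

def annotate(problem):
    """Stand-in for the human, offshore, or crowdsourced annotation step."""
    return f"placeholder solution for {problem}"

# Spend the annotation budget on the most unfamiliar problems first,
# letting the algorithm steer where the repository grows.
BUDGET = 1
for problem in sorted(unlabeled, key=uncertainty, reverse=True)[:BUDGET]:
    annotated.append((problem, annotate(problem)))

print(annotated[-1])  # the newly annotated problem-solution pair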
If the pursuit of a larger dataset requires progressing from in-house work to offshoring, to outsourcing, to crowdsourcing, how do we design these organizations? Are new governance structures required? As Kumar and Puranam (2012) explained, when a company offshores its business units, this can lead to "the 'browning' of the top management teams." What would happen if we went several steps further, from offshoring towards crowdsourcing and active learning? From which pool should we draw the future leaders of our firms? MTurk? OpenAI? Where will the new strategic decisions be located? What can we consider a resource? Data, GPUs, electricity, water, anything else? Soon we will see the rise of Silicon TMTs.
Crucially, when should a company decide to invest in more "stable capabilities," such as machine learning, instead of more dynamic ones? Are dynamic capabilities simply a question of missing data, just as the current AI summer was a question of scaling compute?