Abstract
When looking at the history of technology, we can see that not all inventions are of equal importance. Only a few technologies have the potential to start a new branching series (specifically, by increasing diversity), have a lasting impact on human life and ultimately become turning points. Technological transitions correspond to times and places in the past when a large number of novel artefact forms or behaviours appeared together or in rapid succession. Why does that happen? Is technological change continuous and gradual, or does it occur in sudden leaps and bounds? The evolution of information technology (IT) allows for a quantitative and theoretical approach to technological transitions. The value of information systems experiences sudden changes (i) when we learn how to use this technology, (ii) when we accumulate a large amount of information, and (iii) when communities of practice create and exchange free information. The coexistence of gradual improvements and discontinuous technological change is a consequence of the asymmetric relationship between the complexity of hardware and that of software. Using a cultural evolution approach, we suggest that sudden changes in the organization of ITs depend on the high costs of maintaining and transmitting reliable information.
This article is part of the themed issue ‘The major synthetic evolutionary transitions’.
1. Introduction
Technology is a powerful agent of change that has allowed us to expand our capacities beyond what any other species ever did [1]. The development of external means of changing our world, while becoming more and more independent of its uncertainties, has been key to our success. The presence of a symbolic mind and the capacity to propagate non-genetic information by means of language coevolved with a rapidly improving technological potential. In this context, although other species have been capable of a limited use and construction of tools, none shares with us the cumulative nature of our technology [2].
Several key events mark our historic and prehistoric record of artefacts and inventions. Not all innovations are, however, of equal importance. Only a few novelties have the potential to make a difference, perhaps starting a rapid improvement, diversification and adoption. Some have been turning points in human life, described as technological transitions that are associated with ‘times and places in the past when a large number of innovations appeared together or in rapid succession’ [3, p. 69]. One example is the emergence of personal computers and the associated avalanche of novel uses and applications. Although it is possible to foresee technological progress, many experts have failed to predict the social consequences of new technologies (like the personal computer, see [4]).
The evolution of technology is a human-driven parallel experiment of evolution. A history of technological progress can be described as a list of significant advances [5], but without a causal narrative, this list will not reveal much about the real impact of the underlying rules and the logic of their sequential appearance. According to different authors [6,7], the major transitions of evolution require the emergence of higher-order groups of agents or components, resulting in a novel property of the group that was not present in its components. These qualitative changes require deep changes in the informational organization of life's forms as we move from level to level [6]. Can we apply a similar principle to arrange the history of technology? How does the framework of major transitions inform the evolution of information technology (IT)?
The major transitions in technology correspond to changes from one level of informational organization to another level of higher complexity (figure 1), that is, information is transmitted and/or stored at a level that was not present before the transition. Using the concept of ‘tipping points' [8] as a framework, Bentley & O'Brien [9] defined three major thresholds in the evolution of information storage and communication: the appearance of language, the capacity to store information outside humans and the appearance of technology able to amplify information processing far beyond our biological limitations.
Figure 1. The evolution of technology has been marked by rapid changes towards more complex forms of informational organization. (a) Social learning and cumulative cultural evolution enabled sophisticated stone toolmaking (e.g. from the Oldowan onwards). (b) The invention of agriculture allowed our ancestors to settle in any place and develop new social and cultural behaviours. (c) Cities allow for accelerated exchange of ideas among large numbers of people and provide more opportunities for innovation. (d) Societies that develop writing can store information outside humans and (e) send messages to distant places and times. (f) Symbiosis between humans and computers has amplified the capacity to manipulate and accumulate information beyond natural constraints. The picture shows the ENIAC operated by Betty Jean Jennings and Fran Bilas. Source: Wikipedia.
IT is the latest stage of technological evolution. The history of IT provides a unique perspective on technological and cultural evolution because ‘the computer in itself embodies one of the central problems in the history of technology, namely the relationship of science and technology’ [10, p. 117]. IT has two components: (i) hardware is the collection of physical elements that defines a computer system and (ii) software is the information (instructions) that can be stored and run by the hardware. Among all human technologies, software appears to be the closest innovation to biology [11]. Software plays a role similar to genomes and natural languages, i.e. to encode and transfer information from one level of organization to the next [12].
The history of IT crossed several thresholds that increased the value of software systems: (i) when we learn how to use this technology (programming), (ii) when we accumulate a critical mass of information (software) and (iii) when communities of practice can create and exchange free information (open-source software). First, the introduction of the ‘stored-program’ concept allowed computers to perform any desired functionality without re-arranging physical components. Then, innovation shifted from hardware to software with the introduction of the microprocessor and the personal computer. The last transition was caused by the open-source movement, which ‘effectively disconnects the economics of operating systems from the economics of semiconductor manufacturing’ [13, section 6].
In the evolution of IT, there has been an interesting coexistence between continuous, gradual improvements [14–16] and discontinuous technological change [17–23]. These two different modes of information growth mirror the division between hardware and software. As captured by Moore's law, there has been a half-million-fold improvement in hardware capability over the last four decades, but a similar pattern has not been achieved with software systems [24]. The growth of software complexity occurs in leaps and bounds. But why does that happen? Software is subject to constraints that we do not yet fully understand. For example, it is very difficult to find an optimal partition of a complex software project into discrete tasks that can be worked on without coordination. On the other hand, an effective division of labour has been achieved in hardware design, where optimization algorithms can minimize the costs of chip production [25].
Understanding the evolution of IT requires theoretical models that explain the emergence of transitions. Although there is a vast literature on empirical software studies, neither theoretical models nor quantitative metrics are easy to define. In the absence of an accepted and uniform theoretical framework, we will study the major transitions in IT using very simple models (except in the case of programming languages, where a full empirical analysis is discussed). Here, we provide some basic elements that can be used to build a cultural evolutionary perspective of IT. For example, recombination of existing structures has been a powerful source of diversity, which is a precondition for cultural evolution.
2. Transition from hardware to software
Innovation is an essential but poorly understood component of technological transitions. Although there are good models of technological progress, we cannot predict the social impact of innovations [22,26]. Explaining the stepwise growth of technological complexity remains an elusive goal for mainstream economics [27]. In studies of technological innovation, it is often assumed that invention is an exogenous component of economic growth, e.g. a ‘happy accident’ or a natural response to the needs of existing communities of practice. Using an evolutionary perspective on processes of technological change, Nelson & Winter [28,29] defined a technological regime as the common set of practices (e.g. engineering practices, scientific knowledge and production processes) adopted by a community. Technology usually develops through incremental improvements within a technological regime but sometimes communities have to solve new problems that require ‘radical’ inventions [30].
The above is consistent with the existence of turning points in the evolution of IT, where technological transitions remove social and technological barriers to impending jumps to higher complexity levels. This mechanism has been invoked to explain the emergence of dominant computer designs [31] or computer classes [32]: from early electronic computers to mainframes and minicomputers [33], then to microcomputers, and on to portable [34] and embedded computers (figure 2). Design complexity increased at every step of this sequence, e.g. as measured by the number of internal parts or the diversity of functionalities. Is complexity driven by demand and intentionality, or is it the result of accidents and chance? What other mechanisms contribute to the evolution of large-scale increases in technological complexity?
Figure 2. Gradual and discontinuous change in the evolution of computer systems. (a) The price per unit of computing power has been steadily decreasing, and the number of transistors per microprocessor steadily increasing, since the transition from mechanical calculators to electronic and then digital computers (1940s). Adapted from Nordhaus [35]. (b) Waves of innovation in technological evolution: vacuum tubes in the 1940s, mainframes in the 1950s, minicomputers in the 1970s, personal computers in the 1980s and laptops and smartphones in the 1990s. A new class of machines emerges roughly every 10 years.
Another classification looks at the evolution of component technologies used in the abovementioned computer classes [35]:
(1) Manual (up to around 1900)
(2) Mechanical (circa 1623–1945)
(3) Relays (1939–1944)
(4) Vacuum tubes (1942–1961)
(5) Transistor (1956–1979)
(6) Microprocessor (1971 to present)
Electronics have been instrumental because digital computers can operate much faster than manual and mechanical computers [36]. However, the choice of physical substrate for computing has not completely determined its evolution. Innovation has been driven by antagonistic (competitive) and synergistic (cooperative) interactions between computing elements. A turning point in the evolution of IT is the emergence of integrated circuits and the microchip [37], a piece of silicon into which a whole electronic circuit (or even a full computer, the microprocessor) is etched using photographic techniques. Selection-based forces are important in technology, and the search for the microchip was pushed by the scaling problem of circuits built with discrete transistors, resistors and capacitors. On the other hand, the combination of elements into subsystems is a powerful source of technological innovation. New building blocks replacing key components can trigger cascades of replacements [38] or ‘gales of creative destruction’ [30]. The rise and fall of relative component abundances in technical documents (figure 3) is consistent with the emergence of innovations and the extinction of obsolete technologies, e.g. transistors replaced vacuum tubes because they were faster, easier to manufacture, consumed less energy and needed less maintenance [40]. The sequence of successive innovations can be described with the classic adoption curve [9,41], where the fraction of the population that has adopted the technology at time t follows the logistic form

q(t) = 1/(1 + exp(−(t − t0)/τ)),

where t0 is the half-adoption time and τ sets the speed of diffusion.
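The S-shaped adoption dynamics can be sketched in a few lines; this is an illustrative implementation of the classic logistic adoption curve, with parameter names (t0, tau) chosen by us rather than taken from the sources cited above.

```python
import math

def adoption(t, t0=0.0, tau=1.0):
    """Logistic adoption curve: fraction of the population that has
    adopted a technology at time t. t0 is the half-adoption time and
    tau controls how quickly diffusion proceeds (illustrative values)."""
    return 1.0 / (1.0 + math.exp(-(t - t0) / tau))

# Adoption is slow at first, accelerates near t0, then saturates.
curve = [adoption(t, t0=5.0, tau=1.5) for t in range(0, 11)]
```

Note that the curve is symmetric around t0: early non-adopters mirror late adopters, which is why a single pair (t0, tau) summarizes a whole diffusion episode.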
Figure 3. (a) Relative abundance of circuit elements in a large corpus of text documents [39]. From left to right: vacuum tube, resistor, capacitor, diode, transistor and microchip (thick line). The shaded region corresponds to the microcomputer regime (1968–1995). (b) Microcomputers like the (1) Commodore PET 2001 (1979), (2) Sinclair's ZX Spectrum (1982), (3) Jupiter ACE (1983) and (4) Spectravideo SVI-318 (1983) were less powerful but had better graphics than previous computers. They allowed new applications like word processing, user programming and video games. The histogram shows the rapid diversification of microcomputers happening by mid-1984.
The microprocessor represents a major innovation. Smaller and inexpensive microcomputers (personal computers) used this technology to displace the larger and much more expensive mainframes and minicomputers. Microchips could be redesigned multiple times, thus enabling the computer's cost to be brought down much more quickly than was possible with any past technology. The invention of affordable computers was an expected consequence of the progressive miniaturization of semiconductor technology [43]. Why microcomputers were so massively adopted is more difficult to explain (note the correlation between the peak abundance of ‘microchip’ and microcomputer diversity in figure 3). For example, some groups can be pre-adapted to certain changes while others are not, and different social groups can respond quite differently to sudden technological change. It has been suggested that high price was an important barrier to initial adoption among personal users [44], but software greatly facilitated the transition to personal computers.
With the personal computer, software companies (and their products) have come to dominate IT. No one could foresee the consequences of these events at that time,1 but once software was unlocked, it generated huge demand. For the first time, companies were not forced to develop their own hardware (and specific software), which was a costly strategy with a limited range of possible customers. This was an opportunity for new companies to sell an affordable product (software) in the mass-market. Software could be duplicated, distributed and re-used across different machines at a very low cost. Expanding the functionality of machines without any physical modification was an attractive feature for both customers and hardware manufacturers.
3. Learning curves and design complexity
The basic organization of a computer, i.e. its logical components and their interactions, has remained unaltered since 1945. A computer operates in a sequential fashion and provides separate mechanisms for instruction processing and memory storage. A basic element is the usage of a ‘stored-program’ for computer instructions, which means that the data and the instructions are stored using the same format in computer memory. As described by von Neumann [45], these features allow the computer to perform any possible computation. Having a well-defined and stable computer design,2 engineering efforts could then focus on optimizing the basic blueprint, for example, by replacing inefficient hardware components with transistors (see §2). Empirical measurements suggest that computer engineers have been quite successful in this task (figure 2) and show how experience influences the rate of technological progress [46].
Similarly, Wright observed that the cost to build an aeroplane drops as a power law in the cumulative number of units produced (Wright's law). This general pattern (the so-called ‘learning curve’ or ‘experience curve’) can be observed in a large number of technologies (except in software). In semiconductor technology, Moore's law predicts that the density of transistors on a microchip will double every 2 years [43]. Moore's law and Wright's law are related: when production increases exponentially and unit costs decline exponentially, then Wright's law holds. The cost C of a specific technology drops with the number z of produced units [47,48]

C(z) = C0 z^(−w),        (3.1)

where C0 is the initial unit cost and w > 0 is the learning exponent. If cumulative production grows exponentially in time, z(t) ∝ exp(gt), then unit cost declines exponentially as C(t) ∝ exp(−wgt), which is the Moore-like regime. In the learning model discussed below, the possible costs of a redesigned component are drawn from a fixed distribution; following [50], we assume a power-law form near zero cost,

f(c) ∝ c^(γ−1),  0 < c ≤ 1,        (3.2)

where the exponent γ > 0 measures the difficulty of finding cheaper designs.
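The relation between the two laws can be checked numerically. This is a minimal sketch under the stated assumptions; the parameter values (c0, w, g) are illustrative, not fitted to any dataset.

```python
import math

def wright_cost(z, c0=1.0, w=0.4):
    """Wright's law: unit cost after z cumulative units have been produced."""
    return c0 * z ** (-w)

# If cumulative production grows exponentially, z(t) = exp(g*t), then
# Wright's law implies exponentially falling costs (a Moore-like law):
# C(t) = c0 * exp(-w*g*t).
t, g = 10.0, 0.3
z = math.exp(g * t)
cost_at_t = wright_cost(z)
```

The check below confirms term-by-term that substituting exponential production into the power law yields an exponential cost decline with rate w·g.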
This prediction assumes that there are no other bottlenecks slowing down the learning process. Real technological progress is a more complex story, e.g. the number of transistors on microprocessors has not doubled very regularly and there are strong deviations from the exponential trend [13]. In many cases, design complexity is a limiting factor for technological growth. For instance, computer operation involves the interactions between processor, memory and peripherals, and changes in any component might trigger a spurt of changes in related technologies. Technologies are not isolated from each other and the cost of improving any component depends on the costs of the components with which it interacts [51].
An extension of the above model takes into account the influence of design complexity on the rate of learning. The basic element is the design structure matrix (DSM) [52] defining the network of dependencies between components. A dependency network G = (V, E) is a set of n nodes V = {v1, v2, …, vn} corresponding to technological components, together with directed links: a link (vi, vj) ∈ E indicates that a change in component vi affects component vj. This graph can be described with a square |V| × |V| adjacency matrix A = [Aij] such that element Aij = 1 if there is a link from node vi to node vj, and Aij = 0 otherwise. The matrix notation allows computation of the out-degree di = ΣjAij and the in-degree ki = ΣjAji of node vi by summing the values in the corresponding row or column of the adjacency matrix, respectively.
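These definitions can be made concrete with a toy dependency network (the matrix below is our own illustrative example, not one from the cited sources):

```python
# Toy dependency network with four components; A[i][j] = 1 means that a
# change in component v_i affects component v_j.
A = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
n = len(A)
out_degree = [sum(A[i][j] for j in range(n)) for i in range(n)]  # d_i: row sums
in_degree = [sum(A[j][i] for j in range(n)) for i in range(n)]   # k_i: column sums
```

Row sums give how many components a change propagates to; column sums give how many components can force a change in a given one.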
In the simplest approximation, we can decompose a technology into n components, where each component depends on exactly d − 1 other components, i.e. di = d − 1 for all vi ∈ V. At every learning step, we pick one component vi at random and determine a new cost c′j for every affected component (every vj with Aij = 1) by sampling the distribution (3.2). The change results in an improvement when the sum of the new costs a′i = Σjc′j is less than the previous cost ai = Σjcj. Using extreme value theory and differential equations, McNerney et al. [50] show that the rate of improvement in a homogeneous network follows a power law

C(t) ∼ t^(−1/γd).
A more realistic case considers a heterogeneous network of dependencies. Technology consists of a (possibly non-uniform) distribution of dependencies among components, with average out-degree ⟨d⟩. Recall that the probability of accepting changes is a decreasing function of out-degree. It can be shown that the improvement rate of an individual node vi scales with the out-degree dj of the component vj affecting vi that is most likely to change [50]. In this case, the overall improvement of the technology also follows a power law

C(t) ∼ t^(−1/γd*),

where d* is the design complexity. Note that this equation recovers the homogeneous case when d* = d holds. Over long timescales, the slowest-improving components, those with di = d*, act as bottlenecks and dominate the overall progress of technological change. Technological progress in heterogeneous systems is not always smooth: the system jumps between phases of stable, predictable change and chaotic restructuring phases (or ‘learning bursts’).

The above suggests that we could accelerate the rate of technological improvement by simplifying designs. Yet there is no evidence of significant improvements in programmer productivity, even though the best software practices (e.g. ‘code refactorings’) engage in active simplification of software structures [53]. The sources of software complexity are not always exogenous, however. Common practices like code reuse and recombination (which minimize production costs) can generate the scale-free structures commonly found in software systems [54–56] (figure 4). These heterogeneous structures are characterized by the presence of a few highly connected elements (or ‘code hubs’) re-used by many other software components. It has been hypothesized that code hubs can act as natural bottlenecks of software development processes, e.g. by facilitating the propagation of software errors [57,58].
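The learning dynamics described in this section can be sketched as a short simulation. This is a toy re-implementation in the spirit of McNerney et al. [50], with illustrative parameter values and a uniform cost distribution (γ = 1); it is not the paper's exact model.

```python
import random

def simulate_learning(n=100, d=4, steps=20000, gamma=1.0, seed=42):
    """Toy design-structure-matrix learning model: each component belongs
    to a cluster of d components (itself plus d-1 random dependencies).
    A learning step redraws the costs of one randomly chosen cluster and
    keeps the new costs only if the cluster's total cost decreases."""
    rng = random.Random(seed)
    cluster = [[i] + rng.sample([j for j in range(n) if j != i], d - 1)
               for i in range(n)]
    cost = [1.0] * n
    total = []
    for _ in range(steps):
        i = rng.randrange(n)
        # New candidate costs drawn from f(c) ~ c^(gamma-1) on (0, 1].
        proposal = {j: rng.random() ** (1.0 / gamma) for j in cluster[i]}
        if sum(proposal.values()) < sum(cost[j] for j in cluster[i]):
            for j, c in proposal.items():
                cost[j] = c
        total.append(sum(cost))
    return total

hist = simulate_learning()
```

Because a change is accepted only when the affected cluster's total cost drops, and shared components couple clusters together, total cost decreases in progressively rarer and smaller steps, the slow power-law improvement discussed above.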
Figure 4. Technological systems can be strongly constrained by common rules of development, like reuse and recombination. Software systems can be seen as complex networks of interacting components describing well-defined structures and functions. These networks are scale-free and also display modularity (here depicted with different colours).
4. Evolution of programming languages
Technological progress is associated with more complex human–machine interactions. Developing software is a difficult task that requires highly skilled, technical personnel. In order to address this bottleneck, programming languages define a high-level, human-readable description of software (the so-called ‘source code’) that is automatically translated into the binary form required by the computer. Source code is not strictly required for computer operation but indispensable for successful software development. Source code is a convenient means for human communication with computers and also an efficient way of exchanging information between people, i.e. source code is both a technological and cultural artefact. The choice of programming language and the use of adequate design rules have a large impact on software development costs [59].
The distance between source code (genotype) and function (phenotype) has been increasing ever since the invention of programming languages. There are hundreds (and possibly thousands) of programming languages in virtually every field of human activity: from complex engineering calculations, to handling queries in large databases, to simulating physical systems in realistic ways, to music composition and computer graphics. The growing complexity of programming languages (e.g. through advanced compiler techniques) allows more powerful and expressive human–computer interactions. Similarly, increases in the complexity of production compensate for the simplified handling of new technologies [60]. On the other hand, it has been suggested that innovations occur mainly through combinations of previous technologies [61]. Engineers can merge different parts of previous systems and incorporate them into novel technologies, like electronic circuits or programming languages. Why is the diversity of programming languages so large? What large-scale evolutionary patterns are associated with the recombination of technologies?
Like natural languages, the history of programming languages defines evolutionary trees. Some of their branches have been much more successful than others (like the Fortran descendants) and dead-ends are frequent (including esoteric languages like Brainfuck or Whitespace). There are many examples of hand-made diagrams of the evolution of programming languages, but it is unclear how to obtain evolutionary trees in the absence of a ‘cultural genome’. Recently, we have proposed a method to reconstruct these cultural phylogenies using networks of historical influences [62]. This network is an instance of a directed graph G = (V, E) formed by the set of programming languages V and a set E of influence links among them. Several sources provide listings of the most common programming languages and the ancestor languages that have influenced their design. Using a measure of structural similarity, we can make a systematic (and non-ambiguous) distinction between vertical and horizontal links in these cultural graphs. The outcome is a backbone tree that maps the most influential (vertical) ancestor for each language (figure 5a) and an influence graph corresponding to the set of horizontal exchanges among languages (figure 5b). Our reconstruction is consistent with the historical classification of programming languages into two major categories, namely, procedural and declarative.

Figure 5. The large-scale evolution of programming languages (1953–2012). Lineages of influence form well-defined subgraphs (like procedural languages, shown here). Using network methods, we can reconstruct (a) the phylogenetic tree and (b) the map of horizontal exchanges. These networks show statistical regularities that can be interpreted from an evolutionary perspective. (c) Key innovations (microchip) marked the abrupt increase of the diversity of languages (solid circles) and the influences among them (open circles) and the rise of personal computers (grey area). (d) Frequency-rank plot of language popularity can be fitted with a discrete generalized beta distribution (DGBD). This distribution is the outcome of a process of technological competition based on increasing returns. In both (c) and (d), arrows point to popular languages.

Phylogenetic networks reveal sudden bursts of language diversification marked by key innovations. For example, a clear transition from top-down designs and restrictive computers to bottom-up development and mass-market products took place around 1982, coinciding with the rise of personal computers. Before this transition, the design of programming languages was constrained by specific needs and hardware limitations. In the 1970s, cheap and efficient microprocessors revolutionized electronics, enabling rapid advances in programming languages and human–machine interaction. This pattern is consistent with a model of technological recombination and tinkering that does not include selection forces. The network of influences grows by introducing a new language, which links to randomly chosen target languages with a given probability, as well as to a certain fraction of the ancestors of each target. Matching this model to historical data reproduces the structural patterns of influence networks and the two-regime scenario defined by a shift in the number L = |E| of links exchanged by new languages (figure 5c).
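The recombination-and-tinkering growth process just described can be sketched as follows. This is a toy version: the parameter names (m targets per new language, ancestor-copy probability q) are ours, and no claim is made that these values match the fitted model of [62].

```python
import random

def grow_influences(n=300, m=2, q=0.5, seed=3):
    """Each new language links to m earlier languages chosen at random
    and, with probability q per ancestor, also inherits links to the
    ancestors of those targets (recombination/tinkering, no selection)."""
    rng = random.Random(seed)
    influences = {0: set()}
    for i in range(1, n):
        targets = rng.sample(range(i), min(m, i))
        links = set(targets)
        for t in targets:
            links |= {a for a in influences[t] if rng.random() < q}
        influences[i] = links
    return influences

net = grow_influences()
links = sum(len(s) for s in net.values())
in_degree = {}
for i, ancestors in net.items():
    for a in ancestors:
        in_degree[a] = in_degree.get(a, 0) + 1
```

Even without any notion of language quality, ancestor-copying concentrates influence on a few early languages, producing the heterogeneous in-degree distributions observed in the reconstructed networks.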
The paradox is that the rapid growth of programming languages might be declining precisely because the distance between hardware and software is increasing. Hardware became so powerful that the same software can be executed without modification in different environments. High-level languages like C and Java allow source code portability from one machine to another, that is, users are no longer tied to specific hardware but to their software choices, e.g. operating system and programming languages. The rise and fall of programming languages is a process of technological competition based on increasing returns [63]. Sometimes technological success (or failure) can be explained through a process of increasing returns and with little connection to rational decisions [64]. The explanation comes from the nature of returns in an economic system where compatibility largely dominates the potential choices made by users and consumers. The more users a technology has, the larger the chances that novel users will adopt it. This process of social amplification leads to competition and necessarily ends in the extinction of the less popular choices.
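The increasing-returns dynamics can be illustrated with a simple Pólya-urn sketch. This is a generic illustration of the lock-in mechanism, not the specific models of [63,64]: two technologies compete, and each new user adopts one with probability proportional to its current user base.

```python
import random

def compete(steps=5000, seed=7):
    """Two technologies start with one user each; every new user adopts
    technology k with probability proportional to its current user base,
    so early random fluctuations become locked in (increasing returns)."""
    rng = random.Random(seed)
    users = [1, 1]
    for _ in range(steps):
        k = 0 if rng.random() * (users[0] + users[1]) < users[0] else 1
        users[k] += 1
    return users

final = compete()
```

Running this with different seeds shows the hallmark of increasing returns: which technology 'wins' varies from run to run, yet the final market shares are typically far from even, with little connection to any intrinsic merit.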
The frequency-rank plot of language popularity follows the discrete generalized beta distribution (DGBD) [65]

f(r) = A (R + 1 − r)^b / r^a,

where r is the popularity rank of a language, R is the maximum rank, A is a normalization constant, and a and b are fitting exponents.
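The DGBD is straightforward to evaluate; the exponents below are illustrative round numbers, not the values fitted to the language-popularity data in figure 5d.

```python
def dgbd(r, A=1.0, a=0.8, b=0.3, R=50):
    """Discrete generalized beta distribution: popularity of the item at
    rank r (1 <= r <= R). a controls the head of the curve, b the tail."""
    return A * (R + 1 - r) ** b / r ** a

popularity = [dgbd(r) for r in range(1, 51)]
```

Unlike a pure power law (b = 0), the extra factor (R + 1 − r)^b bends the tail downwards, which is why the DGBD fits the sharp drop-off of the least popular languages.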
5. Software bottleneck
In studies of technological innovation, it is often assumed that invention is the product of solitary geniuses or small groups of innovators. The role of collectives is secondary here, merely acting as social amplifiers of successful inventions, that is, the innovations. Competition-based models of technological diffusion explain the rapid adoption of new technologies over old ones across user populations [22]. Much less is understood about the role played by groups and collectives in the origin of technological innovations. Antagonistic interactions prevent the growth of complex projects, which depends not only on sharing and cooperative behaviour, but also on specific technological traits.
The ‘cumulative culture’ hypothesis [67,68] proposes that larger groups produce complex systems more often than do smaller groups. This model assumes that social learning is highly selective, i.e. the most skilled individuals will be responsible for advancing cultural complexity over the generations and many individuals will learn from them [69]. When people copy the most skilled individual, the rate of cultural accumulation is strongly correlated with population size. In this case, there is a critical population size that sustains the progressive accumulation of skills and knowledge over time. An increasing population size might explain the sudden shifts in cultural prehistory [70]. On the other hand, when population drops below this minimum threshold, there is a noticeable risk of technological collapse (as Henrich [67] argued for prehistoric Tasmania).
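The ‘copy the most skilled’ mechanism can be given a minimal numerical sketch. The parameterization below is ours (average copying loss alpha, Gumbel-distributed ‘luck’ of scale beta), chosen only to reproduce the qualitative prediction that the best acquired skill grows with the logarithm of the number of learners.

```python
import math
import random

def best_after_learning(N, z, alpha=1.0, beta=0.6, rng=random):
    """Each of N learners copies a model of skill z; attempts fall short
    of z by alpha on average, with Gumbel-distributed luck of scale beta.
    Returns the best acquired skill, which grows roughly as beta*log(N)."""
    def gumbel():
        return -math.log(-math.log(rng.random()))
    return max(z - alpha + beta * gumbel() for _ in range(N))

rng = random.Random(42)
small = sum(best_after_learning(50, 10.0, rng=rng) for _ in range(300)) / 300
large = sum(best_after_learning(5000, 10.0, rng=rng) for _ in range(300)) / 300
```

In small populations the best copy typically falls short of the model (culture decays), while in large populations rare lucky learners overshoot it (culture accumulates), which is the population-size threshold described above.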
The assumptions of this model appear to be quite apt for describing computer programming. Software development belongs to the category of ‘tools that are hard to learn to make and easy to screw up’ [71, p. 776]. The difficulty of the task is so high that many practitioners conclude that ‘excellent developers, like excellent musicians and artists, are born, not made’ [72, p. 218]. Empirical observations suggest that software growth is sustainable only when its programmers are sufficiently skilled and numerous enough; otherwise there is a noticeable risk of project collapse [73]. For example, the usage frequency of open-source software in communities of practice appears to be correlated with the rate of project success. Population bottlenecks limit the spreading of technological innovations [74–76]; that is, software projects with small teams can create new functions, but they fail because of the lack of error detection and repair capabilities.
A minimal model of technological diffusion comprises a population of N individuals (firms, users and programmers), each having a technological level zi (a measure of accumulated knowledge or skill level, e.g. software size or diversity of functions), with i = 1, …, N. Let us define the mean technological level of the population as

Z = (1/N) Σi zi.

At every generation, each ‘newborn’ (or newcomer) in the population tries to copy the most skilled individual. This is a conservative assumption regarding the loss of information with decreasing population size [69]. Using the Price equation [77], it can be shown [67] that the rate of innovation is proportional to the population size. Innovation also decreases with the mean cultural level, because more complex knowledge becomes increasingly difficult to acquire [78]. A minimal description consistent with these assumptions couples the cultural dynamics

dZ/dt = δN − γZ        (5.1)

with a population dynamics dN/dt = g(N, Z)        (5.2), in which the growth of N increases with the current cultural level Z. Here δ measures the innovation rate per individual, γ the rate at which complex knowledge is lost or fails to be transmitted, and β (below) the sensitivity of population growth to the cultural level in (5.2).

It is easy to check that the previous system allows for multi-stability. Letting dZ/dt = 0 gives the cultural nullcline Z* = δN/γ, while dN/dt = 0 can give two alternative stable states for appropriate parameter values. We suppose that cultural changes take place much more rapidly than demographic ones, so that Z rapidly relaxes to its quasi-equilibrium value Z ≈ δN/γ. By substituting Z ≈ δN/γ into (5.2) and assuming this separation of scales between N and Z, the population dynamics reduces to a one-dimensional gradient system

dN/dt = −dV/dN,

where V(N) is a potential function whose parameters combine into the effective constant α = βδ/γ. The potential has two minima, i.e. two alternative stable states corresponding to low population with low cultural level and high population with high cultural level, whereas the middle equilibrium state is unstable. Transitions between the two stable states can take place when a large perturbation is applied to the population size.

This model of cumulative cultural innovation describes a mechanism for rapid technological change [76]. This might be a general feature of other cultural and socio-technical systems3 (discussed in later sections). Although the model is very simple, it provides interesting insights into the evolution of software projects. First, the model predicts a transition between two different kinds of growth dynamics depending on the quality of programmers, e.g. high rates of error detection and low rates of random error. Fast project growth is possible when programming quality is high enough to detect and fix software errors. Otherwise, the number of errors keeps growing and the project slows down because of the sudden drop in software reliability.
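The bistable population dynamics can be illustrated with a generic double-well potential. This is a qualitative sketch of the mechanism, not the paper's exact equations: the equilibrium positions (N = 1, 2 and 4, in arbitrary units) are chosen by us for illustration.

```python
def f(N):
    """dN/dt = -dV/dN for a double-well potential with stable states at
    N = 1 (low population, low culture) and N = 4 (high population, high
    culture), separated by an unstable equilibrium at N = 2."""
    return -(N - 1.0) * (N - 2.0) * (N - 4.0)

def integrate(N0, dt=0.01, steps=5000):
    """Forward-Euler integration of dN/dt = f(N) from initial size N0."""
    N = N0
    for _ in range(steps):
        N += dt * f(N)
    return N

low = integrate(1.8)   # starts below the unstable point -> low state
high = integrate(2.2)  # starts above it -> high state
```

A perturbation that pushes the population across the unstable middle equilibrium (here, from 1.8 to 2.2) flips the system from one stable state to the other, which is the model's mechanism for rapid technological change.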
Second, the design matrix model discussed in §3 suggests that network simplification accelerates growth, which seems to contradict the group-size hypothesis, i.e. the idea that larger populations (and thus higher levels of complexity) are responsible for accelerated rates of innovation. Programmers combine several mechanisms in the same project to maintain long-term integrity. Software growth is affected not only by design complexity and programmer quality but also by population size. Software development is an instance of a complex optimization problem under multiple constraints [54]. For example, programming teams, i.e. the population size in equation (5.2), do not grow beyond the upper bound defined by coordination costs [24].
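Brooks's coordination-cost argument [24] can be made concrete with a toy calculation: pairwise communication channels grow as n(n − 1)/2, so adding people eventually costs more than it contributes. The linear-benefit/quadratic-cost model and all parameter values below are hypothetical, chosen only to exhibit the upper bound on team size.

```python
def comm_channels(n):
    """Pairwise communication channels in a team of n people (Brooks [24])."""
    return n * (n - 1) // 2

def net_output(n, per_person=1.0, per_channel=0.06):
    """Toy productivity model (assumed): linear benefit, quadratic overhead."""
    return per_person * n - per_channel * comm_channels(n)

# Net output peaks at a finite team size; beyond it, coordination
# costs dominate and adding programmers slows the project down.
best_size = max(range(1, 50), key=net_output)
```

With these (assumed) rates, output is maximized at a modest team size, and a team of 40 is strictly worse than the optimum, illustrating why population size in equation (5.2) has an effective upper bound.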
Finally, cumulative culture theory has been criticized because writing, printing and other forms of information transmission preserve cultural information outside humans and it ‘is unknown whether the maintenance of current cultural complexity is nowadays similarly dependent on group size’ [68, p. 391]. Interestingly, it is generally accepted that the cost of software maintenance is larger than the cost of new developments [80]. In this context, open-source projects can learn from their mistakes through user collaboration and ‘cumulative maintenance’.
6. The information technology evolution space
Our goal in this paper has been to show the potential of cultural evolution for understanding the emergence of transitions in IT. Hardware and software metaphors abound in studies of biological systems, but there are very few evolutionary models of IT. The evolution of IT is a very rich and complex history that involves several types of actors, artefacts, organizations and different timescales and mechanisms (tinkering, optimization, social learning and complexity). Three dimensions, i.e. software, hardware and diffusion, summarize the design space of computer systems (figure 6). This space defines a taxonomy that helps us understand where a proposed system stands with respect to other designs. Two dimensions (performance and popularity) define a continuous spectrum of possibilities, whereas the third (openness) is more qualitative. For the purposes of our study, the major transitions of IT are located at different thresholds along these axes, which divide the space into phases or groups of distinct designs.
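The three-axis taxonomy can be expressed as a small classifier. The coordinates and threshold values below are hypothetical placeholders (the paper treats the axes partly qualitatively); the point is that each design falls into one of the phases delimited by the transition surfaces of figure 6.

```python
# Hypothetical coordinates: (performance in ops/s, adopters, open source?).
# Systems and values are illustrative assumptions, not measured data.
SYSTEMS = {
    "mechanical calculator": (1e0, 1e5, False),
    "ENIAC": (5e3, 1e0, False),
    "IBM PC": (1e6, 1e7, False),
    "Linux server": (1e9, 1e7, True),
    "Android phone": (1e9, 1e9, True),
}

def phase(perf, adopters, is_open,
          electronic_threshold=1e3, critical_mass=1e6):
    """Place a design relative to the three transition surfaces."""
    return (
        "electronic" if perf >= electronic_threshold else "pre-electronic",
        "mass-adopted" if adopters >= critical_mass else "niche",
        "open" if is_open else "closed",
    )
```

For instance, `phase(*SYSTEMS["Android phone"])` lands in the electronic, mass-adopted, open corner, while the mechanical calculator stays in the pre-electronic, niche, closed corner.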
Figure 6. Evolution of computer systems taking place in a simplified technological and social space with three dimensions: hardware (performance), software (openness) and rate of diffusion (popularity). Points correspond to computer systems classified according to these parameters. Coloured surface planes indicate transition events separating phases of distinct behaviour: manual and mechanical calculators to electronic computers (hardware), early to late adopters (popularity) and closed to open source (software) (see text).
Along the performance axis, we find the first transition, separating manual and mechanical calculators from electronic computers. Electronic components enabled faster and more reliable computers and paved the way for the spectacular hardware progress so characteristic of the evolution of computers. This was possible because the computer architecture, the logical blueprint that defines the main hardware parts and their interactions, has been quite stable. On the other hand, the design of complex systems is a task assigned to software engineers, not hardware manufacturers. Early in the history of IT, this division mirrored the belief that the ‘programming bottleneck’ was basically a technical problem [81] that could be solved with specific tools, i.e. programming languages or structured programming techniques.
A turning point in the history of IT was the microprocessor and the massive adoption of personal computers. Semiconductor technologies enabled smaller and cheaper electronic products such as personal computers. This adoption process was an important source of innovation because consumers/practitioners applied their computers to novel types of activities, much as had happened earlier with the portable radio [82]. However, only innovations that are widely adopted can be self-sustaining. The critical mass (the minimal number of adopters of an innovation that allows the rate of adoption to become self-sustaining) provides a natural location for this second transition point. Before microprocessors, computer designs remained below this critical mass because they were large, expensive and application-specific.
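The notion of critical mass can be captured by an adoption model with an Allee-type threshold: below a critical fraction C of adopters the innovation dies out, above it adoption becomes self-sustaining up to saturation K. The equation dA/dt = rA(A/C − 1)(1 − A/K) and its parameter values are illustrative assumptions, not taken from the text.

```python
def simulate_adoption(A0, r=1.0, C=0.2, K=1.0, dt=0.01, steps=4000):
    """Euler-integrate dA/dt = r*A*(A/C - 1)*(1 - A/K) from A(0) = A0."""
    A = A0
    for _ in range(steps):
        A += dt * r * A * (A / C - 1.0) * (1.0 - A / K)
    return A

fizzle = simulate_adoption(0.15)   # below critical mass: adoption dies out
takeoff = simulate_adoption(0.25)  # above critical mass: spreads to saturation
```

On this reading, pre-microprocessor machines sat below C (large, expensive, application-specific); cheap personal computers pushed adoption past the threshold, after which diffusion sustained itself.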
The third axis splits the world of computer programming according to whether source code is made public (open source) or not (closed source). The debate over which mode to use involves a number of considerations. Many companies sell their software for money and do not allow users to see or change their computer code, e.g. in order to amortize the high costs of programming. On the other hand, the open-source movement believes that information must be freely distributed in order to promote cooperation and further innovation. The advantage of open source is a faster rate of testing, in which many people can find and report bugs, thus alleviating the costs of software maintenance.
Interactions between these axes define heterogeneous regions populated by designs to different degrees. The most frequent designs are located at the corner of high-performance, widely popular, closed information systems. These include personal computers like the Windows PC or the Mac and, more recently, smartphones and tablet computers. The diffusion of computer technology was a prerequisite for distributed, collaborating teams of programmers and users exchanging free information. Although the open-source movement has its detractors, some of the most complex pieces of software (like the Linux operating system or the Apache web server) have been developed under the open-source model. There is a current trend towards the open-source development mode, e.g. some closed systems like the Mac have adopted basic open-source components (such as the BSD-based Darwin core of macOS). During the last decade, we have witnessed the fast rise of the Android platform, an open-source alternative for smartphones based on the Linux kernel.
The asymmetry between performance and openness suggests a duality principle for IT. At one end, the rate of hardware progress has been steady and continuous. At the other end, software has evolved in sudden jumps associated with a programming bottleneck. The reason is that hardware allows for modular designs and an effective division of labour, while software architectures are instances of complex, heterogeneous networks that emerge from attempts to minimize development costs. Neither open- nor closed-source software provides a definitive solution to the high development costs. Software development requires highly skilled and experienced individuals who work as learning models for the rest of the population. When beginners learn from the most skilled masters, collective knowledge capacity correlates with population size. An increase in population size can shift the technological capacity and, conversely, a decrease in population size can drastically reduce the diversity of knowledge and skills.
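The dependence of collective knowledge on population size can be illustrated with a minimal Henrich-style simulation [67]: every newcomer imitates the most skilled individual, usually losing a little in the copy and occasionally improving on it. The parameter values and error distributions below are assumptions for illustration.

```python
import random

def evolve_skill(pop_size, generations=200, loss=0.5, gain=1.0,
                 p_improve=0.25, seed=42):
    """Return the best skill level after imitating the best model each generation."""
    rng = random.Random(seed)
    skills = [0.0] * pop_size
    for _ in range(generations):
        best = max(skills)
        # Each newcomer copies the most skilled individual: most copies
        # undershoot (imperfect transmission), a few overshoot (innovation).
        skills = [best + (rng.uniform(0.0, gain)
                          if rng.random() < p_improve
                          else -rng.uniform(0.0, loss))
                  for _ in range(pop_size)]
    return max(skills)

small_group = evolve_skill(10)
large_group = evolve_skill(1000)  # larger populations accumulate more skill
```

With more learners, the chance that at least one copy exceeds the model grows, so skill ratchets up faster; shrinking the population reverses this, consistent with the Tasmanian case discussed by Henrich.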
7. Discussion
The tempo and mode of the evolution of IT follow a pattern of punctuated equilibria shared with other major transitions in evolution. The growth of complexity takes place in jumps, or technological transitions, which are the manifestation of an adaptive coevolutionary process between technologies and their social environments. We should look at ITs not in isolation, but rather as nodes in a network growing by recombination. For example, engineers have merged different parts of previous systems and incorporated them into innovations, like the microprocessor, programming languages or complex software systems.
Collective invention [83] drove the emergence of transitions. Even though many technologies were invented only once, the commonly used form of many of these inventions evolved through the cumulative contributions of many practitioners. Computer club meetings during 1975–1980 and open-source communities are two well-known examples. The model described above supports the cumulative culture hypothesis of technological evolution, and the case of software suggests that previous criticisms of this theory did not take into account the high costs of maintaining reliable information. At the moment, programmer expertise cannot be stored in a permanent medium. Software development is largely a craft that has to be learned from experience.
The question is whether IT has been truly revolutionary (the beginning of a major transition in evolution) or a natural step in cultural evolution [6]. For example, software can be considered a special case of symbolic processing systems [84], whose origin can be traced to the emergence of spoken language and cultural evolution. The slowing down of Moore's law [85] suggests that computing needs to be redefined if we are going to maintain the current rates of exponential progress. Software appears to be the natural target for future innovation, but solving the problem of software scalability [86] requires a change of perspective: technological approaches like modular designs and the reuse of standard components [52] have had limited impact [87]. Mastering the emergent complexity of software systems [88] does not require another technological ‘silver bullet’ [89], but a deep understanding of the nature of technology and its relationship with us.
Competing interests
We declare we have no competing interests.
Funding
This work was supported by the Spanish Ministry of Economy and Competitiveness, Grant FIS2013-44674-P and FEDER, the Botin Foundation and Bank Santander.
Acknowledgements
I thank Ricard Solé, Salva Duran, Raul Montañez, Josep Sardanyes, Jordi Piñero and Daniel Rodriguez for useful advice, criticism and reviews of this manuscript, and the members of the Complex Systems Laboratory for useful suggestions. I also thank two anonymous referees whose comments considerably improved the original manuscript, and the Santa Fe Institute, where much of this work was done, for its hospitality.
Endnotes
1 IBM was more interested in hardware than software sales when it allowed Microsoft to retain the rights to sell its operating system (MS-DOS) to non-IBM computers.
2 I do not imply that hardware innovation has stalled, but the ‘stored-program’ design introduces some constraints on computer organization.
3 For example, cities allow for the accelerated exchange of ideas among large numbers of people and provide new opportunities for innovation [79].
References
- 1. Richerson PJ, Boyd R. 2005 Not by genes alone: how culture transformed human evolution. Chicago, IL: University of Chicago Press.
- 2. Tomasello M. 1999 The cultural origins of human cognition. Cambridge, MA: Harvard University Press.
- 3. Kuhn SL. 2012 Emergent patterns of creativity and innovation in early technologies. In Creativity, innovation and human evolution (ed. Elias S), pp. 69–87. Amsterdam, The Netherlands: Elsevier.
- 4. Smith DK, Alexander RC. 1999 Fumbling the future: how Xerox invented, then ignored, the first personal computer. Lincoln, NE: iUniverse.
- 5. Haven K. 2007 100 greatest science discoveries of all time. Westport, CT: Libraries Unlimited.
- 6. Smith JM, Szathmary E. 1995 The major transitions in evolution. Oxford, UK: Oxford University Press.
- 7. Schuster P. 1996 How does complexity arise in evolution. Complexity 2, 22–30. (doi:10.1002/(SICI)1099-0526(199609/10)2:1<22::AID-CPLX6>3.0.CO;2-H)
- 8. Gladwell M. 2000 The tipping point: how little things can make a big difference. New York, NY: Little, Brown.
- 9. Bentley RA, O'Brien MJO. 2012 Cultural evolutionary tipping points in the storage and transmission of information. Front. Psychol. 3, 569. (doi:10.3389/fpsyg.2012.00569)
- 10. Mahoney MS. 1989 History of computing in the history of technology. Ann. Hist. Comput. 10, 113–125. (doi:10.1109/MAHC.1988.10011)
- 11. Solé RV, Valverde S, Rosas-Casals M, Kauffman SA, Farmer D, Eldredge N. 2011 The evolutionary ecology of technological innovations. Complexity 18, 15–27. (doi:10.1002/cplx.21436)
- 12. Solé RV, Valverde S, Rodriguez-Caso C. 2011 Convergent evolutionary paths in biological and technological networks. Evol. Edu. Outreach 4, 415–426. (doi:10.1007/s12052-011-0346-1)
- 13. Tuomi I. 2002 The lives and death of Moore's Law. First Monday 7, 4 November. http://journals.uic.edu/ojs/index.php/fm/article/view/1000.
- 14. Dosi G. 1982 Technological paradigms and technological trajectories: a suggested interpretation of the determinants and directions of technical change. Res. Policy 11, 147–162. (doi:10.1016/0048-7333(82)90016-6)
- 15. Basalla G. 1988 The evolution of technology. Cambridge, UK: Cambridge University Press.
- 16. Rosenbloom R, Cusumano M. 1987 Technological pioneering and competitive advantage: the birth of the VCR industry. Calif. Manage. Rev. 29, 51–76. (doi:10.2307/41162131)
- 17. Schumpeter J. 1926 The theory of economic development. Cambridge, MA: Harvard University Press.
- 18. Abernathy WJ, Utterback JM. 1978 Patterns of industrial innovation. Tech. Rev. 80, 40–47.
- 19. Tushman M, Anderson P. 1986 Technological discontinuities and organizational environments. Admin. Sci. Q. 31, 439–465. (doi:10.2307/2392832)
- 20.
- 21.
- 22. Loch CH, Huberman BA. 1999 A punctuated-equilibrium model of technology diffusion. Manage. Sci. 45, 160–176. (doi:10.1287/mnsc.45.2.160)
- 23. Levinthal DA. 1998 The slow pace of rapid technological change: gradualism and punctuation in technological change. Ind. Corp. Change 7, 217. (doi:10.1093/icc/7.2.217)
- 24. Brooks F. 1995 The mythical man-month: essays on software engineering, anniversary edn. Boston, MA: Addison-Wesley.
- 25. Moses ME, Forrest S, Davis AL, Lodder MA, Brown JH. 2008 Scaling theory for information networks. J. R. Soc. Interface 5, 1469–1480. (doi:10.1098/rsif.2008.0091)
- 26. Tushman M, Romanelli E. 1985 Organizational evolution: a metamorphosis model of convergence and reorientation. In Research in organizational behavior (eds Cummings L, Staw B). Greenwich, CT: JAI Press.
- 27. Buchanan M. 2007 The social atom: why the rich get richer, cheaters get caught, and your neighbor usually looks like you. London, UK: Bloomsbury.
- 28. Nelson RR, Winter SG. 1977 In search of useful theory of innovation. Res. Policy 6, 36–76. (doi:10.1016/0048-7333(77)90029-4)
- 29. Kemp R, Miles I, Smith K. 1994 Technology and the transition to environmental stability. Continuity and change in complex technology systems. Final report of the project ‘Technological paradigms and transition paths: the case of energy technologies’ for the SEER research programme of the Commission of the European Communities (DG-XII).
- 30. Schumpeter J. 1942 Capitalism, socialism and democracy. Cambridge, MA: Harvard University Press.
- 31. van den Ende J, Kemp R. 1999 Technological transformations in history: how the computer regime grew out of existing computing regimes. Res. Policy 28, 833–851. (doi:10.1016/S0048-7333(99)00027-X)
- 32. Bell CG. 2008 Bell's law for the birth and death of computer classes. Commun. ACM 51, 86–94. (doi:10.1145/1327452.1327453)
- 33. Bell CG. 1984 The mini and micro industries. Computer 17, 14–30. (doi:10.1109/MC.1984.1658955)
- 34. Atkinson P. 2005 Man in a briefcase: the social construction of the laptop computer and the emergence of a type form. J. Des. Hist. 18, 191–205. (doi:10.1093/jdh/epi024)
- 35. Nordhaus WD. 2007 Two centuries of productivity growth in computing. J. Econ. Hist. 67, 128–159. (doi:10.1017/S0022050707000058)
- 36. Dyson G. 2012 Turing's Cathedral: the origins of the digital universe. New York, NY: Vintage.
- 37. Reid TR. 2001 The chip: how two Americans invented the microchip and launched a revolution. New York, NY: Random House.
- 38. Arthur B, Polak W. 2006 The evolution of technology within a simple computer model. Complexity 11, 23–31. (doi:10.1002/cplx.20130)
- 39. Michel J-B et al. 2011 Quantitative analysis of culture using millions of digitized books. Science 331, 176–182. (doi:10.1126/science.1199644)
- 40. Riordan M, Hoddeson L. 1998 Crystal fire: the invention of the transistor and the birth of the information age. New Delhi, India: W. W. Norton and Company.
- 41.
- 42.
- 43. Moore G. 1965 Cramming more components onto integrated circuits. Electronics 38, 114–117.
- 44. Atkinson P. 2010 The curious case of the kitchen computer: products and non-products in design history. J. Des. Hist. 23, 163–179. (doi:10.1093/jdh/epq010)
- 45. von Neumann J. 1993 First draft of a report on the EDVAC. IEEE Ann. Hist. Comput. 15, 27–75. (doi:10.1109/85.238389)
- 46. Farmer JD, Lafond F. 2015 How predictable is technological progress? Res. Policy 45, 647–665. (doi:10.1016/j.respol.2015.11.001)
- 47. Wright T. 1936 Factors affecting the cost of airplanes. J. Aeronaut. Sci. 3, 122–128. (doi:10.2514/8.155)
- 48. Nagy B, Farmer JD, Bui QM, Trancik JE. 2013 Statistical basis for predicting technological progress. PLoS ONE 8, e52669. (doi:10.1371/journal.pone.0052669)
- 49. Muth JF. 1986 Search theory and the manufacturing progress function. Manage. Sci. 32, 948–962. (doi:10.1287/mnsc.32.8.948)
- 50. McNerney J, Farmer JD, Redner S, Trancik JE. 2011 Role of design complexity in technology improvement. Proc. Natl Acad. Sci. USA 108, 9008–9013. (doi:10.1073/pnas.1017298108)
- 51. Auerswald P, Kauffman S, Lobo J, Shell K. 2000 The production recipes approach to modeling technological innovation: an application to learning by doing. J. Econ. Dyn. Control 24, 389–450. (doi:10.1016/S0165-1889(98)00091-8)
- 52. Baldwin CY, Clark KB. 2000 Design rules, volume 1: the power of modularity. Cambridge, MA: MIT Press.
- 53. Fowler M. 1999 Refactoring: improving the design of existing code. Boston, MA: Addison-Wesley.
- 54. Valverde S, Ferrer-Cancho R, Solé RV. 2002 Scale-free networks from optimal design. Europhys. Lett. 60, 512–517. (doi:10.1209/epl/i2002-00248-2)
- 55. Valverde S, Solé RV. 2005 Logarithmic growth dynamics in software networks. Europhys. Lett. 72, 858–864. (doi:10.1209/epl/i2005-10314-9)
- 56. Valverde S, Solé RV. 2007 Hierarchical small worlds in software architecture. Dynam. Cont. Discr. Impul. Syst. Ser. B 14, 1–11.
- 57. Valverde S. 2007 Crossover from endogenous to exogenous activity in open-source software development. Europhys. Lett. 77, 20002. (doi:10.1209/0295-5075/77/20002)
- 58. Challet D, Lombardoni A. 2004 Bug propagation and debugging in asymmetric software structures. Phys. Rev. E 70, 046109. (doi:10.1103/PhysRevE.70.046109)
- 59. Pressman RS. 2009 Software engineering: a practitioner's approach, 7th edn. Boston, MA: McGraw-Hill.
- 60. Schuster P. 2006 Untamable curiosity, innovation, discovery and bricolage. Complexity 11, 9–11.
- 61. Arthur B. 2009 The nature of technology: what it is and how it evolves. New York, NY: Free Press.
- 62. Valverde S, Solé RV. 2015 Punctuated equilibrium in the large-scale evolution of programming languages. J. R. Soc. Interface 12, 20150249. (doi:10.1098/rsif.2015.0249)
- 63. Valverde S, Solé RV. 2015 A cultural diffusion model for the rise and fall of programming languages. Hum. Biol. 87, 224–234. (doi:10.13110/humanbiology.87.3.0224)
- 64. Arthur WB. 1989 Competing technologies, increasing returns, and lock-in by historical events. Econ. J. 99, 116–131. (doi:10.2307/2234208)
- 65. Martínez-Mekler G, Alvarez Martínez R, Beltrán del Río M, Mansilla R, Miramontes P, Cocho G. 2009 Universality of rank-ordering distributions in the arts and sciences. PLoS ONE 4, e4791. (doi:10.1371/journal.pone.0004791)
- 66. Solé RV, Corominas-Murtra B, Fortuny J. 2010 Diversity, competition, extinction: the ecophysics of language change. J. R. Soc. Interface 7, 1647–1664. (doi:10.1098/rsif.2010.0110)
- 67. Henrich J. 2004 Demography and cultural evolution: how adaptive cultural processes can produce maladaptive losses—the Tasmanian case. Am. Antiquity 69, 197–214. (doi:10.2307/4128416)
- 68. Derex M, Beugin M-P, Godelle B, Raymond M. 2013 Experimental evidence for the influence of group size on cultural complexity. Nature 503, 389–391. (doi:10.1038/nature12774)
- 69. Bentley A, O'Brien MJO. 2011 The selectivity of social learning and the tempo of cultural evolution. J. Evolut. Psychol. 9, 125–141. (doi:10.1556/JEP.9.2011.18.1)
- 70. Collard M, Buchanan B, O'Brien MJ. 2013 Population size as an explanation for patterns in the Paleolithic archaeological record: more caution is needed. Curr. Anthropol. 54(Suppl. 8), S388. (doi:10.1086/673881)
- 71. Henrich J. 2006 Understanding cultural evolutionary models: a reply to Read's critique. Am. Antiquity 71, 771–782. (doi:10.2307/40035890)
- 72.
- 73. Challet D, Le Du Y. 2005 Microscopic model of software bug dynamics: closed source versus open source. Int. J. Rel. Qual. Saf. Eng. 12, 521. (doi:10.1142/S0218539305001999)
- 74. Lee R. 1986 Malthus and Boserup: a dynamic synthesis. In The state of population theory (eds Coleman D, Schofield R), pp. 96–130. New York, NY: Blackwell.
- 75. Ghirlanda S, Enquist M. 2007 Evolution of social learning does not explain the origin of human cumulative culture. J. Theor. Biol. 246, 129–135. (doi:10.1016/j.jtbi.2006.12.022)
- 76. Aoki K. 2015 Modeling abrupt cultural regime shifts during the Paleolithic and Stone Age. Theor. Popul. Biol. 100, 6–12. (doi:10.1016/j.tpb.2014.11.006)
- 77. Price GR. 1970 Selection and covariance. Nature 227, 520–521. (doi:10.1038/227520a0)
- 78. Mesoudi A. 2011 Variable cultural acquisition costs constrain cumulative cultural evolution. PLoS ONE 6, e18239. (doi:10.1371/journal.pone.0018239)
- 79. Bettencourt L, Lobo J, Helbing D, Kuhnert C, West GB. 2007 Growth, innovation, scaling, and the pace of life in cities. Proc. Natl Acad. Sci. USA 104, 7301–7306. (doi:10.1073/pnas.0610172104)
- 80.
- 81. Greenberger M. 1962 Management and the computer of the future. Cambridge, MA: MIT Press.
- 82. Schiffer MB. 1991 The portable radio in American life. Tucson, AZ: University of Arizona Press.
- 83. Allen RC. 1983 Collective invention. J. Econ. Behav. Organ. 4, 1–24. (doi:10.1016/0167-2681(83)90023-9)
- 84. Levine H, Rheingold H. 1987 The cognitive connection. Englewood, NJ: Prentice Hall Press.
- 85. Waldrop MM. 2016 The chips are down for Moore's law. Nature 530, 144–147. (doi:10.1038/530144a)
- 86. Naur P, Randell B (eds). 1969 Software engineering: report on a conference sponsored by the NATO Science Committee, 7–11 October 1968, Garmisch, Germany. Brussels, Belgium: Scientific Affairs Division, NATO.
- 87. Laitinen M, Fayad ME, Ward RP. 2000 The problem with scalability. Commun. ACM 43, 105–107. (doi:10.1145/348941.349012)
- 88. Forrest S. 1991 Emergent computation: self-organizing, collective, and cooperative phenomena in natural and artificial computing networks. In Emergent computation (ed. Forrest S), pp. 1–11. Cambridge, MA: MIT Press.
- 89. Brooks FP. 1987 No silver bullet: essence and accidents of software engineering. Computer 20, 10–19. (doi:10.1109/MC.1987.1663532)


