By making machines work in tandem with humans to collect, process, and analyze data and to make decisions, enterprises have benefited from continuously rising productivity.
Over the years, advances in what machines can do have resulted in new tools and methods for analyzing data. These advances have also been accompanied by new waves of excitement and anxiety about automation.
Already at the dawn of the computer age, speedy calculations led to new approaches to data analysis such as simulations and Monte Carlo methods. At the same time, the excitement over these “thinking machines,” or “giant brains” as they were popularly called at the time, led 26-year-old John Diebold to write a book titled Automation, published in 1952. Diebold placed the term in the context of the new information technology and highlighted the potential efficiency gains from automating the work of factory and knowledge workers. Others worried about the loss of jobs and the emergence of a "push-button society" in which workers would experience a surfeit of leisure they were not equipped to handle.
For more than half a century, Automated Intelligence (AI) has continued to generate hype: unrealistic expectations of what machines could do and of their potentially positive and negative impact on society. The hype was reinforced by researchers who believed their goal was to create machines with human-level intelligence. The 1955 proposal for the first workshop on “artificial intelligence” defined it as “making a machine behave in ways that would be called intelligent if a human were so behaving.” One of the participants in the workshop, Nobel Prize and Turing Award winner Herbert Simon, predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do."
As we know, that did not happen. What did happen was a steady advance in finding innovative ways for machines to augment human work by collecting data and analyzing it to optimize processes, identify new business ventures, and make smarter decisions. This was enabled by the presence of computers in all business activities and all walks of life, which created a flood of data. One of the key applications helping us make use of the data collected by computers has evolved under the general label of “Business Intelligence” (BI). The term was coined by Hans Peter Luhn in a 1958 IBM Journal of Research and Development article, which defined BI as an "automatic method to provide current awareness services to scientists and engineers."
Today, Business Intelligence is the combination of tools, processes, and skills that helps us turn the data deluge into better and faster decisions. It is embedded at all levels of the organization, allowing anyone who needs to make a decision—operational, tactical, or strategic—to make it based on the best data available.
If you can swim in the flood of data, you win. According to MIT researchers, companies that excel in data-driven decision making are, on average, 5% more productive and 6% more profitable than their competitors. A study by IDC found that organizations using diverse data sources, diverse analytical tools, and diverse metrics were five times more likely to exceed expectations for their projects than those that did not.
As before, both the grand promises and the undue anxiety are focused on Automated Intelligence and its goal of replicating human intelligence in machines. But the real and realistic benefit lies in what has served us so well for about 70 years—Augmented Intelligence: enlisting computers to work with us and to complement our skills, experience, and intuition.