Cognitive computing (CC) describes technology platforms that, broadly speaking, are based on the scientific disciplines of artificial intelligence (AI) and signal processing. These platforms encompass machine learning, reasoning, natural language processing, speech and vision, human-computer interaction, dialog and narrative generation, and more.
At present, there is no widely agreed upon definition for cognitive computing in either academia or industry.
In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, CC is a new type of computing whose goal is more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. CC applications link data analysis with adaptive user interfaces (AUI) to adjust content for a particular type of audience. As such, CC hardware and applications strive to be more affective and more influential by design.
IBM describes the components used to develop, and behaviors resulting from, “systems that learn at scale, reason with purpose and interact with humans naturally.” According to IBM, while cognitive computing shares many attributes with the field of artificial intelligence, it differentiates itself through the complex interplay of disparate components, each of which is a mature discipline in its own right.
Some features that Cognitive Systems may express are:
- Adaptive: They may learn as information changes, and as goals and requirements evolve. They may resolve ambiguity and tolerate unpredictability. They may be engineered to feed on dynamic data in real time, or near real time.
- Interactive: They may interact easily with users so that those users can define their needs comfortably. They may also interact with other processors, devices, and Cloud services, as well as with people.
- Iterative and Stateful: They may aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They may “remember” previous interactions in a process and return information that is suitable for the specific application at that point in time.
- Contextual: They may understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
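The iterative, stateful behavior described above can be illustrated with a minimal sketch: an agent that remembers information across turns and asks a clarifying question when a request is incomplete. All names here are hypothetical, and real cognitive platforms are of course far richer than this toy.

```python
# Toy sketch of an iterative, stateful agent. It "remembers"
# previous interactions (self.context) and asks questions when
# the problem statement is ambiguous or incomplete.
class StatefulAgent:
    REQUIRED_SLOTS = ("location", "date")  # illustrative slots for a weather query

    def __init__(self):
        self.context = {}  # state carried across interaction turns

    def handle(self, **slots):
        # Merge newly supplied information with remembered context.
        self.context.update(slots)
        missing = [s for s in self.REQUIRED_SLOTS if s not in self.context]
        if missing:
            # Iterative: ask for what is still missing rather than guess.
            return f"Could you tell me the {missing[0]}?"
        return (f"Forecast for {self.context['location']} "
                f"on {self.context['date']}: ...")

agent = StatefulAgent()
print(agent.handle(location="Boston"))   # agent asks a clarifying question
print(agent.handle(date="2016-04-08"))   # agent recalls the earlier "Boston"
```

The point of the sketch is only the interaction pattern: state persists between turns, and the system drives the dialog toward a well-defined problem instead of requiring a complete specification up front.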
Cognitive computing has been subject to a great deal of marketing hype over the years, and a non-proprietary definition remains elusive. As cognitive computing platforms have emerged and become commercially available, however, evidence of real-world applications is starting to surface. Organizations that adopt these platforms purpose-build applications to address specific use cases relevant to their internal and external users, with each application utilizing whatever combination of available functionality the use case requires.
Examples of such real-world use cases include the following:
- Speech recognition apps powered by HPE Haven OnDemand, IBM Watson
- Sentiment analysis apps powered by HPE Haven OnDemand, IBM Watson
- Face detection apps powered by HPE Haven OnDemand, IBM Watson, and Microsoft Cognitive Services
- Election insights apps powered by HPE Haven OnDemand, IBM Watson
- CogX Massively Parallel Open Source Cognitive Computing
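As one concrete illustration of the list above, the sentiment analysis these platforms expose as a service can be approximated with a simple lexicon-based scorer. This is a toy sketch with illustrative word lists, not any vendor's actual API; production systems use trained statistical models.

```python
# Toy lexicon-based sentiment scorer. The word sets are
# illustrative placeholders; real services learn these signals
# from large labeled corpora rather than fixed lists.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "poor", "hate", "sad"}

def sentiment(text):
    words = text.lower().split()
    # Net score: positive hits minus negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The service was great and the staff were excellent"))  # → positive
```

Hosted services differ mainly in scale and accuracy: instead of counting keywords, they apply models that account for negation, context, and domain, and return confidence scores alongside the label.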
These and many more examples are available on the blogs of the respective cognitive computing platform providers, helping to translate the possibilities into real-world applications today. This matters because skepticism persists: in a Fortune article of April 8, 2016, HPE's Meg Whitman cast doubt on IBM Watson's present-day capabilities, and IBM's Virginia Rometty responded, “We are building an era, a platform, an industry, and making a market with it. We have competitors who don’t disclose for a decade. I’m going to protect it and nurture it—we will disclose eventually.”
Why Cognitive Systems?
When Watson defeated Brad Rutter and Ken Jennings in the Jeopardy! Challenge of February 2011, it was clear that a new kind of computing system was emerging – one that could learn, reason, and understand natural language.
The systems of today have delivered tremendous business and societal benefits by automating tabulation and harnessing computational processing and programming to deliver enterprise and personal productivity. The machines of tomorrow, cognitive systems, will forever change the way people interact with computing systems, helping people extend their expertise across any domain of knowledge and make complex decisions involving extraordinary volumes of fast-moving big data.
In healthcare, IBM Watson for Oncology, trained by Memorial Sloan Kettering (MSK), helps oncologists treat cancer patients with individualized, evidence-based treatment options by analyzing patient data against thousands of historical cases, informed by more than 5,000 hours of MSK physician and analyst training. Watson can help doctors narrow down the options and pick the best treatments for their patients; the doctor still does most of the thinking, while Watson makes sense of the data and helps make the process faster and more accurate. For city leaders, these new systems can help predict electrical outages ahead of major storms, plan evacuations, and position emergency-management equipment and personnel to respond in the areas that will need them most.
That is the promise of cognitive systems: a category of technologies that uses natural language processing (NLP) and machine learning (ML) to enable people and machines to interact more naturally, extending and magnifying human expertise and cognition. These systems will learn and interact to provide expert assistance to scientists, engineers, lawyers, and other professionals in a fraction of the time it now takes.
Far from replacing our thinking, cognitive systems will extend our cognition and free us to think more creatively. In so doing, they will speed innovations and ultimately help build a Smarter Planet.
A Symbiotic Cognitive Experience - Human-Computer Collaboration at the Speed of Thought
Every era of computing delivers a new experience. In this era of cognitive computing, we envision a partnership between humans and learning systems that augment our individual and group cognitive capabilities, particularly those associated with insight and discovery. How would this work? As people inhabit and move across many physical environments, we see a fluid, coherent computing experience through space and time, connected by an ecosystem of cognitive environments inhabited by a society of specialized software agents called cogs. Cogs work in a mutually beneficial partnership with humans to enable better complex data-driven decision-making. We call these partnerships Symbiotic Cognitive Systems.
Cognition does not occur solely (or even mostly) within an individual human mind, but rather is distributed across people, artifacts and environments. The notion of building a society of cogs as the core of a cognitive environment is based upon this belief. Cogs are designed to follow and interact with humans and other cogs across a variety of everyday environments. They engage individually or collectively with humans through a combination of traditional interfaces and adaptive multi-modal interfaces based upon spoken dialog, gesture, and advanced visualization and navigation techniques. They learn and leverage sophisticated models of human characteristics, preferences and biases so they can communicate naturally.
The Cognitive Environment
A cognitive environment is an infrastructure inhabited by the society of cogs and the devices that let them behave as one shared integrated resource, enabling “human-computer collaboration at the speed of thought.” Cognitive Environments can look and feel very different (from decision rooms in the workplace, to cars, to homes, to mobile), but by being connected to one another they will feel seamless.
Symbiotic Cognitive Experience in Action
Cognitive environments enhance the ability of business managers, emergency planners, and executives to make more effective strategic decisions. The relationship goes beyond interface and interaction, to trusted, long-term collaboration between cognitive computers and human beings.
In the field of oil and gas, a deeply interactive and significantly more collaborative cognitive environment enables geologists, geophysicists, petrochemical engineers, economists, planners, and developers to come together in a single environment that leverages their individual and unique skills, tools and applications, to collectively influence the course, plan, and direction of strategic decisions for higher quality outcomes.
Mergers and acquisitions (M&A) are an essential part of strategy for profitable growth. Yet identifying and successfully integrating the right target company is complex. Firms using a cognitive environment can more naturally highlight value and synergy opportunities, visualize trade-offs, and explore what-if scenarios to ensure that the right decision is made.
Emergency planning requires quick and accurate decision-making, and can benefit from a set of cognitive agents that can quickly explore successes and failures from past data to recommend options and trade-offs for allocating scarce funds and deploying emergency crews in the most vulnerable locations.
The era of Cognitive Computing: Calling for a Shared Research Agenda
“Cognitive systems will require innovation breakthroughs at every layer of information technology, starting with nanotechnology and progressing through computing systems design, information management, programming and machine learning, and, finally, the interfaces between machines and humans. Advances on this scale will require remarkable efforts and collaboration, calling forth the best minds, and the combined resources, of academia, government and industry.”
Zachary Lemnios, Vice President, Strategy, IBM Research