Editor's note: This article is by
Rob High, IBM Fellow and IBM Watson CTO.
It’s only been four years since
Watson played and won Jeopardy!
against the very best players with a machine roughly the size of a master
bedroom. Since then, we’ve completely re-engineered Watson’s appearance and
abilities, delivering it via the cloud and incorporating revolutionary features
that span language, speech, vision and data insights.
With more than 30 cognitive services available via the Watson cloud platform,
we're bridging the gap between technology and today’s leading industries,
accelerating how doctors, lawyers, marketers and other professionals analyze
high volumes of data and glean critical insights to improve decision-making and
to generate new ideas.
Watson’s ability to recognize patterns of text, speech and vision that convey
meaning, to understand complex questions through natural language processing,
and to continually learn, unlocks the potential of unstructured data across a
multitude of industries and disciplines. Watson’s speed and efficiency have
always hinged on the quality of its large-scale machine learning algorithms.
Today, at IBM and OpenPOWER’s Accelerating Innovation event in Austin, we
previewed the next iteration of our cognitive computing system architecture,
integrating the NVIDIA Tesla Accelerated Computing Platform into Watson’s core
technologies. The incorporation of NVIDIA’s flagship Tesla K80 GPU accelerators,
coupled with Watson’s POWER-based architecture, accelerates Watson’s
retrieve and rank capabilities to 1.7x their previous speed. This fuels our drive
to further improve the cost-performance of Watson’s cloud-based services.
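To make “retrieve and rank” concrete for readers unfamiliar with the pattern, here is a minimal, purely illustrative sketch of a two-stage pipeline: a fast retrieval pass narrows a corpus to candidates, and a finer-grained ranking pass orders them. This is not Watson’s implementation; the function names, scoring, and sample documents are invented for illustration only.

```python
# Illustrative sketch of a generic "retrieve and rank" pipeline.
# NOT Watson's implementation -- names and scoring are invented for illustration.

def retrieve(query, documents, k=3):
    """Stage 1: cheaply retrieve up to k candidates by raw term overlap."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def rank(query, candidates):
    """Stage 2: re-rank candidates with a finer score (overlap ratio here)."""
    q_terms = set(query.lower().split())
    def score(doc):
        d_terms = set(doc.lower().split())
        return len(q_terms & d_terms) / len(d_terms)
    return sorted(candidates, key=score, reverse=True)

documents = [
    "watson answers health insurance questions",
    "gpus accelerate machine learning workloads",
    "call center agents help customers by phone",
]
candidates = retrieve("health insurance question", documents)
best = rank("health insurance question", candidates)
```

In production systems the retrieval stage is typically an inverted index or search engine and the ranking stage a learned model; GPU acceleration chiefly benefits the compute-heavy ranking and training side.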
For example, if a call center agent is responding to an individual’s health and
insurance query, the agent will be able to leverage Watson’s natural language
processing technology to obtain an answer in real time, both faster and at lower cost.
In addition to bolstering response time, the GPU acceleration also increases
Watson’s processing power to 10x its prior performance. Watson often requires a
lot of compute power and time to digest, annotate and index large quantities of
data to prepare it to perform its cognitive tasks. NVIDIA’s Tesla GPUs will
reduce the time it takes the cognitive computing system to process this
information, enabling it to interact in natural language.
The combination of IBM POWER architecture and NVIDIA’s Tesla Platform will also
facilitate the expansion of Watson’s deep learning functionality. Hardware acceleration
is integral to Watson’s ability to reason deeply about vision and speech
recognition in a short period of time.
This collaborative effort is a result of the OpenPOWER Foundation, a global open development
organization formed by IBM, Google, NVIDIA, Mellanox and Tyan to facilitate
innovation on the POWER architecture and develop collaborative hardware and software.
[Image: Watson on OpenPOWER at Accelerator Day]
By accelerating Watson’s response time and training capabilities through open
innovation, OpenPOWER is also propelling our Watson Ecosystem, which has grown
to more than 77,000 developers globally and 350
startups and established businesses currently commercializing their products
and services embedded with sophisticated cognitive computing APIs.
Today’s demo in Austin was simply an appetizer, a preview of one of several
Watson capabilities that we’re collectively accelerating. We will continue to
optimize Watson services with OpenPOWER’s advanced architecture to give
developers and clients a competitive edge in today’s cognitive era.
One innovation at a time.
Labels: bluemix, ibm research-Austin, IBM Watson, openpower