Rise of the Cognitive APIs

APIs for Machine Learning and Artificial Intelligence

Almost paradoxically, software development tends to become easier as the tasks become more advanced and sophisticated. In the recent past, AI development was a test not only of programming chops but also of the developer's understanding of sophisticated algorithms. However, the explosion of interest in artificial intelligence in recent years has motivated the development of high-level application programming interfaces (APIs) that manage a myriad of details automatically, permitting the programmer to focus on the task, not the details of implementation.

Great power usually comes with great responsibility. In the case of AI, it can also come with great confusion, as new platforms proliferate at an ever-increasing rate. We'll take a survey of a few of the most prominent players in this arena.

AI at the Center; AI at the Edge

IBM Senior VP David Kenny likens Apple's Siri to a browser: a tool used at the edges of the web to interact with intelligent services at the "center," i.e., large server farms. (Of course, Kenny prefers the idea that the intelligent service at the center will be IBM's own Watson.)

IBM's Watson is not so much an API as a set of APIs, each focusing on some important aspect of machine learning, ranging from business analytics to the extraction of expert knowledge from unstructured text. Similarly, Amazon Web Services Machine Learning (AWS ML) provides a high-level API giving access to pre-trained models for common AI tasks such as image recognition, natural language understanding, and chatbots. AWS also provides support for models created with any of the commonly used AI frameworks.

While Microsoft's Azure ML and Google's Cloud ML provide some pre-trained AI models that developers can use immediately, their web services do not provide as rich a high-level API and are more aptly described as hosted environments for AI models created using one of the frameworks we will talk about shortly.

Not only software companies, but hardware manufacturers as well recognize that AI opportunities are not limited to cloud services and large data centers. The first steps in AI applications will execute at the "edge," the devices in our homes, offices, and automobiles. These, of course, will not be limited to smartphones and the like, but will include vast numbers of connected devices often referred to as the "Internet of Things."

APIs and Frameworks: A Bit about Terminology

Strictly speaking, any set of functions in a software library exposed for use by developers is an API, or application programming interface. Words are not always used precisely, however, and in the world of artificial intelligence "API" is often used not in its most general sense, but rather to refer to a high-level API that delivers substantial functionality with few function calls. Some people use the term as a shortening of "RESTful API," referring specifically to the popular web-based interface style.

In contrast, the term "framework" applies to a programming environment that provides all the support functionality likely to be required by software developers.
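In the RESTful sense, "calling an API" usually means sending an HTTP request with a JSON payload and an authentication token. The sketch below builds such a request with Python's standard library; the endpoint URL, payload schema, and `Bearer` token are all hypothetical stand-ins, since each real service (Watson, AWS, and so on) defines its own URLs, credentials, and request formats.

```python
import json
import urllib.request

# Hypothetical endpoint and payload -- real cognitive services each
# publish their own URLs and request schemas.
endpoint = "https://api.example.com/v1/sentiment"
payload = json.dumps({"text": "The new framework is a pleasure to use."}).encode("utf-8")

request = urllib.request.Request(
    endpoint,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
    },
    method="POST",
)

# urllib.request.urlopen(request) would send the call; the service would
# typically answer with a small JSON document describing the result.
```

The appeal of this style is that one POST replaces what would otherwise be a substantial machine-learning program on the client side.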

We cannot take the time for an examination of all the existing AI and machine learning frameworks; this list is limited to those supported by major organizations. I apologize if your personal favorite is not on this list.


Google's prominence as an industry leader has given its machine learning framework, TensorFlow, high visibility and a great deal of "buzz." The popularity of TensorFlow has resulted in the creation of many books and tutorials, which makes TensorFlow a good place for newcomers to the field to get started.


Although it is now officially called the Microsoft Cognitive Toolkit, most folks still seem to call it by its original name, CNTK. Like Google's TensorFlow, Microsoft's CNTK primarily targets developers using Python to assemble their machine learning systems.


MXNet is an Apache project with functionality similar to CNTK and TensorFlow. One interesting feature, however, is that MXNet provides built-in support for symbolic programming as well as the conventional imperative style supported by most tools. Symbolic programming has been around for many decades but has always filled a niche role. It will be interesting to see whether MXNet starts bringing symbolic programming into the mainstream.
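The distinction can be sketched in plain Python, without MXNet itself: imperative code computes values the moment each statement runs, while symbolic code first builds an expression graph and evaluates it later, possibly many times with different inputs. The tiny `Sym` class below is a hypothetical illustration of the idea, not MXNet's actual API.

```python
# Imperative style: each statement executes immediately.
a, b = 2.0, 3.0
imperative_result = a * b + 1.0  # computed right away

# Symbolic style: build an expression graph first, evaluate later.
class Sym:
    """A tiny symbolic expression node (illustrative only)."""
    def __init__(self, name=None, op=None, args=(), const=None):
        self.name, self.op, self.args, self.const = name, op, args, const

    @staticmethod
    def wrap(x):
        # Lift plain numbers into constant nodes.
        return x if isinstance(x, Sym) else Sym(const=x)

    def __mul__(self, other):
        return Sym(op="mul", args=(self, Sym.wrap(other)))

    def __add__(self, other):
        return Sym(op="add", args=(self, Sym.wrap(other)))

    def eval(self, env):
        if self.const is not None:   # constant node
            return self.const
        if self.name is not None:    # placeholder variable
            return env[self.name]
        left, right = (arg.eval(env) for arg in self.args)
        return left * right if self.op == "mul" else left + right

# Build the graph once; nothing is computed yet.
x, y = Sym(name="x"), Sym(name="y")
graph = x * y + 1.0

# Evaluate the same graph with different variable bindings.
symbolic_result = graph.eval({"x": 2.0, "y": 3.0})  # 7.0
```

Because the graph exists as data before it runs, a framework can optimize it, differentiate it, or ship it to different hardware — which is exactly what makes the symbolic style attractive for machine learning.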


Adding to the confusion about the plethora of frameworks are Caffe and Caffe2. Caffe, which stands for the ponderous "Convolutional Architecture for Fast Feature Embedding," started at UC Berkeley's Berkeley Vision and Learning Center. But Caffe2 is not the next version following Caffe 1.0. Facebook decided to craft a different version of Caffe but did not want to break existing Caffe models, so it forked Caffe and called the product Caffe2. Confused? Perhaps I shouldn't mention that as of March 2018 Caffe2 is part of the PyTorch distribution of ML and statistical tools.

Keras - The API for the Framework

Keras provides a high-level API for developers who want to use TensorFlow or CNTK without concerning themselves with the nitty-gritty details of programming those systems in Python or C++. One of the more important aspects of Keras, in my opinion, is its ability to provide an interface between R and the AI frameworks. After installing the Keras R package, R developers can develop for TensorFlow or CNTK in their favorite environment.
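As a rough sketch of what that high-level API looks like in Python (assuming a TensorFlow backend and the `tensorflow.keras` package; the layer sizes here are arbitrary, not a recommendation):

```python
from tensorflow import keras

# A small feed-forward classifier: Keras assembles the underlying
# TensorFlow graph, so we never touch low-level tensor ops directly.
model = keras.Sequential([
    keras.Input(shape=(16,)),                       # 16 input features
    keras.layers.Dense(32, activation="relu"),      # hidden layer
    keras.layers.Dense(3, activation="softmax"),    # 3 output classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit(features, labels, epochs=10) would train it on real data.
```

A handful of calls like these stand in for hundreds of lines of low-level graph construction, which is precisely the appeal of Keras.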

AI on Your Local Servers

In contrast with ML APIs like Watson, the frameworks mentioned can be installed and run on servers on your own local area network. This might well be the best choice for new projects where much testing and research will be necessary before the requirements for deployment are fully understood.

Hardware Abstraction

A primary benefit of the ML frameworks, one that is difficult to overstate, is that a model developed using, say, TensorFlow, is oblivious to the actual hardware that will run it. Since machine learning is always hungry for more computational resources, the ability to run models with hardware acceleration such as GPUs is indispensable. Graphics processing units (GPUs) were designed for avid video gamers, but where the gamer sees faster action, the scientist sees many parallel floating-point cores. Over the last decade, many computationally intensive tasks, including machine learning, have come to rely on the GPU. Programming a GPU directly requires new skills, but the ML frameworks render direct GPU programming unnecessary. The model developer need have no understanding of how the GPU is being used; the framework takes care of those details by itself.
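One way to picture this separation (a hypothetical sketch, not any framework's real internals) is a backend registry: the model code calls a single operation by name, and the registry decides which hardware-specific implementation actually runs. Everything below is stand-in Python; real frameworks dispatch to CUDA kernels or vectorized CPU code behind the same interface.

```python
# Hypothetical backend registry mapping device names to implementations.
BACKENDS = {}

def register(device):
    def decorator(fn):
        BACKENDS[device] = fn
        return fn
    return decorator

@register("cpu")
def dot_cpu(xs, ys):
    # Plain Python loop standing in for vectorized CPU code.
    return sum(x * y for x, y in zip(xs, ys))

@register("gpu")
def dot_gpu(xs, ys):
    # Stand-in for a GPU kernel launch; same contract, different hardware.
    return sum(x * y for x, y in zip(xs, ys))

def dot(xs, ys, device="cpu"):
    """Model code calls this; the device is a configuration detail."""
    return BACKENDS[device](xs, ys)

# The calling code is identical no matter which device executes it.
result = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], device="cpu")  # 32.0
```

Swapping `device="cpu"` for `device="gpu"` changes nothing in the model code, which is the whole point of hardware abstraction.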

Google has taken the idea of hardware acceleration to the next logical step by developing a dedicated chip, dubbed the "TPU" or tensor processing unit, to accelerate machine learning tasks. It seems, however, that this chip will only appear in Google's own Cloud ML hardware and will not be made generally available.

AI of Things

Devices at the edge, i.e., handheld devices and smart appliances, do not have the computational power to support AI model development, yet they can have an intriguing relationship with APIs and frameworks. For example, Intel has acquired Movidius, maker of the Myriad chips with their Neural Compute Engine. While it is not possible to create a model on such a chip, it is possible to create a model using TensorFlow or Caffe and then load the trained model onto the Myriad chip. In this way, visual recognition AI can be installed not only on smart devices like phones but also on vision-critical appliances such as self-guiding drones.


We have seen that even when one considers only a subset of machine-learning platforms, there are many options to choose from. While each platform has its own strengths and weaknesses, all implement the core algorithms currently required for "state-of-the-art" artificial intelligence. Of course, in this field, "state-of-the-art" is a rapidly moving target.

Author: Dan Buskirk
