The term API owes much of its popularity to the rapid expansion of Internet- and cloud-based information services for online consumers. The abbreviation stands for Application Programming Interface, and its original purpose was to expose a functional programmatic surface to another program or IT system. Historically, the first APIs date back to the earliest operating systems (OS), such as Unix, when those were written in the C programming language as the main application development instrument. C programmers needed a way to instruct the services that were part of the OS kernel (its core processes) to manipulate screen output, keyboard input, file I/O and network communication. The basic building block of such an API was the signature of a C function, and groups of these functions were used to program each functional aspect of the operating system. This is also known as a direct-call API (in-memory, or inter-process, communication). API usage of this kind was mostly limited to applications residing inside a single computer.

Data exchange contract

As information technology evolved, distributed software solutions began to run on different computers called servers or hosts. Those hosts were initially connected to closed local networks, but the rapid expansion of the Internet established the world network as the main digital transport infrastructure for online information services. In order to “talk” to each other, applications need a way to describe the information exchange that takes place when specific functionality or data is requested. The description of this message exchange is what is defined as an API. The term here mostly refers to “indirect” communication between software systems, because the messages are transported over the Internet via TCP/IP, usually through HTTP at the application layer of the protocol stack. In that regard, an API in the cloud and Internet context is mostly transactional message processing defined by the provider software and used by consumer software in order to accomplish digital integration. In practice, the API message exchange is mnemonic (text) code embedded in the HTTP payload section, typically protected in transit by TLS (HTTPS), which combines public-key algorithms such as RSA with symmetric ciphers such as AES and integrity checks such as SHA-256. The most popular payload format is JSON (JavaScript Object Notation), but XML-derived API protocols such as SOAP and XML-RPC are also widely used, especially in banking and institutional cloud services.

An API is the functional programming “contract” defined by the software team developing the provider (server) cloud application so that other software developers can implement calls to it. This allows client (consumer) software, usually a website or a mobile application, to enhance its technology capabilities. Most web-oriented development frameworks and coding languages make it easy to consume an API, to such an extent that, when a provider of a cloud service describes their API, the documentation usually includes examples for the most popular development platforms (e.g. curl, Ruby, Python, PHP, Java, Node, .NET).
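As a sketch of what such a documentation example might look like, the following Python snippet composes (without sending) an HTTP request carrying a JSON payload. The endpoint URL, API key and field names are invented placeholders, not any real provider's contract:

```python
import json
import urllib.request

# Hypothetical endpoint and credentials -- real values would come from
# the provider's API documentation.
API_URL = "https://api.example.com/v1/balance"
API_KEY = "sk_test_placeholder"

def build_request(card_token: str) -> urllib.request.Request:
    """Compose (but do not send) an HTTP request with a JSON body."""
    payload = json.dumps({"object": "credit-card", "card": card_token})
    return urllib.request.Request(
        API_URL,
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_KEY,
        },
        method="POST",
    )

req = build_request("tok_visa")
```

Passing `req` to `urllib.request.urlopen` would then perform the actual network call against a live service.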

Protocol and schema

Unlike natural language exchange, an API needs to be defined in a rigorous manner in order to strictly specify how the consumer application “asks” for certain functionality, what response(s) to expect and what impact the API call will have on the provider’s software. Most cloud-based APIs are conversation- and transaction-based JSON exchanges in which the “asking” entity fills in a text message similar to:

{ "object": "credit-card", "available": [ { "amount": 52425466481, "currency": "usd" } ] }

After a short time interval, the caller receives a response in the same notation. If no response arrives, both entities consider the transaction failed (a timeout).
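Any JSON library can turn such a text payload into native data structures and back. A minimal Python sketch, using the payload from above:

```python
import json

# The provider's reply arrives as plain text in the HTTP body.
raw = '{ "object": "credit-card", "available": [ { "amount": 52425466481, "currency": "usd" } ] }'

# Parsing turns the text into data structures the consumer can act on.
response = json.loads(raw)
amount = response["available"][0]["amount"]
currency = response["available"][0]["currency"]

# Serialising goes the other way when composing a request.
request_text = json.dumps({"object": "balance-query", "currency": currency})
```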

A typical API definition would look like a question-answer conversation exchange:

Caller: "Sending user ID"

Server: "OK. Accepted"

Caller: "Give me the balance of credit card xxxx-xxxx-xxxx-xxxx"

Server: "Balance is xxx"
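The conversation above can be simulated in-process. The message formats, the pretend database and the `server` function below are invented purely for illustration; a real exchange would happen over HTTP between two separate systems:

```python
# Pretend provider database: card number -> balance.
BALANCES = {"4242-4242-4242-4242": 152.75}

def server(message: dict) -> dict:
    """Answer one caller message, mimicking the provider side."""
    if message.get("type") == "auth":
        return {"status": "OK. Accepted"}
    if message.get("type") == "balance":
        card = message.get("card")
        if card in BALANCES:
            return {"status": "OK", "balance": BALANCES[card]}
        return {"status": "error", "reason": "unknown card"}
    return {"status": "error", "reason": "unknown message type"}

# Caller side: the same two-step conversation as in the text.
ack = server({"type": "auth", "user_id": "user-1"})
reply = server({"type": "balance", "card": "4242-4242-4242-4242"})
```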

The system of rules and restrictions governing what data is exchanged and how the text (or mnemonics) of that data is to be interpreted is called a schema. A schema typically follows a standard that is well adopted and documented in the Internet community, usually in the form of an RFC document published through the Internet Engineering Task Force (IETF) standards body.

When JSON notation is used for message exchange, the API provider should publish a JSON Schema document so that the consumer application can automatically validate all data traffic during the API transactional conversation. This approach significantly shortens the time needed to integrate a consumer application with the provider’s service. The schema is the way to digitally express the rules and the validity of each API message, so that the information coming from the provider is properly “understood” and acted upon. This is a critical human-technology borderline that defines the distinction between the “meaning” and the “instructions” when a digital service runs in the cloud.

Transactions and security

One critical aspect when sensitive data is exchanged over the network is information security. The TCP/IP network protocol is a well-defined byte-stream machine-to-machine messaging standard, and this opens the possibility for unwanted “listeners” in the distributed cloud network. Such entities (applications or humans) can inspect the information exchanged between two applications and use it for improper purposes. This is why API messages should be encrypted. The most widespread encryption model is the public-private key paradigm, in which the provider application holds the private key and the consumer holds the matching public key. Both applications use their part of the key pair to encode and decode the messages, so the actual data is not plainly visible while it is transmitted across the network.
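The public/private split can be illustrated with textbook RSA using deliberately tiny numbers. This is insecure by design and for illustration only; real APIs rely on TLS and vetted cryptographic libraries, never hand-rolled arithmetic like this:

```python
# Textbook RSA with tiny primes -- INSECURE, illustration only.
p, q = 61, 53        # two small primes (kept secret)
n = p * q            # modulus, part of both keys: 3233
e = 17               # public exponent  -> public key  (n, e)
d = 2753             # private exponent -> private key (n, d),
                     # chosen so that (e * d) % lcm(p - 1, q - 1) == 1

def encrypt(m: int) -> int:
    """Anyone holding the public key can encrypt."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the private-key holder can decrypt."""
    return pow(c, d, n)

ciphertext = encrypt(65)
plaintext = decrypt(ciphertext)
```

An eavesdropper who captures `ciphertext` cannot recover the message without `d`, which never leaves the provider.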

When an API message is sent encrypted over the network, the typical HTTP JSON API payload looks something like this:

UwDPglyJu9LOnkBAf4vxSpQgQZltcz7LWwEquhdm5kSQIkQlZtfxtSTsmaw
q6gVH8SimlC3W6TDOhhL2FdgvdIC7sDv7G1Z7pCNzFLp0lgB9ACm8r5RZOBi
N5ske9cBVjlVfgmQ9VpFzSwzLLODhCU7/2THg2iDrW3NGQZfz3SSWviwCe7G
mNIvp5jEkGPCGcla4Fgdp/xuyewPk6NDlBewftLtHJVf
=PAb3

Once the message arrives on the other side, the data is decrypted back into a usable format.

There is a refined API protocol definition widely known as the RESTful ideology (Representational State Transfer). It is a set of principles, key protocol model features and ways of composing messages that exposes the functionality of a server (provider) cloud application as a set of “resources” that can be uniquely identified, added, modified and removed from a virtual database of objects. These objects (entities) represent the business domain elements of the provider’s service, such as lists of icons/images, articles, addresses, etc. The simplicity of the REST model stems from its stateless transactional (get->receive, update->confirm) exchange. That feature, along with the underlying idea of domain “objects”, makes it very natural for software developers to express the business model in the world of object-oriented programming languages and tools. In practice REST is tied to the HTTP protocol as both transport and transactional layer: HTTP’s “GET”, “POST”, “PUT” and “DELETE” methods correspond to the basic create, read, update and delete actions of a hierarchical object database.
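The verb-to-operation mapping can be sketched as a tiny in-memory dispatcher. The resource name, status codes and routing below are a simplified illustration of the REST model, not any particular framework's API:

```python
# Minimal in-memory sketch of the REST resource model: HTTP verbs map
# onto create/read/update/delete against a "virtual database" of objects.
articles = {}       # the provider's virtual database
next_id = 1

def handle(method: str, resource_id=None, body=None):
    """Dispatch one request the way a RESTful server would."""
    global next_id
    if method == "POST":                    # create a new resource
        articles[next_id] = body
        next_id += 1
        return 201, next_id - 1
    if method == "GET":                     # read an existing resource
        if resource_id in articles:
            return 200, articles[resource_id]
        return 404, None
    if method == "PUT":                     # replace/update a resource
        articles[resource_id] = body
        return 200, body
    if method == "DELETE":                  # remove a resource
        return 204, articles.pop(resource_id, None)
    return 405, None                        # method not allowed

status, article_id = handle("POST", body={"title": "Hello"})
```

Because every request carries all the information needed to process it, the server keeps no conversation state between calls, which is what makes the model stateless.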

eCollect’s API builds on the underlying RESTful principles while adding its own enhancements to the ideology in order to offer clients a more robust and reliable way of exchanging sensitive financial information. Drawing on long experience with distributed architectures and a microservices application approach, eCollect’s IT teams are constantly refining and re-inventing the API paradigm and its key principles to make integration with other systems as efficient as possible.