The technical components of interoperability as a tool for competition regulation

Thanks again to Jonathan Todd for putting together this three-page summary of my “technical components of interoperability” preprint! The final report will be published next week by Open Forum Europe.

Interoperability is a technical mechanism that lets computing systems work together – even systems from competing firms. To encourage competition in digital markets, the European Commission has proposed an interoperability requirement for large online “gatekeeper” platforms as an ex ante (up-front rule) mechanism in its draft EU Digital Markets Act (DMA). The goal is to increase choice and quality for users, and to enable competitors to succeed by offering better services.

The interoperability requirement would apply to large online platforms such as social media (e.g. Facebook), search engines (e.g. Google), e-commerce marketplaces (e.g. Amazon), smartphone operating systems (e.g. Android/iOS), and their ancillary services, such as payment and app stores.

Studies by the UK Competition and Markets Authority (CMA) and the French Conseil national du numérique (CNNum) confirm that interoperability regulation would need to specify which functions and services would be covered, what would be accessible, to whom, and under which conditions.

On a technical level, interoperability depends on Application Programming Interfaces (APIs) and communication protocols. An API is a computing interface that allows access to a technical system and defines the conditions under which the system can be used. APIs typically intermediate a series of data flows between computing systems in a standardised manner. A communication protocol similarly defines a set of messages that can be sent between two or more systems to share information and invoke features – via the Internet or other networks (or even within the same computing system).
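
The idea of a protocol as “a set of messages to share information and invoke features” can be sketched in a few lines. This is a hypothetical, deliberately minimal example – real platform APIs (REST, GraphQL, ActivityPub, etc.) are far richer – but the shape is the same: a standardised message format plus agreed semantics for each field. The `get_profile` action and its fields are invented for illustration.

```python
import json

def build_request(action: str, params: dict) -> str:
    """The calling system serialises a request in the agreed message format."""
    return json.dumps({"version": 1, "action": action, "params": params})

def handle_request(raw: str) -> str:
    """The receiving system decodes the message and invokes the named feature."""
    msg = json.loads(raw)
    if msg.get("version") != 1:
        return json.dumps({"ok": False, "error": "unsupported version"})
    if msg["action"] == "get_profile":
        # A real service would look up the user's stored profile here.
        user = msg["params"]["user"]
        return json.dumps({"ok": True, "profile": {"user": user, "name": "Alice"}})
    return json.dumps({"ok": False, "error": "unknown action"})

response = json.loads(handle_request(build_request("get_profile", {"user": "alice"})))
```

Any system that can produce and consume these messages – regardless of who built it – can interoperate, which is the essential point of the regulatory proposal.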

Technical interoperability requires both syntactic interoperability (systems share a “language” in which to request information and action from each other) and semantic interoperability (both parties understand the meaning of the information exchanged).
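
The distinction can be made concrete with a hypothetical sketch: two systems are syntactically interoperable (both exchange structured records), yet use different field names and date formats, so a small semantic mapping layer is needed to translate system A’s vocabulary into the one system B expects. The field names and formats are invented for illustration.

```python
from datetime import date

# System A's record: "dob" in DD/MM/YYYY, "handle" for the username.
record_from_a = {"dob": "31/01/1990", "handle": "alice"}

def to_b_vocabulary(record: dict) -> dict:
    """Translate system A's record into the fields system B understands."""
    day, month, year = (int(x) for x in record["dob"].split("/"))
    return {
        "birth_date": date(year, month, day).isoformat(),  # ISO 8601 date
        "username": record["handle"],
    }

record_for_b = to_b_vocabulary(record_from_a)
```

Without such a shared understanding of meaning, two systems can exchange perfectly well-formed messages and still mislead each other – a 31/01 birthday read as month 31 would simply fail.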

Interoperability standards and standards definition organisations

APIs and protocols use standards, either: 

  • de facto, set by their main developer with varying degrees of transparency and stability (or else competitors discover the details of interfaces for themselves through reverse engineering), sometimes with informal cooperation from later users of those APIs/protocols, or
  • formal, set by standards bodies such as the European Telecommunications Standards Institute (ETSI, e.g. the GSM and 3G mobile standards), the Internet Engineering Task Force (IETF, e.g. email), and the World Wide Web Consortium (W3C, e.g. HTML and other Web protocols, and social web standards ActivityPub and ActivityStreams, used by the interoperable Twitter-like service Mastodon and other connected services, such as the French PeerTube video sharing site).
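
To give a flavour of what a formal standard looks like on the wire: the W3C ActivityStreams 2.0 vocabulary, used by ActivityPub services such as Mastodon and PeerTube, represents user actions as JSON objects. Below is a minimal “Create a Note” activity of the kind federated between servers; the actor URL and content are made up for illustration.

```python
import json

# A minimal ActivityStreams 2.0 activity: an actor "Create"s a "Note".
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        # The special "Public" collection marks the note as publicly visible.
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}
payload = json.dumps(activity)
```

Because the vocabulary is an open W3C standard, any compliant server – whichever firm or community runs it – can parse this payload and display the note, which is exactly the property regulators want from formal standards.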

The European Commission already recognises that common, open technical standards ensuring interoperability between digital technologies are the foundation of an effective Digital Single Market: they guarantee that technologies work smoothly and reliably together, provide economies of scale, foster research and innovation, and keep markets open to competition. Extensive and long-standing EU law provides an existing framework for standard setting. However, interviewees expressed concern that the primary body responsible, ETSI, is slow and vulnerable to capture by commercial interests.

At the global level, some interviewees were concerned that the IETF is US-dominated and reluctant to set standards for policy reasons (e.g. promoting competition), while the W3C is dominated by big platforms.

The European Commission’s Multi Stakeholder Platform on ICT Standardisation, an expert advisory group where Member State and Commission representatives meet technical standards bodies four times each year, would be an ideal venue to plan standardisation support for interoperability requirements in digital markets. Moreover, to ensure that interoperable infrastructure exists to use and apply new standards and protocols, the EU should provide financial support for the software and services that underpin it.

Risks for standard setting include hindering innovation by ossifying functionality; large company incentives to minimise the functionality standardised; and the risk of capture by vested interests. To counter this, it would be necessary for regulators to define a set of core interoperable features as well as a standard interface or protocol. Such features need to be defined by a standard “ontology” (technical dictionary) that specifies properties and how they relate to defined concepts and categories. The European Commission has extensive experience with these processes in the public administration and law enforcement sectors.
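
A tiny, hypothetical fragment of such an ontology can be expressed as a data structure: each term gets a definition, and relations declare which kinds of thing they connect, so that a term like “follows” means the same to every implementer. The terms below are invented for illustration; real efforts would typically use RDF/OWL vocabularies rather than ad hoc structures.

```python
# A toy "technical dictionary": concepts and the relations between them.
ONTOLOGY = {
    "Account": {"definition": "A user's identity on a service",
                "subclass_of": None},
    "Post":    {"definition": "Content published by an Account",
                "subclass_of": "Content"},
    "follows": {"definition": "One Account subscribes to another Account's posts",
                "domain": "Account", "range": "Account"},
}

def relation_endpoints(term: str):
    """For a relation, return the (domain, range) concept pair it connects."""
    entry = ONTOLOGY.get(term, {})
    if "domain" in entry and "range" in entry:
        return entry["domain"], entry["range"]
    return None  # not a relation (a concept, or an unknown term)
```

The value of agreeing such a dictionary up front is that validators and implementers can check claims mechanically, instead of arguing over what each field was meant to mean.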

For interoperability to work well in practice, it would be important to put in place validation mechanisms – such as the UK’s Open Banking Implementation Entity – to verify that different companies’ systems meet basic standards. Without such verification, incompatible software and/or systems can frustrate interoperability in practice. Interoperability standards should therefore be accompanied by validation tools, and parties should provide details of how they comply with the standard.
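
A validation tool of this kind can be very simple at its core: check that a party’s messages carry the fields the standard requires, with the right types. The field names below are invented for illustration; a real conformance suite would cover the full standard.

```python
# Fields a hypothetical interoperability standard requires, with their types.
REQUIRED_FIELDS = {"id": str, "created_at": str, "content": str}

def validate(message: dict) -> list:
    """Return a list of conformance errors; an empty list means the message passes."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in message:
            errors.append(f"missing required field: {field}")
        elif not isinstance(message[field], expected_type):
            errors.append(f"field {field!r} should be {expected_type.__name__}")
    return errors

# A conforming message passes; a non-conforming one gets actionable errors.
ok = validate({"id": "1", "created_at": "2021-01-01T00:00:00Z", "content": "hi"})
bad = validate({"id": "1", "created_at": "2021-01-01T00:00:00Z"})
```

Automating such checks, and publishing the results, is what lets a regulator or validation body catch “accidentally” incompatible implementations before they frustrate interoperability.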

A sliding scale of interoperability obligations

There is a sliding scale of interoperability obligations that could be imposed by competition regulators on gatekeeper platforms. The greater the obligations:

  • the more freedom users get in terms of services and software they can use to interact with platforms and other users, but also
  • the greater the requirements for regulatory action/market intervention and technical complexity.

The levels of interoperability obligations can be summarised as follows, starting from the status quo at level 0:

  1. Platform-permissioned vertical interoperability: users can connect their own accounts on third-party complementary services to a platform, with its express permission. Users can use major platforms’ IDs to log in to other services. Competition regulators might still impose transparency and stability requirements, and limit self-preferencing.
  2. Open vertical interoperability: users can connect their own accounts and open IDs on third-party complementary services or apps to a platform, without the platform’s permission. This would enable real-time data portability.
  3. Public horizontal interaction (no external user authorisation needed), for publication and messaging with competing services.
  4. Private horizontal interaction (external user authorisation needed at this and higher levels):
    • Sharing – Platform users can share resources (such as a feed) with a limited number of readers (who should not need an account on that platform).
    • Messaging – an account owner can authorise any other user to send them (or groups they administer) messages or other types of content. 
    • Social graph: a platform user can authorise a third-party service to access enough details of their contact list to identify contacts present on both.
  5. Seamless horizontal interoperability: users have the ability to use directly competing services to a platform’s own for: 
    • Componentisation – to replace components on a platform.
    • Seamless interaction with its users.
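
The social-graph case at level 4 can be sketched to show how contact matching need not mean handing over a full contact list. In this hypothetical approach, each side exposes only hashed identifiers, so the third-party service learns which contacts are present on both sides and nothing about the rest. (Plain hashing of low-entropy identifiers such as emails is brute-forceable; real deployments would use salted or keyed constructions, or private set intersection.)

```python
import hashlib

def hashed(contacts: set) -> set:
    """Map each contact identifier to its SHA-256 hex digest."""
    return {hashlib.sha256(c.encode()).hexdigest() for c in contacts}

# Invented example data: the platform's view of a user's contacts,
# and the third-party service's own user list.
platform_contacts = {"alice@example.com", "bob@example.com", "carol@example.com"}
service_users = {"bob@example.com", "dave@example.com"}

# Only hashes in the intersection correspond to contacts on both services.
overlap = hashed(platform_contacts) & hashed(service_users)
```

Designs like this matter for the “enough details of their contact list” wording above: the obligation can be met while minimising the personal data actually disclosed.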

It may be appropriate to impose lower-level interoperability obligations on firms with substantial market shares, while dominant or “gatekeeper” firms would be subject to higher-level obligations.

An important aspect of ensuring interoperability in practice is ensuring compliance with security and privacy obligations with regard to personal data held by platforms. One option could be to limit interoperability to firms and organisations that have contractually agreed to honour security and privacy requirements, and have been independently certified to do so. However, such requirements must not be used as a pretext by large platforms to refuse to interoperate, or to limit interoperability to a few large players that could form a cartel.

Open authorisation and identities

Open authorisation and identity protocols allow a user to securely authorise a third-party service to access resources belonging to their account on another platform, rather than having to share passwords or other security-critical information. This is currently possible using the IETF’s OAuth 2.0 protocol, which is widely used by sites, including Twitter, to enable access by complementary services. It can also be used by a substitute service where a user already has an account with the original platform and wishes to multi-home.
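
The first step of OAuth 2.0’s authorisation-code flow (RFC 6749) illustrates the password-free design: the third-party service sends the user to the platform’s authorisation endpoint with a description of the limited access it wants. The endpoint URL, client ID and scope below are placeholders, not any real platform’s values.

```python
import secrets
from urllib.parse import urlencode

# Hypothetical platform authorisation endpoint.
AUTHORIZE_ENDPOINT = "https://platform.example/oauth/authorize"

def authorization_url(client_id: str, redirect_uri: str, scope: str):
    """Build the URL the user visits to grant (or refuse) limited access."""
    state = secrets.token_urlsafe(16)   # anti-CSRF value; verify on callback
    params = {
        "response_type": "code",        # ask for an authorisation code
        "client_id": client_id,         # identifies the third-party service
        "redirect_uri": redirect_uri,   # where the platform sends the user back
        "scope": scope,                 # the limited access being requested
        "state": state,
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}", state

url, state = authorization_url("demo-client", "https://app.example/cb", "read:feed")
# If the user approves, the platform redirects back with a short-lived code,
# which the service exchanges for an access token at the token endpoint.
```

The user’s password never reaches the third party, and the granted access is scoped and revocable – the properties that make OAuth suitable as a building block for interoperability mandates.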

Open identities are also important for users who wish to take advantage of single sign-on solutions for third-party services, with extra security features (such as multi-factor authentication), but without sharing so much information with the platforms whose ID systems are currently widely supported (principally Facebook and Google).

The EU’s electronic identification and trust services (eIDAS) framework currently focuses on ensuring public administrations in the EU’s 27 Member States will accept state-backed digital proof of identity on the same basis as their own. The framework also creates a European internal market for Trust Services (electronic signatures, electronic seals, time stamps, electronic delivery services and website authentication) by ensuring that they will work across borders and have the same legal status as their paper-based equivalents. It could provide another mechanism for EU countries to increase the interoperability of identity technologies, but is currently complex and bureaucratic.