If quantum surveillance, by leveraging quantum algorithms and unlimited access to metadata, could identify individuals in society who seek to do harm – and do so in a way that ensures the innocent are never abused and information is never compromised – is this something society should adopt?

Despite weekly claims that some nation, corporation, or institute has discovered the Holy Grail that is a functioning quantum computer, it’s never too long before these hopes come crashing back to earth as data is re-examined and physical limitations once again appreciated.

More promising, however, is the field of machine learning, where quantum algorithms can be leveraged within existing computational systems to enhance pattern recognition. Two years ago, a joint Canadian and U.S. research team, led by Nathan Killoran of Xanadu, developed a continuous-variable architecture that opens a path from existing linear neural networks to the nonlinear transformations necessary for a quantum neural network. A neural network is a sophisticated circuit: a collection of active nodes connected to one another. Traditionally, these networks are ‘feedforward,’ meaning the nodes communicate in one direction only: information comes into the system and travels ‘across’ the network (hence ‘linear’). The quantum advantage enables these nodes to communicate with an unlimited degree of freedom, interchanging signals between nodes and throughout the system in all directions at any time (hence ‘nonlinear’). Although a somewhat whimsical analogy, think of a linear neural network as an old-style pasta maker: dough goes in and pasta comes out, shaped according to the setting. Now imagine if the dough went in but kept spinning around inside the pasta maker at hyper speed while the settings continuously updated based on the evolving shape of the pasta inside. Behold the gastronomic glories of the ‘quantum pasta maker’!
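To make the ‘feedforward’ idea concrete, here is a minimal sketch of a classical two-layer network in Python using only NumPy. The layer sizes and weights are illustrative assumptions on my part; nothing below models Xanadu’s continuous-variable architecture. It simply shows signals moving in one direction, from input to output.

```python
import numpy as np

# A toy feedforward pass: information flows strictly input -> hidden -> output.
# Weights and sizes are illustrative only; this is not the quantum model.
rng = np.random.default_rng(0)

# Weights and biases for a 4 -> 3 -> 1 network
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def feedforward(x):
    hidden = W1 @ x + b1     # signals travel 'across' the network ...
    return W2 @ hidden + b2  # ... and never flow back toward the input

print(feedforward(np.array([0.5, -1.0, 2.0, 0.1])))
```

Once the input is consumed, nothing in this circuit revisits an earlier node; the quantum proposal, by contrast, is that far richer transformations become available within the network itself.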

One area where quantum algorithms and quantum neural networks have a distinct advantage is the field of ‘data-fitting’: taking prohibitively large amounts of quantitative data and generating highly accurate probabilistic inferences; in effect, fitting enormous amounts of disparate data to a curve. In other words, quantum algorithms would make it possible to leverage metadata to anticipate human behaviour, raising the question: do we stand on the threshold of predictive surveillance? Will we shortly enter the era of ‘quantum surveillance’?
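For readers who want a sense of what ‘fitting data to a curve’ means in its classical form, the sketch below runs an ordinary least-squares fit in Python with NumPy. The quadratic model and the synthetic data are assumptions made purely for illustration; the quantum claim is that comparable inferences could be drawn from data volumes far beyond what a toy like this handles.

```python
import numpy as np

# Toy curve fitting: infer a trend from noisy observations (synthetic data only).
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = 0.5 * x**2 - 2.0 * x + rng.normal(scale=3.0, size=x.size)  # stand-in 'metadata'

# Fit a quadratic by least squares, then extrapolate ('anticipate') a new point.
coeffs = np.polyfit(x, y, deg=2)
predict = np.poly1d(coeffs)

print("fitted coefficients:", coeffs)
print("predicted value at x = 12:", predict(12.0))
```

The fit summarises the past; the prediction extends it forward. Scaled up from points on a line to metadata about people, that extension is precisely what ‘predictive surveillance’ would mean.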

In the era of quantum surveillance, domestic terrorists can be identified from their digital impressions before they commit acts of violence or insurrection. Terrorist networks will be dismantled before they metastasise and become formidable. In the era of quantum surveillance, threats are defused before they become threatening.

This raises the second question: should we?

Typically, arguments against the wide-scale use of metadata for surveillance have hinged on the issue of error: algorithms are imperfect and generate unanticipated harm. Professor Tal Zarsky argues that questions of fairness and efficiency render the use of algorithms highly problematic. Numerous studies have shown that prejudice, racism, and other forms of bias are “baked into the data collection and sorting processes.” Next come warnings that the innocent will be swept up with the guilty, that unrelated private information will be exposed to abuse, or that metadata analysis reinforces the friend-enemy distinction, thus perpetuating what Michel Foucault called a “continuation of war by other means.” Finally, critics of quantum surveillance rightly ask, ‘quis custodiet ipsos custodes?’ (who will watch the watchers?). Who establishes the criteria for surveillance, who retains possession of the metadata, and how do we address the weight of corporate influence? Indeed, many of these issues were raised around 2015, when the Australian government passed its metadata retention scheme.

Critically, these arguments all presuppose one important thing: that quantum surveillance is already in place. They are arguments against quantum surveillance made from a consequentialist perspective. Each takes the form of a philosophical ‘thought experiment’ in which the objector assumes quantum surveillance is in operation and takes pains to point out all the awful things that will happen. The problem with arguing this way is that if all the ‘consequences’ can be addressed, presumably we have to accept ‘consequence free’ surveillance? If the arguments against a technology are made on the basis of ‘bad’ outcomes, then once the bad has been eliminated and nothing but ‘good’ outcomes are generated, we ought to adopt the technology, right? Right?

This form of philosophical dilemma is an entirely realistic prospect. It is highly probable that quantum algorithms will eliminate error – insofar as error has been described above – and thus neutralise consequentialist arguments.

Consider the situation another way. If quantum surveillance, by leveraging quantum algorithms and unlimited access to metadata, could identify individuals in society who sought to do harm – and it did so in a way that ensured the innocent were never abused and information was never compromised – is this something society should adopt? Some bright spark will even propose a form of institutional transparency and accountability that mollifies those worried about watchers. At present, 56% of Americans consider it unacceptable to use algorithmic decision-making for “criminal risk assessment for people up for parole.” Generally speaking, people seem uncomfortable handing this kind of power over to a ‘machine.’ Given the ‘no harm’ parameters, however, it is highly likely that number would fall precipitously: the idea of ‘consequence free’ quantum surveillance will appeal to a great many people.

This is why I prefer to consider quantum surveillance from a deontological position. What principles does quantum surveillance violate, by its very nature? Do we consider those principles essential? If the principles quantum surveillance violates are considered essential, quantum surveillance ought to be rejected, irrespective of whether or not it is ‘consequence free.’ Deontology thus allows us to reject quantum surveillance in principle, without engaging in consequentialist bargaining.

Here are some essential principles that quantum surveillance violates:

  1. Limited power:
    • The power to anticipate individual behaviour on the basis of thought and word is too great for any institution or authority; and,
    • The power to constrain individual behaviour on the basis of thought and word is too great for any institution or authority.
  2. Freedom:
    • The degree of social control and structuring implied by quantum surveillance is incompatible with human agency and autonomy.
  3. Progress:
    • The imposition of order and elimination of uncertainty implied by quantum surveillance runs contrary to the human tendency toward adaptation and dynamic change.

In light of these principles, the calculus used to consider quantum surveillance is markedly different. The degree to which one accepts quantum surveillance stands in inverse relation to the degree to which one values these principles: the less one values limited power, freedom, and progress, the more permissive one will be of quantum surveillance – and vice versa.

I have presented the question of quantum surveillance in this way – identifying a distinction between consequentialist and deontological considerations – because the deontological position is only viable before quantum surveillance is upon us. If prior histories of emerging technologies are any guide, once quantum surveillance is a reality, political, social, and economic exigencies and incentives will ensure the only perspective heard is that of the consequentialist. Advocates of quantum surveillance will appeal to society’s positive gains: safety, security, peace, and order. Objectors will raise the spectre of negative outcomes: algorithmic errors and institutional bias. Predictably, the institutions advancing quantum surveillance – the self-same institutions that covet any mechanism that immediately and necessarily reinforces their power – will offer various ad hoc fixes to placate the naysayers. Following this procedural bargaining – in fact, in no time at all – quantum surveillance will become de rigueur. In the final analysis – when it comes to matters of institutional politics – consequentialism is a loser’s gambit because it necessarily presupposes the enemy is already within the gates. It is far better – as the maxim goes – to fortify one’s position before the threat arrives, remembering what the defences are protecting in the first place.

Given that quantum surveillance is only a matter of time, now is the best opportunity to consider whether we value limited power, freedom, and progress – or not.