Three powerful trends from Future ID3
I spent the first half of this week at Cambridge University with leading academics working on identification from across the world. Over two days, we had nearly 35 paper presentations on topics ranging from KYC regulations in Africa to China’s much-discussed social credit system. Thanks to the generosity of our hosts, the Webb Library at Jesus College turned into a cauldron of ideas and opinions on both civil registration and digital identification. While I learnt a lot about systems in different parts of the world, three trends stood out as demanding particular attention.
The rise of digital physiognomy. The practice of judging people’s character and personality from their faces is as old as humankind. But, as Edward Higgs’ paper showed, the practice is increasingly being automated, setting it up for use at an industrial scale. Two consequences are especially worth considering. The first is the identification of vulnerable groups that are otherwise indistinguishable from the general population. For example, the ‘gaydar’ developed at Stanford University was able to predict people’s sexual orientation from their facial features; algorithms like these could be used by a rogue state to identify and prosecute vulnerable groups. The second is that physiognomy has been used to ‘determine’ traits such as warmth, emotional stability, rule consciousness, self-reliance, and openness to change. These could feed into everyday transactions ranging from jobs to credit, and that throws up several issues. First, the ‘accuracy’ of these systems when used for subjective traits will always be suspect, but there is a risk that they will be adopted anyway, leading to unaccountable and unjustified exclusion of individuals from many markets. Second, even if one were to assume they are accurate, we need to draw a line between the discriminations that are permissible and those that are not. I fear that with the rise of facial recognition and AI, automated physiognomy will move too fast, without the necessary normative framing. I find this deeply troubling, because our face is our most unprotected biometric, making it easy to circumvent consent. We need to get ahead of this issue before mass deployment begins.
The conflation of facial biometrics and psychometric testing in digital physiognomy is potentially highly problematic — Edward Higgs, University of Essex
New narratives on data nationalism. Amba Kak presented interesting thoughts on the rising trend towards data nationalism, especially in influential countries such as China, Russia and India. She argued that the hegemony of US-based platforms like Facebook, Google and Amazon is increasingly seen as a form of data colonialism. Many countries are therefore finding ways to reduce the power of these platforms, including through mechanisms like data localisation. These are wider geopolitical moves with the power to shape the way data and information are organised and shared. The narrative in these countries has so far been driven more by political and nationalistic ideologies. However, something can be said about the very strong first-mover advantage that the Silicon Valley firms enjoy. To borrow Shoshana Zuboff’s terminology, these firms have already mastered the art of collecting and monetising behavioural surplus, and firms in other markets simply cannot compete with them. We therefore need to think about ways to deal with that absolute market power and its perverse effects on jurisdictional and tax-related issues. We discussed some ideas, such as new ways of redistributing profits geographically and ensuring compliance with legitimate law enforcement requests. We need more ideas, and soon, before data nationalism tears apart the free, open internet.
The securitisation of the state undermines the goal of inclusiveness — Ursula Rao, University of Leipzig
Pushback against the fetishisation of technology. The idea that technologies such as biometrics and point-of-sale devices are being rolled out without enough thought about their utility was a recurring theme in many presentations. I picked up three major categories of concern. The first is that these technologies simply don’t work in many situations. While this is an important and pressing issue (often a matter of life and death for vulnerable groups), it may be solved over time through better connectivity, technology and processes. The second is that they shift the ‘burden of accountability’ from institutions to individuals: the individual must prove that she is eligible for food rations (by making her biometrics work), rather than the institution proving that she isn’t. This is wrong for several reasons that I won’t get into, except to note that the state should be seen as serving its sovereigns (especially in the public provision of private goods) and that placing accountability on individuals creates a massive collective action problem. A paper on the 1952 Indian elections made this point well: the Indian state went door-to-door, village-to-village to find every last voter, rather than the voter having to come and register herself. Finally, the notion that identification serves societies rather than individuals came up obliquely a few times. There is much merit to the argument; after all, every act of identifying an individual erodes her privacy and conditions her access to services. Yet societies (governments and businesses) sometimes have a need and an obligation to identify people. We need to rethink where such identification is necessary, and what form it should take. In the absence of such thinking, we will once again yield to the unchecked use of (increasingly digital) identification!