Welfare Automation in the Shadow of the Indian Constitution
‘How can my constitutional or statutory rights be probabilistic?’
This was the question posed by Shyam Divan, counsel for a petitioner challenging the Aadhaar system, before the Supreme Court of India. The Unique Identification (UID) project, or Aadhaar, the Government of India’s massive infrastructural undertaking to capture biometric data for all of India’s residents, brought to the fore the challenges of ‘datafication’ and of automation. In the context of Aadhaar, datafication refers to the quantification of individual identity through the information collected by the Aadhaar system – primarily biometric information, but increasingly also various linked databases comprising information collected in a number of contexts.
This automation has resulted in human decision-making being supplanted by algorithmic decision-making systems fed by such data. In the context of a citizen-state relationship increasingly mediated through the creation and computation of data about citizens, Divan’s plea encapsulates one of the core challenges of a data-fied society – how should our constitutional and legal frameworks address the implications of individuals and communities being increasingly surveilled, measured and governed by algorithmic logic, statistical inferences and predictive analytics?
The Aadhaar project is also an example of social welfare systems as a site of technological experimentation, where modern tools for the collection and analysis of vast amounts of data about citizens are seen as a panacea for structural problems of socio-economic exclusion. Similar data-fied and automated technologies are increasingly being embedded in society, in particular through technologies variously referred to as ‘Artificial Intelligence’ or ‘Big Data Analytics’. These technologies rely upon probabilistic techniques to classify, rank, sort and make predictions about individuals and communities, drawing upon assemblages of data, computational power and algorithmic systems. They are ‘probabilistic’ either because they deliberately employ statistical inference techniques, or because the technology fails to operate deterministically when deployed. The deployment of these systems within pervasive government architectures is particularly concerning, as it reconfigures the citizen-state relationship and exposes tensions between the state’s capabilities and duties and the claims of civil liberties and social justice.
This essay is an attempt to examine the socio-legal contours of the datafication and automation of the public sector in India by examining two related texts: Automating Inequality, Dr. Virginia Eubanks’s ethnographic study of welfare automation in the United States of America, and the landmark dissent of Justice D.Y. Chandrachud in the Aadhaar case, which relies, inter alia, upon Eubanks’s work to locate how automated technologies compromise constitutional values. While important literature exists on the impact of Aadhaar on welfare entitlements as well as on the structural underpinnings of the digital welfare state, this essay locates the challenges posed by datafication and automation within the Indian constitutional framework, taking the Aadhaar case as a starting point.
The first part of this essay charts the use of automated decision-making tools in the context of the delivery of public services, and argues that it is necessary to centre the political and economic context of the deployment of these tools in any debate considering its governance or regulation. The second part of this essay will apply this frame of analysis to examine the Indian Supreme Court’s judgment on the constitutionality of the Aadhaar system. I argue that the majority judgment failed to take account of the constitutional implications of the digital welfare state or to provide a framework to act as precedent for legal claims against automated and data-fied systems in governance, and that, rather, the dissenting opinion in Aadhaar provides a preliminary basis for assessing such claims.
The Rise of the Technological Welfare State
Automating Inequality studies the application of automated decision-making systems embedded within US government agencies charged with providing social services or welfare to individuals, families and communities living in poverty, through three case studies – a healthcare eligibility and fraud detection system; a system that classifies children at risk of harm and triggers differing levels of state intervention; and a system for the allocation of housing benefits to vulnerable communities. Each of these systems relies upon various modes of datafication and automation and follows a particular logic. Data is collected and collated about individuals and communities, usually without their knowledge or informed consent; this data is analysed through ‘expert systems’ or computers utilising algorithmic systems, which are generally inscrutable to both government workers as well as affected persons; and the resulting inferences inform administrative decisions made about persons, with little due process or means to ensure accountability or redressal of grievances. Each study serves to display the destructive effects of automated systems on individuals and communities in precarious circumstances – the loss of a home; the punitive gaze of law enforcement and, in a catastrophic example, the death of a welfare dependent. Eubanks describes the inherent attributes of these tools as mechanisms for surveilling, classifying and punishing the poor, and locates them within the historical continuum of political techniques which seek to manage poverty as an unfortunate but inevitable result of individual choice and social circumstance.
These case studies do not merely highlight technical glitches or failures in order to make a case for adopting newer, improved or more efficient technologies to aid welfare; they firmly locate the deployment of automated systems and the rise of the digital welfare state within their historical and socio-political context. As Eubanks’ work typifies, the deployment of such systems is often a by-product of austerity policies, with claims of efficiency shrouding the fact that these systems are often means to decrease overall welfare spending, or to cut costs by automating potentially costly or time-consuming administrative systems. Eubanks not only debunks the idea of modern computational systems as unbiased and neutral in their deployment, but demonstrates that the discriminatory and harmful effects of automation inhere in the manner and context in which these systems are deployed.
This is different from a frame of analysis which assumes that improving the functioning of technologies can fix the harms arising from their deployment. By centring the rise of welfare automation within the political and social choices made for ‘managing’ poverty, Eubanks’ work challenges narratives of technological determinism which view the deployment of these technologies as unaffected by the political economy, even as that political economy shapes their form. Automating Inequality is an important reminder that the rise of pervasive technologies, including AI and the datafication of government services, is not an inevitable outcome of the forward march of scientific progress, but an active social and political choice, and a site of political struggle over whether and how certain technologies are adapted to societies.
Automation is wreaking havoc in welfare systems beyond the USA as well. A recent report by Philip Alston, the United Nations Special Rapporteur on Extreme Poverty and Human Rights, discusses the application of automated decision-making in welfare systems across the world. In Australia, an algorithmically determined debt recovery system for overpayments was found to be incorrectly identifying and penalizing beneficiaries of welfare payments. In Ontario, the automated Social Assistance Management System has left caseworkers perplexed and unable to transition to the automated system, leaving welfare dependents in the lurch.
In India, similarly, Aadhaar has resulted in the world’s largest automated identification system, leading to documented exclusion from welfare systems, apart from fears of private and state surveillance. For example, Reetika Khera’s work documents Aadhaar’s contribution to the systematic exclusion of migrants, manual labourers and aged populations, owing to the nature of its biometric identification systems. Multiple issues such as data leaks and unauthorized database-linking demonstrate the lack of security and the inability of individuals to effectively control how their information and identity is used, and often weaponised, by state and private players. For instance, in Telangana, the Aadhaar-linked automation of voting rolls through the State Resident Data Hub project has caused voter disenfranchisement without due process or notice.
Beneath the shiny veneer of the technological welfare state and a discourse replete with references to modernity and efficiency lies, as Philip Alston notes, ‘the Trojan horse of neoliberal hostility towards welfare and regulation.’ The deployment of these systems has defied traditional accounts of legal accountability. This happens wilfully, by claiming that regulation injures innovation; inherently, owing to the difficulty of ensuring the comprehensibility and veracity of the data and algorithmic assemblages that these systems employ; and structurally, in their deployment as ‘public-private partnerships’. The latter is a particularly nefarious outcome of modern automated systems being deployed within government infrastructures. Both domain knowledge of modern technological systems and the capital they require are possessed almost exclusively by private, for-profit firms. As a consequence, the development and management of these systems relies heavily upon the integration of private players into governance systems, who then become responsible for mediating the citizen-state relationship, without the traditional forms of accountability that accompany that relationship, such as requirements of openness, transparency or non-discrimination.
The deployment of Aadhaar in India exemplifies many of the concerns that Automating Inequality outlines. Aadhaar, with its ecosystem of quasi-governmental structures and public-private partnerships; its explicit focus on data-fied managerial and technical processes aimed at welfare entitlements and the consequent claims of surveillance and exclusion, is a fitting extension to Eubanks’ investigations and claims about the digital welfare state. It is a mark of the importance of this scholarship that the Supreme Court relied upon Automating Inequality in its deliberation on Aadhaar – albeit in the dissenting judgment of Mr. Justice D.Y. Chandrachud.
How to Build a Constitutional Machine: Justice KS Puttaswamy v. Union of India
‘Constitutional guarantees cannot be subject to the vicissitudes of technology’
On September 26, 2018, in Justice K.S. Puttaswamy v Union of India, four judges of a five-judge bench of the Supreme Court of India upheld the constitutionality of the Aadhaar system, subject to a few caveats relating to the manner of its deployment and its scope. While the immediate impact of the judgment on the Aadhaar system, as well as its development of data protection and privacy law has been studied, this essay will focus particularly on the claims relating to datafication and its effect on welfare entitlements. The Aadhaar case presented the Court with a unique opportunity to deliberate upon the relationship between data and the welfare state, as well as on the constitutional status of probabilistic algorithmic systems. Exploring how these questions were answered can help us understand how lawmakers and courts may approach questions fundamental to such technologies – questions of access, equity, discrimination, transparency and agency in the future.
The majority judgment of three Supreme Court judges upheld the Aadhaar system despite adjudging that it encroached upon the right to privacy. In undertaking a proportionality assessment of Aadhaar, the majority judgment endorsed the constitutionality of the Aadhaar system as a matter of balancing conflicting rights, namely the right to life with dignity (granted by access to economic and social goods via Aadhaar) against what the court declared to be the ‘minimal restrictions’ placed upon the right to liberty and privacy by the Aadhaar system. The majority held that the technical and legal limitations imposed on data collection were sufficient safeguards for privacy when weighed against the public benefit that would result from better identification of beneficiaries and the cost savings from stemming leakages in welfare delivery systems.
The contrast between the Court’s analysis of the privacy harms of datafication of welfare recipients, as opposed to other applications of Aadhaar-based authentication is also noteworthy – the Court explicitly struck down the requirements of biometric authentication for the purchase of SIM cards as well as for the operation of bank accounts. It noted that such a requirement would have a disproportionate impact on individual privacy, and demanded greater justification from the government for imposing such measures. On the other hand, the Court’s proportionality assessment of the privacy impact of Aadhaar on welfare recipients is couched in a relativistic and utilitarian logic of serving the ‘larger public interest’ without challenging the technological and economic claims of the system in greater detail or examining less intrusive measures to achieve the desired ends.
This utilitarian logic was more explicit in the majority’s analysis of the petitioners’ claims that the structural exclusion caused by the probabilistic nature of Aadhaar was discriminatory and arbitrary, and hence violated Article 14 of the Constitution of India. The petitioners argued that the Act creates an unreasonable classification between those who possess Aadhaar and are able to authenticate using it, and those who cannot. On this basis they argued that the system was discriminatory and arbitrary, in light of significant evidence that the technical failure of Aadhaar was leading to the exclusion of entitled beneficiaries from welfare. Even as per the Government’s claim, which was contested, the authentication mechanism failed 0.23% of the time. Further, evidence was led to show that the exclusion significantly affected certain categories of persons – manual labourers and the aged, for whom the vagaries of the biometric authentication mechanism were most apparent – violating Article 14’s protection against unintelligible criteria and unreasonable classifications being created by the state to deliver welfare.
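Even this seemingly small figure is significant at Aadhaar’s scale. A back-of-the-envelope sketch makes the point concrete – note that the enrolment base used here is an illustrative assumption for arithmetic purposes, not a figure from the court record:

```python
# Illustrative arithmetic: a 0.23% authentication failure rate applied
# to a hypothetical, assumed base of one billion enrolled persons.
failure_rate = 0.0023          # the Government's contested claim: 0.23%
beneficiaries = 1_000_000_000  # hypothetical enrolment base (assumption)

excluded = failure_rate * beneficiaries
print(f"Expected authentication failures: {excluded:,.0f}")
```

The point is arithmetical rather than legal: a failure rate that sounds ‘minimal’ in percentage terms can still represent millions of people in absolute terms once multiplied across a population-scale system.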
The majority held that this claim was not tenable on two grounds. First, on the question of exclusions the Court found that the Aadhaar Act provided for alternative means to establish identification in case of authentication failure. This was based upon a reading of Section 7 of the Aadhaar Act which mandates that, where an Aadhaar number has not been assigned, individuals shall be granted an ‘alternate and viable’ means of identification to receive a welfare entitlement. Second, the Court once again justified exclusion by pointing to the claims of the ‘public purpose’, stating that –
In a situation like this where the Act is aimed at achieving the aforesaid public purpose, striving to benefit millions of deserving people, can it be invalidated only on the ground that there is a possibility of exclusion of some of the seekers of these welfare schemes? Answer has to be in the negative.
In coming to this conclusion, the Court omitted any analysis of the probabilistic nature of the technology underlying Aadhaar and its impact on the requirements of reasonableness and non-arbitrariness of state action under the Constitution. The arguments of specific and arbitrary exclusion arising from technological design (for manual workers and the elderly) went unacknowledged. The underlying and structural reasons why alternate forms of identification were insufficient within the Aadhaar ecosystem, leading to welfare exclusions, also went unnoticed by the Court. The Court’s utilitarian justifications overrode concerns about the impact of Aadhaar on the ‘inviolable sphere’ of individual privacy and autonomy protected by fundamental constitutional rights, and potentially provide precedent for justifying any structurally or systematically exclusionary technology on similar grounds.
Justice D.Y. Chandrachud’s dissenting opinion stands in stark contrast to the majority’s reasoning and conclusion, both in its assessment of proportionality as well as on the arbitrariness and discrimination caused by the Aadhaar system. At the outset, the judgment addresses the tension between constitutional values and technology, stating that –
Our decision must address the dialogue between technology and power. The decision will analyse the extent to which technology has reconfigured the role of the state and has the potential to reset the lines which mark off no-fly zones: areas where the sanctity of the individual is inviolable. Our path will define our commitment to limited government. Technology confronts the future of freedom itself.
Crucially, the judgment also locates the datafication and automation of governance processes in light of existing structural inequalities and barriers to access that welfare recipients face and notes the disproportionate effect on individuals and communities facing marginalization as a result of the ‘digital divide’ created by such technologies. Justice Chandrachud writes,
‘If access to welfare entitlements is tagged to unique data sets, skewed access to informational resources should not lead to perpetuating the pre-existing inequalities of access to public resources’.
Approaching datafication in welfare systems from this critical perspective, the dissenting judgment’s assessment of Aadhaar’s impact on constitutional rights was markedly different.
First, the judgment holds that Aadhaar did not meet the proportionality test for its impact on the right to privacy, centring individual choice and the concept of ‘informational self-determination’ as a facet of that right. Invoking theories of political and social justice from Ambedkar, the Universal Declaration of Human Rights and Martha Nussbaum, the judgment frames the right to privacy in relation to, and as necessary for, claiming the socio-economic rights and freedoms that a welfare state must provide – as opposed to the majority’s framing of privacy and welfare as antagonistic, with privacy to be balanced against the state’s goal of providing welfare. In this formulation, ‘political’ rights, including the right to privacy, are necessary for the realisation of ‘socio-economic’ rights such as economic dignity, and each is instrumental to the realisation of the other. This upends the conception that privacy must be sacrificed at the altar of efficiency or socio-economic freedom. In this frame of analysis, the judgment holds that the Aadhaar scheme operates not as a means for individuals to claim a self-determined identity, but as a method of non-consensual identification by the state and private actors. This flips the majority’s narrative of ‘minimal intrusion’ by locating the privacy harm from Aadhaar in the transgression of informational self-determination. This informational self-determination, centred on individual choice, is crucial for the realisation of the right to dignity under the constitutional scheme, which includes the right to self-determination over the manner in which an individual’s identity is disclosed to, and quantified by, the state.
Second, the judgment holds that welfare exclusions caused by the probabilistic nature of biometric technology are constitutionally impermissible. After quoting several studies documenting welfare exclusions arising out of the Aadhaar system, the judgment holds that technological systems must comply with structural due process under Articles 14 and 21 of the Constitution to pass judicial review. In striking down Aadhaar’s exclusions as unconstitutional, the judgment held that Aadhaar’s legal and technological design did not provide for protections against exclusion arising from technological failure or lack of literacy and awareness. Further, it held that the design of such a system would comply with structural due process only when it is responsive to its deficiencies (or exclusions) and makes the state accountable for ensuring that welfare entitlements reach the intended recipients, instead of placing the individual in an oppressive position where she is subject to the vicissitudes of technological systems.
The majority judgment’s endorsement of Aadhaar, and its antagonistic reading of the right to privacy and welfare, encapsulates Eubanks’ critique of modern digital welfare systems – that such systems are designed to exclude the poor from welfare entitlements by building layers of classification between the deserving and the undeserving poor: those who are able to cooperate with the claims and demands of the technological systems and those who are not. These classifications are justified by moral claims of efficiency and of stemming corruption, and are enforced through surveillance and punitive measures for non-compliance with digital systems. The utilitarian approach towards automated discrimination, which prioritises a nebulous ‘public good’ over concrete instances of exclusion, further reinforces this critique.
The dissenting judgment, however, provides us with a fuller account of the constitutional implications of automated decision-making, despite its lack of binding or precedential value. First, the judgment reconciles notions of socio-economic justice and the right to privacy as relational rather than antagonistic. Second, the judgment addresses how automated decisions based upon personal data impact decisional and informational autonomy, by treating informational self-determination as central to the right to privacy in the context of data-fied systems. This forms an important frame of analysis for claims against increasingly pervasive machine learning technologies, which rely largely upon vast aggregations of personal information without allowing individuals sufficient visibility or choice over how such information shapes consequential decisions made about them. Third, the judgment reads in a crucial requirement for automated decision-making: that structural due process be built into the design of the technological system, to counteract both the opacity and the exclusionary nature of automated decisions.
Conclusion: Whither ‘The Future of Freedom’?
Automated systems – based upon large assemblages of data and inferences made through algorithmic logics – are increasingly being adopted to make consequential decisions about individuals and communities. In India, these systems are already being implemented in fields ranging from policing to welfare delivery. Government policy has hailed these systems as cutting-edge implementations of ‘artificial intelligence’ and ‘big data analytics’, without taking note of the risks and harms inherent in such technologies. The Aadhaar system is a precursor to many of the tensions emerging from the datafication and automation of society in general, and of the public sector in particular. Beyond Aadhaar, for example, welfare schemes are utilising ‘big data analytics’ and ‘artificial intelligence’ to justify surveillance and probabilistic analyses of vulnerable populations – for example, of pregnant women in the healthcare system; in farm loan waiver assessments; and in health insurance fraud detection in the ‘Ayushman Bharat’ scheme.
This essay juxtaposed two texts which are crucial to understanding these technologies. Automating Inequality forces us to question our assumptions about the value-neutrality of embedding automated systems in welfare, and demonstrates the debilitating impact these systems can have on disenfranchised people and communities. The Supreme Court’s judgments in Aadhaar typify these concerns. The majority judgment’s assumptions about welfare and the effects of automation on the poor result in a very different assessment of the constitutionality of data-fied and automated systems from the dissenting judgment, which centres the relation between human development and individual rights and freedoms. The latter develops a much clearer framework for assessing and governing these emerging technologies.
The efficiencies and affordances that these technologies create are not neutral. They are inextricable from the political and economic systems in which they are embedded and through which they have been created – from the techniques of data collection to the modelling of machine learning systems to generate patterns and inferences about people. Our legal systems must explicitly account for this relationship, and must foreground the regulation of technology in an understanding of how it reifies and reproduces economic and social injustices. If, as Justice D.Y. Chandrachud notes, ‘technology confronts the future of freedom itself’, it also indulges the injustices of our present.
*Divij Joshi is an independent legal researcher and a Mozilla Tech Policy Fellow.
 Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (Picador, St. Martin’s Press 2019).
 UNGA, ‘Report of the Special Rapporteur on Extreme Poverty and Human Rights’, A/74/493 (October 11 2019).
 Paul Karp, ‘Government Admits Robodebt Was Unlawful As It Settles Legal Challenge’, The Guardian (Nov 27 2019).
 Richard J. Brennan and Donovan Vincent, ‘Problem-plagued welfare computer system has “overwhelmed” staff, report says’, Toronto Star (April 1 2015).
 Justice K.S. Puttaswamy v Union of India, (2019) 1 SCC 1.
 Reetika Khera, ‘Aadhaar Failures: A Tragedy of Errors’, EPW Engage (April 5 2019) <https://www.epw.in/engage/article/aadhaar-failures-food-services-welfare>.
 G.V. Bhatnagar, ‘Lakhs of Voters Deleted Without Proper Verification in Andhra, Telangana’, The Wire (Feb 26 2019) <https://thewire.in/rights/lakhs-of-voters-deleted-without-proper-verification-in-andhra-telangana>.
 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, (Harvard University Press, 2015).
 Ambika Tandon and Ayush Rathi, ‘The Mother and Child Tracking System – understanding data trail in the Indian healthcare systems’, Centre for Internet and Society (October 18 2019) <https://cis-india.org/raw/big-data-reproductive-health-india-mcts>.
 Bharath Joshi, ‘Meet the code, coder behind crop loan waiver software’, Deccan Herald <www.deccanherald.com/state/meet-code-coder-behind-hdk-s-711566.html>.
 ‘Machine Learning and AI to strengthen implementation of Ayushman Bharat scheme; NHA, Google sign SoI’, Newsly (October 5 2019) <https://newsly.live/technology/machine-learning-and-ai-to-strengthen-implementation-of-ayushman-bharat-scheme-nha-google-sign-soi/>.
Linnet Taylor, ‘What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally’ (2017) 4 Big Data & Society.
Danielle Keats Citron, ‘Technological Due Process’ (2008) 85 Wash U L Rev 1249.
Mireille Hildebrandt, ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’ (2019) 20 Theoretical Inquiries in Law 83.
Justice KS Puttaswamy v. Union of India, 2017 SCC OnLine SC 996.