The previous section provided grounds for granting individuals a right not to be subjected to AI profiling. This article seeks, however, to sustain the right in relation to personal data made publicly available online, e.g. on social media platforms such as Twitter and Instagram. Notably, the availability of personal data on such platforms is to a large extent influenced by the data subjects themselves. If an individual refrains from any use of online services and social media platforms, little personal data would ceteris paribus be publicly available. In so doing, individuals would therefore—presumably to a large extent—protect themselves against AI profiling based on online available, personal data. But if individuals can protect themselves against AI profiling by disengaging from online data sharing, then the right not to be AI profiled based on publicly available data seems to have been rendered somewhat superfluous. This section therefore explores, firstly, the reasons for protecting and promoting online data sharing and, secondly, the reasons for allowing AI profiling based on such data.
5.1 Social and Democratic Potentials of Online Data Sharing
The online sharing of data may serve important social and democratic purposes.
Socially, it can be used to build and maintain relationships. Research finds that socializing is a primary motivation for information sharing on social networking sites (SNS) (Kümpel et al., 2015). Studies also suggest that the use of SNS is positively related not only to the quality of friendships defined as satisfaction with friends, but also to social capital defined as the resources that accrue to an individual by virtue of his or her membership of a social network, e.g. the feeling of belonging to a community, the forming of new relationships, the feedback and support of other people in relation to various issues, and reduced loneliness (Ahn, 2012; Antheunis et al., 2016; Brandtzæg, 2012; Ellison et al., 2007). Other studies indicate that social media use with close friends is positively associated with the experience of feeling close to those friends (Kahlow et al., 2020; Pouwels et al., 2021).
Democratically, it may be used to form and express opinions, to share information, and to engage in collective reasoning involving decision-makers at all levels of society, and thus, it may ultimately increase participation in democratic processes. A meta-analysis found that social media use generally is positively—but modestly—related to various forms of both online and offline political participation, e.g. voting, supporting campaigns, and protest activities (Skoric et al., 2016). Consistent with the findings of a previous meta-analysis (Boulianne, 2009), it also found a positive—and moderate—relation between political participation and (1) the use of social media for informational purposes, i.e. the seeking, gathering, and sharing of various kinds of information, and (2) the use of social media for expressivist purposes, i.e. expressing oneself and articulating opinions, ideas, and thoughts. Another meta-analysis similarly established a positive relationship between social media use and participation in political and civic life, though it found it questionable whether such participation is a causal effect of social media use (Boulianne, 2015). Other studies suggest that online viewing and sharing of news are positively related to political knowledge (Beam, 2016).
At a minimum, the evidence cited suggests that certain kinds of online data sharing can, under certain conditions, serve valuable social and democratic purposes. If so, then we have reason to protect—perhaps even promote—these kinds of online data sharing. However, this requires that privacy concerns be accommodated, because such concerns may lead to withdrawal from online data sharing. A recent study found that privacy concerns defined as concerns about losing control of personal data on social networks are negatively related to social media participation measured as the frequency of interactions on SNS (Bright et al., 2021). A similar study found that privacy concerns in relation to publicly available data are negatively associated with the amount and depth of personal data sharing on SNS (Gruzd & Hernández-García, 2018).
What has been attempted here is to underpin two simple claims, namely that online engagement in the form of data sharing can serve valuable social and democratic purposes, and that privacy concerns can lead to disengagement from online data sharing. If both claims are true, they give us reason to address individuals’ privacy concerns, and they imply that disengagement from online data sharing is an undesirable solution to those concerns. The upshot for AI profiling based on publicly available data is this: if individuals at some future point, when AI predictive profiling has become more widespread, become aware that their publicly available data can be used for profiling in unpredictable ways, there is a risk that this will lead to disengagement from online data sharing, including by prospective politicians and decision-makers. This is undesirable, as it may impede the full realization of the social and democratic benefits of data sharing.
While we have focused here on social and democratic aspects, there is also an extant literature on the health-related effects of social media use. A recent umbrella review comparing five meta-analyses found that social media use is associated with higher levels of both well-being, i.e. happiness, positive affect, and life satisfaction, and ill-being, i.e. depression, anxiety disorder, distress, and negative affect (Valkenburg et al., 2022). While such studies certainly add important nuances to the evidence on the effects of social media use, they do not rule out the possibility that social media use may serve valuable social and democratic purposes. If privacy concerns—and in particular concerns related to AI profiling—can lead to disengagement from the relevant kinds of data sharing, there are substantial reasons for protecting such data sharing from AI profiling.
5.2 Other Trumping Concerns
To fully substantiate the right not to be subjected to AI profiling based on publicly available, personal data, it remains to be considered whether other concerns may outweigh this right. These concerns may be related to specific purposes and contexts of AI profiling. Revisiting the three cases may therefore be helpful.
AI predictive profiling makes an individual significantly more exposed to unacceptable forms of social control, stigmatization, and self-stigmatization, and it may lead to withdrawal from online data sharing otherwise conducive to social thriving and democratic health. As such, AI predictive profiling is a threat to basic human autonomy and wellbeing, as well as to social life and democracy. These harms impose a burden of proof on the defender of a right to conduct such profiling in the three cases. For each case, the defender of AI profiling based on publicly available data must show either (1) that the harms of AI profiling are unlikely to obtain or (2) that AI profiling without consent is a proportional measure, where this requires (A) that the benefits of AI profiling without consent outweigh the potential harms and (B) that AI profiling without consent is strictly necessary in order to obtain the benefits.
Consider the friends case. That household profiling should be harmless is not a credible proposition. There are no a priori reasons to think that profiling of friends and family members is less likely to lead to overreaching social control and stigmatization than profiling in any other case. Note that the effects of stigmatization do not require stigmatizing behaviour by friends and family; it suffices that a certain feature is stigmatized in the wider society. Consider now the benefits and the necessity of conducting the AI profiling. The benefit of the profiling is an accurate diagnosis that A may use to motivate B and B’s family to seek further health care assistance. This benefit could likely have been achieved by less invasive means: A could simply have presented B and B’s family with the suspicion that B has mental health issues. Note also that, whether or not the AI profiling is necessary, it remains an open question who should decide the weight of the benefits and harms. There are at least two reasons why B should decide this weight, i.e. why B should have a right to provide or deny informed consent to AI profiling based on publicly available data. Firstly, because the benefits and harms of the AI profiling are relative to B’s interests in getting the profiling data or maintaining privacy, B is ceteris paribus best positioned to determine those interests and their weight. Secondly, the potential harms to B—i.e. social control and stigmatization—are, for all we know, significant harms. They are ways of impacting others that run counter to fundamental values in our society, and it seems an equally fundamental principle that generally—if not always—when individuals are at risk of suffering significant harms, they should have the right to protect themselves against those harms.
In the public servant case, there is also a risk of overreaching social control and stigmatization. Psychiatric diagnoses are sticky labels that shape the way individuals are handled in the public system in the short and long term, and this carries a latent risk of producing overreaching social control. It may also lead to stigmatization and, not least, self-stigmatization. In this case, however, there are potential benefits both for the profiled individual and for the public authorities. Thus, the diagnosis may not only benefit client B in terms of more adequate health care and social benefits; it may also increase the efficiency of the social services by making more readily available the personal health care information needed for an adequate distribution of social benefits. AI profiling based on publicly available data may be considered necessary in the sense of being a precondition of a maximally efficient public administration, but it is not necessary in the sense that there are no alternative ways of accessing the information produced by the AI profiling. Thus, the benefits of such AI profiling for public administration and the wider society may ultimately be marginal. In any case, such benefits seem morally incomparable to the list of potential harms of AI profiling, including the potential harms to B. As argued above, the potential harms to client B speak in favour of granting B the right to provide or deny informed consent to such profiling.
The prime minister case may, for two interrelated reasons, be taken to present a more fundamental problem for the attempt to ground a sui generis right not to be AI profiled. Thus, it may be argued that information about the mental health of the prime minister candidate is of public interest and that the public therefore has a right to this information. Relatedly, it may be argued that the AI profiling is covered by the right to freedom of expression. After all, the concerned citizen is in this case profiling the candidate with the intent of voicing a concern over the candidate’s fitness for political office. Access to information of public interest and the right to free speech are undeniably instrumental to a flourishing democracy. However, what we have argued in this article is that AI profiling is not only a continuous threat to individual autonomy, wellbeing, and social life, but that it may also threaten democratic processes by potentially leading people to withdraw from the use of social media. Decision-makers, including the prime minister candidate, may withdraw from social media exchanges with the public, and in the longer run, the threat of being AI profiled for all sorts of dispositions may prevent people from engaging in politics altogether. Therefore, it remains to be shown how and why a right not to be AI profiled based on publicly available data limits freedom of expression and information in any profound sense, and why such a limitation outweighs the negative effects of AI profiling on human autonomy and wellbeing as well as on social flourishing and democracy. Simply flagging the right to freedom of expression and information cannot reasonably be thought to do the job.
Relevant to the attempt to weigh benefits and harms in all three cases is the accuracy of the mental disorder predictions of the AI model Deepmood. Thus, it may be asserted that a predictive accuracy, i.e. sensitivity and specificity, above that of health care professionals would increase the benefits to individuals and society in all three cases—and vice versa. In short, the accuracy of AI models should be a parameter in any attempt to weigh benefits and harms. The evidence cited in the opening lines of the article suggests that available models for predicting mental disorders perform better than unassisted physicians. A recent meta-analysis of the accuracy of AI diagnostic systems in the context of medical imaging and histopathology found the performance of deep-learning models to be equivalent to that of healthcare professionals (Liu et al., 2019). Others have pointed out that the presumption of accuracy may not hold (Hofmann, 2022). Two observations should be made here. Firstly, as argued in a previous section, there are reasons to believe that increased accuracy will likely drive stronger attempts at social control and increased stigmatization. One may also hypothesize that increased accuracy will drive increased disengagement from online data sharing. Whether the benefits and harms will increase with increased accuracy is ultimately to be settled empirically. For present purposes, it should simply be noted that increased accuracy does not necessarily make a difference to the balancing of benefits and harms in the three examples. Secondly, as argued right above, there are strong reasons for believing that the weighing of benefits and harms should in many cases fall to the individual. The weight of benefits and harms—including the chance of obtaining the benefits and suffering the harms (accuracy)—cannot be separated from the interests of individuals, least of all in cases where the individual may come to suffer significant harms.
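The distinction between sensitivity and specificity invoked above can be made concrete with a small illustrative calculation. All figures below are hypothetical and serve only to show how the two measures are computed from a model's classification outcomes; they are not drawn from any study cited in this article.

```python
# Illustrative sketch: decomposing predictive accuracy into sensitivity
# and specificity. All counts are hypothetical.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of individuals with the disorder correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of individuals without the disorder correctly cleared."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening of 1,000 profiles, of which 100 have the disorder:
tp, fn = 80, 20    # outcomes among the 100 with the disorder
tn, fp = 810, 90   # outcomes among the 900 without it

print(sensitivity(tp, fn))  # 0.8
print(specificity(tn, fp))  # 0.9
```

On these hypothetical figures, 90 of the 900 unaffected individuals would be falsely flagged, which illustrates why high headline accuracy can coexist with a substantial absolute number of false positives when a disorder is rare, a point directly relevant to weighing the harms of stigmatization.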
In conclusion, the arguments presented here suggest that individuals should have a right not to be AI profiled in all three cases. While the right not to be AI profiled based on publicly available data is admittedly a pro tanto right, what is claimed here is that none of the three cases seems to qualify as an exemption from this right.