Paper & position-paper submission deadlines extended to 28th January 2019 (see the dates page for more details).

This workshop will run as part of the annual convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2019).

Artificially intelligent machines are becoming increasingly prevalent in modern society and are likely to play an important, even ubiquitous, role in future everyday decision making. This trend is likely to accelerate as new techniques for automated reasoning and machine learning are applied to decision making within real-world domains. That these machines will have a great impact upon human society is beyond doubt. Such machines have the potential to improve nearly every aspect of human life, particularly where artificial intelligence can overcome the well-known shortcomings in human decision making identified by behavioural economists. Insights from behavioural economics underlie the rise of ‘nudge’ initiatives, and are themselves subject to ethical critique. However, there is also the potential for AI machines to act to the detriment of people. For every cancer successfully detected at an early stage, there could be a bank computer denying (or approving) a mortgage, or an autonomous vehicle making a poor decision about whether to evade an obstacle or perform an emergency stop. This is not to ascribe explicitly malicious intent, but merely to recognise that most current, and likely future, machine systems will be as imperfect as those who created them.

Additional complexities stem from the interplay between intelligent machines and human society, and a further layer of risk is added once humans with malicious intent are included. Whilst a machine can help a person recognise poor behaviours, for example eating too much junk food, and can in turn help manage that person’s behaviour in order to form better habits, such an approach could also be applied in the absence of informed consent. This raises the possibility that a sufficiently motivated organisation could attempt to manage the behaviour of whole electorates and so influence the political direction of a nation. This may sound far-fetched, but it is merely an automated version of the recent application of “nudge” techniques to politics in the United Kingdom. Whilst machines can help people to live better lives, or reach their full potential, there is also the mirror scenario of machines being used to manage an individual’s behaviour to that individual’s detriment. The study of how machines, in particular intelligent machines that can learn to recognise behaviours and respond accordingly, interact with humans, and how human behaviour can be directly or indirectly affected as a result, is therefore a topic of timely and deep importance.

For this workshop we solicit contributions on how AI can affect human behaviour. Topics include, but are not limited to, the use of AI in Captology, digital persuasion, behaviour change, and gamification. We are interested not only in focused reports on research applying these techniques to specific problems, such as healthcare or transport behaviour, but also in more general consideration of the risks posed by, and the benefits gained from, the application of these techniques within human society. Of specific interest are contributions addressing the dark side of these interactions, examining how such techniques can be misused and how that misuse can be defended against.