3 research outputs found

    Submission on the Attorney-General's Department Privacy Act Review Report 2022: Exceptions for 'Socially Beneficial Content'

    No full text
    This is a submission to the Attorney-General's Department Privacy Act Review Report 2022. We focus on the proposed exception to the prohibition of advertising based on sensitive personal information where the targeting concerns 'socially beneficial content'.

    Alternatives to Coercion in Mental Health Settings: A Literature Review

    No full text
    This systematic literature review examines empirical research on efforts to reduce, end and prevent coercion in the mental health context, whether in hospitals in high-income countries, or in family homes and remote communities in rural parts of low- and middle-income countries. We use the term 'persons with mental health conditions or psychosocial disabilities', as adopted in the aforementioned Human Rights Council Resolution. We have sought to undertake a comprehensive review of the scholarly research drawn from a range of different disciplinary backgrounds and experiences. However, the review cannot claim to be exhaustive. For example, time and language limitations prevented a more expansive inquiry. Indeed, we recommend a more expansive inquiry to uncover empirical research and exploratory reports of progressive efforts, particularly in non-English-speaking regions. Nevertheless, this review encompasses 169 studies from many parts of the world (including 48 reviews and notable grey literature reports), offering valuable insights into the state of research on finding alternatives to reduce, end and prevent coercion of people with mental health conditions and psychosocial disabilities.

    Digital Futures in Mind: Reflecting on Technological Experiments in Mental Health and Crisis Support

    No full text
    Urgent public attention is needed to make sense of the expanding use of algorithmic and data-driven technologies in the mental health context. On the one hand, well-designed digital technologies that offer high degrees of public involvement can be used to promote good mental health and crisis support in communities. They can be employed safely, reliably and in a trustworthy way, including to help build relationships, allocate resources, and promote human flourishing. On the other hand, there is clear potential for harm. The list of 'data harms' in the mental health context is growing longer; these are harms in which people are left worse off than they would have been had the activity not occurred. Examples in this report include the hacking of psychotherapeutic records and the extortion of victims, algorithmic hiring programs that discriminate against people with histories of mental healthcare, and criminal justice and border agencies weaponising data concerning mental health against individuals. Issues also arise not where technologies are misused or faulty, but where technologies like biometric monitoring or surveillance work as intended, and where the very process of 'datafying' and digitising individuals' behaviour – observing, recording and logging them to an excessive degree – carries the potential for inherent harm.

    Part 1 of this report charts the rise of algorithmic and data-driven technology in the mental health context. It outlines issues which make mental health unique in legal and policy terms, particularly the significance of involuntary or coercive psychiatric interventions in any analysis of mental health and technology. The section makes a case for elevating the perspective of people with lived experience of profound psychological distress, mental health conditions, psychosocial disabilities, and so on, in all activity concerning mental health and technology.

    Part 2 looks at prominent themes of accountability. Eight key themes are discussed – fairness and non-discrimination, human control of technology, professional responsibility, privacy, accountability, safety and security, transparency and explainability, and promotion of public interest. International law, and particularly the Convention on the Rights of Persons with Disabilities, is also discussed as a source of data governance. Case studies throughout show the diversity of technological developments and draw attention to their real-life implications. Many case studies demonstrate instances of harm. The case studies also seek to ground discussion in the actual agonies of existing technology rather than speculative worries about technology whose technical feasibility is often exaggerated in misleading and harmful ways (for example, Elon Musk's claim that his 'AI-brain chips will "solve" autism and schizophrenia').