
    Social forecasting: a literature review of research promoted by the United States National Security System to model human behavior

    The development of new information and communication technologies has increased the volume of information flows within society. For the security forces, this phenomenon presents new opportunities for collecting, processing, and analyzing a vast and diverse amount of data, while at the same time demanding new organizational and individual competences to deal with new forms and huge volumes of information. Our study aimed to outline the research areas funded by the US defense and intelligence agencies with respect to social forecasting. Based on bibliometric techniques, we clustered 2688 articles funded by US defense or intelligence agencies into five research areas: a) Complex networks, b) Social networks, c) Human reasoning, d) Optimization algorithms, and e) Neuroscience. We then qualitatively analyzed the most cited papers in each area. Our analysis found that the research areas are compatible with the US intelligence doctrine. In addition, we consider that these research areas could be incorporated into the work of security forces provided that basic training is offered. Such training would not only enhance the capabilities of law enforcement agencies but also help safeguard against (unwitting) biases and mistakes in the analysis of data.
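    The clustering step described above (grouping articles into research areas from bibliometric features) can be sketched with a plain k-means loop. This is an illustrative reconstruction only: the authors' actual feature extraction and clustering pipeline is not specified in the abstract, and the toy term-frequency vectors below are invented for demonstration.

    ```python
    import math
    import random

    def kmeans(vectors, k, iters=20, seed=0):
        """Minimal k-means: group feature vectors into k clusters."""
        rng = random.Random(seed)
        centers = rng.sample(vectors, k)
        for _ in range(iters):
            # assign each vector to its nearest center
            clusters = [[] for _ in range(k)]
            for v in vectors:
                i = min(range(k), key=lambda c: math.dist(v, centers[c]))
                clusters[i].append(v)
            # recompute each center as the mean of its members
            for i, members in enumerate(clusters):
                if members:
                    centers[i] = tuple(sum(d) / len(members) for d in zip(*members))
        return centers, clusters

    # hypothetical articles reduced to counts of two indicative terms each
    articles = [(9, 1), (8, 2), (1, 9), (2, 8), (0, 9), (9, 0)]
    centers, clusters = kmeans(articles, k=2)
    ```

    In a realistic bibliometric setting the vectors would instead come from citation or co-word statistics, and k would be chosen (here, five areas) by inspecting cluster quality.
    
    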

    Artificial intelligence for social impact: Learning and planning in the data-to-deployment pipeline

    With the maturing of artificial intelligence (AI) and multiagent systems research, we have a tremendous opportunity to direct these advances toward addressing complex societal problems. In pursuit of this goal of AI for social impact, we as AI researchers must go beyond improvements in computational methodology; it is important to step out in the field to demonstrate social impact. To this end, we focus on the problems of public safety and security, wildlife conservation, and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. We present case studies from our deployments around the world as well as lessons learned that we hope are of use to researchers who are interested in AI for social impact. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.

    Difference in risk perception of onboard security threats by aircrew and aviation security experts

    Airlines are increasingly relying on non-security personnel such as cabin crews and pilots to perform a security function when dealing with potential onboard security threats. The training aircrews receive on security threat assessment is considered by many to be inadequate. The way aircrews respond to potential onboard threats can have life-and-death consequences for passengers and other aircrew. How these potential threats are handled can also cause significant financial loss to the airlines through loss of productivity, passenger claims, or even legal liability. For this reason, it is imperative that we understand how aircrews perceive security risk in order to make appropriate risk assessments. This study examines whether aircrew perceive security risks in the same way as aviation security experts. Five scenarios representing actual potential onboard security threats were given to a group of 67 pilots, cabin crew, and aviation security experts. The participants were asked a series of questions about the scenarios that measured how they perceived the potential threat, as well as other questions to determine how prepared they were to deal with each scenario. The results showed that aircrews perceive and assess risk associated with onboard security threats significantly differently from aviation security experts.
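    A group comparison of the kind reported above can be sketched with Welch's t-statistic on per-group risk ratings. The data and the specific test below are hypothetical, not taken from the study; the sketch only illustrates how a difference in mean perceived risk between aircrew and experts would be quantified.

    ```python
    from statistics import mean, stdev

    def welch_t(a, b):
        """Welch's t-statistic for two independent samples with unequal variances."""
        va, vb = stdev(a) ** 2, stdev(b) ** 2
        return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

    # hypothetical 1-7 risk ratings for one scenario
    aircrew = [3, 4, 3, 5, 4, 3, 4]
    experts = [6, 6, 7, 5, 6, 7, 6]

    t = welch_t(aircrew, experts)  # negative: aircrew rate the risk lower
    ```

    In practice the t-statistic would be converted to a p-value against the Welch-Satterthwaite degrees of freedom, and one test would be run per scenario.
    
    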