700 research outputs found
Cybersecurity in Educational Networks (Кибербезопасность в образовательных сетях)
The paper discusses the possible impact of digital space on a human, as well as human-related directions in cyber-security analysis in education: levels of cyber-security, the role of social engineering in the cyber-security of education, and “cognitive vaccination”. “A human” is considered in a general sense, mainly as a learner. The analysis is based on the experience of hybrid war in Ukraine, which has demonstrated a shift in the target of military operations from military personnel and critical infrastructure to humans in general. Young people are a vulnerable group that can be the main target of cognitive operations in the long term, and they are the weakest link of the system.
Effectiveness of Organizational Mitigations for Cybersecurity, Privacy, and IT Failure Risks of Artificial Intelligence
Emerging cybersecurity, privacy, and IT failure risks of artificial intelligence (AI) threaten AI’s business value potential and the performance of organizations that develop and use AI. Current research on mitigations for these AI risks is limited to technical and data-science-level mitigations; there is little research on organizational mitigations for AI risks. We address this gap by framing organizational mitigations for AI’s cybersecurity, privacy, and IT failure risks and testing their effectiveness in a sample of 498 AI algorithms. Developer organizations, which design AI, and user organizations, which use AI, are able to reduce the likelihood and impact of AI’s cybersecurity breach, privacy breach, and IT failure risks if they collaborate to jointly institute organizational mitigations for these risks.
Autonomous Exchanges: Human-Machine Autonomy in the Automated Media Economy
Contemporary discourses and representations of automation stress the impending “autonomy” of automated technologies. From pop culture depictions to corporate white papers, the notion of autonomous technologies tends to enliven dystopic fears about the threat to human autonomy or utopian potentials to help humans experience unrealized forms of autonomy. This project offers a more nuanced perspective, rejecting contemporary notions of automation as inevitably vanquishing or enhancing human autonomy. Through a discursive analysis of industrial “deep texts” that offer considerable insights into the material development of automated media technologies, I argue for contemporary automation to be understood as a field for the exchange of autonomy, a human-machine autonomy in which autonomy is exchanged as cultural and economic value. Human-machine autonomy is a shared condition among humans and intelligent machines, shaped by economic, legal, and political paradigms with a stake in the cultural uses of automated media technologies. By understanding human-machine autonomy, this project illuminates complications of autonomy emerging from interactions with automated media technologies across a range of cultural contexts.
Sustainability and Trust for Artificial Intelligence Technologies
Hammer B, van der Aalst W, Bauckhage C, et al. Sustainability and Trust for Artificial Intelligence Technologies. 2020.
Towards a Solution to Create, Test and Publish Mixed Reality Experiences for Occupational Safety and Health Learning: Training-MR
Artificial intelligence, the Internet of Things, human augmentation, virtual reality, and mixed reality have been rapidly adopted in Industry 4.0, as they improve worker productivity. This productivity improvement comes largely from modernizing tools, improving training, and implementing safer working methods. Human augmentation is helping to place workers in unique environments through virtual reality or mixed reality, applying them to training activities in a wholly innovative way. Science still has to overcome several technological challenges to achieve widespread application of these tools. One of them is the democratisation of these experiences, for which it is essential to make them more accessible by reducing the cost of creation, which is the main barrier to entry. The cost of these mixed reality experiences lies in the effort required to design and build the training experiences themselves. Nevertheless, the tool presented in this paper is a solution to these current limitations. A solution for designing, building, and publishing experiences is presented. With it, content creators will be able to create their own training experiences in a semi-assisted way and eventually publish them in the Cloud. Students will be able to access this training, offered as a service, using Microsoft HoloLens 2. In this paper, the reader will find technical details of Training-MR, its architecture, mode of operation, and communication.
Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions
In recent years, AI safety has gained international recognition in the light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice using concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms. For simplicity, we refer to these two paradigms with the terms artificial stupidity (AS) and eternal creativity (EC), respectively. While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.
A Cyberpunk 2077 perspective on the prediction and understanding of future technology
Science fiction and video games have long served as valuable tools for envisioning and inspiring future technological advancements. This position paper investigates the potential of Cyberpunk 2077, a popular science fiction video game, to shed light on the future of technology, particularly in the areas of artificial intelligence, edge computing, augmented humans, and biotechnology. By analyzing the game's portrayal of these technologies and their implications, we aim to understand the possibilities and challenges that lie ahead. We discuss key themes such as neurolink and brain-computer interfaces, multimodal recording systems, virtual and simulated reality, digital representation of the physical world, augmented and AI-based home appliances, smart clothing, and autonomous vehicles. The paper highlights the importance of designing technologies that can coexist with existing preferences and systems, considering the uneven adoption of new technologies. Through this exploration, we emphasize the potential of science fiction and video games like Cyberpunk 2077 as tools for guiding future technological advancements and shaping public perception of emerging innovations. Comment: 12 pages, 7 figures.