Architectural decisions in AI-based systems: an ontological view
Architecting AI-based systems entails making decisions that are particular to this type of system. It therefore becomes necessary to gather the knowledge needed to inform such decisions, and to articulate this knowledge in a form that facilitates knowledge transfer among different AI projects. In this exploratory paper, we first present the main results of a literature survey in the area, and then propose a preliminary ontology for architectural decision making, which we exemplify using a subset of the papers selected in the literature review. In the discussion, we remark on the variety of decision types and system contexts, highlighting the need to further investigate the current state of research and practice in this area. We also summarize our plans to advance this research area by widening the literature review and incorporating more AI-related concepts into this first version of the ontology.
This paper has been funded by the Spanish Ministerio de Ciencia e Innovación under project / funding scheme PID2020-117191RB-I00 / AEI/10.13039/501100011033. Peer Reviewed. Postprint (author's final draft).
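The paper's ontology itself is not reproduced in the abstract, but the idea of linking decision types to system contexts can be sketched. The following is a hypothetical illustration only: the class names, fields, and example values are assumptions, not the authors' actual ontology.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not the paper's ontology): architectural-decision
# knowledge for AI-based systems modeled as linked concepts, so that
# decisions made in one project can be retrieved and reused in another.

@dataclass
class SystemContext:
    domain: str        # e.g. "healthcare", "autonomous driving"
    ai_technique: str  # e.g. "supervised learning"

@dataclass
class ArchitecturalDecision:
    name: str
    decision_type: str         # e.g. "model deployment", "data pipeline"
    context: SystemContext
    rationale: str
    alternatives: list[str] = field(default_factory=list)

# An example decision record, with invented values for illustration.
decision = ArchitecturalDecision(
    name="Serve the model behind a REST endpoint",
    decision_type="model deployment",
    context=SystemContext(domain="healthcare",
                          ai_technique="supervised learning"),
    rationale="Decouples model updates from client releases",
    alternatives=["embed model in client", "batch scoring"],
)
print(decision.decision_type)  # → model deployment
```

A real ontology would additionally define relations between decisions (e.g. one decision constraining another), which plain dataclasses only gesture at.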
Pervasiveness as an Inherent Property of the Socio-Cultural Space
The socio-cultural space in the information society is subject to so-called "digital transformations", the basis of which is the large-scale use of information and communication technologies. This space is fully informational: it is permeated with information streams between elements and actors. These streams exist both in traditional "analog" form and in digital form. The digital existence of socio-cultural information rests on the widespread use of information systems built on the principles of decentralization and distribution. The socio-cultural environment is saturated with cultural objects that exist in both real and digital form. They can be an addition to real culture, "digital doubles" of real cultural objects, or independent cultural phenomena that have no analogues in the real cultural space. Digital data is not stored in a single information system but is distributed among all digital components of the socio-cultural space. This data is both generated by systems and gathered from the various sensors that saturate the physical environment of the socio-cultural space. In the world scientific discourse, the term "pervasiveness" refers to such systems, applying both to the computing systems themselves and to processes. The concepts of "pervasive computing", "pervasive systems", and "pervasive environments" are in established use. Based on an analysis of these concepts, it is proposed to use the term "pervasiveness" to designate one of the main characteristics of the socio-cultural space itself, whose development in the information age is based on the widespread use of information and communication technologies.
Towards Specifying And Evaluating The Trustworthiness Of An AI-Enabled System
Applied AI has shown promise in the data processing of key industries and government agencies, extracting actionable information used to make important strategic decisions. One of the core features of AI-enabled systems is their trustworthiness, which has important implications for the robustness and full acceptance of these systems. In this paper, we explain what trustworthiness in AI-enabled systems means, and the key technical challenges of specifying and verifying trustworthiness. Toward solving these technical challenges, we propose a method to specify and evaluate the trustworthiness of AI-based systems using quality-attribute scenarios and design tactics. Using our trustworthiness scenarios and design tactics, we can analyze the architectural design of AI-enabled systems to ensure that trustworthiness has been properly expressed and achieved.
The contributions of the thesis include (i) the identification of the trustworthiness sub-attributes that affect the trustworthiness of AI systems, (ii) the proposal of trustworthiness scenarios to specify trustworthiness in an AI system, (iii) a design checklist to support the analysis of the trustworthiness of AI systems, and (iv) the identification of design tactics that can be used to achieve trustworthiness in an AI system.
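The thesis's own scenario templates are not reproduced in the abstract, but a quality-attribute scenario conventionally has six parts: source, stimulus, artifact, environment, response, and response measure. The sketch below applies that standard shape to a hypothetical trustworthiness concern; the field values are invented for illustration, not taken from the thesis.

```python
from dataclasses import dataclass

# Illustrative only: a trustworthiness concern expressed in the standard
# six-part quality-attribute scenario structure, so it can be checked
# against an architecture in a testable, measurable way.

@dataclass
class TrustworthinessScenario:
    source: str            # who or what generates the stimulus
    stimulus: str          # the condition the system must respond to
    artifact: str          # the part of the system being stimulated
    environment: str       # operating conditions when it occurs
    response: str          # the required system behavior
    response_measure: str  # how achievement of the response is judged

# Hypothetical explainability scenario for an AI-enabled service.
scenario = TrustworthinessScenario(
    source="end user",
    stimulus="requests an explanation for a loan-denial prediction",
    artifact="credit-scoring model service",
    environment="normal operation",
    response="system returns a feature-attribution explanation",
    response_measure="explanation delivered within 2 seconds",
)
print(scenario.response_measure)  # → explanation delivered within 2 seconds
```

The value of the six-part form is that the response measure makes trustworthiness concretely verifiable, which is what allows the architectural analysis the abstract describes.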
Explainable Information Security: Development of a Construct and Instrument
Despite increasing efforts to encourage information security (InfoSec) compliance, employees' refusal to follow and adopt InfoSec remains a challenge for organisations. Advancements in the behavioural InfoSec field have recently highlighted the importance of developing usable and employee-centric InfoSec that can motivate InfoSec compliance more effectively. In this research, we conceptualise the theoretical structure for a new concept called explainable InfoSec and develop a research instrument for collecting data about this concept. Data was then collected from 724 office workers via an online survey. Exploratory and confirmatory factor analyses were performed to validate the theoretical structure of the explainable InfoSec construct, and we performed structural equation modelling to examine the construct's impact on intention to comply with organisational InfoSec. The validated theoretical structure of explainable InfoSec consists of two dimensions, fairness and transparency, and the construct was found to positively influence compliance intention.
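The study's survey items and factor loadings are not reproduced in the abstract. As a minimal sketch of the exploratory step only, the following runs a two-factor analysis (mirroring the fairness/transparency dimensions) on synthetic item scores; the item structure and data are invented assumptions, and the study's confirmatory analysis and structural model are not shown here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic data standing in for survey responses: two latent factors
# (loosely "fairness" and "transparency"), each driving three items.
rng = np.random.default_rng(0)
n = 200
fairness = rng.normal(size=n)
transparency = rng.normal(size=n)
items = np.column_stack([
    fairness + 0.3 * rng.normal(size=n),
    fairness + 0.3 * rng.normal(size=n),
    fairness + 0.3 * rng.normal(size=n),
    transparency + 0.3 * rng.normal(size=n),
    transparency + 0.3 * rng.normal(size=n),
    transparency + 0.3 * rng.normal(size=n),
])

# Exploratory factor analysis asking for two latent factors.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
print(fa.components_.shape)  # → (2, 6): two factors across six items
```

In practice a dedicated SEM package would be used for the confirmatory and structural-model steps; this sketch only shows the exploratory shape of the analysis.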
Perspectives on Computing Ethics: a Multi-Stakeholder Analysis
Purpose:
Computing ethics represents a long-established, yet rapidly evolving, discipline that grows in complexity and scope on a near-daily basis. To help understand some of that scope, it is essential to incorporate a range of perspectives, from a range of stakeholders, on current and emerging ethical challenges associated with computer technology. To achieve this, a three-pronged stakeholder analysis of Computer Science academics, ICT industry professionals, and citizen groups was undertaken to explore what they consider to be crucial computing ethics concerns. The overlap between these stakeholder groups is explored, as well as whether their concerns are reflected in the existing literature.
Design/methodology/approach:
Data collection was performed using focus groups, and the data was analysed using a thematic analysis. The data was also analysed to determine if there were overlaps between the literature and the stakeholders' concerns and attitudes towards computing ethics.
Findings:
The results of the focus group analysis show a mixture of overlapping concerns between the different groups, as well as some concerns that are unique to each of the specific groups. All groups stressed the importance of data as a key topic in computing ethics. This includes concerns around the accuracy, completeness and representativeness of datasets used to develop computing applications. Academics were concerned with the best ways to teach computing ethics to university students. Industry professionals believed that a lack of diversity in software teams resulted in important questions not being asked during design and development. Citizens discussed at length the negative and unexpected impacts of social media applications. These are all topics that have gained broad coverage in the literature.
Originality:
The multi-stakeholder analysis provides individual and differing perspectives on the issues related to the rapidly evolving discipline of computing ethics.
Social implications:
In recent years, the impact of Information and Communication Technologies (ICT) on society and the environment at large has grown tremendously. From this fast-paced growth, a myriad of ethical concerns have arisen. Our analysis aims to shed light on what a diverse group of stakeholders consider the most important social impacts of technology and whether these concerns are reflected in the literature on computing ethics. The outcomes of this analysis will form the basis for new teaching content that will be developed in future to help illuminate and address these concerns
From P4 medicine to P5 medicine: transitional times for a more human-centric approach to AI-based tools for hospitals of tomorrow
Within the debate on shaping future clinical services, where different robotics and artificial intelligence (AI) based technologies are integrated to perform tasks, the authors provide the interdisciplinary analysis required to validate a tool aimed at supporting melanoma cancer diagnosis. In particular, they focus on the ethical-legal and technical requirements needed to address the Assessment List for Trustworthy AI (ALTAI), highlighting some pros and cons of the adopted self-assessment checklist. The dialogue additionally stimulates remarks on the EU regulatory initiatives on AI in healthcare systems.
Reframing data ethics in research methods education: a pathway to critical data literacy
This paper presents an ethical framework designed to support the development of critical data literacy for research methods courses and data training programmes in higher education. The framework we present draws upon our reviews of literature, course syllabi and existing frameworks on data ethics. For this research we reviewed 250 research methods syllabi from across the disciplines, as well as 80 syllabi from data science programmes, to understand how or if data ethics was taught. We also reviewed 12 data ethics frameworks drawn from different sectors. Finally, we reviewed an extensive and diverse body of literature about data practices, research ethics, data ethics and critical data literacy, in order to develop a transversal model that can be adopted across higher education. To promote and support ethical approaches to the collection and use of data, ethics training must go beyond securing informed consent to enable a critical understanding of the techno-centric environment and the intersecting hierarchies of power embedded in technology and data. By fostering ethics as a method, educators can enable research that protects vulnerable groups and empowers communities.