The Digital Classicist 2013
This edited volume collects peer-reviewed papers that originated as presentations at Digital Classicist seminars and conference panels.
This wide-ranging volume showcases exemplary applications of digital scholarship to the ancient world and critically examines the many challenges and opportunities afforded by such research. The chapters included here demonstrate innovative approaches that drive forward the research interests of both humanists and technologists while showing that rigorous scholarship is as central to digital research as it is to mainstream classical studies.
As with the earlier Digital Classicist publications, our aim is not to give a broad overview of the field of digital classics; rather, we present here a snapshot of some of the varied research of our members in order to engage with and contribute to the development of scholarship both in the fields of classical antiquity and Digital Humanities more broadly.
On data-driven systems analyzing, supporting and enhancing users’ interaction and experience
The research areas of Human-Computer Interaction and Software Architectures have traditionally been treated separately, but many authors have worked to merge them to build better software systems. One of the common gaps between software engineering and usability is the lack of strategies for applying usability principles in the initial design of software architectures. Including these principles from the early phases of software design would help to avoid later architectural changes to accommodate user experience requirements. The combination of both fields (software architectures and Human-Computer Interaction) would contribute to building better interactive software that includes the best of both system-centered and user-centered design. In that combination, the software architectures should capture the fundamental structure and ideas of the system to offer the desired quality based on sound design decisions.
Moreover, the information kept within a system is an opportunity to extract knowledge about the system itself, its components, the software included, the users, and the interaction occurring inside. The knowledge gained from the information generated in a software environment can be used to improve the system itself, its software, the users' experience, and the results. Thus, the combination of the areas of Knowledge Discovery and Human-Computer Interaction offers ideal conditions for addressing Human-Computer-Interaction-related challenges. Human-Computer Interaction focuses on human intelligence and Knowledge Discovery on computational intelligence; combining both can augment human intelligence with machine intelligence to discover new insights in a world crowded with data.
This Ph.D. Thesis deals with these kinds of challenges: how approaches like data-driven software architectures (using Knowledge Discovery techniques) can help to improve users' interaction and experience within an interactive system. Specifically, it deals with how to improve the human-computer interaction processes of different kinds of stakeholders in order to improve aspects such as the user experience or the ease of accomplishing a specific task.
Several research actions and experiments support this investigation. These included a systematic literature review and mapping aimed at finding how software architectures have been used to support, analyze, or enhance human-computer interaction, as well as work on four different research scenarios that present common challenges in the Human-Computer Interaction knowledge area. The case studies fitting these scenarios were chosen based on the Human-Computer Interaction challenges they present and on the authors' access to them. The four case studies were: an educational laboratory virtual world; a Massive Open Online Course and the social networks where its students discuss and learn; a system that includes very large web forms; and an environment where programmers develop code in the context of quantum computing. The development of these experiences involved the review of more than 2700 papers (in the literature review phase alone), the analysis of the interaction of 6000 users in four different contexts, and the analysis of 500,000 quantum computing programs.
As outcomes from these experiences, solutions are presented regarding the minimal software artifacts to include in software architectures, the behavior they should exhibit, the features desired in the extended software architecture, the analytic workflows and approaches to use, and the different kinds of feedback needed to reinforce users' interaction and experience.
The results achieved led to the conclusion that, although this is not yet a standard practice in the literature, software environments should embrace Knowledge Discovery and data-driven principles to analyze and respond appropriately to users' needs and to improve or support the interaction. To adopt Knowledge Discovery and data-driven principles, software environments need to extend their software architectures to also cover the challenges related to Human-Computer Interaction. Finally, to tackle the current challenges related to users' interaction and experience, and aiming to automate the software response to users' actions, desires, and behaviors, interactive systems should also include intelligent behaviors by embracing Artificial Intelligence procedures and techniques.
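The data-driven idea above can be illustrated with a minimal sketch: a component that records interaction events and mines them to trigger supportive feedback. This is not the thesis architecture; the class name, event fields, and threshold below are invented for illustration.

```python
# Minimal sketch (not the thesis architecture) of a data-driven component
# that records interaction events and mines them for adaptive support.
# Class name, event fields, and the flagging threshold are illustrative.
from collections import defaultdict
from statistics import mean

class InteractionAnalyzer:
    """Collects (user, task, seconds) events and flags users whose mean
    task time is far above the population mean, as a hook for feedback."""

    def __init__(self):
        self.times = defaultdict(list)  # user -> list of task durations

    def record(self, user, task, seconds):
        self.times[user].append(seconds)

    def users_needing_support(self, factor=2.0):
        # Flag users whose average time exceeds `factor` times the
        # population baseline; an extended architecture could route
        # these users to hints, tutorials, or simplified workflows.
        all_times = [t for ts in self.times.values() for t in ts]
        if not all_times:
            return []
        baseline = mean(all_times)
        return sorted(u for u, ts in self.times.items()
                      if mean(ts) > factor * baseline)

analyzer = InteractionAnalyzer()
analyzer.record("alice", "web_form", 30)
analyzer.record("bob", "web_form", 35)
analyzer.record("carol", "web_form", 400)  # struggling user
print(analyzer.users_needing_support())  # ['carol']
```

The same loop (collect events, extract knowledge, feed it back) is what the thesis proposes to embed directly in the software architecture rather than bolt on afterwards.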
Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty
Little work exists on the comparison of distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on the examination of their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This mirrors human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new evidence arises. Thus, this thesis focuses on a comparison of three approaches in AI for implementing non-monotonic reasoning models of inference, namely: expert systems, fuzzy reasoning, and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous, and retractable pieces of evidence. Hence, reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability.
In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while presenting robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
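The core mechanism of defeasible argumentation can be sketched with a toy example of Dung-style abstract argumentation under grounded semantics. This is a simplification, not the thesis's models, which use structured arguments elicited from expert knowledge bases; the argument names below are invented.

```python
# Toy sketch of non-monotonic reasoning via abstract argumentation
# (Dung-style grounded semantics). The thesis models are far richer;
# this only shows defeat and reinstatement between abstract arguments.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension: the least fixed point of the
    characteristic function F(S) = {a : S attacks every attacker of a}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(s):
        # An argument is acceptable w.r.t. S if S attacks all its attackers.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Example: c attacks b, b attacks a. Because b is defeated by c,
# the conclusion a is reinstated -- a retraction pattern no monotonic
# logic can express.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

Removing argument c from the example would defeat a again, which is exactly the retraction-under-new-information behaviour the abstract describes.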
Geographic information extraction from texts
A large volume of unstructured texts, containing valuable geographic information, is available online. This information – provided implicitly or explicitly – is useful not only for scientific studies (e.g., spatial humanities) but also for many practical applications (e.g., geographic information retrieval). Although substantial progress has been achieved in geographic information extraction from texts, there are still unsolved challenges and issues, ranging from methods, systems, and data to applications and privacy. Therefore, this workshop will provide a timely opportunity to discuss recent advances, new ideas, and concepts, but also to identify research gaps in geographic information extraction.
AIRM: a new AI Recruiting Model for the Saudi Arabian labour market
One of the goals of Saudi Vision 2030 is to keep the unemployment rate at the lowest level to empower the economy. Prior research has shown that an increase in unemployment has a negative effect on a country’s Gross Domestic Product. This research aims to utilise cutting-edge technology such as Data Lake (DL), Machine Learning (ML) and Artificial Intelligence (AI) to assist the Saudi labour market by matching job seekers with vacant positions. Currently, human experts carry out this process; however, this is time-consuming and labour-intensive. Moreover, in the Saudi labour market, this process does not use a cohesive data centre to monitor, integrate, or analyse labour market data, resulting in inefficiencies, such as bias and latency. These inefficiencies arise from a lack of technologies and, more importantly, from having an open labour market without a national labour market data centre. This research proposes a new AI Recruiting Model (AIRM) architecture that exploits DLs, ML and AI to rapidly and efficiently match job seekers to vacant positions in the Saudi labour market. A Minimum Viable Product (MVP) is employed to test the proposed AIRM architecture using a labour market dataset simulation corpus for training purposes; the architecture is further evaluated against three research-collaborative Human Resources (HR) professionals. As this research is data-driven in nature, it requires collaboration from domain experts. The first layer of the AIRM architecture uses balanced iterative reducing and clustering using hierarchies (BIRCH) as a clustering algorithm for the initial screening layer. The mapping layer uses sentence transformers with a robustly optimised BERT pre-training approach (RoBERTa) as the base model, and ranking is carried out using the Facebook AI Similarity Search (FAISS).
Finally, the preferences layer takes the user’s preferences as a list and sorts the results using the pre-trained cross-encoder model, considering the weight of the more important words. This new AIRM has yielded favourable outcomes. To account for the subjective character of a selection process handled exclusively by human HR experts, this research considered an AIRM selection accepted if it was ratified by at least one HR expert. The research evaluated the AIRM using two metrics: accuracy and time. The AIRM had an overall matching accuracy of 84%, with at least one expert agreeing with the system’s output. Furthermore, it completed the task in 2.4 minutes, whereas human experts took more than six days on average. Overall, the AIRM outperforms humans in task execution, making it useful in pre-selecting a group of applicants and positions. The AIRM is not limited to government services. It can also help any commercial business that uses Big Data.
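The layered matching idea (screen, map by semantic similarity, rank, then re-sort by preferences) can be sketched with a toy model. Here bag-of-words cosine similarity stands in for the paper's RoBERTa embeddings and FAISS search, and a keyword boost stands in for the cross-encoder preferences layer; all job and seeker data is invented.

```python
# Toy sketch of AIRM-style layered matching. Bag-of-words cosine
# similarity replaces RoBERTa embeddings + FAISS, and a simple keyword
# boost replaces the cross-encoder preferences layer. Data is invented.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy 'embedding': a term-frequency vector over whitespace tokens."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u)
    norm = (sqrt(sum(x * x for x in u.values()))
            * sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def rank_jobs(seeker_profile, jobs, preferences=()):
    """Mapping layer: score vacancies by similarity to the seeker profile.
    Preferences layer: boost jobs mentioning the seeker's preferred terms."""
    sv = embed(seeker_profile)
    scored = []
    for title, description in jobs.items():
        score = cosine(sv, embed(description))
        # Small additive boost per preferred keyword found in the posting.
        score += 0.1 * sum(1 for p in preferences if p in description.lower())
        scored.append((score, title))
    return [title for _, title in sorted(scored, reverse=True)]

jobs = {
    "data engineer": "build data pipelines python sql cloud",
    "hr specialist": "recruiting interviews onboarding policies",
    "ml engineer": "machine learning python models deployment",
}
print(rank_jobs("python machine learning graduate", jobs,
                preferences=["deployment"]))
# ['ml engineer', 'data engineer', 'hr specialist']
```

In the actual AIRM, BIRCH first clusters the corpus so that only a relevant cluster is searched, and FAISS makes the nearest-neighbour lookup scale to the full labour-market dataset; the structure of the pipeline, however, is the same screen-map-rank-prefer sequence shown here.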