Business Domain Modelling using an Integrated Framework
This paper presents an application of a "Systematic Soft Domain Driven Design Framework" as a soft systems approach to domain-driven design in information systems development. The framework combines techniques from Soft Systems Methodology (SSM), the Unified Modelling Language (UML), and an implementation pattern known as "Naked Objects". This framework has been used in action research projects that have involved the investigation and modelling of business processes using object-oriented domain models and the implementation of software systems based on those domain models. Within this framework, Soft Systems Methodology (SSM) is used as a guiding methodology to explore the problem situation and to develop the domain model using UML for the given business domain. The framework was proposed and evaluated in our previous works, and a real case study, an "Information Retrieval System for academic research", is used in this paper to show further practice and evaluation of the framework in a different business domain. We argue that there are advantages to combining and using techniques from different methodologies in this way for business domain modelling. The framework is overviewed and justified as a multimethodology using Mingers' multimethodology ideas.
Supporting decision making process with "Ideal" software agents: what do business executives want?
According to Simon's (1977) decision making theory, intelligence is the first and most important phase in the decision making process. With the escalation of information resources available to business executives, it is becoming imperative to explore the potential and challenges of using agent-based systems to support the intelligence phase of decision-making. This research examines UK executives' perceptions of using agent-based support systems and the criteria for the design and development of their "ideal" intelligent software agents. The study adopted an inductive approach using focus groups to generate a preliminary set of design criteria for "ideal" agents. It then followed a deductive approach using semi-structured interviews to validate and enhance the criteria. This qualitative research has generated unique insights into executives' perceptions of the design and use of agent-based support systems. The systematic content analysis of qualitative data led to the proposal and validation of design criteria at three levels. The findings revealed the most desirable criteria for agent-based support systems from the end users' point of view. The design criteria can be used not only to guide intelligent agent system design but also system evaluation.
A maturity based qualitative information systems effectiveness evaluation of a public organization in Turkey
Among the top issues of information systems (IS) management is measuring and improving IS effectiveness. The concept of IS effectiveness is widely accepted throughout IS research as the principal criterion for evaluating information systems. IS effectiveness is defined as the extent to which a given information system actually contributes to achieving organizational goals. As society moves from the industrial era to the information age, the role of IS in any organization shifts from efficiency to effectiveness. This case study presents an IS effectiveness evaluation methodology applied to a public organization. The paper supports in general the claim that the use of a systematic IS effectiveness evaluation approach can improve organizational performance.
A multi-method approach to evaluate health information systems
Systematic evaluation of the introduction and impact of health information systems (HIS) is a challenging task. As the implementation is a dynamic process, with diverse issues emerging at various stages of system introduction, it is a challenge to weigh the contribution of various factors and differentiate the critical ones. A conceptual framework is helpful in guiding the evaluation effort; otherwise data collection may not be comprehensive and accurate, which may in turn lead to inadequate interpretation of the phenomena under study. Based on a comprehensive literature review and the author's own practice of evaluating health information systems, the author proposes a multimethod approach that incorporates both quantitative and qualitative measurement and is centered around the DeLone and McLean Information System Success Model. This approach aims to quantify the performance of HIS and its impact, and to provide comprehensive and accurate explanations of the causal relationships among the different factors. It will provide decision makers with accurate and actionable information for improving the performance of the introduced HIS.
Challenges in Evaluating Development Effectiveness
Evaluation quality is a function of methodological and data inputs. This paper argues that there has been inadequate investment in methodology, often resulting in low-quality evaluation outputs. With an increased focus on results, evaluation needs to deliver credible information on the role of development-supported interventions in improving the lives of poor people, so attention to sound methodology matters. This paper explores three areas in which evaluation can be improved. First, reporting agency-wide performance through monitoring systems that satisfy the Triple-A criteria of aggregation, attribution and alignment, which includes procedures for the systematic summary of qualitative data. Second, more attention needs to be paid to measuring impact, through the use of randomisation where possible and appropriate, or through quasi-experimental methods. However, analysis of impact needs to be firmly embedded in a theory-based approach which maps the causal chain from inputs to impacts. Finally, analysis of sustainability needs to move beyond its current crude and cursory treatment to embrace the tools readily available to the discipline.
Keywords: evaluation, development effectiveness, World Bank
VoMBaT: A Tool for Visualising Evaluation Measure Behaviour in High-Recall Search Tasks
The objective of High-Recall Information Retrieval (HRIR) is to retrieve as many relevant documents as possible for a given search topic. One approach to HRIR is Technology-Assisted Review (TAR), which uses information retrieval and machine learning techniques to aid the review of large document collections. TAR systems are commonly used in legal eDiscovery and systematic literature reviews. Successful TAR systems are able to find the majority of relevant documents using the fewest assessments. Commonly used retrospective evaluation assumes that the system first achieves a specific, fixed recall level, and then measures the precision or work saved (e.g., precision at r% recall). This approach can obscure how evaluation measures behave in a fixed-recall setting. It is also problematic when estimating time and money savings during technology-assisted reviews.
This paper presents a new visual analytics tool to explore the dynamics of evaluation measures depending on the recall level. We implemented 18 evaluation measures based on the confusion matrix terms, both from general IR tasks and specific to TAR. The tool allows for a comparison of the behaviour of these measures in a fixed-recall evaluation setting. It can also simulate savings in time and money, and a count of manual vs. automatic assessments, for different datasets depending on the model quality. The tool is open-source, and the demo is available at the following URL: https://vombat.streamlit.app
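To illustrate the kind of fixed-recall, confusion-matrix measures the abstract refers to, here is a minimal sketch of two measures common in the TAR literature: precision and Work Saved over Sampling at recall r (WSS@r). The formulas and the worked numbers are assumptions drawn from general TAR practice, not from the VoMBaT implementation itself.

```python
# Hedged sketch of two confusion-matrix measures used in TAR
# evaluation at a fixed recall level r. Formulas are the standard
# textbook definitions, not taken from the VoMBaT source.

def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP)."""
    return tp / (tp + fp)

def wss_at_r(tn: int, fn: int, n: int, r: float) -> float:
    """Work Saved over Sampling at recall r:
    WSS@r = (TN + FN) / N - (1 - r),
    i.e. the fraction of documents left unscreened, minus the
    screening effort a random sample would need to reach recall r."""
    return (tn + fn) / n - (1 - r)

# Hypothetical review: 1000 documents, 100 relevant; the reviewer
# stops after screening 400 documents, having found 95 relevant
# (recall 0.95).
tp, fp = 95, 305   # screened: relevant found, irrelevant screened
fn, tn = 5, 595    # unscreened: relevant missed, irrelevant skipped
n, r = 1000, 0.95

print(precision(tp, fp))       # 0.2375
print(wss_at_r(tn, fn, n, r))  # 0.55 -> 55% of screening effort saved
```

At a fixed recall target, precision tells how much of the screened set was relevant, while WSS@r captures the review effort avoided; comparing many such measures over varying recall levels is what the tool visualises.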
Analytics Use Cases for Mass Customization – A Process-based Approach for Systematic Discovery
Nowadays, mass customization (MC) is shaped by the application of digital technologies like computer-aided design, computer-aided manufacturing, and distribution planning. Within an MC process, various data are created, which can be used to gain knowledge about past and future business activities by means of modern data analytics methods. The paper at hand applies design science research and presents a process-based approach for identifying potential analytics use cases for MC. For this purpose, a generic MC process is derived from previous literature and a systematic analysis is carried out using the work systems method. The resulting artifact offers a differentiated view of customers, products, activities, participants, technologies, and information, as well as of the information flows within the MC process. It enables manufacturers to identify valuable opportunities for analytics and to optimize current MC processes. Furthermore, it can be used to develop a systematic process for the discovery and evaluation of analytics use cases and novel business models in the future.
A Review of Evaluation Practices of Gesture Generation in Embodied Conversational Agents
Embodied Conversational Agents (ECAs) take on different forms, including virtual avatars and physical agents, such as humanoid robots. ECAs are often designed to produce nonverbal behaviour to complement or enhance their verbal communication. One form of nonverbal behaviour is co-speech gesturing, which involves movements that the agent makes with its arms and hands, paired with verbal communication. Co-speech gestures for ECAs can be created using different generation methods, such as rule-based and data-driven processes. However, reports on gesture generation methods use a variety of evaluation measures, which hinders comparison. To address this, we conducted a systematic review of co-speech gesture generation methods for iconic, metaphoric, deictic, or beat gestures, including their evaluation methods. We reviewed 22 studies that had an ECA with a human-like upper body that used co-speech gesturing in a social human-agent interaction, including a user study to evaluate its performance. We found that most studies used a within-subject design and relied on a form of subjective evaluation, but lacked a systematic approach. Overall, methodological quality was low-to-moderate and few systematic conclusions could be drawn. We argue that the field requires rigorous and uniform tools for the evaluation of co-speech gesture systems. We have proposed recommendations for future empirical evaluation, including standardised phrases and test scenarios to test generative models. We have also proposed a research checklist that can be used to report relevant information for the evaluation of generative models as well as to evaluate co-speech gesture use.
Making the Case: The Systems Project Case Study as Storytelling
Term projects based on case studies are a common way to engage students in the concepts and skills introduced by information-systems development courses. The case study works as a pedagogical approach in great part because it tells a story, which taps into the centrality of narrative in human cognition. This paper draws on thinking in the area of narrative studies in order to develop a systematic perspective on the task of writing systems-project case studies. A general narrative framework for the case study is proposed, with the goal of providing structured guidance in the development, use, and evaluation of cases. The author's recent experience in developing a case study for use in database-management courses illustrates key points.
- …