3,058 research outputs found

    Measuring IT System Value with Entity-Specific Factors Influencing Process Runtime Efficiency

    This empirical study explores the factors influencing the runtime of a booking process in the middle office of a major European investment company. The study is based on mixed methods research that combines qualitative data collection techniques with a quantitative research approach. On the basis of semi-structured interviews, we first develop and present a model of the business process and the control points used to assess booking runtime. We further identify relevant IT-related factors, such as process automation, that may affect booking runtime. Then, we analyze a unique dataset of historical process cycles providing details on booking runtime. Our quantitative analysis of these data provides new insights into how to identify the most relevant factors affecting the runtime of transaction processes in the financial industry. From a methodological perspective, our study illustrates how to conduct mixed methods research in the field of process performance evaluation.
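
    As a rough illustration of the kind of quantitative analysis the abstract describes, the sketch below fits an ordinary least squares regression of booking runtime on hypothetical factors such as an automation flag. The data, column names, and effect sizes are invented for illustration and are not taken from the study.

    # Hypothetical sketch, not the study's code or data: relate booking runtime
    # to an automation flag and an order-volume proxy via OLS.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "automated": rng.integers(0, 2, n),       # 1 = booking step automated (invented factor)
        "order_volume": rng.lognormal(10, 1, n),  # invented order-size proxy
    })
    # Synthetic runtimes: automation shortens them, volume lengthens them.
    df["runtime_sec"] = (120 - 40 * df["automated"]
                         + 5 * np.log(df["order_volume"])
                         + rng.normal(0, 10, n))

    model = smf.ols("runtime_sec ~ automated + np.log(order_volume)", data=df).fit()
    print(model.summary())  # the coefficient on 'automated' estimates its runtime effect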

    Research in software allocation for advanced manned mission communications and tracking systems

    An assessment of the planned processing hardware and software/firmware for the Communications and Tracking System of the Space Station Freedom (SSF) was performed. The intent of the assessment was to determine the optimum distribution of software/firmware in the processing hardware for maximum throughput with minimum required memory. As a product of the assessment process, an assessment methodology was to be developed that could be used for similar assessments of future manned spacecraft system designs. The assessment process was hampered by changing requirements for the Space Station. As a result, the initial objective of determining the optimum software/firmware allocation was not fulfilled, but several useful conclusions and recommendations resulted from the assessment. It was concluded that the assessment process would not be completely successful for a system with changing requirements. It was also concluded that memory and hardware requirements were being modified to fit as a consequence of the change process, and although throughput could not be quantified, potential problem areas could be identified. Finally, inherent flexibility of the system design was essential for the success of a system design with changing requirements. Recommendations resulting from the assessment included developing common software for some embedded controller functions, reducing embedded processor requirements by hardwiring some Orbital Replacement Units (ORUs) to make better use of processor capabilities, and improving communications between software development personnel to enhance the integration process. Lastly, a critical observation was made that the software integration tasks did not appear to be addressed in the design process to the degree necessary for successful satisfaction of the system requirements.

    Quality measurement in agile and rapid software development: A systematic mapping

    Context: Although agile and rapid software development (ARSD) has been researched and applied extensively, managing quality requirements (QRs) is still challenging. As ARSD processes produce a large amount of data, measurement has become a strategy to facilitate QR management. Objective: This study aims to survey the literature related to QR management through metrics in ARSD, focusing on: bibliometrics, QR metrics, and quality-related indicators used in quality management. Method: The study design includes the definition of research questions, selection criteria, and snowballing as the search strategy. Results: We selected 61 primary studies (2001-2019). Despite a large body of knowledge and standards, there is no consensus regarding QR measurement. Terminology varies, as do the measurement models. However, seemingly different measurement models do contain similarities. Conclusion: The industrial relevance of the primary studies shows that practitioners need to improve quality measurement. Our collection of measures and data sources can serve as a starting point for practitioners to include quality measurement in their decision-making processes. Researchers could benefit from the identified similarities to start building a common framework for quality measurement. In addition, this could help researchers identify which quality aspects need more focus, e.g., security and usability, for which few metrics are reported. This work has been funded by the European Union's Horizon 2020 research and innovation program through the Q-Rapids project (grant no. 732253). This research was also partially supported by the Spanish Ministerio de Economía, Industria y Competitividad through the DOGO4ML project (grant PID2020-117191RB-I00). Silverio Martínez-Fernández worked at Fraunhofer IESE before January 2020.

    Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures

    Model-based techniques are a powerful approach to engineering autonomic and self-adaptive systems. This thesis presents a model-based approach for proactive and autonomic performance-aware resource management in dynamic IT infrastructures. At the core of the approach is an architecture-level modeling language to describe performance and resource management related aspects in such environments. With this approach, it is possible to autonomically find suitable system configurations at the model level.
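
    As a minimal, self-contained sketch of the general idea (using a plain M/M/c queueing model rather than the thesis's architecture-level modeling language), the code below evaluates candidate configurations against an analytical performance model and picks the smallest one that meets a response-time target. All parameters are illustrative assumptions.

    # Hypothetical sketch: model-based selection of a resource configuration.
    from math import factorial

    def mmc_response_time(arrival_rate, service_rate, servers):
        """Mean response time of an M/M/c queue (Erlang C formula)."""
        a = arrival_rate / service_rate            # offered load
        rho = a / servers
        if rho >= 1:
            return float("inf")                    # unstable configuration
        waiting_term = a ** servers / factorial(servers) / (1 - rho)
        p_wait = waiting_term / (sum(a ** k / factorial(k) for k in range(servers)) + waiting_term)
        return p_wait / (servers * service_rate - arrival_rate) + 1 / service_rate

    target = 0.5                                   # seconds, invented service-level objective
    for servers in range(1, 20):
        rt = mmc_response_time(arrival_rate=30, service_rate=12, servers=servers)
        if rt <= target:
            print(f"smallest feasible configuration: {servers} servers, "
                  f"predicted response time {rt:.3f}s")
            break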

    Mining complex trees for hidden fruit : a graph-based computational solution to detect latent criminal networks : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand.

    The detection of crime is a complex and difficult endeavour. Public and private organisations - focusing on law enforcement, intelligence, and compliance - commonly apply the rational isolated actor approach premised on observability and materiality. This is manifested largely as conducting entity-level risk management, sourcing 'leads' from reactive covert human intelligence sources and/or proactive sources by applying simple rules-based models. Focusing on discrete observable and material actors simply ignores that criminal activity exists within a complex system deriving its fundamental structural fabric from the complex interactions between actors, with the least observable actors likely to be both criminally proficient and influential. The graph-based computational solution developed to detect latent criminal networks is a response to the inadequacy of the rational isolated actor approach, which ignores the connectedness and complexity of criminality. The core computational solution, written in the R language, consists of novel entity resolution, link discovery, and knowledge discovery technology. Entity resolution enables the fusion of multiple datasets with high accuracy (mean F-measure of 0.986 versus competitors' 0.872), generating a graph-based expressive view of the problem. Link discovery comprises link prediction and link inference, enabling the high-performance detection (accuracy of ~0.8 versus ~0.45 for relevant published models) of unobserved relationships such as identity fraud. Knowledge discovery uses the fused graph and applies the "GraphExtract" algorithm to create a set of subgraphs representing latent functional criminal groups, and a mesoscopic graph representing how this set of criminal groups is interconnected. Latent knowledge is generated from a range of metrics, including the "Super-broker" metric and attitude prediction. The computational solution has been evaluated on a range of datasets that mimic an applied setting, demonstrating a scalable (tested on ~18 million node graphs) and performant (~33 hours runtime on a non-distributed platform) solution that successfully detects relevant latent functional criminal groups in around 90% of cases sampled and enables contextual understanding of the broader criminal system through the mesoscopic graph and associated metadata. The augmented data assets generated provide a multi-perspective systems view of criminal activity that enables advanced, informed decision-making across the microscopic, mesoscopic, and macroscopic spectrum.
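
    The sketch below illustrates the two graph steps named above with generic stand-ins rather than the thesis's R implementation: Adamic-Adar scores stand in for link discovery, and greedy modularity communities stand in for the "GraphExtract" grouping step. The toy edge list is invented.

    # Hypothetical sketch of a graph-based link-discovery and grouping workflow.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_edges_from([
        ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),  # one cluster
        ("dave", "erin"), ("erin", "frank"), ("frank", "dave"),  # another cluster
        ("carol", "dave"),                                       # bridging tie between clusters
    ])

    # Link discovery: score unobserved pairs; higher scores suggest hidden ties.
    for u, v, score in nx.adamic_adar_index(G, [("alice", "dave"), ("bob", "frank")]):
        print(f"predicted tie {u}-{v}: score {score:.2f}")

    # Group extraction: partition the fused graph into candidate functional groups.
    for i, group in enumerate(greedy_modularity_communities(G)):
        print(f"group {i}: {sorted(group)}")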

    A survey on software testability

    Context: Software testability is the degree to which a software system or a unit under test supports its own testing. To predict and improve software testability, a large number of techniques and metrics have been proposed by both practitioners and researchers in the last several decades. Reviewing and getting an overview of the entire state of the art and state of the practice in this area is often challenging for a practitioner or a new researcher. Objective: Our objective is to summarize the body of knowledge in this area and to help readers (both practitioners and researchers) prepare, measure, and improve software testability. Method: To address the above need, the authors conducted a survey in the form of a systematic literature mapping (classification) to find out what we as a community know about this topic. After compiling an initial pool of 303 papers and applying a set of inclusion/exclusion criteria, our final pool included 208 papers. Results: The area of software testability has been comprehensively studied by researchers and practitioners. Approaches for measuring and improving testability are the topics most frequently addressed in the papers. The two most often mentioned factors affecting testability are observability and controllability. Common ways to improve testability are testability transformation, improving observability, adding assertions, and improving controllability. Conclusion: This paper serves both researchers and practitioners as an "index" to the vast body of knowledge in the area of testability. The results could help practitioners measure and improve software testability in their projects.
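
    As a toy illustration of the factors the survey highlights, the sketch below refactors a function so that its clock dependency is injectable (controllability) and its result is returned rather than printed (observability), and adds an assertion. The example is ours, not drawn from the surveyed papers.

    # Hypothetical sketch of testability improvements on a toy function.
    import datetime

    # Hard to test: hidden dependency on the system clock, result only printed.
    def greet_untestable():
        hour = datetime.datetime.now().hour
        print("Good morning" if hour < 12 else "Good afternoon")

    # More testable: the clock is injectable (controllability) and the result is
    # returned instead of printed (observability), so a test can assert on it.
    def greet(now=None):
        now = now or datetime.datetime.now()
        assert 0 <= now.hour <= 23, "clock out of range"  # added assertion
        return "Good morning" if now.hour < 12 else "Good afternoon"

    def test_greet_afternoon():
        fixed = datetime.datetime(2024, 1, 1, hour=15)
        assert greet(now=fixed) == "Good afternoon"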

    Empirical Insights on Financial Intermediary Services - How Order Slicing and Modification Impact Order Execution Times

    Information technology is heavily transforming the financial industry and has already changed the intermediation services of brokers. This class of service providers has to trade especially large orders with the best possible execution and within a given time frame. Based on previous research on order execution, we develop a model for survival analysis of orders and estimate how slicing, modification, and the overall order specification, captured as multiple groups of factors, influence the likelihood of execution. Using a unique message-based dataset of Deutsche Boerse AG, we find empirical evidence on the factors that determine whether brokers and their traders can execute the overall order within a definable time frame. Finally, we discuss the observed coefficients and show how brokers and their traders can use and aggregate these coefficients for decision support on how to slice, modify, and specify large client orders.
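
    A minimal sketch of one way such a survival model could look, assuming a Cox proportional hazards formulation fitted with the lifelines library; the covariates, data, and censoring rule are invented and are not the paper's actual model or dataset.

    # Hypothetical sketch: time-to-execution modeled with Cox proportional hazards.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "n_slices": rng.integers(1, 10, n),        # invented: how often the order was sliced
        "n_modifications": rng.integers(0, 5, n),  # invented: how often it was modified
    })
    # Synthetic execution times: more slicing -> faster execution in this toy data.
    baseline = rng.exponential(60, n)
    df["exec_seconds"] = baseline / (1 + 0.3 * df["n_slices"])
    df["executed"] = (df["exec_seconds"] < 120).astype(int)  # 0 = censored at 120 s
    df.loc[df["executed"] == 0, "exec_seconds"] = 120

    cph = CoxPHFitter()
    cph.fit(df, duration_col="exec_seconds", event_col="executed")
    cph.print_summary()  # positive coefficients raise the execution hazard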