40 research outputs found

    Business-driven IT Management

    Business-driven IT management (BDIM) aims at ensuring successful alignment of business and IT through a thorough understanding of the impact of IT on business results, and vice versa. In this dissertation, we review the state of the art of BDIM research and position our intended contribution within the BDIM research space along the dimensions of decision support (as opposed to automation) and of application to IT service management processes. Within these research dimensions, we advance the state of the art by 1) contributing a decision-theoretical framework for BDIM and 2) presenting two novel BDIM solutions in the IT service management space. First, we present a simpler BDIM solution for prioritizing incidents, which can be used as a template for creating BDIM solutions in other IT service management processes. Then, we present a more comprehensive solution for optimizing the business-related performance of an IT support organization in dealing with incidents. Our decision-theoretical framework and models for BDIM bring the concepts of business impact and risk to the fore, and are able to cope with both monetizable and intangible aspects of business impact. We start from a constructive and quantitative re-definition of terms that are widely used in IT service management but for which a rigorous definition was never given: business impact, cost, benefit, risk and urgency. On top of that, we build a coherent methodology for linking IT-level metrics with business-level metrics and make progress toward solving the business-IT alignment problem. Our methodology uses a constructive and quantitative definition of alignment with business objectives, taken as the likelihood, to the best of one's knowledge, that such objectives will be met. That definition is used as the basis for building an engine for business impact calculation that is, in fact, an alignment computation engine.
We show a sample BDIM solution for incident prioritization that is built using the decision-theoretical framework, the methodology and the tools developed. We show how this sample BDIM solution could be used as a blueprint to build BDIM solutions for decision support in other IT service management processes, such as change management. However, the full power of BDIM is best understood by studying the second, fully fledged BDIM application that we present in this thesis. While incident management is used as a scenario for this second application as well, its main contribution is to provide a solution for business-driven organizational redesign to optimize the performance of an IT support organization. The solution is quite rich, and features components that orchestrate advanced techniques in visualization, simulation, data mining and operations research. We show that the techniques we use, in particular the simulation of an IT organization enacting the incident management process, bring considerable benefits both when performance is measured in terms of traditional IT metrics (mean time to resolution of incidents), and even more so when business impact metrics are brought into the picture, thereby providing a justification for investing time and effort in creating BDIM solutions. In terms of impact, the work presented in this thesis produced about twenty conference and journal publications, and has so far resulted in three patent applications. Moreover, this work has greatly influenced the design and implementation of the Business Impact Optimization module of HP DecisionCenter™, a leading commercial software product for IT optimization, whose core has been re-designed to work as described here.
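
    The incident-prioritization idea described above can be sketched in a few lines. This is only an illustration under assumed definitions, not the thesis's actual model: business impact is taken here as expected monetary loss over a planning horizon, weighted by the likelihood that the related business objective is breached, and the incident names and figures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    name: str
    loss_per_hour: float  # estimated business loss while unresolved
    breach_prob: float    # assumed likelihood the business objective is missed (0..1)

def business_impact(inc: Incident, horizon_hours: float = 4.0) -> float:
    """Hypothetical impact measure: expected loss over the planning horizon,
    weighted by the probability of breaching the business objective."""
    return inc.loss_per_hour * horizon_hours * inc.breach_prob

def prioritize(incidents: list[Incident]) -> list[Incident]:
    """Order incidents with the highest expected business impact first."""
    return sorted(incidents, key=business_impact, reverse=True)

queue = [
    Incident("email outage", loss_per_hour=500.0, breach_prob=0.2),
    Incident("order-entry down", loss_per_hour=4000.0, breach_prob=0.9),
    Incident("wiki slow", loss_per_hour=50.0, breach_prob=0.1),
]
ranked = prioritize(queue)
```

    Ranked this way, the revenue-critical incident outranks noisier but cheaper ones, which is the behaviour a business-driven prioritizer should exhibit regardless of the specific impact formula the framework derives.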

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through summarization and clustering, to the different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and to how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    SURFO Technical Report No. 2001-02

    The 2001 technical reports written by undergraduate students participating in the SURFO (Summer Undergraduate Research Fellowships in Oceanography) Program while at the University of Rhode Island.

    Composite structural materials

    The development of composite materials for aircraft applications is addressed with specific consideration of physical properties, structural concepts and analysis, manufacturing, reliability, and life prediction. The design and flight testing of composite ultralight gliders is documented. Advances in computer-aided design and methods for nondestructive testing are also discussed.

    Exponential Qubit Reduction in Optimization for Financial Transaction Settlement

    We extend the qubit-efficient encoding presented in [Tan et al., Quantum 5, 454 (2021)] and apply it to instances of the financial transaction settlement problem constructed from data provided by a regulated financial exchange. Our methods are directly applicable to any QUBO problem with linear inequality constraints. Our extension of previously proposed methods consists of a simplification in varying the number of qubits used to encode correlations, as well as a new class of variational circuits which incorporate symmetries, thereby reducing sampling overhead, improving numerical stability and recovering the expression of the cost objective as a Hermitian observable. We also propose optimality-preserving methods to reduce variance in real-world data and to substitute continuous slack variables. We benchmark our methods against standard QAOA for problems consisting of 16 transactions and obtain competitive results. Our newly proposed variational ansatz performs best overall. We demonstrate tackling problems with 128 transactions on real quantum hardware, exceeding previous results bounded by NISQ hardware by almost two orders of magnitude. (16 pages, 8 figures)
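
    For readers unfamiliar with how linear inequality constraints enter a QUBO at all, the textbook baseline that qubit-efficient encodings improve upon is the binary slack-variable penalty. The sketch below is that baseline, not the paper's method; the coefficients and penalty weight are illustrative.

```python
import itertools
import numpy as np

def qubo_with_inequality(a, b, penalty=10.0):
    """Encode the constraint a.x <= b (x binary) as the penalty
    P * (a.x + s - b)^2, with slack s = sum_j 2^j * y_j held in extra
    binary variables y_j. Returns an upper-triangular QUBO matrix over
    the concatenated vector [x, y] plus the constant offset P * b^2."""
    n_slack = int(np.ceil(np.log2(b + 1)))  # enough bits for s in 0..b
    c = np.concatenate([np.asarray(a, float), 2.0 ** np.arange(n_slack)])
    m = len(c)
    Q = np.zeros((m, m))
    for i in range(m):
        Q[i, i] = penalty * (c[i] ** 2 - 2 * b * c[i])  # uses z_i^2 = z_i
        for j in range(i + 1, m):
            Q[i, j] = 2 * penalty * c[i] * c[j]
    return Q, penalty * b * b

# Brute-force sanity check: feasible assignments reach zero penalty.
a, b = [1.0, 2.0, 3.0], 3
Q, offset = qubo_with_inequality(a, b)
energies = {bits: np.array(bits) @ Q @ np.array(bits) + offset
            for bits in itertools.product([0, 1], repeat=Q.shape[0])}
best = min(energies.values())
```

    In this textbook encoding the slack register costs ceil(log2(b+1)) extra qubits per constraint; avoiding exactly that overhead is what motivates the qubit-efficient encodings the abstract refers to.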

    Generating patterns on clothing for seamless design

    Symmetric patterns are used widely in clothing manufacture. However, the discontinuity of patterns at seams can disrupt the visual appeal of clothing. While it is possible to align patterns to conceal such pattern breaks, it is hard to create a completely seamless garment in terms of pattern continuity. In this thesis, we explore computational methods to parameterize the clothing pieces relative to a pattern's coordinate system so as to achieve pattern continuity over garments. We review previous work related to pattern alignment on clothing, as well as surface quadrangulation methods. With a suitable quadrangulation, we can map any planar pattern with fourfold rotations into each quad and achieve a seamless design. With an understanding of previous work, we approached the problem from three angles. First, we mapped patterns with sixfold rotations onto clothing by triangulating the clothing pieces and ensuring consistency of triangle vertices on both sides of a seam. Second, we mapped patterns with fourfold rotations onto clothing by optimizing the shape of each clothing piece in the texture domain. Lastly, we performed quadrangulation guided by cross fields, and mapped fourfold pattern units into each quad. We assembled and simulated the texture-mapped clothing in Blender to visualize the results.
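
    The continuity condition being exploited here can be stated concretely: a pattern with fourfold rotational symmetry looks seamless across a seam exactly when the texture coordinates on the two sides differ by a multiple-of-90-degree rotation plus an integer translation of the pattern's unit cell. Below is a hypothetical checker for that condition on matched seam vertex pairs; it illustrates the criterion, not any algorithm from the thesis.

```python
import numpy as np

# The four rotations under which a fourfold-symmetric pattern repeats.
ROTS = [np.array([[1, 0], [0, 1]]), np.array([[0, -1], [1, 0]]),
        np.array([[-1, 0], [0, -1]]), np.array([[0, 1], [-1, 0]])]

def seam_is_seamless(uv_a, uv_b, tol=1e-6):
    """True if uv_b = R @ uv_a + t for one 90-degree rotation R and one
    integer translation t shared by every matched seam vertex pair."""
    uv_a, uv_b = np.asarray(uv_a, float), np.asarray(uv_b, float)
    for R in ROTS:
        t = uv_b[0] - uv_a[0] @ R.T           # candidate translation
        if not np.allclose(t, np.round(t), atol=tol):
            continue                          # not a whole number of pattern units
        if np.allclose(uv_b, uv_a @ R.T + t, atol=tol):
            return True
    return False

a = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]  # UVs along one side of a seam
# Other side rotated 90 degrees and shifted by whole pattern units: seamless.
b = (np.asarray(a) @ ROTS[1].T + np.array([2.0, 1.0])).tolist()
# Other side offset by a quarter unit: the pattern visibly breaks.
c = [(u + 0.25, v) for (u, v) in a]
```

    A quadrangulation whose quads each carry one pattern unit satisfies this condition along every edge by construction, which is why cross-field-guided quadrangulation is a natural basis for the third approach.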

    Deep neural network generation for image classification within resource-constrained environments using evolutionary and hand-crafted processes

    Constructing Convolutional Neural Network (CNN) models is a manual process requiring expert knowledge and trial and error. Background research highlights the following knowledge gaps: 1) existing efficiency-focused CNN models make design choices that impact model performance, and better ways are needed to construct accurate models for resource-constrained environments that lack graphics processing units (GPUs) to speed up model inference, such as CCTV cameras and IoT devices; 2) existing methods for automatically designing CNN architectures do not explore the search space effectively for the best solution; and 3) existing methods for automatically designing CNN architectures do not exploit modern architecture design patterns such as residual connections, whose absence limits model depth owing to the vanishing gradient problem. Furthermore, existing methods for automatically designing CNN architectures adopt search strategies that make them vulnerable to local minima traps. Better techniques to construct efficient CNN models, and automated approaches that can produce accurate deep model constructions, would advance many areas such as hazard detection, medical diagnosis and robotics in both academia and industry. The contributions of this research are 1) an efficient and accurate CNN architecture for resource-constrained environments, owing to a novel block structure containing 1x3 and 3x1 convolutions to save computational cost; 2) a particle swarm optimization (PSO) method of automatically constructing efficient deep CNN architectures with greater accuracy, based on a novel encoding and search strategy; and 3) a PSO-based method of automatically constructing deeper CNN models with improved accuracy, based on a novel encoding scheme that employs residual connections and a novel search mechanism that follows the global and neighbouring best leaders.
The main findings of this research are 1) the proposed efficiency-focused CNN model outperformed MobileNetV2 by 13.43% with respect to accuracy, and by 39.63% with respect to efficiency measured in floating-point operations; a reduction in floating-point operations means the model has the potential for faster inference, which benefits applications in resource-constrained environments without GPUs, such as CCTV cameras; 2) the proposed automatic CNN generation technique outperformed existing methods by 7.58% with respect to accuracy, with a 63% improvement in search-time efficiency owing to the proposal of more efficient architectures speeding up the search process; and 3) the proposed automatic deep residual CNN generation method improved model accuracy by 4.43% when compared against related studies, owing to deeper model construction and improvements in the search process. The proposed search process embeds human knowledge of constructing deep residual networks and provides constraint settings which can be used to limit the proposed model's depth and width. The ability to constrain a model's depth and width is important as it ensures the upper bounds of a proposed model will fit within the constraints of resource-constrained environments.
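
    The search strategy common to contributions 2) and 3) is particle swarm optimization. The sketch below shows the bare PSO loop with a cheap surrogate objective standing in for validation accuracy, since training a CNN per particle is the expensive part. The thesis's actual particle encoding (blocks, residual connections, neighbourhood leaders) is richer than the two assumed dimensions of depth and width used here.

```python
import random

def surrogate_loss(depth, width):
    # Hypothetical stand-in for (1 - validation accuracy); in an architecture
    # search, each evaluation would train and validate the encoded CNN.
    return (depth - 8) ** 2 + ((width - 64) / 16.0) ** 2

def pso(n_particles=12, iters=60, w=0.7, c1=1.4, c2=1.4, seed=0):
    rng = random.Random(seed)
    # Each particle encodes one candidate architecture as (depth, width).
    pos = [[rng.uniform(1, 20), rng.uniform(8, 256)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                              # personal bests
    gbest = min(pbest, key=lambda p: surrogate_loss(*p))[:]  # global best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(2):
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - p[d])
                             + c2 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if surrogate_loss(*p) < surrogate_loss(*pbest[i]):
                pbest[i] = p[:]
                if surrogate_loss(*p) < surrogate_loss(*gbest):
                    gbest = p[:]
    return gbest

best = pso()
```

    Replacing the single global-best attraction with leaders drawn from a particle's neighbourhood, as the third contribution does, is one standard way to reduce the local-minima vulnerability noted above.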