2,114 research outputs found

    Develop Guidelines for Pavement Preservation Treatments and for Building a Pavement Preservation Program Platform for Alaska

    Get PDF
    INE/AUTC 12.0

    Forecasting of commercial sales with large scale Gaussian Processes

    Full text link
    This paper argues that applications of Gaussian Processes to the fast-moving consumer goods industry have received too little discussion, even though the technique can be valuable: it provides, for example, automatic feature relevance determination, and the posterior mean can unlock insights into the data. Significant challenges are the large size and high dimensionality of commercial point-of-sale data. The study reviews approaches to Gaussian Process modeling for large data sets, evaluates their performance on commercial sales data, and shows the value of this type of model as a decision-making tool for management. Comment: 10 pages, 5 figures
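
    As a minimal sketch of the exact Gaussian Process regression that large-scale approximations build on (the toy series, squared-exponential kernel, and hyperparameters below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 * exp(-(x - x')^2 / (2 l^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2, length_scale=1.0):
    """Exact GP posterior mean and variance (zero prior mean), via Cholesky."""
    K = rbf_kernel(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, length_scale)
    K_ss = rbf_kernel(x_test, x_test, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                       # posterior mean = forecast
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)  # posterior variance = uncertainty
    return mean, var

# Toy "weekly sales" series: smooth seasonal trend plus observation noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)
mean, var = gp_posterior(x, y, x)
```

    The Cholesky solve costs O(n³), which is exactly why the paper surveys approximations for large commercial data sets; the posterior mean serves as the forecast and the variance quantifies its uncertainty.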

    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    Get PDF
    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (vs. general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    The Innovation Waltz: Unpacking Developers’ Response to Market Feedback and Its Effects on App Performance

    Get PDF
    To remain competitive in the intensely competitive mobile app market, developers often rely on user feedback to fuel the innovation process. Past studies, however, have rarely examined the impact of developers’ incremental innovation strategies by treating app innovation as a continuous process. This knowledge gap prompted us to advance a framework of developers’ incremental innovation strategies comprising four coping strategies: sailing, optimizing, supplementing, and patching. Employing a multi-state Markov model to capture the probability that a developer employs each incremental innovation strategy in response to distinct types of market feedback during the app innovation process, we analyze data sourced from the Android app store consisting of 4,583 apps, 29,307 updates, and 231,817 reviews. We find that market feedback affects the adoption of the four incremental innovation strategies differently. Additionally, sailing, supplementing, and optimizing strategies boost app downloads, while supplementing, optimizing, and patching strategies improve app ratings.
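
    As a sketch of the kind of estimate a multi-state Markov model starts from — maximum-likelihood transition probabilities between the four strategies across consecutive updates (the update sequence below is invented, and the paper's model additionally conditions transitions on market feedback):

```python
STRATEGIES = ["sailing", "optimizing", "supplementing", "patching"]
IDX = {s: i for i, s in enumerate(STRATEGIES)}

def transition_matrix(sequence):
    """MLE of first-order Markov transition probabilities from one
    developer's observed sequence of update strategies."""
    counts = [[0.0] * 4 for _ in range(4)]
    for a, b in zip(sequence, sequence[1:]):
        counts[IDX[a]][IDX[b]] += 1
    for row in counts:            # normalize each row to probabilities
        total = sum(row)
        if total:
            for j in range(4):
                row[j] /= total
    return counts

# Hypothetical strategy sequence for one app's successive updates.
updates = ["sailing", "patching", "patching", "optimizing", "sailing", "patching"]
P = transition_matrix(updates)
```

    Rows with no observed transitions stay all-zero rather than being imputed; a real multi-state model would also attach covariates (review sentiment, download trends) to each transition intensity.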

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    Get PDF
    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are a focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy will offer significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average 71.5% to 91.3% improvement towards the identification of vulnerabilities likely to be targeted and exploited by cyber threat actors. 
The ROI of patching using our policies resulted in savings in the range of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs that link large data sets to facilitate semantic queries and create data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
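
    nDCG, the metric used above to score ranking policies, can be computed directly; relevance here could be 1 if a vulnerability was later observed exploited and 0 otherwise, though that encoding is an assumption, not the paper's exact grading:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG normalized by the ideal (descending-relevance) ordering, in [0, 1]."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0
```

    A policy that ranks all later-exploited vulnerabilities first scores 1.0; burying them lower in the list shrinks the score logarithmically with rank.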

    Recommended Selective Maintenance and Rehabilitation Treatment Approach for Air Force Primary Rigid Runway Pavement Systems

    Get PDF
    The Air Force faces the challenge of preserving its current inventory of 154 million square yards of paved airfield assets while reducing the budget by $36.2 billion between fiscal years 2015-2019. This research sought to determine a selective maintenance and rehabilitation treatment approach that allocates resources efficiently to preserve degrading pavement assets in a financially constrained environment. Air Force pavement inspection reports from the past five years provided 4,289 observed pavement distress data points for this research. The data were entered into the PAVER pavement management software to calculate Pavement Condition Index (PCI) deduct values for every pavement distress combination. A pavement distress prioritization list was created from the 111 PCI deduct value calculations to rank the impact that different distresses have on the condition of pavement systems. The analysis led to a recommended selective maintenance and rehabilitation treatment approach: treat all medium- and high-severity joint seal damage with joint seal repair, replace all slabs with a PCI less than 70 and a PCI deduct greater than 10, and spend all remaining resources on the Air Force recommended treatments. The recommended approach minimizes the potential for Foreign Object Damage, uses corrective slab replacement to repair the worst-conditioned and highest-priority slabs, and reduces further pavement degradation with the Air Force recommended treatments.
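
    The recommended slab-replacement rule (PCI < 70 and PCI deduct > 10, worst slabs first) can be expressed as a small filter; the slab records and field names below are invented for illustration:

```python
def select_slabs_for_replacement(slabs, pci_threshold=70, deduct_threshold=10):
    """Apply the recommended rule: replace slabs whose PCI is below the
    threshold and whose PCI deduct exceeds the threshold, ranked worst-first."""
    selected = [s for s in slabs
                if s["pci"] < pci_threshold and s["deduct"] > deduct_threshold]
    return sorted(selected, key=lambda s: s["pci"])  # lowest PCI = worst slab

# Hypothetical slab inspection records.
slabs = [
    {"id": "A1", "pci": 55, "deduct": 22},
    {"id": "B4", "pci": 80, "deduct": 15},  # PCI too high: keep in service
    {"id": "C2", "pci": 65, "deduct": 8},   # deduct too small: minor distress
    {"id": "D7", "pci": 40, "deduct": 30},
]
to_replace = select_slabs_for_replacement(slabs)
```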

    Cloud engineering is search based software engineering too

    Get PDF
    Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud: ‘SBSE in the cloud’. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of ‘SBSE for the cloud’, formulating cloud computing challenges in ways that can be addressed using SBSE.
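
    As a toy instance of ‘SBSE for the cloud’: a random-move hill climb that assigns tasks to machines to minimize makespan, one flavour of the allocation and balancing objectives mentioned above (the task sizes and search budget are arbitrary, and real SBSE work typically uses richer metaheuristics such as genetic algorithms):

```python
import random

def makespan(assignment, task_costs, n_machines):
    """Longest machine schedule under a task-to-machine assignment."""
    loads = [0.0] * n_machines
    for task, m in enumerate(assignment):
        loads[m] += task_costs[task]
    return max(loads)

def hill_climb(task_costs, n_machines, iters=2000, seed=0):
    """Repeatedly move one random task to a random machine, keeping the
    move only if it does not lengthen the longest machine's schedule."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_machines) for _ in task_costs]
    best = makespan(assign, task_costs, n_machines)
    for _ in range(iters):
        t = rng.randrange(len(task_costs))
        old = assign[t]
        assign[t] = rng.randrange(n_machines)
        cur = makespan(assign, task_costs, n_machines)
        if cur <= best:
            best = cur       # accept improving (or sideways) moves
        else:
            assign[t] = old  # revert worsening moves
    return assign, best

tasks = [5, 5, 4, 4, 3, 3, 2, 2]   # total work 28, so 4 machines bound makespan at 7
assign, best = hill_climb(tasks, n_machines=4)
```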

    Countering Cybersecurity Vulnerabilities in the Power System

    Get PDF
    Security vulnerabilities in software pose a serious threat to power grid security and can be exploited by attackers if not properly addressed. Many vulnerabilities are discovered every month, and all of them must be remediated in a timely manner to reduce the chance of exploitation. In current practice, security operators must manually analyze each vulnerability present in their assets and determine remediation actions within a short time period, which demands a tremendous amount of human effort from electric utilities. To solve this problem, we propose a machine learning-based automation framework that automates vulnerability analysis and determines remediation actions for electric utilities; the determined remediation actions are then applied to the system. However, not all vulnerabilities can be remediated quickly with limited resources, and the order in which remediation actions are applied significantly affects the system's risk level, so it matters which vulnerabilities are remediated first. We model this as a scheduling optimization problem that orders remediation actions to minimize total risk, using each vulnerability's impact and probability of being exploited. In addition, an electric utility needs to know whether vulnerabilities have already been exploited in its own power system; an exploited vulnerability must be addressed immediately. It is therefore important to identify whether vulnerabilities have been used by attackers to launch attacks, and different vulnerabilities may require different identification methods.
In this dissertation, we explore identifying exploited vulnerabilities by detecting and localizing false data injection attacks, with a case study of the Automatic Generation Control (AGC) system, a key control system for keeping the power system in balance. Malicious measurements can be injected into exploited devices to mislead AGC into making false power generation adjustments that harm power system operations. We propose Long Short-Term Memory (LSTM) neural network-based methods and a Fourier transform-based method to detect and localize such false data injection attacks. Detecting and localizing these attacks provides further information to better prioritize vulnerability remediation actions.
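
    The scheduling step can be sketched with a classic greedy rule: if each unremediated vulnerability accrues risk at a rate of impact times exploit probability per unit time, then ordering by risk rate divided by remediation time (Smith's rule) minimizes the total accumulated risk for a single remediation team. The CVE names, scores, and times below are invented, and the dissertation's formulation may differ:

```python
def remediation_order(vulns):
    """Greedy schedule (Smith's rule): sort by risk rate per hour of
    remediation effort, highest first. This minimizes the weighted sum of
    completion times, i.e. total risk accumulated while vulns stay open."""
    def risk_rate(v):
        return v["impact"] * v["p_exploit"]   # risk accrued per unit time
    return sorted(vulns, key=lambda v: risk_rate(v) / v["hours"], reverse=True)

# Hypothetical vulnerability records.
vulns = [
    {"cve": "CVE-A", "impact": 9.8, "p_exploit": 0.6, "hours": 2},
    {"cve": "CVE-B", "impact": 5.0, "p_exploit": 0.9, "hours": 1},
    {"cve": "CVE-C", "impact": 7.5, "p_exploit": 0.1, "hours": 4},
]
order = [v["cve"] for v in remediation_order(vulns)]
```

    Note that CVE-B jumps ahead of the higher-impact CVE-A because it is much faster to fix per unit of risk retired — the kind of prioritization a pure CVSS ranking would miss.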

    Decompose and Conquer: Addressing Evasive Errors in Systems on Chip

    Full text link
    Modern computer chips comprise many components, including microprocessor cores, memory modules, on-chip networks, and accelerators. Such system-on-chip (SoC) designs are deployed in a variety of computing devices: from internet-of-things, to smartphones, to personal computers, to data centers. In this dissertation, we discuss evasive errors in SoC designs and how these errors can be addressed efficiently. In particular, we focus on two types of errors: design bugs and permanent faults. Design bugs originate from the limited amount of time allowed for design verification and validation. Thus, they are often found in functional features that are rarely activated. Complete functional verification, which can eliminate design bugs, is extremely time-consuming, thus impractical in modern complex SoC designs. Permanent faults are caused by failures of fragile transistors in nano-scale semiconductor manufacturing processes. Indeed, weak transistors may wear out unexpectedly within the lifespan of the design. Hardware structures that reduce the occurrence of permanent faults incur significant silicon area or performance overheads, thus they are infeasible for most cost-sensitive SoC designs. To tackle and overcome these evasive errors efficiently, we propose to leverage the principle of decomposition to lower the complexity of the software analysis or the hardware structures involved. To this end, we present several decomposition techniques, specific to major SoC components. We first focus on microprocessor cores, by presenting a lightweight bug-masking analysis that decomposes a program into individual instructions to identify if a design bug would be masked by the program's execution. We then move to memory subsystems: there, we offer an efficient memory consistency testing framework to detect buggy memory-ordering behaviors, which decomposes the memory-ordering graph into small components based on incremental differences. 
We also propose a microarchitectural patching solution for memory subsystem bugs, which augments each core node with a small distributed programmable logic, instead of including a global patching module. In the context of on-chip networks, we propose two routing reconfiguration algorithms that bypass faulty network resources. The first computes short-term routes in a distributed fashion, localized to the fault region. The second decomposes application-aware routing computation into simple routing rules so as to quickly find deadlock-free, application-optimized routes in a fault-ridden network. Finally, we consider general accelerator modules in SoC designs. When a system includes many accelerators, their varied interactions must be verified to catch buggy behavior. To this end, we decompose such inter-module communication into basic interaction elements, which can be reassembled into new, interesting tests. Overall, we show that decomposing complex software algorithms and hardware structures can significantly reduce overheads: up to three orders of magnitude in the bug-masking analysis and the application-aware routing, approximately 50 times in the routing reconfiguration latency, and 5 times on average in the memory-ordering graph checking. These overhead reductions come with losses in error coverage: 23% undetected bug-masking incidents, 39% non-patchable memory bugs, and occasionally overlooked rare patterns of multiple faults. In this dissertation, we discuss these ideas and their trade-offs, and present future research directions.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/147637/1/doowon_1.pd