19 research outputs found

    Size Matters: Microservices Research and Applications

    In this chapter we offer an overview of microservices, providing the introductory information that a reader should know before continuing with this book. We introduce the idea of microservices and discuss some of the current research challenges and real-life software applications where the microservice paradigm plays a key role. We have identified a set of areas where both researchers and developers can propose new ideas and technical solutions.

    Synthesizing Adaptive Test Strategies from Temporal Logic Specifications

    Constructing good test cases is difficult and time-consuming, especially if the system under test is still under development and its exact behavior is not yet fixed. We propose a new approach to compute test strategies for reactive systems from a given temporal logic specification using formal methods. The computed strategies are guaranteed to reveal certain simple faults in every realization of the specification and for every behavior of the uncontrollable part of the system's environment. The proposed approach supports different assumptions on occurrences of faults (ranging from a single transient fault to a persistent fault) and by default aims at unveiling the weakest one. Based on well-established hypotheses from fault-based testing, we argue that such tests are also sensitive to more complex bugs. Since the specification may not define the system behavior completely, we use reactive synthesis algorithms with partial information. The computed strategies are adaptive test strategies that react to the system's behavior at runtime. We work out the underlying theory of adaptive test strategy synthesis and present experiments for a safety-critical component of a real-world satellite system. We demonstrate that our approach can be applied to industrial specifications and that the synthesized test strategies are capable of detecting bugs that are hard to detect with random testing.
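
    The abstract describes adaptive test strategies that choose their next input based on the behavior observed from the system under test at runtime. Purely as a rough, hypothetical illustration of that idea (not the authors' synthesis algorithm; all names and transitions below are invented), the sketch models such a strategy as a small state machine driven against a stub system under test.

```python
# Hypothetical sketch: an adaptive test strategy as a small Mealy-style machine.
# The strategy chooses its next input based on the output observed from the
# system under test (SUT), so a single strategy can steer different SUT
# realizations toward a fault-revealing situation.

from typing import Callable, Dict, Tuple

# (current state, observed output) -> (next state, next input to apply)
Transition = Dict[Tuple[str, str], Tuple[str, str]]

class AdaptiveTestStrategy:
    def __init__(self, transitions: Transition, initial_state: str, initial_input: str):
        self.transitions = transitions
        self.state = initial_state
        self.next_input = initial_input

    def step(self, observed_output: str) -> str:
        """React to the SUT's output and return the next test input."""
        self.state, self.next_input = self.transitions[(self.state, observed_output)]
        return self.next_input

def run_test(strategy: AdaptiveTestStrategy, sut: Callable[[str], str], steps: int) -> bool:
    """Drive the SUT for a bounded number of steps and report whether the
    expected reaction was eventually observed (a stand-in for a test verdict)."""
    test_input = strategy.next_input
    for _ in range(steps):
        output = sut(test_input)
        if output == "grant":          # expected reaction in this toy example
            return True
        test_input = strategy.step(output)
    return False

if __name__ == "__main__":
    # Toy strategy: keep requesting; after a "busy" reply, cancel and retry.
    transitions = {
        ("s0", "idle"): ("s0", "request"),
        ("s0", "busy"): ("s1", "cancel"),
        ("s1", "idle"): ("s0", "request"),
        ("s1", "busy"): ("s1", "cancel"),
    }
    strategy = AdaptiveTestStrategy(transitions, "s0", "request")

    # Stub SUT: grants a request only after it has seen a cancel once.
    state = {"cancelled": False}
    def sut(inp: str) -> str:
        if inp == "cancel":
            state["cancelled"] = True
            return "idle"
        return "grant" if state["cancelled"] else "busy"

    print("expected reaction observed:", run_test(strategy, sut, steps=10))
```

    In the paper's setting, such a strategy would not be written by hand but synthesized automatically from the temporal logic specification using reactive synthesis with partial information.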

    Annual Research Report, 2009-2010

    Annual report of collaborative research projects of Old Dominion University faculty and students in partnership with business, industry, and government.

    FIN-DM: A Data Mining Process Model for Financial Services

    Data mining is a set of rules, processes, and algorithms that allow companies to increase revenues, reduce costs, optimize products and customer relationships, and achieve other business goals by extracting actionable insights from the data they collect on a day-to-day basis. Data mining and analytics projects require well-defined methodologies and processes. Several standard process models for conducting data mining and analytics projects are available. Among them, the most notable and widely adopted standard model is CRISP-DM. It is industry-agnostic and is often adapted to meet sector-specific requirements. Industry-specific adaptations of CRISP-DM have been proposed across several domains, including healthcare, education, industrial and software engineering, and logistics. However, until now, there has been no adaptation of CRISP-DM for the financial services industry, which has its own set of domain-specific requirements. This PhD thesis addresses this gap by designing, developing, and evaluating a sector-specific data mining process for financial services (FIN-DM). The thesis investigates how standard data mining processes are used across various industry sectors and in financial services. The examination identified a number of adaptation scenarios of traditional frameworks. It also suggested that these approaches do not pay sufficient attention to turning data mining models into software products integrated into organizations' IT architectures and business processes. In the financial services domain, the main adaptation scenarios discovered concerned technology-centric aspects (scalability), business-centric aspects (actionability), and human-centric aspects (mitigating discriminatory effects) of data mining. Next, a case study in an actual financial services organization revealed 18 perceived gaps in the CRISP-DM process. Using the data and results from these studies, the thesis outlines an adaptation of CRISP-DM for the financial sector, named the Financial Industry Process for Data Mining (FIN-DM). FIN-DM extends CRISP-DM to support privacy-compliant data mining, to tackle AI ethics risks, to fulfill risk management requirements, and to embed quality assurance as part of the data mining life cycle. (https://www.ester.ee/record=b547227)
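
    The thesis presents FIN-DM as an extension of CRISP-DM with additional concerns such as privacy, AI ethics, risk management, and quality assurance. Purely as a hypothetical sketch of how such phase-plus-checkpoint extensions might be encoded (the phase names follow standard CRISP-DM, but the checkpoint names and their placement are invented here and are not taken from the thesis), consider:

```python
# Hypothetical sketch (not from the thesis): a CRISP-DM-style lifecycle with
# extension checkpoints of the kind FIN-DM adds, e.g. privacy-preserving data
# handling, AI-ethics review, risk management, and quality assurance.
# Checkpoint names and their placement are illustrative only.

from enum import Enum, auto

class Phase(Enum):
    BUSINESS_UNDERSTANDING = auto()
    DATA_UNDERSTANDING = auto()
    DATA_PREPARATION = auto()
    MODELING = auto()
    EVALUATION = auto()
    DEPLOYMENT = auto()

# Extension checkpoints attached to standard phases.
EXTENSION_CHECKS = {
    Phase.DATA_PREPARATION: ["privacy-preserving transformation review"],
    Phase.MODELING: ["discrimination / AI-ethics risk assessment"],
    Phase.EVALUATION: ["model risk-management sign-off", "quality assurance gate"],
    Phase.DEPLOYMENT: ["integration into IT architecture and business processes"],
}

def run_lifecycle() -> None:
    """Walk the phases in order, listing any extension checkpoints."""
    for phase in Phase:
        checks = EXTENSION_CHECKS.get(phase, [])
        print(f"{phase.name}: {', '.join(checks) if checks else 'standard CRISP-DM activities'}")

if __name__ == "__main__":
    run_lifecycle()
```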

    Large space structures and systems in the space station era: A bibliography with indexes

    Bibliographies and abstracts are listed for 1372 reports, articles, and other documents introduced into the NASA scientific and technical information system between January 1, 1990 and June 30, 1990. The bibliography's purpose is to provide helpful information to the researcher, manager, and designer in technology development and mission design, grouped according to systems, interactive analysis and design, structural and thermal analysis and design, structural concepts and control systems, electronics, advanced materials, assembly concepts, propulsion, and solar power satellite systems.

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, and their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential of providing a viable solution for promoting reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns as well as their reuse.

    Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)

    The papers from the symposium are presented. Emphasis is placed on human factors engineering and space environment interactions. The technical areas covered in the human factors section include: satellite monitoring and control, man-computer interfaces, expert systems, AI/robotics interfaces, crew system dynamics, and display devices. The space environment interactions section presents the following topics: space plasma interaction, spacecraft contamination, space debris, and atomic oxygen interaction with materials. Some of the above topics are discussed in relation to the space station and space shuttle.

    An Integrated Theoretical Model of Information Systems Success/Technology Adoption for Systems Used by Employees in the 4 And 5-Star Full-Service Hotel Sector in the UK

    This study aspires to combine several components of extant theoretical frameworks of Information Systems (IS) evaluation and to develop a new model, the Integrated IS Success/Technology Adoption Model, which can be applied in the context of the 4 and 5-star UK hotel industry. It is hoped that this new model can reliably measure the IS Success and technology adoption of the technological innovations used by hotel employees. Current research tends to concentrate on general emerging IS trends such as Information and Communication Technologies (ICTs), including mobile and virtual reality applications. Even though there is abundant research on information systems used by hotel customers, the amount of available published material seems to diminish when it comes to IS evaluation from the viewpoint of hotel employees. To complicate matters further, most hotel employee-related studies originate from the USA or Southeast Asia. Aiming to combat this distinct shortage of academic papers, the present thesis recognises the evident research gap and seeks to fill it by presenting a study that is pertinent to the realities of hotel employees working in 4 and 5-star full-service hotels in the UK. A major difference between a customer's or guest's use of IS and an employee's use is that the former does not have to use a hotel's systems, whereas for employees daily system use is compulsory as part of their jobs; therefore, different metrics apply for each group. The secondary research showcases a comprehensive account of IS evaluation approaches, starting from general strategies and frameworks and moving to the breakdown of specialised IS success and technology adoption models and their dimensions. The primary research incorporates 28 (two sets of 14) interviews with hotel department managers in order to corroborate existing, or identify new, IS evaluation dimensions and subthemes. The interview analysis produces two themes previously unexploited in the literature that have a major impact on System Quality, one of the central dimensions of IS Success. The key contribution of the current study is the Integrated IS Success/Technology Adoption Model, developed by corroborating the interview findings with the literature review outcomes. The model is based on two prominent IS evaluation models, the IS Success Model (DeLone and McLean, 1992) and the Technology Acceptance Model (Davis, 1989). The originality of the model springs from the fusion of these two frameworks, but also from the modifications added. For example, the proposed model features Social Norms, a dimension drawn from the Theory of Reasoned Action (Fishbein and Ajzen, 1975). Other additions include the use of IT training, senior management support, and facilitating conditions as external variables. Future research could concentrate on testing and validating the proposed model using quantitative methods, in the form of a research questionnaire that would obtain the opinions of hotel line employees about the systems they work with on a daily basis.

    Water rights and related water supply issues

    Presented during the USCID water management conference held on October 13-16, 2004 in Salt Lake City, Utah. The theme of the conference was "Water rights and related water supply issues." Includes bibliographical references. Proceedings sponsored by the U.S. Department of the Interior, Central Utah Project Completion Act Office and the U.S. Committee on Irrigation and Drainage. Contents: Consensus building as a primary tool to resolve water supply conflicts -- Administration of Colorado River allocations: the Law of the River and the Colorado River Water Delivery Agreement of 2003 -- Irrigation management in Afghanistan: the tradition of Mirabs -- Institutional reforms in irrigation sector of Pakistan: an approach towards integrated water resource management -- On-line and real-time water right allocation in Utah's Sevier River basin -- Improving equity of water distribution: the challenge for farmer organizations in Sindh, Pakistan -- Impacts from transboundary water rights violations in South Asia -- Impacts of water conservation and Endangered Species Act on large water project planning, Utah Lake Drainage Basin Water Delivery System, Bonneville Unit of the Central Utah Project -- Economic importance and environmental challenges of the Awash River basin to Ethiopia -- Accomplishing the impossible: overcoming obstacles of a combined irrigation project -- Estimating actual evapotranspiration without land use classification -- Improving water management in irrigated agriculture -- Beneficial uses of treated drainage water -- Comparative assessment of risk mitigation options for irrigated agriculture -- A multi-variable approach for the command of Canal de Provence Aix Nord Water Supply Subsystem -- Hierarchical Bayesian Analysis and Statistical Learning Theory II: water management application -- Soil moisture data collection and water supply forecasting -- Development and implementation of a farm water conservation program within the Coachella Valley Water District, California -- Concepts of ground water recharge and well augmentation in northeastern Colorado -- Water banking in Colorado: an experiment in trouble? -- Estimating conservable water in the Klamath Irrigation Project -- Socio-economic impacts of land retirement in Westlands Water District -- EPDM rubber lining system chosen to save valuable irrigation water -- A user-centered approach to develop decision support systems for estimating pumping and augmentation needs in Colorado's South Platte basin -- Utah's Tri-County Automation Project -- Using HEC-RAS to model canal systems -- Potential water and energy conservation and improved flexibility for water users in the Oasis area of the Coachella Valley Water District, California

    Multifaceted Geotagging for Streaming News

    News sources on the Web generate constant streams of information describing the events that shape our world. In particular, geography plays a key role in the news, and understanding the geographic information present in news allows for its useful spatial browsing and retrieval. This process of understanding is called geotagging, and involves first finding in the document all textual references to geographic locations, known as toponyms, and second, assigning the correct lat/long values to each toponym; these steps are termed toponym recognition and toponym resolution, respectively. These steps are difficult due to ambiguities in natural language: some toponyms share names with non-location entities, and further, a given toponym can have many location interpretations. Removing these ambiguities is crucial for successful geotagging. To this end, geotagging methods are described which were developed for streaming news. First, a spatio-textual search engine named STEWARD and an interactive map-based news browsing system named NewsStand are described, which feature geotaggers as central components and served as motivating systems and experimental testbeds for developing geotagging methods. Next, a geotagging methodology is presented that follows a multifaceted approach involving a variety of techniques. First, a multifaceted toponym recognition process is described that uses both rule-based and machine learning-based methods to ensure high toponym recall. Next, various forms of toponym resolution evidence are explored. One such type of evidence is lists of toponyms, termed comma groups, whose toponyms share a common thread in their geographic properties that enables correct resolution. In addition to explicit evidence, authors take advantage of the implicit geographic knowledge of their audiences. Understanding the local places known by an audience, termed its local lexicon, affords great performance gains when geotagging articles from local newspapers, which account for the vast majority of news on the Web. Finally, considering windows of text of varying size around each toponym, termed adaptive context, allows for a tradeoff between geotagging execution speed and toponym resolution accuracy. Extensive experimental evaluations of all the above methods, using existing corpora and two newly created large corpora of streaming news, show great performance gains over several competing prominent geotagging methods.
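
    The abstract frames geotagging as two steps: toponym recognition followed by toponym resolution to lat/long values. The following is a minimal, hypothetical two-stage sketch of that pipeline using a toy gazetteer and a naive population heuristic; it does not implement the dissertation's multifaceted methods (comma groups, local lexicons, or adaptive context).

```python
# Hypothetical two-stage geotagging sketch: (1) toponym recognition finds
# candidate place names in text, (2) toponym resolution picks a lat/long
# interpretation for each. The toy gazetteer and the population-based
# disambiguation heuristic are illustrative, not the dissertation's methods.

import re
from typing import Dict, List, Tuple

# name -> list of (latitude, longitude, population) interpretations
GAZETTEER: Dict[str, List[Tuple[float, float, int]]] = {
    "Paris": [(48.8566, 2.3522, 2_100_000),    # Paris, France
              (33.6609, -95.5555, 25_000)],    # Paris, Texas
    "London": [(51.5074, -0.1278, 8_900_000),  # London, UK
               (42.9849, -81.2453, 400_000)],  # London, Ontario
}

def recognize_toponyms(text: str) -> List[str]:
    """Recognition stage: flag capitalized tokens that appear in the gazetteer."""
    tokens = re.findall(r"[A-Z][a-z]+", text)
    return [t for t in tokens if t in GAZETTEER]

def resolve_toponym(name: str) -> Tuple[float, float]:
    """Resolution stage: naive heuristic preferring the most populous interpretation."""
    lat, lon, _ = max(GAZETTEER[name], key=lambda interp: interp[2])
    return lat, lon

def geotag(text: str) -> Dict[str, Tuple[float, float]]:
    return {name: resolve_toponym(name) for name in recognize_toponyms(text)}

if __name__ == "__main__":
    article = "Flights from Paris to London resumed this week."
    print(geotag(article))
```

    In the dissertation's approach, resolution evidence such as comma groups, local lexicons, and adaptive context would replace the simple population heuristic used here.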