
    Lazy validation of Experience Graphs

    The development of a simple basal area increment model

    In most cases, forest practitioners in Austria use yield tables to predict the growth of their forests. Common yield tables show the increment of pure, even-aged stands that are treated in the way the table's developer recommends. Using these tables in stands that are uneven-aged, mixed, or treated differently may therefore lead to inaccurate predictions. Forest growth models have been developed to avoid these problems, but so far they are not widely used in Austria. One reason may be that most of the models need input parameters that companies do not usually gather. In this work, a basal area increment per hectare model has been developed based on the following input parameters: diameter at breast height, height-to-diameter ratio, top height at age 100 years, and a selection from several simple, distance-independent competition indices (growing space, basal area of larger trees, competing basal area, crown cross-sectional area, crown competition factor, d/dg, d-dg, basal area, and stand density index). The model parametrization was done with seven different statistical methods (linear regression, linear mixed-effects model, resistant linear regression, local polynomial regression, lazy learning model, random forest model, and neural network model). Because it requires only a few input parameters, it should be possible to parametrize this model for many local areas using inventory data sets of the specific region. The model works in pure and mixed stands of spruce and beech in the Rosaliengebirge. The observed average diameter increment per 5 years is 18.1 mm for spruce and 21.1 mm for beech. The average difference between predicted and observed diameter increment on a validation data set is 0.3 mm for spruce and -0.3 mm for beech within 5 years, and the estimated additional spread caused by the model is ±4.5 mm/5 years for spruce and ±4.0 mm/5 years for beech.
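
    To make the parametrization step concrete, here is a minimal sketch of fitting one of the seven methods named above (ordinary linear regression) to predict basal area increment. The file name and column names are hypothetical placeholders, not the paper's actual data set or variables.

```python
# Hedged sketch: fit a linear-regression basal area increment model on a
# regional inventory data set. File and column names are invented for
# illustration; only the predictor list mirrors the abstract.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("regional_inventory.csv")  # hypothetical inventory records

predictors = [
    "dbh_cm",          # diameter at breast height
    "hd_ratio",        # height-to-diameter ratio
    "top_height_100",  # top height at age 100 years (site quality)
    "bal_m2",          # basal area of larger trees: one of the competition indices
]
X_train, X_test, y_train, y_test = train_test_split(
    df[predictors], df["ba_increment_5yr"], test_size=0.3, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
residuals = y_test - model.predict(X_test)
print("mean bias:", residuals.mean())  # analogous to the reported 0.3 mm bias
print("spread:", residuals.std())      # analogous to the reported ±4.0-4.5 mm spread
```

    Swapping `LinearRegression` for a random forest or a mixed-effects fit would reproduce the other methods the paper compares on the same few inputs.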

    Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency

    With web caching and cache-related services like CDNs and edge services playing an increasingly significant role in the modern Internet, the weak consistency and coherence provisions of current web protocols are becoming a serious problem and are drawing the attention of the standards community [LCD01]. Toward this end, we present definitions of consistency and coherence for web-like environments, that is, distributed client-server information systems where the semantics of interactions with resources are more general than the read/write operations found in memory hierarchies and distributed file systems. We then present a brief review of proposed mechanisms that strengthen the consistency of caches in the web, focusing on their conceptual contributions and their weaknesses in real-world practice. These insights motivate a new mechanism, which we call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any client (independent of the presence and conformity of any intermediaries) to maintain a self-consistent view of the server's state. This is accomplished by annotating responses with additional per-resource application information that allows client caches to recognize the obsolescence of currently cached entities and to identify responses from other caches that are already stale in light of what the client has already seen. The mechanism requires no deviation from the existing client-server communication model and does not require servers to maintain any additional per-client state. We discuss how our mechanism could be integrated into a fragment-assembling Content Management System (CMS), and present a simulation-driven performance comparison between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic. National Science Foundation (ANI-9986397, ANI-0095988)
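
    The core idea, a cache that invalidates entries once their underlying application state has visibly advanced, can be sketched in a few lines. The (token, version) encoding and data structures below are invented for illustration; the paper defines the actual wire format and semantics.

```python
# Toy model of the basis-token idea: the server annotates each response with
# (token, version) pairs describing the application state it was derived
# from; the client cache evicts any entry whose tokens have since advanced.
class BTCCache:
    def __init__(self):
        self.entries = {}  # url -> (body, {token: version})
        self.latest = {}   # token -> highest version observed so far

    def store(self, url, body, basis_tokens):
        """basis_tokens maps a token name to an integer version."""
        for token, version in basis_tokens.items():
            self.latest[token] = max(version, self.latest.get(token, version))
        self.entries[url] = (body, basis_tokens)
        self._evict_obsolete()

    def _evict_obsolete(self):
        # An entry is obsolete if any of its basis tokens has been superseded.
        for url in list(self.entries):
            _, tokens = self.entries[url]
            if any(self.latest[t] > v for t, v in tokens.items()):
                del self.entries[url]

    def get(self, url):
        entry = self.entries.get(url)
        return entry[0] if entry else None  # None means: revalidate with server

cache = BTCCache()
cache.store("/price/42", "USD 10.00", {"catalog": 7})
cache.store("/cart", "1 item, USD 12.00", {"catalog": 8, "cart:alice": 3})
assert cache.get("/price/42") is None   # stale: catalog token moved 7 -> 8
assert cache.get("/cart") is not None   # still consistent with everything seen
```

    Note that the server keeps no per-client state, matching the abstract's claim: staleness is recognized entirely on the client from the annotations it has already seen.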

    Node-Based Native Solution to Procedural Game Level Generation

    Procedural Content Generation (PCG) applied to game development has become a prominent topic, with increasing implementations and use cases. However, existing standalone and plugin PCG solutions, which use node-based interfaces and other high-level approaches, face limitations in integration, interactivity, and responsiveness within the game development pipeline. These limitations hinder the overall user experience and restrain the true potential of PCG systems. Adopting an Action-Research methodology, preliminary interviews were conducted with experts in the field. The relevance of this native methodology and the most suitable visual approach for its interface were assessed through a series of prototypes. Subsequently, a functional prototype was implemented, and a case study was conducted with a sample consisting of a group of PCG experts and game developers. The participants performed a series of exercises documented with the respective tutorials. After completing the exercises, the solution's relevance and user experience were evaluated through a questionnaire. In developing a native node-based PCG methodology integrated into the game engine, we identified limitations and concluded that several challenges are yet to be overcome before a complex and extensive system of this kind can be fully implemented.

    JWalk: a tool for lazy, systematic testing of java classes by design introspection and user interaction

    Popular software testing tools, such as JUnit, allow frequent retesting of modified code; yet the manually created test scripts are often seriously incomplete. A unit-testing tool called JWalk has therefore been developed to address the need for systematic unit testing within the context of agile methods. The tool operates directly on the compiled code for Java classes and uses a new lazy method for inducing the changing design of a class on the fly. This is achieved partly through introspection, using Java's reflection capability, and partly through interaction with the user, constructing and saving test oracles on the fly. Predictive rules reduce the number of oracle values that must be confirmed by the tester. Without human intervention, JWalk performs bounded exhaustive exploration of the class's method protocols and may be directed to explore the space of algebraic constructions, or the intended design state-space of the tested class. With some human interaction, JWalk performs up to the equivalent of fully automated state-based testing, from a specification that was acquired incrementally.
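
    As a rough illustration of what bounded exhaustive exploration of a class's method protocols looks like, here is a small Python analogue of the idea. JWalk itself is a Java tool that works on compiled classes; the example class, depth bound, and oracle handling below are invented for this sketch.

```python
# Sketch: enumerate a class's public zero-argument methods by reflection and
# execute every call sequence up to a fixed depth, recording each outcome as
# a candidate oracle value for the tester to confirm or reject lazily.
import inspect
from itertools import product

def explore(cls, depth=2):
    methods = [
        name for name, m in inspect.getmembers(cls, inspect.isfunction)
        if not name.startswith("_")
        and len(inspect.signature(m).parameters) == 1  # only 'self' in this toy
    ]
    oracles = {}
    for seq in product(methods, repeat=depth):  # bounded exhaustive exploration
        obj = cls()                             # fresh instance per sequence
        try:
            result = [getattr(obj, name)() for name in seq][-1]
        except Exception as exc:                # protocol violations surface here
            result = exc
        oracles[seq] = result                   # candidate oracle for the tester
    return oracles

class Stack:
    def __init__(self):
        self.items = []
    def push(self):
        self.items.append(len(self.items))
    def pop(self):
        return self.items.pop()
    def size(self):
        return len(self.items)

for seq, outcome in explore(Stack).items():
    print(seq, "->", outcome)   # e.g. ('pop', 'pop') -> IndexError
```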

    AIOps for a Cloud Object Storage Service

    With the growing reliance on the ubiquitous availability of IT systems and services, these systems are becoming more global, more scaled, and more complex to operate. To maintain business viability, IT service providers must put in place reliable and cost-efficient operations support. Artificial Intelligence for IT Operations (AIOps) is a promising technology for alleviating the operational complexity of IT systems and services. AIOps platforms utilize big data, machine learning, and other advanced analytics technologies to enhance IT operations with proactive, actionable, dynamic insight. In this paper we share our experience applying the AIOps approach to a production cloud object storage service to get actionable insights into the system's behavior and health. We describe a real-life production cloud-scale service and its operational data, present the AIOps platform we have created, and show how it has helped us resolve operational pain points.
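
    As a hedged, minimal example of the kind of analytics such a platform runs over operational telemetry, the sketch below flags latency anomalies with a rolling z-score. The metric, window, and threshold are illustrative stand-ins; the paper's platform uses production data and richer models.

```python
# Minimal sketch, assuming latency telemetry as a pandas time series: flag
# points that deviate strongly from their recent rolling statistics.
import numpy as np
import pandas as pd

def flag_anomalies(latency_ms, window=60, threshold=3.0):
    rolling = latency_ms.rolling(window, min_periods=window)
    z = (latency_ms - rolling.mean()) / rolling.std()
    return latency_ms[z.abs() > threshold]  # candidates for operator attention

# Synthetic telemetry with one injected incident.
idx = pd.date_range("2024-01-01", periods=300, freq="min")
rng = np.random.default_rng(0)
latency = pd.Series(20 + rng.normal(0, 1, 300), index=idx)
latency.iloc[250] = 200.0                   # simulated latency spike
print(flag_anomalies(latency))              # reports the spike at minute 250
```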