3,833 research outputs found

    A multidimensional data model with subcategories for flexibly capturing summarizability


    Difficult to Show Properties and Utility Maximizing Brokers

    This article is the winner of the Real Estate and the Internet manuscript prize (sponsored by PricewaterhouseCoopers) presented at the American Real Estate Society Annual Meeting. Brokers have long believed that difficult-to-show properties sell at lower prices and take longer to sell, where difficult-to-show properties are defined as those that present extraordinary difficulties for a broker in arranging or showing the listing to a particular buyer. Buyers’ recent access to online real estate applications may make avoiding these properties prohibitively costly for brokers. Employing a hedonic pricing model and duration modeling techniques, this study finds that neither property price nor marketing time is significantly affected for these properties. The results suggest that brokers possess limited market power.
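    As a sketch of the general method class named above (not the authors' specification), the following Python fragment fits a hedonic log-price regression with a difficult-to-show dummy using statsmodels, and a Cox proportional-hazards duration model of marketing time using lifelines; the column names and synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

# Hypothetical listing data standing in for a real MLS extract.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "sqft": rng.normal(1800, 400, n),
    "age": rng.integers(0, 60, n),
    "difficult_to_show": rng.integers(0, 2, n),
    "days_on_market": rng.exponential(60, n),
    "sold": rng.integers(0, 2, n),  # 0 = listing censored (not yet sold)
})
df["log_price"] = 11 + 0.0004 * df["sqft"] - 0.002 * df["age"] \
    + rng.normal(0, 0.1, n)

# Hedonic pricing model: the coefficient on difficult_to_show estimates
# the price effect of showing difficulty, holding attributes constant.
hedonic = smf.ols("log_price ~ sqft + age + difficult_to_show", data=df).fit()
print(hedonic.params)

# Duration model: Cox proportional hazards for marketing time, with
# unsold listings treated as censored observations.
cph = CoxPHFitter()
cph.fit(df[["days_on_market", "sold", "sqft", "age", "difficult_to_show"]],
        duration_col="days_on_market", event_col="sold")
cph.print_summary()
```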

    Sustainability and Resilience Assessment Methods: A Literature Review to Support the Decarbonization Target for the Construction Sector

    It is a well-known issue that the 2050 target of carbon emissions neutrality will be reached only with the cooperation of all the interested sectors, and the construction sector could be one of the main contributors to this change. With the built environment responsible for about 40% of annual global energy-related CO2 emissions, the construction sector offers an important opportunity to drive transformative change and presents the most challenging mitigation potential among all industrial sectors, which also brings opportunities for adopting sustainability practices and increasing resilience. This paper presents a systematic literature review of two concepts pivotal to reaching the decarbonization goal: sustainability and resilience. Starting from an extensive literature review (2536 scientific documents) based on the PRISMA statement, the definitions and assessment methodologies of these concepts for the construction sector have been studied. The methodological analysis was conducted on a first selection of 42 documents, further reduced to 12 by applying clear inclusion criteria to identify integrated assessment procedures. The main goal of this study is to clarify the correlation between the sustainability and resilience concepts for constructions and their integrated assessment, in line with the latest regulations and market needs. The results show that, currently, sustainability and resilience are mainly evaluated separately, to obtain building energy performance certificates as well as to quantify the building market value and its complementary contribution to the ‘energy efficiency first’ principle and energy-saving targets in the face of the emergent issue of climate change. Few works focus on the integrated assessment of both concepts from the construction industry’s point of view on materials and/or systems for buildings. The novelty of this study is its critical review of the current sustainability and resilience integrated assessment methods used across the construction value chain, tailored to four main target groups. Researchers, policymakers, industries, and professionals can gain dedicated insights and practical suggestions to put into practice the elements of circular economy, ecological innovation, and cleaner production, which are essential to drive the decarbonization of the built environment.

    Large Spatial Database Indexing with aX-tree

    Spatial databases are optimized for the management of data stored according to their geometric space. Researchers have proposed several spatial indexing structures with a high degree of scalability to this effect. Among these indexing structures is the X-tree. The existing X-tree and its variants are designed for dynamic environments, with the capability of handling insertions and deletions. Notwithstanding, the X-tree's retrieval performance degrades as dimensionality increases, yielding worse worst-case performance than a sequential scan. We propose a new X-tree packing technique for static spatial databases which achieves better space utilization through cautious packing. This new improved structure yields two basic advantages: it reduces the space overhead of the index and produces a better response time, because the aX-tree has a higher fan-out and so the tree always ends up shorter. A new model for super-node construction and an effective method for optimal packing using an improved STR (Sort-Tile-Recursive) bulk-loading technique are proposed. The study reveals that the proposed system performs better than many existing spatial indexing structures.
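    For context, the sketch below shows the classic Sort-Tile-Recursive (STR) bulk-loading heuristic that the proposed packing technique builds upon: sort bounding boxes by x-centre, cut them into vertical slices, then sort each slice by y-centre and fill nodes to capacity. The function str_pack and its parameters are illustrative; the paper's super-node construction and aX-tree-specific refinements are not reproduced here.

```python
import math

def str_pack(rects, capacity):
    """Pack bounding boxes (xmin, ymin, xmax, ymax) into leaf nodes using
    the classic Sort-Tile-Recursive heuristic. Illustrative only: this is
    plain STR, not the paper's improved aX-tree packing."""
    n = len(rects)
    leaves = math.ceil(n / capacity)       # leaf nodes required
    slices = math.ceil(math.sqrt(leaves))  # vertical slices to cut
    per_slice = slices * capacity          # rectangles per slice
    # Sort by x-centre, then cut the sorted run into vertical slices.
    rects = sorted(rects, key=lambda r: (r[0] + r[2]) / 2)
    nodes = []
    for i in range(0, n, per_slice):
        # Within a slice, sort by y-centre and fill nodes to capacity.
        band = sorted(rects[i:i + per_slice], key=lambda r: (r[1] + r[3]) / 2)
        for j in range(0, len(band), capacity):
            nodes.append(band[j:j + capacity])
    return nodes

# Example: pack 300 unit squares into nodes of capacity 50 (6 leaves).
boxes = [(x, y, x + 1, y + 1) for x in range(20) for y in range(15)]
print(len(str_pack(boxes, 50)))  # -> 6
```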

    Optimizing and Implementing Repair Programs for Consistent Query Answering in Databases

    Databases may not always satisfy their integrity constraints (ICs), and a number of different reasons can be held accountable for this. However, in most cases an important part of the data is still consistent with the ICs and can still be retrieved through queries posed to the database. Consistent query answers are characterized as the ordinary answers obtained from every minimally repaired and consistent version of the database. Database repairs with respect to a wide class of ICs can be specified as the stable models of disjunctive logic programs. Thus, Consistent Query Answering (CQA) for first-order queries is translated into cautious reasoning under the stable model semantics. The use of logic programs does not exceed the intrinsic complexity of CQA. However, using them in a straightforward manner is usually inefficient. The goal of this thesis is to develop optimized techniques to evaluate queries over inconsistent databases.
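    For intuition, CQA can be illustrated by brute force: enumerate every minimal repair and keep only the answers that hold in all of them. The Python toy below does this for a single key constraint; the relation emp and the query are invented examples, and the sketch is conceptual only, whereas the thesis compiles repairs into disjunctive logic programs evaluated under the stable model semantics.

```python
from itertools import product

# Toy CQA example: the relation emp violates the key name -> salary, so
# each minimal repair keeps exactly one tuple per key value; an answer is
# consistent iff it holds in *every* repair.
emp = {("alice", 50), ("alice", 70), ("bob", 40)}  # invented data

groups = {}
for name, salary in emp:
    groups.setdefault(name, []).append((name, salary))

# Every choice of one tuple per key group is a minimal repair.
repairs = [set(choice) for choice in product(*groups.values())]

def query(db):
    # "Which employees earn at least 40?"
    return {name for name, salary in db if salary >= 40}

consistent = set.intersection(*(query(r) for r in repairs))
print(consistent)  # {'alice', 'bob'}: true in both repairs
```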

    Cataloging Built Heritage: Methods of Recording Unit Masonry for the Future of Historic Preservation

    There is a discrepancy between standardized and in-field practices for documenting historic structures—from the surveyor’s intention of interpretation to how the project team chooses to adapt alongside a constantly evolving, technology-dependent environment. A successful restoration project relies on comprehensive documentation and active communication among the entire project team, including the conservator, the architect, engineer, and contractor (AEC), and the client. If there is instability in creating a common language and method of disseminating information across these parties, the project suffers. The published literature on documentation techniques does not fully represent the work of practitioners on projects, specifically for unit masonry restoration. The record of a historic site is not static over time and should incorporate annotations of change. The standards in recording are not always suitable for the restoration contract, and project teams are pressed to create their own standards of documentation. These proactive teams are turning to advances in technology and digital collaboration (through tablets, mobile devices, databases, and scanners) to track the progress of each individual masonry unit from dismantlement to reinstallation, archive ongoing changes to conditions and site logistics, and reduce lag times in communication and review between project members. Should the restoration field choose to act now, it can have a voice in this technological transformation for recording historic sites. The ability to communicate within a unified platform for documenting and monitoring a restoration project, from conditions survey through cleaning and repairs to project closeout, will enrich the dialogue between the construction and conservation industries and ultimately save more cultural heritage sites.

    Analyzing complex data using domain constraints

    Data-driven research approaches are becoming increasingly popular in a growing number of scientific disciplines. While a data-driven research approach can yield superior results, generating the required data can be very costly. This frequently leads to small and complex data sets, in which it is impossible to rely on volume alone to compensate for all shortcomings of the data. To counter this problem, other reliable sources of information must be incorporated. In this work, domain knowledge, as a particularly reliable type of additional information, is used to inform data-driven analysis methods. This domain knowledge is represented as constraints on the possible solutions, which the presented methods can use to guide their analysis. The focus is on spatial constraints as a particularly common type of constraint, but the proposed techniques are general enough to be applied to other types of constraints. In this thesis, new methods using domain constraints for data-driven science applications are discussed. These methods have applications in feature evaluation, route database repair, and Gaussian mixture modeling of spatial data. The first application is feature evaluation. The presented method receives two representations of the same data: one as the intended target and the other for investigation. It calculates a score indicating how much the two representations agree. A presented application uses this technique to compare a reference attribute set with different subsets to determine the importance and relevance of individual attributes. A second technique analyzes route data for constraint compliance. The presented framework allows the user to specify constraints and possible actions to modify the data. The presented method then uses these inputs to generate a version of the data which agrees with the constraints, while otherwise reducing the impact of the modifications as much as possible. Two extensions of this scheme are presented: an extension to continuously valued costs, which are minimized, and an extension to constraints involving more than one moving object. Another application area addressed is the modeling of multivariate measurement data collected at spatially distributed locations. The spatial information recorded with the data can be used as the basis for constraints. This thesis presents multiple approaches to building a model of this kind of data while complying with spatial constraints. The first approach is an interactive tool that allows domain scientists to generate a model of the data which complies with their knowledge about the data. The second is a Monte Carlo approach, which generates a large number of possible models, tests them for compliance with the constraints, and returns the best one. The final two approaches are based on the EM algorithm and use different ways of incorporating the constraint information into their models. At the end of the thesis, two applications of the generated models are presented: the first is the prediction of the origin of samples, and the other is the visual representation of the extracted models on a map. These tools can be used by domain scientists to augment their tried and tested tools. The developed techniques are applied to a real-world data set collected in the archaeobiological research project FOR 1670 (Transalpine mobility and cultural transfer) of the German Science Foundation.
The data set contains isotope ratio measurements of samples, which were discovered at archaeological sites in the Alps region of central Europe. Using the presented data analysis methods, the data is analyzed to answer relevant domain questions. In a first application, the attributes of the measurements are analyzed for their relative importance and their ability to predict the spatial location of samples. Another presented application is the reconstruction of potential migration routes between the investigated sites. Then spatial models are built using the presented modeling approaches. Univariate outliers are determined and used to predict locations based on the generated models. These are cross-referenced with the recorded origins. Finally, maps of the isotope distribution in the investigated regions are presented. The described methods and demonstrated analyses show that domain knowledge can be used to formulate constraints that inform the data analysis process to yield valid models from relatively small data sets and support domain scientists in their analyses.
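    As a concrete illustration of the Monte Carlo approach mentioned above, the sketch below fits many candidate Gaussian mixture models with scikit-learn, rejects candidates that violate a stand-in spatial constraint, and keeps the best surviving model; the bounding-box constraint, the synthetic data, and the helper satisfies_constraints are hypothetical placeholders, not the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))  # stand-in for spatial isotope measurements

def satisfies_constraints(gmm, lo=-2.0, hi=2.0):
    # Hypothetical domain constraint: every component mean must fall
    # inside the study region's bounding box.
    return bool(np.all((gmm.means_ >= lo) & (gmm.means_ <= hi)))

best, best_score = None, -np.inf
for seed in range(100):  # Monte Carlo candidates, random initialisations
    gmm = GaussianMixture(n_components=3, random_state=seed).fit(X)
    if not satisfies_constraints(gmm):
        continue  # discard models that contradict domain knowledge
    score = gmm.score(X)  # mean log-likelihood of the data
    if score > best_score:
        best, best_score = gmm, score
```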

    Practical Methods for Optimizing Equipment Maintenance Strategies Using an Analytic Hierarchy Process and Prognostic Algorithms

    Many large organizations report limited success using Condition Based Maintenance (CbM). This work explains some of the causes of limited success and recommends practical methods that enable the benefits of CbM. The backbone of CbM is a Prognostics and Health Management (PHM) system. Use of PHM alone does not ensure success; it needs to be integrated into enterprise-level processes and culture, and aligned with customer expectations. To integrate PHM, this work recommends a novel life cycle framework, expanding the concept of maintenance into several levels beginning with an overarching maintenance strategy and subordinate policies, tactics, and PHM analytical methods. During the design and in-service phases of the equipment’s life, an organization must prove that a maintenance policy satisfies specific safety and technical requirements and business practices, and is supported by a logistics and resourcing plan that satisfies end-user needs and expectations. These factors often compete with each other because they are designed and considered separately, and serve disparate customers. This work recommends the Analytic Hierarchy Process (AHP) as a practical method for consolidating input from stakeholders and quantifying the most preferred maintenance policy. AHP forces simultaneous consideration of all factors, resolving conflicts in the trade-space of the decision process. When used within the recommended life cycle framework, it is a vehicle for justifying the decision to transition from generalized high-level concepts down to specific lower-level actions. This work demonstrates AHP using degradation data, prognostic algorithms, cost data, and stakeholder input to select the most preferred maintenance policy for a paint coating system. It concludes the following for this particular system: a proactive maintenance policy is most preferred, and a predictive (CbM) policy is more preferred than predeterminative (time-directed) and corrective policies. A General Path Model with Bayesian updating (GPM) provides the most accurate prediction of the Remaining Useful Life (RUL). Long periods between inspections and the use of categorical variables in inspection reports severely limit the accuracy of RUL predictions. In summary, this work recommends using the proposed life cycle model, AHP, PHM, a GPM model, and embedded sensors to improve the success of a CbM policy.
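    As a worked illustration of the AHP step, the sketch below derives priority weights for three maintenance policies from a pairwise-comparison matrix via the principal-eigenvector method and checks judgement consistency; the comparison values are invented for illustration and are not the study's elicited stakeholder data.

```python
import numpy as np

# Pairwise comparisons of three policies (corrective, predeterminative,
# predictive); A[i, j] says how strongly policy i is preferred to policy j.
A = np.array([[1.0, 1/3, 1/5],
              [3.0, 1.0, 1/3],
              [5.0, 3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)           # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalised priority weights

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
cr = ci / 0.58                        # Saaty's random index for n = 3
print("weights:", w.round(3))         # highest weight -> preferred policy
print("consistency ratio:", round(cr, 3))  # CR < 0.1 is acceptable
```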

    Robust Speech Recognition for Adverse Environments
