80 research outputs found

    Consistent proportional trade-offs in data envelopment analysis

    Proportional trade-offs – an enhanced form of conventional absolute trade-offs – have recently been proposed as a method for incorporating prior views or information about the assessment of decision making units (DMUs) into relative efficiency measurement by Data Envelopment Analysis (DEA). A proportional trade-off is defined as a percentage change in the levels of inputs/outputs, so that the corresponding restriction adapts to the volumes of the inputs and outputs of the DMUs in the analysis. It is well known that incorporating trade-offs, in either absolute or proportional form, may in certain cases lead to serious problems such as infinite or even negative efficiency scores. This phenomenon is often attributed to a careless definition of the trade-off set by the analyst. In this paper we show that this may not always be the case: the problem may be caused by the existing framework in which the trade-offs are combined mathematically to build the corresponding production technology, rather than by the definition of the trade-offs themselves. We therefore develop analytical criteria and formulate computational methods that identify such problematic situations and test whether all proportional trade-offs are consistent, so that they can be applied simultaneously. We then propose a novel framework for aggregating local sets of trade-offs that can be combined mathematically, together with an algorithm that performs the corresponding computations effectively. We also illustrate how efficiency can be measured against an overall technology formed by the union of these local sets. An empirical illustration in the context of engineering schools explains the properties and features of the suggested approach.
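    For orientation, a schematic statement of a trade-off technology as it commonly appears in the DEA literature (the notation is ours and illustrative, not necessarily the authors'): with observed input/output vectors (x_j, y_j) for DMUs j = 1, ..., n and trade-off vectors (P_t, Q_t), t = 1, ..., K,

        T = \{ (x, y) \ge 0 \;:\; x \ge \sum_{j=1}^{n} \lambda_j x_j + \sum_{t=1}^{K} \pi_t P_t,\;
               y \le \sum_{j=1}^{n} \lambda_j y_j + \sum_{t=1}^{K} \pi_t Q_t,\;
               \lambda_j, \pi_t \ge 0 \}.

    Because the multipliers \pi_t are unbounded, an inconsistent set of trade-off vectors can make T unbounded, which is the source of the infinite or negative efficiency scores mentioned above; in the proportional form, P_t and Q_t are additionally rescaled by the assessed DMU's own input/output levels.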

    Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains

    Experimental studies have observed long-term synaptic potentiation (LTP) when a presynaptic neuron fires shortly before a postsynaptic neuron, and long-term depression (LTD) when the presynaptic neuron fires shortly after, a phenomenon known as Spike Timing Dependent Plasticity (STDP). When a neuron is presented successively with discrete volleys of input spikes, STDP has been shown to learn ‘early spike patterns’, that is, to concentrate synaptic weights on afferents that consistently fire early, with the result that the postsynaptic spike latency decreases until it reaches a minimal and stable value. Here we show that these results still stand in a continuous regime in which afferents fire continuously with a constant population rate. STDP is thus able to solve a very difficult computational problem: localizing a repeating spatio-temporal spike pattern embedded in equally dense ‘distractor’ spike trains. STDP thereby enables a form of temporal coding, even in the absence of an explicit time reference. Given that the mechanism exposed here is simple and cheap, it is hard to believe that the brain did not evolve to use it.
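    For reference, the additive STDP rule with exponential windows is a standard formalisation of this pre/post timing dependence; a minimal Python sketch (the parameter values are illustrative assumptions, not those of the paper):

        import math

        # Illustrative STDP parameters (assumed; not the paper's values)
        A_PLUS, A_MINUS = 0.01, 0.012     # LTP/LTD amplitudes
        TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

        def stdp_dw(delta_t):
            """Weight change for one pre/post spike pair.
            delta_t = t_post - t_pre (ms); positive means pre fired first."""
            if delta_t > 0:
                return A_PLUS * math.exp(-delta_t / TAU_PLUS)    # LTP
            return -A_MINUS * math.exp(delta_t / TAU_MINUS)      # LTD

        # A pre-spike 5 ms before the post-spike is potentiated,
        # one 5 ms after it is depressed.
        print(stdp_dw(5.0), stdp_dw(-5.0))

    Applied repeatedly, with weights clipped to a fixed range, such a rule concentrates weight on afferents that reliably fire just before the postsynaptic spike, which is the latency-reduction effect described above.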

    A WS-DAIR compatible interface for gLite-AMGA

    AMGA is the gLite 3.1 metadata catalogue and a database access system widely used by many groups and communities, ranging from high-energy physics to the biomedical and earth sciences. It has recently started to offer the Web Services Data Access and Integration – The Relational Realization (WS-DAIR) standard proposed by the Open Grid Forum (OGF). We present the status of this work, which will greatly improve interoperability with other WS-DAI-compliant components. The addition of a WS-DAIR interface to the gLite AMGA metadata service will greatly improve extensibility and interoperability with other data access services based on the Open Grid Services Architecture. As the standard also defines how relational database services interact with each other, it will allow the integration of data access services of different types. As an example we present the Avian Flu Drug Discovery application implemented by Academia Sinica Grid Computing (ASGC), which has been used as a test case for validating and evaluating the new interface against the older TCP-socket-based interface of AMGA with respect to performance, scalability, fault tolerance, interoperability and ease of use for Grid applications. The results of this evaluation were also presented at SC '07. As AMGA is in fact the first metadata service to adopt the WS-DAIR standard, we also present our findings on the usability of the standard and on its overall design. Adopting WS-DAIR in AMGA, which began as an exploratory project by the EGEE user community and is now part of the gLite 3.1 release, is another step towards interconnecting this data access system with similar services: AMGA can communicate with any other database access service on the Grid that has adopted WS-DAIR, and vice versa. The standard improves interoperability among database access services on the Grid by defining standard operations and a standard encoding format for data, separating the functionality of a data access service from its operational representation within a service-oriented architecture; clients, in turn, can use the service according to their own business logic. This will greatly improve the freedom of application writers to choose among suitable Grid services without the need to adapt the application, and data sources newly introduced to the Grid will be readily accessible with existing clients. We intend to further intensify the already-started collaboration with the OGF in order to improve the WS-DAIR standard and to make AMGA fully compatible with it, for example by supporting the Web Services Resource Framework. Interoperability tests with other implementations of the WS-DAIR standard remain future work and should further strengthen the growing community working on relational database access on the Grid.
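    Purely as an illustration of the service-oriented access pattern: WS-DAIR specifies SOAP operations such as SQLExecute for running SQL against a named data resource. A hypothetical Python client sketch using the zeep SOAP library; the endpoint URL, resource name and message fields below are invented placeholders, not AMGA's actual interface:

        from zeep import Client  # generic SOAP client (pip install zeep)

        # Placeholder WSDL location for a WS-DAIR-enabled service; the real
        # endpoint depends on the deployment.
        client = Client("https://amga.example.org/wsdair?wsdl")

        # WS-DAIR's SQLExecute runs an SQL expression against a named data
        # resource; the exact message shape here is an assumption.
        response = client.service.SQLExecute(
            DataResourceAbstractName="metadata-catalogue",  # placeholder
            SQLExpression="SELECT entry, checksum FROM samples",
        )
        print(response)

    The point of the standard is that the same client code would work against any WS-DAIR-compliant service, whatever database sits behind it.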

    MapReduce Implementation of Prestack Kirchhoff Time Migration (PKTM) on Seismic Data

    The oil and gas industries have long been large consumers of parallel and distributed computing systems, frequently running technical applications that intensively process terabytes of data. With the emergence of cloud computing, which offers high-throughput computing resources at lower operational cost, these industries have started to adapt their technical applications to run on such high-performance commodity systems. In this paper, we first give an overview of the forward/inverse Prestack Kirchhoff Time Migration (PKTM) algorithm, one of the well-known seismic imaging algorithms. We then explain our proposed approach for fitting this algorithm to Google's MapReduce framework. Finally, we analyse the relationship between the completion time of MapReduce-based PKTM and the number of mappers/reducers in pseudo-distributed MapReduce mode.
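    To make the map/reduce decomposition concrete, here is a minimal, self-contained sketch of the general pattern, assuming Kirchhoff migration is expressed as stacking per-trace contributions at each output image point; the toy constant-velocity model and all names are illustrative assumptions, not the paper's implementation:

        import math
        from collections import defaultdict

        VELOCITY = 2000.0  # m/s; toy constant-velocity model (assumed)

        def map_trace(trace):
            """Map phase: one trace emits (image_point, contribution) pairs."""
            src, rcv, samples = trace  # (source x, receiver x, 1 kHz samples)
            for x in range(0, 100, 10):  # toy 1-D line of image points
                # Two-way travel time for a diffractor at (x, depth 500 m)
                t = (math.hypot(x - src, 500.0) +
                     math.hypot(x - rcv, 500.0)) / VELOCITY
                idx = min(int(t * 1000), len(samples) - 1)
                yield x, samples[idx]  # partial contribution to point x

        def reduce_point(point, contributions):
            """Reduce phase: stack all contributions for one image point."""
            return point, sum(contributions)

        def run_job(traces):
            """Sequential stand-in for the MapReduce runtime (dict = shuffle)."""
            groups = defaultdict(list)
            for trace in traces:
                for point, value in map_trace(trace):
                    groups[point].append(value)
            return dict(reduce_point(p, vs) for p, vs in groups.items())

        # Usage: two synthetic traces of 2000 zero samples each
        traces = [(0.0, 50.0, [0.0] * 2000), (10.0, 60.0, [0.0] * 2000)]
        print(run_job(traces))

    In an actual Hadoop/MapReduce deployment the framework performs the shuffle, and the mapper/reducer counts referred to above control how this work is partitioned across the cluster.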
    • …