3,461 research outputs found

    A Framework for Developing Real-Time OLAP algorithm using Multi-core processing and GPU: Heterogeneous Computing

    Full text link
    The overwhelmingly increasing amount of stored data has spurred researchers to seek methods that take optimal advantage of it, most of which face a response-time problem caused by the sheer size of the data. Most solutions suggest materialization as the favoured approach; however, materialization alone cannot attain real-time answers. In this paper we propose a framework illustrating the barriers, and suggested solutions, on the way to achieving the Real-Time OLAP answers that are heavily used in decision support systems and data warehouses.
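    The abstract gives no code, but the core idea of answering aggregates on demand across parallel workers, rather than from pre-materialized cubes, can be sketched as below. This is a minimal Python illustration under assumed details: the SUM-by-region query, the partition layout, and the names partial_aggregate and realtime_rollup are our own choices, not the paper's API; on the heterogeneous side, a GPU kernel would stand in for the per-partition step.

```python
# Minimal sketch (not from the paper): answering an OLAP-style aggregate
# in parallel over in-memory partitions instead of pre-materialized cubes.
from multiprocessing import Pool
from collections import defaultdict

def partial_aggregate(partition):
    """Aggregate one partition: SUM(sales) grouped by region."""
    acc = defaultdict(float)
    for region, sales in partition:
        acc[region] += sales
    return acc

def realtime_rollup(partitions, workers=4):
    """Fan out partitions to CPU workers, then merge the partial results.
    A GPU kernel could replace partial_aggregate for large partitions."""
    with Pool(workers) as pool:
        partials = pool.map(partial_aggregate, partitions)
    total = defaultdict(float)
    for part in partials:
        for region, sales in part.items():
            total[region] += sales
    return dict(total)

if __name__ == "__main__":
    data = [[("EU", 10.0), ("US", 5.0)], [("EU", 2.5), ("APAC", 7.0)]]
    print(realtime_rollup(data))  # {'EU': 12.5, 'US': 5.0, 'APAC': 7.0}
```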

    From “Sapienza” to “Sapienza, State Archives in Rome”. A looping effect bringing back to the original source communication and culture by innovative and low cost 3D surveying, imaging systems and GIS applications

    Get PDF
    Application of integrated low-cost measurement technologies, web GIS, computational photography techniques for communicating and sharing data, cloud computing systems, and large-scale data archiving. High-quality survey models, realized by multiple low-cost methods and technologies, as a container for sharing cultural and archival heritage: this is the aim guiding our research, here described in its primary applications. The SAPIENZA building, a XVI-century masterpiece that represented the first unified headquarters of the University in Rome, has played, since 1936, when the University moved to its newly built campus, the role of main venue for the State Archives. With the collaboration of a group of students of the Architecture Faculty, several integrated survey methods were successfully applied to the monument. Work began with a topographic survey, creating a reference on the ground and along the monument for the subsequent applications, followed by a GNSS RTK survey georeferencing points in the internal courtyard. Dense stereo matching photogrammetry is nowadays an accepted method for generating accurate and scalable 3D survey models; its low cost means it often substitutes for 3D laser scanning, so it became our choice. Some 360° shots were planned to create panoramic views of the double portico from the courtyard, plus additional single shots of some lateral spans and of pillars facing the court, as a single operation with a double aim: to create linked panotours with hotspots to web-linked databases, and 3D textured, georeferenced surface models allowing study of the harmonic proportions of the classical architectural order. Free web GIS platforms were also used to load the work into Google Earth, and low-cost 3D prototypes of some representative parts were produced.
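    As a small illustration of the web GIS step, the sketch below writes georeferenced survey points to a KML file that Google Earth can load. The point names and WGS84 coordinates are placeholders, not the project's actual survey data.

```python
# Minimal sketch: exporting georeferenced survey points as KML so they can
# be loaded in Google Earth, as the abstract describes for the web GIS step.
# The point names and coordinates below are placeholders.
points = [
    ("court_pillar_01", 12.4736, 41.8989, 21.0),  # lon, lat, alt (m)
    ("court_pillar_02", 12.4738, 41.8990, 21.2),
]

placemarks = "\n".join(
    f"  <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat},{alt}</coordinates></Point></Placemark>"
    for name, lon, lat, alt in points
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    f"<Document>\n{placemarks}\n</Document>\n</kml>\n"
)

with open("sapienza_points.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```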

    Virtual sensor networks: collaboration and resource sharing

    Get PDF
    This thesis contributes to the advancement of Sensing as a Service (Se-aaS), based on cloud infrastructures, through the development of models and algorithms that make efficient use of both sensor and cloud resources while reducing the delay associated with the data flow between the cloud and client sides, which results in a better quality of experience for users. The first models and algorithms developed are suitable for the case of mashups managed at the client side; models and algorithms for mashups managed in the cloud were developed afterwards. This requires solving multiple problems: i) clustering of compatible mashup elements; ii) allocation of devices to clusters, meaning that a device will serve multiple applications/mashups; iii) reduction of the amount of data flowing between workplaces, and the associated delay, which depends on clustering, device allocation and placement of workplaces. The developed strategies can be adopted by cloud service providers wishing to improve the performance of their clouds. Several steps towards an efficient Se-aaS business model were performed. A mathematical model was developed to assess the impact of resource allocations on scalability, QoE and elasticity. Regarding the clustering of mashup elements, a first mathematical model was developed for the selection of the best pre-calculated clusters of mashup elements (virtual Things), and a second model is then proposed for the best virtual Things to be built (non-pre-calculated clusters); these models are evaluated through heuristic algorithms that take them as a basis. Such models and algorithms were first developed for the case of mashups managed at the client side, and were afterwards extended to the case of mashups managed in the cloud. To improve these last results, a mathematical programming optimization model was developed that allows optimal clustering and resource allocation solutions to be obtained. Although this is a computationally difficult approach, its added value is that the problem is rigorously outlined, and such knowledge is used as a guide in the development of a better heuristic algorithm.
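    To make the clustering and allocation problems concrete, here is a minimal greedy sketch, our illustration rather than the thesis's algorithms: mashup elements are grouped into virtual Things under an assumed compatibility rule (same sensor type and sampling rate), and each cluster is then served by a single physical device.

```python
# Minimal sketch (our illustration, not the thesis's algorithm): greedily
# cluster compatible mashup elements into "virtual Things" and allocate one
# physical device per cluster, so a device serves several mashups at once.
def compatible(a, b):
    """Assumed rule: elements are compatible if they need the same
    sensor type at the same sampling rate."""
    return a["sensor"] == b["sensor"] and a["rate_hz"] == b["rate_hz"]

def cluster_elements(elements):
    clusters = []
    for el in elements:
        for cl in clusters:
            if all(compatible(el, other) for other in cl):
                cl.append(el)
                break
        else:
            clusters.append([el])
    return clusters

def allocate_devices(clusters, devices):
    """Assign each cluster the first free device offering the needed sensor."""
    allocation, free = {}, list(devices)
    for i, cl in enumerate(clusters):
        need = cl[0]["sensor"]
        dev = next((d for d in free if need in d["sensors"]), None)
        if dev:
            allocation[i] = dev["id"]
            free.remove(dev)
    return allocation

elements = [
    {"mashup": "m1", "sensor": "temp", "rate_hz": 1},
    {"mashup": "m2", "sensor": "temp", "rate_hz": 1},
    {"mashup": "m3", "sensor": "co2", "rate_hz": 10},
]
devices = [{"id": "d1", "sensors": {"temp"}}, {"id": "d2", "sensors": {"co2"}}]
print(allocate_devices(cluster_elements(elements), devices))  # {0: 'd1', 1: 'd2'}
```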

    PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems

    Full text link
    Machine Learning models are often composed of pipelines of transformations. While this design allows single model components to be executed efficiently at training time, prediction serving has different requirements, such as low latency, high throughput and graceful performance degradation under heavy load. Current prediction serving systems consider models as black boxes, whereby prediction-time-specific optimizations are ignored in favor of ease of deployment. In this paper, we present PRETZEL, a prediction serving system introducing a novel white box architecture enabling both end-to-end and multi-model optimizations. Using production-like model pipelines, our experiments show that PRETZEL is able to introduce performance improvements over different dimensions; compared to state-of-the-art approaches, PRETZEL is on average able to reduce 99th percentile latency by 5.5x while reducing memory footprint by 25x and increasing throughput by 4.7x.
    Comment: 16 pages, 14 figures; 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2018
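    A white-box serving runtime can, for instance, share work across pipelines that contain an identical stage, one of the multi-model optimizations the abstract alludes to. The sketch below is our illustration of that idea, not PRETZEL's actual API; the class and stage names are assumptions.

```python
# Minimal sketch of one white-box idea: if several model pipelines share an
# identical early stage (same featurizer, same parameters), compute it once
# per request and reuse the result across models.
class SharedStageRuntime:
    def __init__(self):
        self.stage_cache = {}  # (stage_key, input_id) -> cached output

    def run_stage(self, stage_key, fn, input_id, data):
        key = (stage_key, input_id)
        if key not in self.stage_cache:
            self.stage_cache[key] = fn(data)
        return self.stage_cache[key]

runtime = SharedStageRuntime()
tokenize = lambda text: text.lower().split()

def predict_model_a(text, input_id):
    tokens = runtime.run_stage("tok_v1", tokenize, input_id, text)
    return len(tokens)            # stand-in for model A's scorer

def predict_model_b(text, input_id):
    tokens = runtime.run_stage("tok_v1", tokenize, input_id, text)
    return sum(map(len, tokens))  # stand-in for model B's scorer

# Both pipelines reuse one tokenization of the same request:
print(predict_model_a("Serving ML models", 42),
      predict_model_b("Serving ML models", 42))
```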

    Resource allocation model for sensor clouds under the sensing as a service paradigm

    Get PDF
    Sensing as a Service is emerging as a new Internet of Things (IoT) business model for sensors and data sharing in the cloud. Under this paradigm, a resource allocation model for the assignment of both sensors and cloud resources to clients/applications is proposed. Unlike previous approaches, this model is adequate for emerging IoT Sensing as a Service business models supporting multi-sensing applications and mashups of Things in the cloud. A heuristic algorithm based on this model is also proposed. Results show that the approach is able to incorporate strategies that lead to the allocation of fewer devices, while selecting the most adequate ones for application needs.
    Funded by FCT (Foundation for Science and Technology), Portugal, within CEOT (Center for Electronic, Optoelectronic and Telecommunications), UID/MULTI/00631/2019.
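    As an illustration of how allocating fewer, more adequate devices might work, the following greedy sketch (an assumed heuristic, not necessarily the paper's) always picks the device covering the most still-unmet sensing requirements.

```python
# Minimal sketch (assumed greedy heuristic): allocate few devices by always
# picking the one that covers the most still-unmet sensing requirements.
def allocate(requirements, devices):
    unmet, chosen = set(requirements), []
    while unmet:
        best = max(devices, key=lambda d: len(unmet & d["sensors"]), default=None)
        if best is None or not unmet & best["sensors"]:
            break  # remaining requirements cannot be met
        chosen.append(best["id"])
        unmet -= best["sensors"]
        devices = [d for d in devices if d["id"] != best["id"]]
    return chosen, unmet

devices = [
    {"id": "node-a", "sensors": {"temp", "humidity"}},
    {"id": "node-b", "sensors": {"temp"}},
    {"id": "node-c", "sensors": {"noise"}},
]
print(allocate({"temp", "humidity", "noise"}, devices))
# (['node-a', 'node-c'], set()) -- two devices instead of three
```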

    Digitally interpreting traditional folk crafts

    Get PDF
    Cultural heritage preservation requires that objects persist through time and continue to communicate an intended meaning. The necessity of computer-based preservation and interpretation of traditional folk crafts is underscored by the decreasing number of masters, fading technologies, and crafts losing economic ground. We present a long-term applied research project on the development of a mathematical basis, software tools, and technology for applying desktop or personal fabrication, using compact, cheap, and environmentally friendly fabrication devices such as 3D printers, to traditional crafts. We illustrate the properties of this new modeling and fabrication system through several case studies involving the digital capture of traditional objects and craft patterns, which we also reuse in modern designs. The test application areas are traditional crafts from different cultural backgrounds, namely Japanese lacquer ware and Norwegian carvings. Our project includes modeling existing artifacts, Web presentation of the models, automation of model fabrication, and the experimental manufacturing of new designs and forms.
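    The abstract does not specify the mathematical basis, but function-based (implicit) modeling is one plausible form such a basis can take. The sketch below, with an assumed shape and union operator, shows how a carved pattern could be expressed as a scalar field and sampled for fabrication.

```python
# Minimal sketch of function-based (implicit) modeling: a solid is the set
# of points where a field f(x, y, z) >= 0, and shapes combine via max().
# The specific shapes here are our illustration, not the project's models.
import math

def carving_groove(x, y, z):
    """Implicit field of a torus-like groove: f >= 0 inside the solid."""
    r_major, r_minor = 1.0, 0.25
    q = math.hypot(x, y) - r_major
    return r_minor**2 - (q*q + z*z)

def plate(x, y, z):
    return 0.05 - abs(z)  # thin slab the groove decorates

def union(f, g):
    """Set-theoretic union of two implicit solids (max of the fields)."""
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

shape = union(plate, carving_groove)

# Sample the field on a coarse grid; points with f >= 0 belong to the
# object and could feed a mesher or a 3D-printer slicer.
inside = [(x/4, y/4, z/8)
          for x in range(-6, 7) for y in range(-6, 7) for z in range(-2, 3)
          if shape(x/4, y/4, z/8) >= 0]
print(len(inside), "grid points inside the model")
```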

    Consistent and efficient output-streams management in optimistic simulation platforms

    Get PDF
    Optimistic synchronization is considered an effective means for supporting Parallel Discrete Event Simulations. It relies on a speculative approach, where concurrent processes execute simulation events regardless of their safety, and consistency is ensured via proper rollback mechanisms upon the a-posteriori detection of causal inconsistencies along the events' execution path. Interactions with the outside world (e.g. generation of output streams) are a well-known problem for rollback-based systems, since the outside world may have no notion of rollback. In this context, approaches for allowing the simulation modeler to generate consistent output rely either on ad-hoc APIs (which must be provided by the underlying simulation kernel) or on temporary suspension of processing activities in order to wait for the final outcome (commit/rollback) associated with speculatively-produced output. In this paper we present design indications and a reference implementation for an output-streams management subsystem which allows the simulation-model writer to rely on standard output-generation libraries (e.g. stdio) within code blocks associated with event processing. Further, the subsystem ensures that the produced output is consistent, namely associated with events that are eventually committed, and system-wide ordered along the simulation-time axis. These features jointly provide the illusion of a classical (simple to deal with) sequential programming model, which spares the developer from having to be aware that the simulation program runs concurrently and speculatively. We also show, via an experimental study, how the design/development optimizations we present lead to limited overhead, so that the simulation run is carried out with near-zero or reduced output-management cost. At the same time, the delay for materializing the output stream (making it available for any type of audit activity) is shown to be fairly limited and constant, especially for good mixtures of I/O-bound vs CPU-bound behaviors at the application level. Further, the whole output-streams management subsystem has been designed to provide scalability for I/O management on clusters.
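    The mechanism the paper develops can be pictured with a small sketch: output produced while speculatively processing events is buffered per event timestamp, discarded on rollback, and flushed in simulation-time order once the commit horizon (GVT) passes it. The class below is our simplification, not the reference implementation.

```python
# Minimal sketch: buffer speculative output, undo it on rollback, and flush
# it in simulation-time order once events are committed (timestamp < GVT).
import sys

class SpeculativeOutput:
    def __init__(self):
        self.pending = {}  # event timestamp -> list of output lines

    def write(self, ts, line):
        self.pending.setdefault(ts, []).append(line)

    def rollback(self, ts):
        """Undo output of speculative events at simulation times >= ts."""
        for t in [t for t in self.pending if t >= ts]:
            del self.pending[t]

    def commit(self, gvt):
        """Flush output for events now known to be committed (ts < gvt)."""
        for t in sorted(t for t in self.pending if t < gvt):
            for line in self.pending.pop(t):
                sys.stdout.write(line)

out = SpeculativeOutput()
out.write(5.0, "event at t=5\n")
out.write(9.0, "event at t=9\n")
out.rollback(8.0)   # causal inconsistency detected: t=9 is undone
out.commit(10.0)    # only the t=5 line is ever printed
```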