162 research outputs found

    The disruptor's dilemma: TiVo and the U.S. television ecosystem

    Firms introducing disruptive innovations into multisided ecosystems may confront the disruptor's dilemma – they must gain the support of the very incumbents they disrupt. We examine how these firms may address this dilemma through a longitudinal study of TiVo, a company that pioneered the Digital Video Recorder. Our analysis reveals how TiVo navigated co-opetitive tensions by continually adjusting its strategy, its technology platform, and its relational positioning within the evolving U.S. television industry ecosystem. We theorize how (a) disruption may affect not just specific incumbents, but also the entire ecosystem, (b) co-opetition is not just dyadic, but also multilateral and intertemporal, and (c) strategy is both a deliberate and emergent process involving continual adjustments, as the disruptor attempts to balance co-opetitive tensions over time.

    Augmenting data warehousing architectures with hadoop

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. As the volume of available data increases exponentially, traditional data warehouses struggle to transform this data into actionable knowledge. Data strategies that include the creation and maintenance of data warehouses have much to gain by incorporating technologies from the Big Data spectrum. Hadoop, as a transformation tool, can add a theoretically unlimited dimension of data processing, feeding transformed information into traditional data warehouses that ultimately retain their value as central components of organizations' decision support systems. This study explores the potential of Hadoop as a data transformation tool in the setting of a traditional data warehouse environment. Hadoop's execution model, oriented toward distributed parallel processing, offers great capabilities when the amount of data to be processed requires the infrastructure to expand. Horizontal scalability, a key aspect of a Hadoop cluster, allows processing power to grow in proportion to the volume of data. Using Hive on Tez in a Hadoop cluster, this study transforms television viewing events, extracted from Ericsson's Mediaroom Internet Protocol Television infrastructure, into pertinent audience metrics such as Rating, Reach, and Share. These measurements are then made available in a traditional data warehouse, supported by a traditional Relational Database Management System, where they are presented through a set of reports. The main contribution of this research is a proposed augmented data warehouse architecture in which the traditional ETL layer is replaced by a Hadoop cluster, running Hive on Tez, with the purpose of performing the heaviest transformations that convert raw data into actionable information. Through a typification of the SQL statements responsible for the data transformation processes, we were able to understand that Hadoop, with its distributed processing model, delivers outstanding performance in the analytical layer, namely in the aggregation of large data sets. Ultimately, we demonstrate empirically the performance gains that can be extracted from Hadoop, in comparison to an RDBMS, regarding speed, storage usage, and scalability potential, and suggest how this can be used to evolve data warehouses into the age of Big Data.
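    The dissertation's HiveQL statements are not reproduced in this abstract, but the kind of aggregation it describes, turning raw viewing events into Rating, Reach, and Share, can be sketched in Python with pandas. This is a minimal illustration only: the column names, the universe size, the time-slot length, and the exact metric definitions below are assumptions rather than the study's specification, and in the proposed architecture the equivalent work is expressed in SQL and executed by Hive on Tez over the full event volume.

        import pandas as pd

        # Raw viewing events: one row per (household, channel) viewing interval.
        events = pd.DataFrame({
            "household_id":   [1, 1, 2, 3, 3, 4],
            "channel":        ["A", "B", "A", "A", "C", "B"],
            "minutes_viewed": [30, 10, 60, 15, 45, 5],
        })

        UNIVERSE = 10        # households able to watch (assumed)
        SLOT_MINUTES = 60    # length of the analysed time slot (assumed)

        per_channel = events.groupby("channel").agg(
            viewers=("household_id", "nunique"),
            minutes=("minutes_viewed", "sum"),
        )

        # Reach: share of the universe that watched the channel at all.
        per_channel["reach_pct"] = 100.0 * per_channel["viewers"] / UNIVERSE
        # Rating: average audience over the slot, as a share of the universe.
        per_channel["rating_pct"] = 100.0 * per_channel["minutes"] / (UNIVERSE * SLOT_MINUTES)
        # Share: a channel's viewing minutes as a share of all viewing minutes.
        per_channel["share_pct"] = 100.0 * per_channel["minutes"] / events["minutes_viewed"].sum()

        print(per_channel)

    The design point of the augmented architecture is that only compact metric tables like this one, not the raw events, need to reach the RDBMS-backed warehouse.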

    A Deep Learning approach for monitoring severe rainfall in urban catchments using consumer cameras. Model development and deployment on a case study in Matera (Italy)

    In the last 50 years, flooding has been the most frequent and widespread natural disaster globally. Extreme precipitation events stemming from climate change could alter the hydro-geological regime, resulting in increased flood risk. Near real-time precipitation monitoring at the local scale is essential for flood risk mitigation in urban and suburban areas, due to their high vulnerability. Presently, most rainfall data is obtained from ground-based measurements or remote sensing, which provide limited information in terms of temporal or spatial resolution; high costs pose additional problems. Furthermore, rain gauges are unevenly spread and usually placed away from urban centers. In this context, innovative techniques for building low-cost monitoring systems offer great potential. Despite the diversity of purposes, methods, and epistemological fields, the literature on the visual effects of rain supports the idea of camera-based rain sensors, but tends to be device-specific. The present thesis investigates the use of easily available photographic devices as rain detectors and gauges, in order to develop a dense network of low-cost rainfall sensors that supports traditional methods with an expeditious solution embeddable into smart devices. As opposed to existing works, the study focuses on maximizing the number of image sources (smartphones, general-purpose surveillance cameras, dashboard cameras, webcams, digital cameras, etc.). This encompasses cases where it is not possible to adjust the camera parameters or to obtain shots in timelines or videos. Using a Deep Learning approach, rainfall characterization is achieved by analyzing the perceptual aspects that determine whether and how a photograph represents a rainy condition. The first scenario of interest for supervised learning was binary classification; the binary output (presence or absence of rain) allows detection of precipitation: the cameras act as rain detectors. Similarly, the second scenario of interest was multi-class classification; the multi-class output describes ranges of quasi-instantaneous rainfall intensity: the cameras act as rain estimators. Using Transfer Learning with Convolutional Neural Networks, the developed models were compiled, trained, validated, and tested. Preparing the classifiers included assembling a suitable dataset encompassing unconstrained, verisimilar settings: open data, data owned by the National Research Institute for Earth Science and Disaster Prevention (NIED) (dashboard cameras in Japan coupled with high-precision multi-parameter radar data), and experimental activities conducted in the NIED Large Scale Rainfall Simulator. The outcomes were applied to a real-world scenario, experimenting with a pre-existing surveillance camera over 5G connectivity provided by Telecom Italia S.p.A. in the city of Matera (Italy). The analysis unfolded on several levels, providing an overview of general issues relating to the urban flood risk paradigm and of territorial questions specific to the case study: the context, the important role of rainfall, from driving the city's millennial urban evolution to determining present criticalities, and the components of a Web prototype for flood risk communication at the local scale.
The results and the model deployment raise the possibility that low-cost technologies and local capacities can help retrieve rainfall information for flood early warning systems based on the identification of a significant meteorological state. The binary model reached accuracy and F1 score values of 85.28% and 0.86 on the test set, and 83.35% and 0.82 in the deployment. The multi-class model reached average test accuracy and macro-averaged F1 score values of 77.71% and 0.73 for the 6-class classifier, and 78.05% and 0.81 for the 5-class classifier. The best performances were obtained in heavy-rainfall and no-rain conditions, whereas mispredictions are related to less severe precipitation. The proposed method has limited operational requirements and can be implemented easily and quickly in real use cases, exploiting pre-existing devices with a parsimonious use of economic and computational resources. The classification can be performed on single photographs taken in disparate conditions by commonly used acquisition devices, i.e. by static or moving cameras without adjusted parameters. This approach is especially useful in urban areas where measurement methods such as rain gauges encounter installation difficulties or operational limitations, or in contexts where no remote sensing data are available. The system does not suit scenes that are misleading even for human visual perception, and the approximations inherent in the output are acknowledged. Additional data may be gathered to address apparent gaps and to improve the accuracy of the precipitation intensity prediction. Future research might explore integration with further experiments and crowdsourced data, to promote communication, participation, and dialogue among stakeholders and to increase public awareness, emergency response, and civic engagement through the smart community idea.
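    As a rough illustration of the transfer-learning setup described above, a minimal Keras sketch of the binary rain / no-rain classifier is given below. The backbone (MobileNetV2), input size, optimizer, and learning rate are assumptions, not the choices documented in the thesis; the multi-class rain estimators would end in a softmax layer with 5 or 6 units instead of the single sigmoid output.

        import tensorflow as tf

        IMG_SIZE = (224, 224)

        # Pretrained convolutional backbone, frozen so only the new head is trained.
        base = tf.keras.applications.MobileNetV2(
            input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
        base.trainable = False

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(1, activation="sigmoid"),   # rain vs. no rain
        ])

        model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss="binary_crossentropy",
                      metrics=["accuracy"])

        # train_ds / val_ds would be tf.data.Dataset objects built from labelled
        # photographs, e.g. via tf.keras.utils.image_dataset_from_directory(...).
        # model.fit(train_ds, validation_data=val_ds, epochs=10)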

    Main Battle Tank crew in-tank activities and workload

    This thesis has examined Main Battle Tank (MBT) crew activities and workload. The aims were to provide a package of workload and fatigue indicators for use during field exercises; to document the 'in-tank' activities of MBT crew; to provide a task analysis which includes an indication of task demands, and to draw general conclusions concerning the practical assessment of human workload. It is in two volumes. The first volume, which is unclassified, gives the background, presents reviews of relevant literature on task analysis and workload, and gives an overview of the thesis. It then documents the studies and experiments which were carried out in order to refine the methodology and to select a battery of measures of workload and fatigue suitable for use on military exercises. [Continues.]

    Grid-based methods for chemistry simulations on a quantum computer

    First-quantized, grid-based methods for chemistry modeling are a natural and elegant fit for quantum computers. However, it is infeasible to use today's quantum prototypes to explore the power of this approach because it requires a substantial number of near-perfect qubits. Here, we use exactly emulated quantum computers with up to 36 qubits to execute deep yet resource-frugal algorithms that model 2D and 3D atoms with single and paired particles. A range of tasks is explored, from ground state preparation and energy estimation to the dynamics of scattering and ionization; we evaluate various methods within the split-operator QFT (SO-QFT) Hamiltonian simulation paradigm, including protocols previously described in theoretical papers and our own techniques. While we identify certain restrictions and caveats, the grid-based method is generally found to perform very well; our results are consistent with the view that first-quantized paradigms will be dominant from the early fault-tolerant quantum computing era onward.
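    The split-operator structure underlying SO-QFT can be illustrated classically: the wavefunction on a grid is evolved by alternating a potential phase in the position basis with a kinetic phase applied after a Fourier transform, which is the role the QFT plays on a quantum computer. The numpy sketch below is a classical 1D illustration under assumed units (hbar = m = 1), grid size, and potential; it is not the quantum circuit itself.

        import numpy as np

        N = 64                                       # grid points (6 qubits per dimension)
        L = 20.0                                     # box length
        dx = L / N
        x = (np.arange(N) - N // 2) * dx
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)    # conjugate momenta
        dt = 0.01                                    # time step

        V = 0.5 * x**2                               # illustrative potential (harmonic well)
        psi = np.exp(-x**2)                          # illustrative initial state
        psi = psi / np.linalg.norm(psi)

        def split_operator_step(psi):
            """Strang splitting: half potential kick, full kinetic step, half kick."""
            psi = np.exp(-0.5j * V * dt) * psi           # potential phase (position basis)
            psi = np.fft.fft(psi)                        # to momentum basis (QFT analogue)
            psi = np.exp(-0.5j * (k ** 2) * dt) * psi    # kinetic phase
            psi = np.fft.ifft(psi)                       # back to position basis
            psi = np.exp(-0.5j * V * dt) * psi           # second half potential kick
            return psi

        for _ in range(100):
            psi = split_operator_step(psi)
        print("norm after 100 steps:", np.linalg.norm(psi))  # ~1, evolution is unitary

    On a quantum computer the same decomposition acts on amplitudes over a 2^n-point grid, with diagonal phase operations in place of the elementwise exponentials and QFTs in place of the FFTs.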

    Applications of MATLAB in Science and Engineering

    The book consists of 24 chapters illustrating a wide range of areas where MATLAB tools are applied. These areas include mathematics, physics, chemistry and chemical engineering, mechanical engineering, biological (molecular biology) and medical sciences, communication and control systems, digital signal, image and video processing, and system modeling and simulation. Many interesting problems are included throughout the book, and its contents will be beneficial for students and professionals in a wide range of fields.