
    Die unsicheren Kanäle

    Contemporary IT security operates in a logic of perpetual one-upmanship between security measures and attack scenarios. This paranoid-structured form of negative security can be traced from the origins of IT security in modern cryptography, through computer viruses and worms, ransomware, and backdoors, to the AIDS discourse of the 1980s. Yet security in and with digitally networked media can also be conceived differently: Marie-Luise Shnayien proposes a reparative, queer concept of security whose practices are not located at the technical level, yet cannot do without precise knowledge of it.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Doubly High-Dimensional Contextual Bandits: An Interpretable Model for Joint Assortment-Pricing

    Key challenges in running a retail business include how to select products to present to consumers (the assortment problem) and how to price products (the pricing problem) to maximize revenue or profit. Instead of considering these problems in isolation, we propose a joint approach to assortment-pricing based on contextual bandits. Our model is doubly high-dimensional, in that both context vectors and actions are allowed to take values in high-dimensional spaces. To circumvent the curse of dimensionality, we propose a simple yet flexible model that captures the interactions between covariates and actions via a (near) low-rank representation matrix. The resulting class of models is reasonably expressive while remaining interpretable through latent factors, and it includes various structured linear bandit and pricing models as particular cases. We propose a computationally tractable procedure that combines an exploration/exploitation protocol with an efficient low-rank matrix estimator, and we prove bounds on its regret. Simulation results show that this method has lower regret than state-of-the-art methods applied to various standard bandit and pricing models. Real-world case studies on the assortment-pricing problem, from an industry-leading instant noodles company to an emerging beauty start-up, underscore the gains achievable using our method. In each case, we show at least three-fold gains in revenue or profit from our bandit method, as well as the interpretability of the latent factor models that are learned.
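
A concrete illustration of the bilinear low-rank structure described above. This sketch uses hypothetical synthetic data and a plain least-squares-plus-SVD-truncation estimator, not the paper's procedure: the expected reward is x^T Theta a, with context x, action a, and a (near) low-rank representation matrix Theta.

```python
# Sketch of a bilinear low-rank reward model (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
d_x, d_a, rank = 20, 15, 2

# Ground-truth low-rank representation matrix (hypothetical data).
Theta = rng.normal(size=(d_x, rank)) @ rng.normal(size=(rank, d_a))

# Collect exploration data: random contexts and actions, noisy rewards.
n = 2000
X = rng.normal(size=(n, d_x))
A = rng.normal(size=(n, d_a))
y = np.einsum('ij,jk,ik->i', X, Theta, A) + 0.1 * rng.normal(size=n)

# Vectorize the bilinear model: y ≈ (outer(x, a) flattened) @ vec(Theta),
# then fit by least squares.
Z = np.einsum('ij,ik->ijk', X, A).reshape(n, -1)
theta_ls, *_ = np.linalg.lstsq(Z, y, rcond=None)
Theta_hat = theta_ls.reshape(d_x, d_a)

# Hard singular-value truncation -- a simple stand-in for the efficient
# low-rank matrix estimator the paper proposes.
U, s, Vt = np.linalg.svd(Theta_hat, full_matrices=False)
Theta_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

err_full = np.linalg.norm(Theta_hat - Theta) / np.linalg.norm(Theta)
err_lr = np.linalg.norm(Theta_lr - Theta) / np.linalg.norm(Theta)
print(f"relative error, full LS: {err_full:.3f}, after truncation: {err_lr:.3f}")
```

The latent factors (columns of U and rows of Vt) are what makes such a model interpretable: each factor pairs a direction in context space with a direction in action space.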

    The Unexpected Sources of Innovation

    From increased access to information to a shift in production from material to immaterial goods, recent trends enable citizens to become more active agents of change. Both at home and in the workplace, citizens are increasingly producers of goods, including innovations that enable new functions compared with the existing goods offered on the (local) market. Examples range from tangible goods, such as new brewing technologies for making craft beers, to intangible goods, like open-source software. In the words of Eric von Hippel (2005): innovation is democratizing. In this thesis, Max Mulhuijzen studies the democratization of innovation. In four studies, he examines how individual citizens develop innovations and when and how these innovations diffuse. Max thereby sheds light on the process of innovation brought about by actors not recognized in the traditional academic literature: the unexpected sources of innovation. The first study unravels the process through which citizens produce household sector (HHS) innovations: in particular, how citizens' income and discretionary time permit them to develop goods at home and, subsequently, how these resources allow citizens to be innovative in their efforts. The main contributions of this chapter to the literature are a more nuanced conceptualization of HHS innovation (Max connects the concept to broader constructs of citizen production behavior, e.g., do-it-yourself) and a model theorizing how resources steer innovation by citizens. In the second study, Max takes a helicopter view of the regional factors enabling citizens to develop and diffuse innovations and develops an ecosystem model. Past studies of HHS innovation are weakly aligned in the policies they advise, which has resulted in only a few changes to policymaking.
The ecosystem model presented in this chapter explains how the most significant regional elements may determine levels of HHS innovation and how these elements complement or weaken each other, and it provides a valuable toolbox for scholars and policymakers in suggesting HHS innovation policies. The third study focuses on the interactions between innovating citizens and firms. Though the academic literature has counseled firms to open their boundaries, facilitate innovation by users of their products, and absorb knowledge from those users, few theories to date explain variation in users' characteristics and how this might explain their innovation outcomes. Max quantitatively examines the case of the Ultimaker 3D printer and its online platform YouMagine; such platforms allow users to freely share the product improvements or additions they have developed. He offers new insights into the characteristics of users who contribute designs well received by the user community, guiding firms on which users are likely contributors. The final study in this dissertation considers how a democratized view of innovation affects innovation in firms by exploring underground innovation, i.e., the innovations employees initiate and develop without their supervisors or managers knowing. Previous studies have reported such cases but did not provide an in-depth account of employees' motivations, even though these can have implications for the diffusion, and hence the visibility, of underground innovations. This study contributes such an account and reveals three orientations characterizing the projects employees develop underground.

    Limit order books in statistical arbitrage and anomaly detection

    This thesis proposes methods exploiting the vast informational content of limit order books (LOBs).
The first part of this thesis discovers LOB inefficiencies that are sources of statistical arbitrage for high-frequency traders. Chapter 1 develops new theoretical relationships between cross-listed stocks so that their prices are arbitrage-free. Price deviations are captured by a novel strategy that is then evaluated in a new backtesting environment enabling the study of latency and its importance for high-frequency traders. Chapter 2 empirically demonstrates the existence of lead-lag arbitrage at high frequency. Lead-lag relationships have been well documented in the past, but no study has shown their true economic potential. An original econometric model is proposed to forecast returns on the lagging asset, and does so accurately out-of-sample, resulting in short-lived arbitrage opportunities. In both chapters, the discovered LOB inefficiencies are shown to be profitable, thus providing a better understanding of high-frequency traders' activities. The second part of this thesis investigates anomalous patterns in LOBs. Chapter 3 studies the performance of machine learning methods in the detection of fraudulent orders. Because of the large amount of LOB data generated daily, trade frauds are challenging to catch, and very few cases are available to fit detection models. A novel unsupervised deep learning-based framework is proposed to discern abnormal LOB behavior in this difficult context. It is asset-independent and can evolve alongside markets, providing better fraud detection capabilities to market regulators.
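
A minimal numerical illustration of the lead-lag idea from Chapter 2, using synthetic data and a one-regressor OLS rather than the thesis's econometric model: the leading asset's previous return helps forecast the lagging asset's next return out-of-sample.

```python
# Toy lead-lag forecast (hypothetical data, not the thesis model).
import numpy as np

rng = np.random.default_rng(1)
n, lag_strength = 5000, 0.6

r_lead = 0.01 * rng.standard_normal(n)
# The lagging asset partially echoes the leader's previous return.
r_lag = lag_strength * np.roll(r_lead, 1) + 0.005 * rng.standard_normal(n)
r_lag[0] = 0.0

# Fit on the first half, evaluate out-of-sample on the second half.
split = n // 2
X_tr, y_tr = r_lead[:split - 1], r_lag[1:split]
beta = (X_tr @ y_tr) / (X_tr @ X_tr)          # one-regressor OLS

X_te, y_te = r_lead[split - 1:-1], r_lag[split:]
pred = beta * X_te
oos_r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"estimated lead-lag beta: {beta:.3f}, out-of-sample R^2: {oos_r2:.3f}")
```

A positive out-of-sample R^2 on such relationships is what creates the short-lived arbitrage opportunities the abstract describes; whether they are exploitable then depends on latency and trading costs.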

    A multiscale strategy for fouling prediction and mitigation in gas turbines

    Gas turbines are one of the primary sources of power for both aerospace and land-based applications. Precisely for this reason, they are often forced to operate in harsh environmental conditions that involve particle ingestion by the engine. The main implications of this problem are often underestimated. The particulate in the airflow ingested by the machine can deposit on or erode its internal surfaces, altering their aerodynamic geometry and entailing performance degradation and, possibly, a reduction in engine life. This issue affects both the compressor and the turbine section and can occur in either land-based or aeronautical turbines. For the former, the problem can be mitigated (but not eliminated) by installing filtration systems; in the aerospace field, filtration systems cannot be used. Volcanic eruptions and sand or dust storms can send particulate to aircraft cruising altitudes, and aircraft operating in remote locations or at low altitudes can also be subject to particle ingestion, especially in desert environments. The aim of this work is to propose different methodologies capable of mitigating the effects of fouling or predicting the performance degradation that it generates. For this purpose, both the hot and cold engine sections are considered. Concerning the turbine section, new design guidelines are presented, because for this specific component the time scales of failure events due to hot deposition can be on the order of minutes, which makes any predictive model inapplicable. In this respect, design optimization techniques were applied to find the HPT vane geometry that is least sensitive to fouling phenomena. After that, machine learning methods were adopted to obtain a design map that can be useful in the first steps of the design phase.
Moreover, a numerical uncertainty quantification analysis demonstrated that a deterministic optimization is not sufficient to face highly aleatory phenomena such as fouling, which suggests the use of robust design techniques to address this issue. With respect to the compressor section, the research was mainly focused on building a predictive maintenance tool, because the time scales of failure events due to cold deposition are longer than those for the hot section; hence the main challenge for this component is the optimization of the washing schedule. Several studies in the literature address this issue, but almost all of them are data-based rather than physics-based. The innovative strategy proposed here is a mixture of physics-based and data-based methodologies. In particular, a reduced-order model has been developed to predict the behaviour of the whole engine as degradation proceeds. For this purpose, a gas path code that uses the components' characteristic maps has been created to simulate the gas turbine, and a map variation technique has been used to take the fouling effects on each engine component into account. In particular, fouling coefficients have been derived as a function of the engine architecture, its operating conditions, and the contaminant characteristics, using both experimental and computational results. For the latter specifically, efforts have been made to develop a new numerical deposition/detachment model.
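
The map variation idea can be sketched as follows. All map shapes and coefficient values here are illustrative stand-ins: the thesis derives the fouling coefficients from experimental and CFD results for a given engine architecture, operating condition, and contaminant.

```python
# Schematic map-variation sketch: fouling scales a component's characteristic
# map through degradation coefficients that grow with accumulated deposit.
# All numbers are made up for illustration.
import numpy as np

def clean_compressor_map(m_dot):
    """Hypothetical clean map: pressure ratio and efficiency vs. corrected flow."""
    pr = 1.0 + 4.0 * m_dot - 1.5 * m_dot**2
    eta = 0.85 - 0.3 * (m_dot - 0.8) ** 2
    return pr, eta

def fouling_coefficients(deposit_mass, k_flow=0.08, k_eta=0.05):
    """Degradation coefficients as a function of accumulated deposit.
    k_flow and k_eta are invented sensitivities; in the thesis they come
    from experimental and computational results for each configuration."""
    c_flow = 1.0 - k_flow * (1.0 - np.exp(-deposit_mass))
    c_eta = 1.0 - k_eta * (1.0 - np.exp(-deposit_mass))
    return c_flow, c_eta

def fouled_map(m_dot, deposit_mass):
    # Fouling shifts the map toward lower corrected flow and scales efficiency.
    c_flow, c_eta = fouling_coefficients(deposit_mass)
    pr, eta = clean_compressor_map(m_dot / c_flow)
    return pr, eta * c_eta

pr0, eta0 = clean_compressor_map(0.8)
pr1, eta1 = fouled_map(0.8, deposit_mass=2.0)
print(f"clean: PR={pr0:.3f}, eta={eta0:.3f}; fouled: PR={pr1:.3f}, eta={eta1:.3f}")
```

A gas path code evaluating every component through such degraded maps is what lets the reduced-order model track whole-engine performance as deposit accumulates over a mission.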

    Demand Response in Smart Grids

    The Special Issue “Demand Response in Smart Grids” includes 11 papers on a variety of topics. The success of this Special Issue demonstrates the relevance of demand response programs and events in the operation of power and energy systems, at both the distribution level and the wider power system level. This reprint addresses the design, implementation, and operation of demand response programs, with a focus on methods and techniques to achieve an optimized operation as well as on the electricity consumer.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Machine-Learning-Powered Cyber-Physical Systems

    In the last few years, we witnessed the revolution of the Internet of Things (IoT) paradigm and the consequent growth of Cyber-Physical Systems (CPSs). IoT devices, which include a plethora of smart interconnected sensors, actuators, and microcontrollers, can sense physical phenomena occurring in an environment and provide copious amounts of heterogeneous data about the functioning of a system. As a consequence, the large amounts of generated data represent an opportunity to adopt artificial intelligence and machine learning techniques that can be used to make informed decisions aimed at the optimization of such systems, thus enabling a variety of services and applications across multiple domains. Machine learning processes and analyzes such data to generate feedback, which represents a status the environment is in. Feedback given to the user in order to make an informed decision is called open-loop feedback; thus, an open-loop CPS is characterized by the lack of an actuation directed at improving the system itself. Feedback used by the system itself to actuate a change aimed at optimizing the system is called closed-loop feedback; thus, a closed-loop CPS pairs feedback based on sensing data with an actuation that impacts the system directly. In this dissertation, we propose several applications in the context of CPSs. We propose open-loop CPSs designed for the early prediction, diagnosis, and persistency detection of Bovine Respiratory Disease (BRD) in dairy calves, and for gait activity recognition in horses. These works use sensor data, such as pedometers and automated feeders, to perform valuable real-field data collection. Data are then processed by a mix of state-of-the-art approaches and novel techniques before being fed to machine learning algorithms for classification, which informs the user on the status of their animals. Our work further evaluates a variety of trade-offs.
In the context of BRD, we adopt optimization techniques to explore the trade-offs of using sensor data as opposed to manual examination performed by domain experts. Similarly, we carry out an extensive analysis of the cost-accuracy trade-offs, which farmers can use to make informed decisions on their barn investments. In the context of horse gait recognition, we evaluate the benefits of lighter classification algorithms for improving energy and storage usage, and their impact on classification accuracy. With respect to closed-loop CPSs, we propose an incentive-based demand response approach for Heating, Ventilation, and Air Conditioning (HVAC) designed for peak load reduction in the context of smart grids. Specifically, our approach uses machine learning to process power data from smart thermostats deployed in user homes, along with their personal temperature preferences. Our machine learning models predict the power savings due to thermostat changes, which are then plugged into an optimization problem that uses auction theory coupled with behavioral science. This framework selects the set of users who fulfill the power saving requirement while minimizing the financial incentives paid to the users and, as a consequence, their discomfort. Our work on BRD has been published at IEEE DCOSS 2022 and in Frontiers in Animal Science. Our work on gait recognition has been published at IEEE SMARTCOMP 2019 and in Elsevier PMC 2020, and our work on energy management and energy prediction has been published at IEEE PerCom 2022 and IEEE SMARTCOMP 2022. Several other works were under submission when this thesis was written and are included in this document as well.
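
The user selection step can be illustrated with a simple greedy sketch. All numbers are hypothetical, and the greedy cost-per-kW heuristic is only a stand-in for the dissertation's auction-theoretic framework.

```python
# Greedy sketch of incentive-based demand response user selection:
# meet a peak-reduction target while keeping total incentives low.
# Savings and bids below are invented for illustration.

def select_users(savings_kw, bids, target_kw):
    """Pick users cheapest per kW saved until the target is met."""
    order = sorted(range(len(bids)), key=lambda i: bids[i] / savings_kw[i])
    chosen, total_saving, total_cost = [], 0.0, 0.0
    for i in order:
        if total_saving >= target_kw:
            break
        chosen.append(i)
        total_saving += savings_kw[i]
        total_cost += bids[i]
    return chosen, total_saving, total_cost

savings = [1.2, 0.8, 2.5, 0.5, 1.0]   # predicted kW saved per user (ML output)
bids = [3.0, 1.0, 6.0, 2.5, 1.5]      # incentive each user requests
chosen, saving, cost = select_users(savings, bids, target_kw=3.0)
print(f"selected users: {chosen}, total saving: {saving} kW, total cost: {cost}")
```

In the dissertation's setting the predicted savings come from the thermostat power models, and an auction mechanism (rather than this greedy heuristic) balances the power target against user incentives and discomfort.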