1,126 research outputs found

    Integrating Case-Based Reasoning with Adaptive Process Management

    The need for more flexibility in process-aware information systems (PAIS) has been discussed for several years, and different approaches for adaptive process management have emerged. Only a few of them support both changes to individual process instances and the propagation of process type changes to a collection of related process instances. The knowledge about changes has not yet been exploited by any of these systems. To overcome this practical limitation, PAIS must capture the whole process life cycle and all kinds of changes in an integrated way. They must allow users to deviate from the predefined process in exceptional situations, and assist them in retrieving and reusing knowledge about previously performed changes. In this report we present a proof-of-concept implementation of a learning adaptive PAIS. The prototype combines the ADEPT2 framework for dynamic process changes with concepts and methods provided by case-based reasoning (CBR) technology.

    Automatic low-cost IP watermarking technique based on output mark insertions

    Today, although intellectual properties (IP) and their reuse are common, their use raises design security issues: illegal copying, counterfeiting, and reverse engineering. IP watermarking is an efficient way to detect an unauthorized IP copy or a counterfeit. In this context, many interesting solutions have been proposed; however, few combine the watermarking process with synthesis. This article presents a new solution: automatic low-cost IP watermarking included in the high-level synthesis process. The proposed method differs from those cited in the literature in that the mark is not material, but is based on mathematical relationships between numeric values at the inputs and outputs at specified times. Implementation results with a Xilinx Virtex-5 FPGA show that the proposed solution requires lower area and timing overhead than existing solutions.
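    As a purely hypothetical illustration of the idea (the paper targets hardware IP produced by high-level synthesis, not software; the function, trace layout, and relations below are all invented for this sketch), verification can be pictured as checking that the design's outputs at specified times satisfy known mathematical relationships with its inputs:

```python
# Hypothetical sketch: the watermark is not extra hardware but a set of
# (time, relation) constraints linking input values to output values.
# Verification replays a recorded I/O trace and checks each relation.
def verify_watermark(trace, marks):
    """trace maps a time step to an (input, output) pair; marks is a list
    of (time, relation) where relation(input) == output is expected at
    that time if the design carries the watermark."""
    return all(relation(trace[t][0]) == trace[t][1] for t, relation in marks)

# A design whose outputs at times 3 and 8 are three times its inputs.
trace = {3: (7, 21), 8: (4, 12)}
marks = [(3, lambda x: 3 * x), (8, lambda x: 3 * x)]
print(verify_watermark(trace, marks))  # True
```

    A counterfeit that reproduces the design's ordinary behaviour but not the embedded relations would fail this check.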

    Improving performance using computational compression through memoization: A case study using a railway power consumption simulator

    The objective of data compression is to avoid redundancy in order to reduce the size of the data to be stored or transmitted. In some scenarios, data compression may help to increase global performance by reducing the amount of data at a competitive cost in terms of global time and energy consumption. We have introduced computational compression as a technique for reducing redundant computation, in other words, for avoiding carrying out the same computation with the same input to obtain the same output. In some scenarios, such as simulations and graphics processing, part of the computation is repeated with the same input to obtain the same output, and this computation can have a significant cost in terms of global time and energy consumption. We propose applying computational compression by using memoization to store results for future reuse and, in this way, minimize repetitions of the same costly computation. Although memoization was proposed for sequential applications in the 1980s, and some projects have applied it in very specific domains, we propose a novel, domain-independent way of using it in high-performance applications as a means of avoiding redundant computation. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under the project TIN2013-41350-P (Scalable Data Management Techniques for High-End Computing Systems).
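    The core idea is easy to sketch in Python with the standard library's `functools.lru_cache` (the paper's high-performance implementation is not shown here; `expensive_step` is an invented stand-in for a costly, repeated computation):

```python
import math
from functools import lru_cache

# Invented stand-in for a costly computation that a simulation
# repeats many times with identical inputs.
def expensive_step(x: float) -> float:
    return sum(math.sin(x + i * 1e-6) for i in range(1000))

@lru_cache(maxsize=None)  # memoize: identical inputs reuse the stored result
def memoized_step(x: float) -> float:
    return expensive_step(x)

# Repeated inputs hit the cache instead of recomputing.
inputs = [0.1, 0.2, 0.1, 0.2, 0.1]
results = [memoized_step(x) for x in inputs]
print(memoized_step.cache_info().hits, memoized_step.cache_info().misses)  # 3 2
```

    Of the five calls, only two distinct computations are performed; the other three are served from the cache, which is the "computational compression" effect.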

    Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification

    Person re-identification (re-id) aims to match pedestrians observed by disjoint camera views. It attracts increasing attention in computer vision due to its importance to surveillance systems. To combat the major challenge of cross-view visual variations, deep embedding approaches learn a compact feature space from images such that Euclidean distances correspond to a cross-view similarity metric. However, the global Euclidean distance cannot faithfully characterize the ideal similarity in a complex visual feature space, because features of pedestrian images exhibit unknown distributions due to large variations in pose, illumination, and occlusion. Moreover, intra-personal training samples within a local range are robust guides for deep embedding against uncontrolled variations, yet they cannot be captured by a global Euclidean distance. In this paper, we study the problem of person re-id by proposing a novel sampling method that mines suitable positives (i.e. intra-class samples) within a local range to improve the deep embedding in the context of large intra-class variations. Our method learns a deep similarity metric adaptive to local sample structure by minimizing each sample's local distances while propagating through the relationships between samples to attain whole intra-class minimization. To this end, a novel objective function is proposed to jointly optimize similarity metric learning, local positive mining, and robust deep embedding. This yields local discrimination by selecting local-ranged positive samples, and the learned features are robust to dramatic intra-class variations. Experiments on benchmarks show state-of-the-art results achieved by our method. Comment: Published in Pattern Recognition.
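    A minimal sketch of the local positive mining step (simplified from the paper; the function name and the toy 2-D "embeddings" below are invented for illustration) selects the k same-identity samples nearest to an anchor in the embedding space:

```python
import math

def mine_local_positives(features, labels, anchor_idx, k=3):
    """Return indices of the k same-class samples nearest to the anchor
    in the embedding space -- a simplified local positive mining step."""
    anchor = features[anchor_idx]
    candidates = [i for i, y in enumerate(labels)
                  if y == labels[anchor_idx] and i != anchor_idx]
    return sorted(candidates, key=lambda i: math.dist(features[i], anchor))[:k]

# Toy 2-D embeddings: identity 0 clustered near the origin, identity 1 far away.
feats = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.2), (5.0, 5.0), (5.1, 5.0)]
labels = [0, 0, 0, 1, 1]
print(mine_local_positives(feats, labels, anchor_idx=0, k=2))  # [1, 2]
```

    The selected local-range positives would then drive the embedding loss, rather than all same-identity pairs regardless of distance.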

    Proactive Interference Caused By Repeated Use Of Memory Palace


    Idiographic Digital Profiling: Behavioral Analysis Based On Digital Forensics

    Idiographic digital profiling (IDP) is the application of behavioral analysis to the field of digital forensics. Previous work in this field takes a nomothetic approach to behavioral analysis by attempting to understand the aggregate behaviors of cybercriminals. This work is the first to take an idiographic approach by examining a particular subject's digital footprints for immediate use in an ongoing investigation. IDP provides a framework for investigators to analyze digital behavioral evidence for the purposes of case planning, subject identification, lead generation, obtaining and executing warrants, and prosecuting offenders.

    Resource Management for Edge Computing in Internet of Things (IoT)

    The large number of devices in the Internet of Things (IoT) and their continuous data collection lead to rapid growth in the amount of collected data. Processing all of this data entirely on central cloud servers is inefficient and sometimes even impossible or unnecessary. Data processing is therefore moved to the edge of the network, which has led to the concepts of edge computing. Processing information close to the data source (e.g. on gateways and edge devices) not only reduces the heavy load on central servers and networks, but also lowers the latency of real-time applications, since the potentially unreliable communication with cloud servers, with its unpredictable network latency, is avoided. Current IoT architectures use gateways to establish application-specific connections to IoT devices. In typical configurations, several IoT edge devices share one IoT gateway. Because of the limited bandwidth and computing capacity of an IoT gateway, the quality of service (QoS) of the connected IoT edge devices must be adapted over time, not only to satisfy the requirements of the individual users of the IoT devices, but also to accommodate the QoS needs of the other IoT edge devices on the same gateway. This thesis first examines essential technologies for IoT and existing trends, presenting characteristic properties of IoT for the embedded domain as well as a comprehensive IoT perspective for embedded systems. Several healthcare applications are studied and implemented in order to derive a model for their data-processing software. This application model helps to identify different operating modes. IoT systems expect edge devices to support multiple operating modes so that they can adapt to changing scenarios at runtime, e.g. 
energy-saving modes when battery reserves are low while critical functionality is maintained, or a mode that increases the quality of service at the user's request. These modes use either different offloading schemes (e.g. transmitting raw data, partially processed data, or only the final result) or different quality-of-service levels. Operating modes differ in their resource requirements both on the device (e.g. energy consumption) and on the gateway (e.g. communication bandwidth, computing power, memory, etc.). Selecting the best operating mode for edge devices is challenging given the limited resources at the edge of the network (e.g. bandwidth and computing power of the shared gateway), the diverse constraints of the IoT edge devices (e.g. battery lifetime, quality of service, etc.), and the runtime variability at the edge of the IoT infrastructure. This thesis develops and presents fast and efficient operating-mode selection techniques. When IoT devices are within range of several gateways, managing the shared resources and selecting the operating modes for the IoT devices become even more complex. This thesis presents a distributed, trade-based device management mechanism for IoT systems with multiple gateways. The mechanism targets the combined problem of binding (i.e. determining a gateway for each IoT device) and allocation (i.e. determining the resources assigned to each device). Starting from an initial configuration, the gateways negotiate and communicate with each other and migrate IoT devices between gateways whenever this increases the benefit for the overall system. This thesis also presents application-specific optimizations for IoT devices. Three healthcare applications were realized and studied for wearable IoT devices. 
A novel compression method is also presented that is particularly suitable for IoT applications that process bio-signals for health monitoring. This technique reduces the amount of data the IoT device has to transmit, thereby reducing the resource utilization on the device and on the shared gateway. To evaluate the proposed techniques and mechanisms, several applications were studied on IoT platforms to determine their parameters, such as execution time and resource usage. These parameters were then used in a framework that models the IoT network, captures the interaction between devices and gateway, and measures the communication overhead as well as the achieved battery lifetime and quality of service of the devices. The operating-mode selection algorithms were additionally implemented on IoT platforms to measure their overheads in terms of execution time and memory consumption.
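    The operating-mode selection problem described above can be stated as a multiple-choice knapsack: pick one mode per device, maximizing total utility under the gateway's shared resource budget. The brute-force Python sketch below (the device/mode numbers are invented) illustrates the problem itself, not the thesis's fast selection heuristics:

```python
from itertools import product

def select_modes(devices, budget):
    """devices[d] is a list of (utility, bandwidth_cost) operating modes.
    Choose one mode per device, maximizing total utility while keeping the
    total bandwidth cost within the shared gateway's budget."""
    best_util, best_choice = -1, None
    for choice in product(*(range(len(modes)) for modes in devices)):
        util = sum(devices[d][m][0] for d, m in enumerate(choice))
        cost = sum(devices[d][m][1] for d, m in enumerate(choice))
        if cost <= budget and util > best_util:
            best_util, best_choice = util, choice
    return best_util, best_choice

devices = [
    [(1, 1), (4, 3), (6, 5)],  # device 0: low / medium / high quality
    [(2, 2), (5, 4)],          # device 1: low / high quality
]
print(select_modes(devices, budget=7))  # (9, (1, 1))
```

    Exhaustive search is exponential in the number of devices, which is why resource-constrained gateways need the fast, efficient selection techniques the thesis develops.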

    Automated Website Fingerprinting through Deep Learning

    Several studies have shown that the network traffic generated by a visit to a website over Tor reveals information specific to the website through the timing and sizes of network packets. By capturing traffic traces between users and their Tor entry guard, a network eavesdropper can leverage this metadata to reveal which websites Tor users are visiting. The success of such attacks depends heavily on the particular set of traffic features used to construct the fingerprint. Typically, these features are manually engineered and, as such, any change introduced to the Tor network can render these carefully constructed features ineffective. In this paper, we show that an adversary can automate the feature engineering process, and thus automatically deanonymize Tor traffic, by applying our novel method based on deep learning. We collect a dataset comprising more than three million network traces, the largest dataset of web traffic ever used for website fingerprinting, and find that the performance achieved by our deep learning approaches is comparable to that of known methods, which include research efforts spanning multiple years. The obtained success rate exceeds 96% for a closed world of 100 websites and 94% for our biggest closed world of 900 classes. In our open-world evaluation, the most performant deep learning model is 2% more accurate than the state-of-the-art attack. Furthermore, we show that the implicit features automatically learned by our approach are far more resilient to dynamic changes of web content over time. We conclude that the ability to automatically construct the most relevant traffic features and perform accurate traffic recognition makes our deep-learning-based approach an efficient, flexible, and robust technique for website fingerprinting. Comment: To appear in the 25th Symposium on Network and Distributed System Security (NDSS 2018).
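    A common input representation for such deep models is a fixed-length sequence of packet directions derived from signed packet sizes (the paper's exact preprocessing may differ; the function below is an illustrative sketch):

```python
def trace_to_input(trace, length=8):
    """Encode a traffic trace as a fixed-length direction sequence for a
    deep model (e.g. a 1D CNN): +1 for outgoing packets, -1 for incoming,
    zero-padded or truncated to `length`."""
    directions = [1 if size > 0 else -1 for _time, size in trace]
    directions = directions[:length]
    return directions + [0] * (length - len(directions))

# (timestamp, signed packet size): positive = outgoing, negative = incoming.
trace = [(0.00, 512), (0.01, -1500), (0.02, -1500), (0.05, 512)]
print(trace_to_input(trace))  # [1, -1, -1, 1, 0, 0, 0, 0]
```

    Feeding raw direction sequences to the network, instead of hand-crafted statistics, is what lets the model learn its own traffic features.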

    EKILA: Synthetic Media Provenance and Attribution for Generative Art

    We present EKILA, a decentralized framework that enables creatives to receive recognition and reward for their contributions to generative AI (GenAI). EKILA proposes a robust visual attribution technique and combines it with an emerging content provenance standard (C2PA) to address the problem of synthetic image provenance -- determining the generative model and training data responsible for an AI-generated image. Furthermore, EKILA extends the non-fungible token (NFT) ecosystem to introduce a tokenized representation for rights, enabling a triangular relationship between an asset's Ownership, Rights, and Attribution (ORA). Leveraging the ORA relationship enables creators to express agency over training consent and, through our attribution model, to receive apportioned credit, including royalty payments for the use of their assets in GenAI. Comment: Proc. CVPR Workshop on Media Forensics 2023.

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions for a variety of geophysical applications. However, knowledge of the many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.
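    The universal-approximation claim refers to networks of the form y = sum_j v_j * sigma(w_j * x + b_j) with a single hidden layer. A minimal Python sketch (weights hand-picked for illustration, not trained) shows even one tanh unit approximating a smooth step:

```python
import math

def mlp_forward(x, hidden, output):
    """One-hidden-layer network y = sum_j v_j * tanh(w_j * x + b_j),
    the form covered by the universal approximation theorem."""
    return sum(v * math.tanh(w * x + b) for (w, b), v in zip(hidden, output))

# A single unit, tanh(10 * (x - 0.5)), already approximates a step at x = 0.5;
# adding more units lets the sum fit any continuous function arbitrarily well.
def step(x):
    return mlp_forward(x, hidden=[(10.0, -5.0)], output=[1.0])

print(round(step(0.0), 3), round(step(1.0), 3))  # -1.0 1.0
```

    In practice, the weights are of course found by training rather than chosen by hand, which is where the faster-training techniques the paper describes come in.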