
    Building a payment card skimmer database

    With the rise of credit cards as an integral part of the economy, crime related to them has risen accordingly. One of the most common ways to steal credit card data is through skimmers in gas pumps. A skimmer consists of a simple PCB (printed circuit board) that is inserted inside the gas pump to steal customers' credit card data. Costs incurred through fraud can run well into the thousands of dollars per person. Criminal groups install multiple skimmers across counties and states in the US. When skimmers are eventually discovered, it is practically impossible for police to conduct a successful investigation. Police departments rarely collaborate on cases that span different counties and states, which eliminates any possibility of them being solved. Skimmer Tracker is a web application that lets law enforcement agencies publish the skimmers they find. By sharing evidence across cases, we aim to group different skimmers and connect them as part of the same case through computer-vision-based analysis.
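    The abstract does not detail the computer-vision pipeline, but a minimal sketch of how skimmer photos might be grouped is shown below. It assumes OpenCV is available and uses ORB feature matching as a stand-in for whatever analysis the thesis actually implements; the function name and the similarity threshold are illustrative, not taken from Skimmer Tracker.

```python
# Hypothetical sketch: grouping skimmer photos by visual similarity with ORB
# feature matching. This is NOT the Skimmer Tracker pipeline, just one common
# computer-vision approach to "are these two PCBs the same design?".
import cv2

def orb_similarity(path_a: str, path_b: str) -> float:
    """Return a rough similarity score between two images (higher = more alike)."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # Count only close matches; 64 is an arbitrary Hamming-distance cutoff.
    good = [m for m in matches if m.distance < 64]
    return len(good) / max(len(des_a), len(des_b))

# Two skimmer photos would be linked to the same case if the score clears an
# (assumed) threshold, e.g. orb_similarity("skim1.jpg", "skim2.jpg") > 0.3.
```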

    Ensuring Service Level Agreements for Composite Services by Means of Request Scheduling

    Building distributed systems according to the Service-Oriented Architecture (SOA) simplifies the integration process, reduces development costs, and increases scalability, interoperability, and openness. SOA endorses reusing existing services and aggregating them into new service layers for future recycling. At the same time, the complexity of large service-oriented systems reflects negatively on their behavior in terms of the exhibited Quality of Service. To address this problem, this thesis focuses on using request scheduling to meet Service Level Agreements (SLAs). Special focus is given to composite services specified by means of workflow languages. The proposed solution uses two-level scheduling: global and local. The global policies assign response time requirements to component service invocations; the local scheduling policies then schedule requests so as to meet these requirements. The proposed scheduling approach can be deployed without altering the code of the scheduled services, does not require a central point of control, and is platform independent. Experiments conducted in simulation were used to study the effectiveness and feasibility of the proposed scheduling schemes with respect to various deployment requirements. The validity of the simulation was confirmed by comparing its results to those obtained in experiments with a real-world service. The proposed approach was shown to work well under different traffic conditions and with different types of SLAs.
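    The abstract only names the two-level scheme, so the sketch below is a hedged toy model of the idea: a global policy that splits an end-to-end SLA deadline across workflow steps in proportion to their mean service times, plus a local earliest-deadline-first (EDF) queue at each component service. None of the class or function names come from the thesis.

```python
# Toy sketch of two-level scheduling for a composite service (assumed design,
# not the thesis's actual policies).
import heapq

def assign_step_deadlines(sla_deadline: float, mean_times: list[float]) -> list[float]:
    """Global policy: give each workflow step a share of the SLA budget."""
    total = sum(mean_times)
    budget = 0.0
    deadlines = []
    for t in mean_times:
        budget += sla_deadline * (t / total)
        deadlines.append(budget)  # cumulative deadline for each step
    return deadlines

class EdfQueue:
    """Local policy: serve the request with the earliest deadline first."""
    def __init__(self):
        self._heap = []
    def enqueue(self, deadline: float, request_id: str) -> None:
        heapq.heappush(self._heap, (deadline, request_id))
    def next_request(self) -> str:
        return heapq.heappop(self._heap)[1]

# Example: a 3-step workflow under a 300 ms SLA.
deadlines = assign_step_deadlines(300.0, [10.0, 40.0, 50.0])
# -> [30.0, 150.0, 300.0]; step i's invocation is enqueued locally with deadlines[i].
```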

    The implementation of manufacturing concepts in a non-traditional manufacturing environment.

    This thesis is the result of a two-year Teaching Company Programme between Middlesex University and Tony Stone Images, in which basic manufacturing philosophy was applied to an arts-based company. Various techniques were used, including flowcharts, a production simulation package, and critical path analysis, to determine the production lead time of a photographic transparency to three possible destinations. Once the processes involved had been identified, a series of smaller projects was undertaken to remove the non-value-adding elements. In some cases this resulted in quite large changes within the company; one was to move departmental locations in the building to better reflect the flow of work through the organisation. Once improvements had been made, a system was installed that allows accurate tracking of images through the processes in terms of their volume, location, and routing history. This has provided valuable information on image processing by giving a more realistic figure for production lead time, rather than senior managers relying on a 'best guess' for each department. The system became known as the Workflow System, and its development and installation were divided into two phases; this thesis covers phase one. One of the most important issues arising from the project is that the company has grown so rapidly over the last few years that it has become unable to change its culture and operating policy to meet the demands of customers or competition. The conflict between image quality and image volume needs to be resolved in order to allow production to be measured and controlled more scientifically, providing an efficient manufacturing lead time that satisfies market needs. The Workflow System has provided the foundations upon which this measurement and control can be built.
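    The abstract mentions critical path analysis for determining production lead time; the toy sketch below shows the standard longest-path computation over a task graph. The step names and durations are invented for illustration and do not come from the thesis.

```python
# Toy critical-path calculation: production lead time is the longest path
# through the dependency graph of processing steps. Step names and durations
# are made up for illustration.
from functools import lru_cache

# step -> (duration in hours, prerequisite steps); hypothetical pipeline
steps = {
    "edit":      (2.0, []),
    "duplicate": (1.5, ["edit"]),
    "caption":   (0.5, ["edit"]),
    "dispatch":  (1.0, ["duplicate", "caption"]),
}

@lru_cache(maxsize=None)
def earliest_finish(step: str) -> float:
    """Earliest finish time = duration + latest prerequisite finish."""
    duration, prereqs = steps[step]
    return duration + max((earliest_finish(p) for p in prereqs), default=0.0)

lead_time = max(earliest_finish(s) for s in steps)
print(f"production lead time: {lead_time} h")  # 2.0 + 1.5 + 1.0 = 4.5 h
```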

    Task Scheduling in Big Data Platforms: A Systematic Literature Review

    Context: Hadoop, Spark, Storm, and Mesos are very well-known frameworks in both the research and industrial communities that allow expressing and processing distributed computations on massive amounts of data. Multiple scheduling algorithms have been proposed to ensure that short interactive jobs, large batch jobs, and guaranteed-capacity production jobs running on these frameworks can deliver results quickly while maintaining a high throughput. However, only a few works have examined the effectiveness of these algorithms. Objective: The Evidence-Based Software Engineering (EBSE) paradigm and its core tool, the Systematic Literature Review (SLR), were introduced to the software engineering community in 2004 to help researchers systematically and objectively gather and aggregate research evidence about different topics. In this paper, we conduct an SLR of task scheduling algorithms that have been proposed for big data platforms. Method: We analyse the design decisions of different scheduling models proposed in the literature for Hadoop, Spark, Storm, and Mesos over the period between 2005 and 2016. We provide a research taxonomy for succinct classification of these scheduling models. We also compare the algorithms in terms of performance, resource utilization, and failure-recovery mechanisms. Results: Our search identified 586 studies from high-quality journals, conferences, and workshops in this field. This SLR reports on different types of scheduling models (dynamic, constrained, and adaptive) and the main motivations behind them (including data locality, workload balancing, resource utilization, and energy efficiency). A discussion of some open issues and future challenges pertaining to improving the current studies is provided.

    Using process mining to learn from process changes in evolutionary systems

    Traditional information systems struggle with the requirement to provide flexibility and process support while still enforcing some degree of control. Accordingly, adaptive process management systems (PMSs) have emerged that provide some flexibility by enabling dynamic process changes during runtime. Based on the assumption that these process changes are recorded explicitly, we present two techniques for mining change logs in adaptive PMSs; i.e., we do not only analyze the execution logs of the operational processes, but also consider the adaptations made at the process instance level. The change processes discovered through process mining provide an aggregated overview of all changes that have happened so far. This, in turn, can serve as a basis for integrating the extrinsic drivers of process change (i.e., the stimuli for flexibility) with existing process adaptation approaches (i.e., the intrinsic change mechanisms). Using process mining as an analysis tool, we show in this paper how better support can be provided for truly flexible processes by understanding when and why process changes become necessary.
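    As a rough illustration of what mining a change log can look like (not the authors' actual techniques), the sketch below aggregates per-instance change events into a directly-follows graph, a common process-mining building block. The change operations in the log are invented.

```python
# Toy change-log mining: build a directly-follows graph over change operations.
# This is a generic process-mining primitive, not the paper's algorithm; the
# change operations below are invented examples.
from collections import Counter
from itertools import pairwise  # Python 3.10+

# One change trace per adapted process instance.
change_log = [
    ["insert(labTest)", "delete(xRay)", "move(report)"],
    ["insert(labTest)", "move(report)"],
    ["insert(labTest)", "delete(xRay)", "move(report)"],
]

# Count how often one change operation directly follows another.
dfg = Counter(edge for trace in change_log for edge in pairwise(trace))

for (src, dst), count in dfg.most_common():
    print(f"{src} -> {dst}: {count}")
# insert(labTest) -> delete(xRay): 2
# delete(xRay) -> move(report): 2
# insert(labTest) -> move(report): 1
```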

    Adaptive Failure-Aware Scheduling for Hadoop

    Given the dynamic nature of cloud environments, failures are the norm rather than the exception in the data centers powering cloud frameworks. Despite the diversity of integrated recovery mechanisms in cloud frameworks, their schedulers still generate poor scheduling decisions, leading to task failures due to unforeseen events such as unpredicted service demands or hardware outages. Traditionally, simulation and analytical modeling have been widely used to analyze the impact of scheduling decisions on failure rates. However, they cannot provide accurate results and exhaustive coverage of cloud systems, especially when failures occur. In this thesis, we present new approaches for modeling and verifying an adaptive failure-aware scheduling algorithm for Hadoop, to detect these failures early and to reschedule tasks according to changes in the cloud. Hadoop is the framework of choice on many off-the-shelf clusters in the cloud for processing data-intensive applications by efficiently running them across multiple distributed machines. The proposed scheduling algorithm for Hadoop relies on predictions made by machine learning algorithms trained on previously executed tasks and on data collected from the Hadoop environment. To further improve Hadoop scheduling decisions on the fly, we use reinforcement learning techniques to select an appropriate scheduling action for each scheduled task. Furthermore, we propose an adaptive algorithm to dynamically detect node failures in Hadoop. We implement the above approaches in ATLAS: an AdapTive Failure-Aware Scheduling algorithm that can be built on top of existing Hadoop schedulers. To illustrate the usefulness and benefits of ATLAS, we conduct a large empirical study on a Hadoop cluster deployed on Amazon Elastic MapReduce (EMR), comparing the performance of ATLAS to that of three Hadoop scheduling algorithms (FIFO, Fair, and Capacity). Results show that ATLAS outperforms these scheduling algorithms in terms of failure rates, execution times, and resource utilization. Finally, we propose a new methodology to formally identify the impact of Hadoop's scheduling decisions on failure rates. We use model checking to verify some of the most important scheduling properties in Hadoop (schedulability, resource-deadlock freeness, and fairness) and provide possible strategies to avoid their violation in ATLAS. The formal verification of the Hadoop scheduler makes it possible to identify more task failures and hence to reduce the number of failures in ATLAS.
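    The abstract says reinforcement learning selects a scheduling action per task; the hedged sketch below shows one generic way to do that with an epsilon-greedy Q-table. The state encoding, action set, and reward scheme are assumptions for illustration, not ATLAS's actual design.

```python
# Hedged sketch of RL-driven scheduling-action selection (epsilon-greedy
# Q-learning). States, actions, and rewards are invented; the abstract does
# not publish ATLAS's internals.
import random
from collections import defaultdict

ACTIONS = ["schedule_now", "delay", "reschedule_other_node"]  # assumed action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state: str) -> str:
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard Q-learning update after observing whether the task failed."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

# Example: a task on a node predicted likely to fail; reward -1 if it fails.
action = choose_action("node_load=high,failure_risk=high")
update("node_load=high,failure_risk=high", action, -1.0, "node_load=high,failure_risk=low")
```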

    Medication safety in intravenous drug administration : error causes and systemic defenses in hospital setting

    Intravenous administration of drugs is associated with higher medication error frequencies and more serious consequences to the patient than any other administration route. The bioavailability of intravenously administered medication is high, the therapeutic dose range is often narrow, and effects are hard to undo. Many intravenously administered drugs are high-alert medications, bearing a heightened risk of causing significant patient harm if used in error. Smart infusion pumps with dose-error-reduction software can be used to prevent harmful medication errors in high-risk clinical settings, such as neonatal intensive care units.

    This study investigated intravenous medication safety in hospital settings by identifying recent research evidence on systemic causes of medication errors (Study I) and systemic defenses to prevent these errors (Study II). The study also explored the development of dose-error-reduction software in a neonatal intensive care unit (Study III). A systems approach to medication risk management, based on the Theory of Human Error, was applied as the theoretical framework.

    The study was conducted in two phases. In the first phase, a systematic review of recent research evidence on systemic causes of intravenous medication errors (Study I) and systemic defenses aiming to prevent these errors (Study II) was carried out. In Study I, 11 studies from six countries were included in the analysis. Systemic causes related to prescribing (n=6 studies), preparation (n=6), administration (n=6), dispensing and storage (n=5), and treatment monitoring (n=2) were identified. Insufficient actions to secure the safe use of high-alert medications, lack of knowledge of the drug, failures in calculation tasks and in double-checking procedures, and confusion between look-alike, sound-alike medications were the leading causes of intravenous medication errors. The number of included studies was limited, all of them being observational studies graded as low quality.

    In Study II, 46 studies from 11 countries were included in the analysis. Systemic defenses related to administration (n=24 studies), prescribing (n=8), preparation (n=6), treatment monitoring (n=2), and dispensing (n=1) were identified. In addition, five studies explored defenses related to multiple stages of the medication use process. Defenses featuring closed-loop medication management systems appeared in 61% of the studies, smart pumps being the most widely studied defense (24%). The evidence quality of the included articles was limited: 83% were graded as low quality, 13% as moderate quality, and only 4% as high quality.

    In the second phase, a mixed-methods study applying qualitative and quantitative methods was conducted (Study III). Medication error reports were used to develop simulation-type test cases for assessing the suitability of the dosing limits in a neonatal intensive care unit's smart infusion pump drug library. Of all medication errors reported in the neonatal intensive care unit, 3.5% (n=21/601) involved an error or near miss related to a wrong infusion rate. Based on the identified error mechanisms, 2-, 5-, and 10-fold infusion rates and mix-ups between the infusion rates of different drugs were established as test cases. When the pump programming was carried out for the test cases (n=226), no alerts were triggered by infusion rates corresponding to usual dosages (n=32). Of the erroneous 2-, 5-, and 10-fold infusion rates, 73% (n=70/96) caused an alert, whereas mix-ups between infusion rates triggered an alert in only 24% (n=24/98) of the test cases.

    This study provides an overview of recent research evidence on intravenous medication safety in hospital settings. Current intravenous medication systems remain vulnerable, which can result in patient harm. While in-hospital intravenous medication use processes are developing towards closed-loop medication management systems, combinations of different defenses and their effectiveness in error prevention should be explored. In addition to improving medication safety, implementing new systemic defenses introduces new error types, emphasizing the importance of continuous proactive risk management as an essential part of clinical practice.
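    To make the drug-library idea concrete, here is a hedged sketch of how dose-error-reduction software checks a programmed infusion rate against soft and hard limits. The drug names, limit values, and function names are invented for illustration; they are not the drug library evaluated in Study III.

```python
# Hypothetical sketch of a smart-pump dose-error-reduction check. The limits
# below are invented placeholders, NOT clinical values from the study.
from dataclasses import dataclass

@dataclass
class RateLimits:
    soft_max: float  # mL/h; triggers an alert the nurse may override
    hard_max: float  # mL/h; programming is blocked

# Assumed per-drug library entries (illustrative only).
drug_library = {
    "drugA": RateLimits(soft_max=5.0, hard_max=10.0),
    "drugB": RateLimits(soft_max=8.0, hard_max=16.0),
}

def check_rate(drug: str, rate_ml_h: float) -> str:
    """Return the pump's response to a programmed infusion rate."""
    limits = drug_library[drug]
    if rate_ml_h > limits.hard_max:
        return "BLOCKED: above hard limit"
    if rate_ml_h > limits.soft_max:
        return "ALERT: above soft limit, override required"
    return "OK: no alert"

# A 10-fold programming error on drugA is caught ...
print(check_rate("drugA", 30.0))  # BLOCKED: above hard limit
# ... but programming drugA at drugB's usual rate (assumed 4 mL/h) stays within
# drugA's limits and passes silently, mirroring the mix-up finding in Study III.
print(check_rate("drugA", 4.0))   # OK: no alert
```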