    Deep Learning in the Automotive Industry: Applications and Tools

    Deep Learning refers to a set of machine learning techniques that utilize neural networks with many hidden layers for tasks such as image classification, speech recognition, and language understanding. Deep learning has proven to be very effective in these domains and is pervasively used by many Internet services. In this paper, we describe different automotive use cases for deep learning, in particular in the domain of computer vision. We survey the current state of the art in libraries, tools, and infrastructures (e.g., GPUs and clouds) for implementing, training, and deploying deep neural networks. We particularly focus on convolutional neural networks and computer vision use cases, such as the visual inspection process in manufacturing plants and the analysis of social media data. Curated and labeled datasets are essential for training neural networks, yet both the availability and scope of such datasets are typically very limited. A main contribution of this paper is the creation of an automotive dataset that allows us to learn and automatically recognize different vehicle properties. We describe an end-to-end deep learning application utilizing a mobile app for data collection and process support, and an Amazon-based cloud backend for storage and training. For training, we evaluate the use of cloud and on-premises infrastructures (including multiple GPUs) in conjunction with different neural network architectures and frameworks. We assess both the training times and the accuracy of the classifier. Finally, we demonstrate the effectiveness of the trained classifier in a real-world setting during the manufacturing process.
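
    The end-to-end pipeline sketched in the abstract can be illustrated with a short transfer-learning example. The snippet below is a minimal, hypothetical sketch in PyTorch, not the paper's actual code: it fine-tunes a pretrained ResNet-50 on a labeled vehicle-image dataset, and the dataset path, class layout, and all hyperparameters are assumptions made for illustration.

        # Minimal sketch: fine-tune a pretrained CNN on vehicle images.
        # Dataset path, class folders, and hyperparameters are hypothetical.
        import torch
        import torch.nn as nn
        from torchvision import datasets, models, transforms

        preprocess = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        # Assumed layout: one subfolder per vehicle property class.
        train_set = datasets.ImageFolder("data/vehicles/train", transform=preprocess)
        loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

        # Replace the ImageNet head with one output per vehicle class.
        model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        model.to(device)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        criterion = nn.CrossEntropyLoss()

        model.train()
        for epoch in range(5):
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()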

    Process of designing robust, dependable, safe and secure software for medical devices: Point of care testing device as a case study

    This paper presents a holistic methodology for the design of medical device software, encompassing a new way of eliciting requirements, a system design process, security design guidelines, cloud architecture design, a combinatorial testing process, and agile project management. The paper uses point-of-care diagnostics as a case study, where the software and hardware must be robust and reliable to provide accurate diagnosis of diseases. As software and software-intensive systems become increasingly complex, the impact of failures can lead to significant property damage or damage to the environment. Within the medical diagnostic device software domain, such failures can result in misdiagnosis, leading to clinical complications and in some cases death. Software faults can arise from the interaction among the software, the hardware, third-party software, and the operating environment. Unanticipated environmental changes and latent coding errors lead to operational faults despite the significant effort usually expended in the design, verification, and validation of the software system. It is becoming increasingly apparent that one needs to adopt different approaches that will guarantee that a complex software system meets all safety, security, and reliability requirements, in addition to complying with standards such as IEC 62304. Many initiatives have been taken to develop safety- and security-critical systems, at different development phases and in different contexts, ranging from infrastructure design to device design, and different approaches have been implemented to design error-free software for safety-critical systems. By adopting the strategies and processes presented in this paper, one can overcome the challenges in developing error-free software for medical devices (or safety-critical systems).
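
    As one concrete instance of the combinatorial testing process the methodology calls for, the sketch below greedily builds a pairwise (2-way) covering set over hypothetical device parameters; the parameter names and values are illustrative assumptions, not taken from the paper. Pairwise coverage exercises every pair of parameter values at least once while needing far fewer cases than the full factorial.

        # Minimal sketch: greedy pairwise (2-way) test generation over
        # hypothetical point-of-care device parameters.
        from itertools import combinations, product

        params = {
            "sample_type":  ["blood", "saliva"],
            "connectivity": ["offline", "cloud"],
            "battery":      ["low", "normal"],
            "firmware":     ["v1", "v2"],
        }

        names = list(params)
        # Every value pair that must be covered by at least one test case.
        uncovered = {
            ((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va in params[a] for vb in params[b]
        }

        tests = []
        for candidate in product(*params.values()):
            case = dict(zip(names, candidate))
            gained = {
                ((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)
            } & uncovered
            if gained:                      # keep only tests that add coverage
                tests.append(case)
                uncovered -= gained

        total = 1
        for values in params.values():
            total *= len(values)
        print(f"{len(tests)} pairwise tests cover all pairs vs. {total} exhaustive")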

    A Framework for Genetic Algorithms Based on Hadoop

    Genetic Algorithms (GAs) are powerful metaheuristic techniques widely used in many real-world applications. The sequential execution of GAs requires considerable computational power in both time and resources. Nevertheless, GAs are naturally parallel, and accessing a parallel platform such as the Cloud is easy and cheap. Apache Hadoop is one of the common services that can be used for parallel applications. However, using Hadoop to develop a parallel version of GAs is not simple without dealing with its inner workings. Even though some sequential frameworks for GAs already exist, there is no framework supporting the development of GA applications that can be executed in parallel. This paper describes a framework for parallel GAs on the Hadoop platform, following the MapReduce paradigm. The main purpose of this framework is to allow the user to focus on the aspects of the GA that are specific to the problem to be addressed, while ensuring that this task is correctly executed on the Cloud with good performance. The framework has also been exploited to develop an application for the Feature Subset Selection problem. A preliminary analysis of the performance of the developed GA application, carried out on three datasets, has shown very promising results.
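
    The MapReduce split such a framework builds on can be sketched in plain Python: the map phase evaluates each individual's fitness independently (the naturally parallel part), while the reduce phase performs selection and breeding. In the minimal sketch below, the OneMax fitness function and all GA parameters are hypothetical stand-ins; on Hadoop, the map loop would run as distributed tasks rather than locally.

        # Minimal sketch: one MapReduce-style GA on the OneMax toy problem.
        import random

        def mapper(individual):
            """Map phase: score one individual independently (parallelizable)."""
            return sum(individual), individual   # OneMax: count of 1-bits

        def reducer(scored, n_offspring):
            """Reduce phase: tournament selection, crossover, mutation."""
            def pick():
                a, b = random.sample(scored, 2)
                return max(a, b)[1]              # higher fitness wins
            offspring = []
            for _ in range(n_offspring):
                p1, p2 = pick(), pick()
                cut = random.randrange(1, len(p1))
                child = p1[:cut] + p2[cut:]      # one-point crossover
                i = random.randrange(len(child))
                child[i] ^= random.random() < 0.1  # occasional bit flip
                offspring.append(child)
            return offspring

        population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
        for generation in range(10):
            scored = [mapper(ind) for ind in population]  # distributed on Hadoop
            population = reducer(scored, len(population))
        print("best fitness:", max(sum(ind) for ind in population))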

    Taxonomy of Technological IT Outsourcing Risks: Support for Risk Identification and Quantification

    The past decade has seen an increasing interest in IT outsourcing, as it promises companies many economic benefits. In recent years, IT paradigms such as Software-as-a-Service and Cloud Computing, which rely on third-party services, have been increasingly adopted. Current studies show that IT security and data privacy are the dominant factors affecting the perceived risk of IT outsourcing. Therefore, we explicitly focus on determining the technological risks related to IT security and quality-of-service characteristics associated with IT outsourcing. We conducted an extensive literature review and thoroughly document the process in order to achieve high validity and reliability. We evaluated 149 papers based on a review of their whole content and, from the 68 finally relevant papers, extracted 757 risk items. Using a successive refinement approach, which involved the reduction of similar items and iterative re-grouping, we establish a taxonomy with nine risk categories for the final 70 technological risk items. Moreover, we describe how the taxonomy can be used to support the first two phases of the IT risk management process: risk identification and quantification. To this end, for each item we give the parameters relevant for using it in an existing mathematical risk quantification model.
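
    To make the quantification step concrete, the sketch below applies the common expected-loss form of risk quantification (occurrence probability times damage) to a few hypothetical risk items; the actual items, categories, and parameter values belong to the paper's taxonomy and quantification model, not to this example.

        # Minimal sketch: expected annual loss = occurrence probability x damage,
        # aggregated per risk category. All items and numbers are hypothetical.
        from collections import defaultdict

        # (category, risk item, annual occurrence probability, damage in EUR)
        risk_items = [
            ("IT security",        "data breach at provider", 0.05, 500_000),
            ("IT security",        "insecure API endpoint",   0.10, 120_000),
            ("Quality of service", "SLA violation / outage",  0.20,  80_000),
            ("Quality of service", "vendor lock-in costs",    0.15, 200_000),
        ]

        exposure = defaultdict(float)
        for category, item, probability, damage in risk_items:
            exposure[category] += probability * damage

        for category, loss in sorted(exposure.items(), key=lambda kv: -kv[1]):
            print(f"{category:20s} expected annual loss: {loss:>9,.0f} EUR")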

    The integrity of digital technologies in the evolving characteristics of real-time enterprise architecture

    Advancements in interactive and responsive enterprises involve real-time access to the information and capabilities of emerging technologies. Digital technologies (DTs) are emerging technologies that provide end-to-end business processes (BPs), engage a diversified set of real-time enterprise (RTE) participants, and institute interactive DT services. This thesis offers a selection of the author’s work over the last decade that addresses real-time access to changing characteristics of information and the integration of DTs, both of which are critical for RTEs to run a competitive business and respond to a dynamic marketplace. The primary contributions of this work are listed below.
    • Performed an intensive investigation to illustrate the challenges of the RTE during the advancement of DTs and the corresponding business operations.
    • Constituted a practical approach to continuously evolve RTEs and measure the impact of DTs by developing, instrumenting, and inferring the standardized RTE architecture and DTs.
    • Established the RTE operational governance framework and instituted it to provide structure, oversight responsibilities, features, and interdependencies of business operations.
    • Formulated the incremental risk (IR) modeling framework to identify and correlate the evolving risks of RTEs during the deployment of DT services.
    • Derived a DT service classification scheme based on BPs, BP activities, DT paradigms, RTE processes, and RTE policies.
    • Identified and assessed the evaluation paradigms of the RTEs to measure the progress of the RTE architecture based on the DT service classifications.
    The starting point was the author’s experience with evolving aspects of DTs that are disrupting industries and consequently impacting the sustainability of the RTE. The initial publications emphasized the innovative characteristics of DTs and their lack of standardization, indicating that the impact and adaptation of DTs remained questionable for RTEs. The publications focus on developing different elements of the RTE architecture. Each published work concerns the creation of an RTE architecture framework fit for the purpose of business operations in association with DT services and their capabilities. The RTE operational governance framework and the incremental risk methodology presented in subsequent publications ensure the continuous evolution of the RTE amid advancements in DTs. Finally, each publication presents evaluation paradigms based on the identified DT service classification scheme to measure the success of the RTE architecture or its corresponding elements.
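
    As an illustration only, one possible way to encode the DT service classification dimensions named above is as a simple record type; the sketch below is an assumption of this rewrite, with hypothetical field values, not the thesis’s actual scheme.

        # Minimal sketch: a record type for one DT service classification entry.
        # Field names mirror the dimensions named in the thesis; all values
        # shown are hypothetical.
        from dataclasses import dataclass, field

        @dataclass
        class DTServiceClassification:
            service: str
            business_process: str
            bp_activities: list[str] = field(default_factory=list)
            dt_paradigm: str = ""                  # e.g. IoT, cloud, AI
            rte_processes: list[str] = field(default_factory=list)
            rte_policies: list[str] = field(default_factory=list)

        example = DTServiceClassification(
            service="predictive-maintenance",
            business_process="after-sales service",
            bp_activities=["telemetry ingestion", "failure prediction"],
            dt_paradigm="IoT + AI",
            rte_processes=["real-time monitoring"],
            rte_policies=["data retention", "access control"],
        )
        print(example.service, "->", example.dt_paradigm)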

    Standards in Disruptive Innovation: Assessment Method and Application to Cloud Computing

    This dissertation proposes a conceptual information model and a method for assessing technology standards in the context of disruptive innovations. The conceptual information model provides the foundation for structuring the relevant information. The method defines a process model that describes the instantiation of the information model for different domains and supports stakeholders in the classification and evaluation of technology standards.