
    Machine learning for quality control system

    In this work, we propose and develop a classification model to be used in a quality control system for clothing manufacturing using machine learning algorithms. The system uses pictures taken with mobile devices to detect defects on production objects. In this work, a defect is a missing component or a wrong component in a production object. Therefore, the function of the system is to classify the components that compose a production object through the use of a classification model. As a manufacturing business progresses, new objects are created, so the classification model must be able to learn the new classes without losing previous knowledge. However, most classification algorithms do not support an increase in the number of classes; they must be retrained from scratch with all classes. In this work, we therefore use an incremental learning algorithm to tackle this problem. This algorithm classifies features extracted from pictures of the production objects using a convolutional neural network (CNN), an architecture that has proven very successful in image classification problems. We apply the developed approach to a process in clothing manufacturing, so the production objects correspond to clothing items.
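    The key requirement above is adding classes without retraining on old data. One minimal incremental scheme with this property (a sketch for illustration, not the paper's actual algorithm) is a nearest-class-mean classifier over fixed feature vectors such as CNN embeddings: learning a new class only requires storing its running mean.

```python
# Sketch of class-incremental classification over fixed feature vectors
# (e.g. CNN embeddings). Unseen labels are added on the fly, so old
# classes are never forgotten. All names and data are illustrative.

class NearestClassMean:
    def __init__(self):
        self.sums = {}    # label -> running sum of feature vectors
        self.counts = {}  # label -> number of examples seen

    def partial_fit(self, features, label):
        # Accumulate per-class statistics incrementally.
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
        self.counts[label] += 1

    def predict(self, features):
        # Classify by squared Euclidean distance to each class mean.
        def dist(label):
            mean = [s / self.counts[label] for s in self.sums[label]]
            return sum((f - m) ** 2 for f, m in zip(features, mean))
        return min(self.sums, key=dist)

clf = NearestClassMean()
clf.partial_fit([1.0, 0.0], "shirt")
clf.partial_fit([0.0, 1.0], "button")
clf.partial_fit([0.0, 0.9], "button")
print(clf.predict([0.1, 0.8]))       # -> button
clf.partial_fit([1.0, 1.0], "zipper")  # a new class, added incrementally
print(clf.predict([0.9, 0.95]))      # -> zipper
```

    A CNN-based approach would replace the toy two-dimensional vectors with embeddings extracted from the product photographs; the incremental rule itself is unchanged.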

    Compositional Falsification of Cyber-Physical Systems with Machine Learning Components

    Cyber-physical systems (CPS), such as automotive systems, are starting to include sophisticated machine learning (ML) components. Their correctness, therefore, depends on properties of the inner ML modules. While learning algorithms aim to generalize from examples, they are only as good as the examples provided, and recent efforts have shown that they can produce inconsistent output under small adversarial perturbations. This raises the question: can the output of the learning components lead to a failure of the entire CPS? In this work, we address this question by formulating it as a problem of falsifying signal temporal logic (STL) specifications for CPS with ML components. We propose a compositional falsification framework in which a temporal logic falsifier and a machine learning analyzer cooperate with the aim of finding falsifying executions of the considered model. The efficacy of the proposed technique is shown on an automatic emergency braking system model with a perception component based on deep neural networks.
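    Falsification means searching for an input under which the quantitative robustness of an STL property goes negative. The following toy sketch illustrates that loop on a simplified braking model; the dynamics, the property "always (distance > 0)", and the random search are all illustrative assumptions, not the paper's compositional framework.

```python
# Sketch of STL falsification on a toy emergency-braking model.
# Robustness of "always (distance > 0)" is the minimum distance margin
# over the trace; a negative value is a falsifying execution.
import random

def simulate(init_speed, brake_gain, dt=0.1, steps=50):
    # Toy dynamics: distance to the obstacle shrinks with speed,
    # speed decays under constant braking. Returns the distance trace.
    speed, distance = init_speed, 10.0
    trace = []
    for _ in range(steps):
        distance -= speed * dt
        speed = max(0.0, speed - brake_gain * dt)
        trace.append(distance)
    return trace

def robustness(trace):
    # Quantitative semantics of "always (distance > 0)".
    return min(trace)

def falsify(trials=200, seed=0):
    # Random search over initial speeds for a counterexample.
    rng = random.Random(seed)
    for _ in range(trials):
        v0 = rng.uniform(0.0, 10.0)
        rho = robustness(simulate(v0, brake_gain=3.0))
        if rho < 0:
            return v0, rho  # falsifying initial speed and its robustness
    return None

counterexample = falsify()
```

    The paper's framework additionally interposes an ML analyzer that probes the perception component for misclassification regions; this sketch only shows the outer robustness-guided search.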

    Neural overlap of L1 and L2 semantic representations across visual and auditory modalities: a decoding approach

    This study investigated whether brain activity in Dutch-French bilinguals during semantic access to concepts from one language could be used to predict neural activation during access to the same concepts from another language, in different language modalities/tasks. This was tested using multi-voxel pattern analysis (MVPA), within and across language comprehension (word listening and word reading) and production (picture naming). It was possible to identify the picture or word named, read or heard in one language (e.g. maan, meaning moon) based on the brain activity in a distributed bilateral brain network while, respectively, naming, reading or listening to the picture or word in the other language (e.g. lune). The brain regions identified differed across tasks. During picture naming, brain activation in the occipital and temporal regions allowed concepts to be predicted across languages. During word listening and word reading, across-language predictions were observed in the rolandic operculum and several motor-related areas (pre- and postcentral, the cerebellum). In addition, across-language predictions during reading were identified in regions typically associated with semantic processing (left inferior frontal, middle temporal cortex, right cerebellum and precuneus) and visual processing (inferior and middle occipital regions and calcarine sulcus). Furthermore, across modalities and languages, the left lingual gyrus showed semantic overlap across production and word reading. These findings support the idea of at least partially language- and modality-independent semantic neural representations.

    Combining Language and Vision with a Multimodal Skip-gram Model

    We extend the SKIP-GRAM model of Mikolov et al. (2013a) by taking visual information into account. Like SKIP-GRAM, our multimodal models (MMSKIP-GRAM) build vector-based word representations by learning to predict linguistic contexts in text corpora. However, for a restricted set of words, the models are also exposed to visual representations of the objects they denote (extracted from natural images), and must predict linguistic and visual features jointly. The MMSKIP-GRAM models achieve good performance on a variety of semantic benchmarks. Moreover, since they propagate visual information to all words, we use them to improve image labeling and retrieval in the zero-shot setup, where the test concepts are never seen during model training. Finally, the MMSKIP-GRAM models discover intriguing visual properties of abstract words, paving the way to realistic implementations of embodied theories of meaning.
    Comment: accepted at NAACL 2015, camera-ready version, 11 pages
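    The visual term such models add to the skip-gram objective can be sketched as a max-margin loss that pushes a word's vector closer to the image vector of its referent than to a randomly sampled image. This is a simplified illustration under that assumption: it omits the linguistic skip-gram term and any cross-modal mapping the full model learns, and the vectors are toy values.

```python
# Sketch of a max-margin visual objective for a multimodal skip-gram:
# the word vector should be at least `margin` more similar to the
# correct image vector than to a negative sample. Values are toy.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def visual_hinge_loss(word_vec, pos_img, neg_img, margin=0.5):
    # Zero loss once the correct image wins by at least the margin.
    return max(0.0, margin - cosine(word_vec, pos_img)
                     + cosine(word_vec, neg_img))

w_cat = [0.9, 0.1, 0.3]       # word vector for "cat"
img_cat = [1.0, 0.0, 0.2]     # visual features of a cat image
img_random = [0.0, 1.0, 0.1]  # negative image sample
loss = visual_hinge_loss(w_cat, img_cat, img_random)
```

    During training this loss would be summed with the usual skip-gram context-prediction loss for the subset of words that have associated images.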

    The iconicity advantage in sign production: The case of bimodal bilinguals

    Recent evidence demonstrates that pictures corresponding to iconic signs are named faster than pictures corresponding to non-iconic signs. The present study investigates the locus of the iconicity advantage in hearing bimodal bilinguals. A naming experiment with iconic and non-iconic pictures in Italian Sign Language (LIS) was conducted. Bimodal bilinguals named the pictures either using a noun construction that involved the production of the sign corresponding to the picture or using a marked demonstrative pronoun construction replacing the picture sign. In the latter condition, the pictures were colored and participants were instructed to name the pronoun together with the color. The iconicity advantage was reliable in the noun utterance but not in the marked demonstrative pronoun utterance. In a third condition, the colored pictures were presented as distractor stimuli and participants were required to name the color. In this condition, distractor pictures with iconic signs elicited faster naming latencies than those with non-iconic signs. The results suggest that the advantage of iconic signs in production arises at the level of semantic-to-phonological links. In addition, we conclude that bimodal bilinguals and native signers do not differ in terms of the activation flow within the sign production system.