
    Evaluation of classical machine learning techniques towards urban sound recognition embedded systems

    Automatic urban sound classification is a desirable capability for urban monitoring systems, allowing real-time monitoring of urban environments and recognition of events. Current embedded systems provide enough computational power to perform real-time urban audio recognition. Using such devices for edge computation as nodes of Wireless Sensor Networks (WSN) drastically reduces the required bandwidth. In this paper, we evaluate classical Machine Learning (ML) techniques for urban sound classification on embedded devices with respect to accuracy and execution time. This evaluation provides a realistic estimate of what can be expected when performing urban sound classification on such constrained devices. In addition, a cascade approach is proposed to combine ML techniques by exploiting embedded characteristics such as pipelined or multi-threaded execution present in current embedded devices. The accuracy of this approach is similar to that of the traditional solutions, but it additionally provides more flexibility to prioritize either accuracy or timing.
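
    As a rough illustration of the cascade idea described above, the sketch below chains a cheap classifier with a more accurate fallback that is consulted only when the first stage is unconfident. It is not the paper's implementation: the scikit-learn models, the 0.8 confidence threshold, and the synthetic features are illustrative assumptions.

```python
# Minimal two-stage cascade sketch: fast model first, accurate model as fallback.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))        # stand-in for audio features (e.g. MFCCs)
y_train = rng.integers(0, 10, size=500)     # ten urban sound classes

fast = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)        # cheap first stage
slow = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)   # accurate second stage

def cascade_predict(x, threshold=0.8):
    """Use the fast model when it is confident enough; otherwise fall back to the slow model."""
    proba = fast.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= threshold:
        return fast.classes_[np.argmax(proba)]
    return slow.predict(x.reshape(1, -1))[0]

print(cascade_predict(X_train[0]))
```

    On an embedded target, the two stages map naturally onto the pipelined or multi-threaded execution the abstract mentions.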

    TasNet: time-domain audio separation network for real-time, single-channel speech separation

    Robust speech processing in multi-talker environments requires effective speech separation. Recent deep learning systems have made significant progress toward solving this problem, yet it remains challenging, particularly in real-time, short-latency applications. Most methods attempt to construct a mask for each source in a time-frequency representation of the mixture signal, which is not necessarily an optimal representation for speech separation. In addition, time-frequency decomposition results in inherent problems such as phase/magnitude decoupling and the long time window required to achieve sufficient frequency resolution. We propose the Time-domain Audio Separation Network (TasNet) to overcome these limitations. We directly model the signal in the time domain using an encoder-decoder framework and perform the source separation on nonnegative encoder outputs. This method removes the frequency decomposition step and reduces the separation problem to the estimation of source masks on encoder outputs, which are then synthesized by the decoder. Our system outperforms the current state-of-the-art causal and noncausal speech separation algorithms, reduces the computational cost of speech separation, and significantly reduces the minimum required latency of the output. This makes TasNet suitable for applications where a low-power, real-time implementation is desirable, such as hearable and telecommunication devices. Comment: Camera-ready version for ICASSP 2018, Calgary, Canada
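
    The following is a highly simplified sketch of the time-domain encode/mask/decode idea: a learned 1-D convolutional encoder produces nonnegative outputs, per-source masks are applied to them, and a transposed convolution resynthesises each source. The single convolutional mask estimator and all layer sizes are illustrative assumptions, not the published TasNet architecture.

```python
# Toy time-domain separator: encode -> mask per source -> decode.
import torch
import torch.nn as nn

class TinyTimeDomainSeparator(nn.Module):
    def __init__(self, n_filters=64, kernel=40, stride=20, n_sources=2):
        super().__init__()
        self.n_sources = n_sources
        self.encoder = nn.Conv1d(1, n_filters, kernel, stride=stride)      # learned basis
        self.masker = nn.Conv1d(n_filters, n_filters * n_sources, 1)       # mask estimator
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel, stride=stride)

    def forward(self, mixture):                      # mixture: (batch, 1, samples)
        w = torch.relu(self.encoder(mixture))        # nonnegative encoder outputs
        masks = torch.sigmoid(self.masker(w))        # one mask per source
        masks = masks.view(masks.size(0), self.n_sources, -1, masks.size(-1))
        sources = [self.decoder(masks[:, s] * w) for s in range(self.n_sources)]
        return torch.stack(sources, dim=1)           # (batch, sources, 1, samples)

model = TinyTimeDomainSeparator()
est = model(torch.randn(4, 1, 16000))                # four one-second mixtures at 16 kHz
print(est.shape)                                     # torch.Size([4, 2, 1, 16000])
```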

    Reaction time on fencing and karate high level athletes

    The great speed of the actions in combat sports makes it very difficult to react quickly without mistakes. If fighters had more time to react, their reactions would be more accurate. This fact gives relevance to choice reaction time (CRT) studies in these kinds of sports. The importance of athletes' physical or psychological abilities varies depending on the sport played. According to the requirements of the speciality, the players who reach the maximum level will be those who have the characteristics required to compete in it. These abilities may be innate or acquired life-long through training. Previous studies have not yet confirmed in which sports reaction time is more important; in addition, previous measurements should be considered with caution because some of them included movement time in the reaction time results (Martínez de Quel, 2003). One approach to gaining further knowledge about this subject is to compare the results of experts in two or more disciplines using unspecific tests, in which previous sport experience is not required in order to perform the test

    Robot hands and extravehicular activity

    Extravehicular activity (EVA) is crucial to the success of both current and future space operations. As space operations have evolved in complexity, so has the demand placed on the EVA crewman. In addition, some NASA requirements for human capabilities at remote or hazardous sites were identified. One of the keys to performing useful EVA tasks is the ability to manipulate objects accurately, quickly, and without early or excessive fatigue. The current suit employs a glove which enables the crewman to perform grasping tasks, use tools, turn switches, and perform other tasks for short periods of time. However, the glove's bulk and resistance to motion ultimately cause fatigue. Due to this limitation, it may not be possible to meet the productivity requirements that will be placed on the EVA crewman of the future with the current or developmental Extravehicular Mobility Unit (EMU) hardware. In addition, this hardware will not meet the requirements for remote or hazardous operations. In an effort to develop ways of improving crew productivity, a contract was awarded to develop a prototype anthropomorphic robotic hand (ARH) for use with an extravehicular space suit. The first step in this program was to perform a design study which investigated the basic technology required for the development of an ARH to enhance crew performance and productivity. The design study phase of the contract and some additional development work are summarized

    Ambulance 3G

    Minimising the time required for a patient to receive primary care has always been a concern of Accident and Emergency units. Ambulances are usually the first to arrive on the scene and to administer first aid. However, as the time that it takes to transfer the patient to the hospital increases, so does the fatality rate. In this paper, a mobile teleconsultation system is presented, based primarily on third-generation mobile links and on Wi-Fi hotspots around a city. This system can be installed inside an ambulance and permits high-resolution videoconferencing between the moving vehicle and a doctor or a consultant at a base station (usually a hospital). In addition to video and voice, high-quality still images and screenshots from medical equipment can also be sent. The test was carried out in Athens, Greece, where a 3G system was recently deployed by Vodafone. The results show that the system can perform satisfactorily in most conditions and can effectively increase the patient's quality of service, while having a modest cost

    Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems

    Crowdsourcing systems commonly face the problem of aggregating multiple judgments provided by potentially unreliable workers. In addition, several aspects of the design of efficient crowdsourcing processes, such as defining workers' bonuses, fair prices and time limits of the tasks, involve knowledge of the likely duration of the task at hand. Bringing this together, in this work we introduce a new time-sensitive Bayesian aggregation method that simultaneously estimates a task's duration and obtains reliable aggregations of crowdsourced judgments. Our method, called BCCTime, builds on the key insight that the time taken by a worker to perform a task is an important indicator of the likely quality of the produced judgment. To capture this, BCCTime uses latent variables to represent the uncertainty about the workers' completion times, the tasks' durations and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labelling, such as spammers, bots or lazy labellers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labelling of each worker, (iii) the unbiased duration of each task and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods
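
    The toy function below illustrates only the core intuition, that completion times far outside a task's typical time window signal low-quality work. It uses a crude 2-sigma window and a majority vote as stand-ins; it is not the paper's Bayesian message-passing model.

```python
# Crude heuristic: discard votes whose completion time is an outlier, then majority-vote.
import numpy as np
from collections import Counter

def aggregate(judgments, times):
    """judgments: list of labels; times: seconds each worker spent on the task."""
    times = np.asarray(times, dtype=float)
    mu, sigma = times.mean(), times.std() + 1e-9
    in_window = np.abs(times - mu) <= 2 * sigma            # plausible completion times
    kept = [j for j, ok in zip(judgments, in_window) if ok] or list(judgments)
    return Counter(kept).most_common(1)[0][0]              # majority vote over kept votes

# The 1.5 s judgment (a likely spammer or bot) falls outside the window and is ignored.
print(aggregate(["A", "A", "B", "A", "A", "B"], [42.0, 55.0, 48.0, 50.0, 46.0, 1.5]))
```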

    Assessing Security Risk to a Network Using a Statistical Model of Attacker Community Competence

    We propose a novel approach for statistical risk modeling of network attacks that lets an operator perform risk analysis using a data model and an impact model on top of an attack graph in combination with a statistical model of the attacker community's exploitation skill. The data model describes how data flows between nodes in the network -- how it is copied and processed by software and hosts -- while the impact model describes how exploitation of vulnerabilities affects the data flows with respect to the confidentiality, integrity and availability of the data. In addition, by assigning a loss value to a compromised data set, we can estimate the cost of a successful attack. The statistical model lets us incorporate real-time monitoring data from a honeypot in the risk calculation. The exploitation skill distribution is inferred by first classifying each vulnerability into a required exploitation skill-level category, then mapping each skill level into a distribution over the required exploitation skill, and finally applying Bayesian inference over the attack data. The final security risk is then computed by marginalizing over the exploitation skill
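
    A rough sketch of the final marginalization step is given below. The discrete skill levels, their probabilities, the per-vulnerability exploitation probabilities and the loss value are made-up numbers used only to show the expected-loss calculation, not values from the paper.

```python
# Marginalize exploit probability over a discrete attacker-skill distribution,
# then turn it into an expected loss using the value of the compromised data set.
skill_dist = {"low": 0.5, "medium": 0.35, "high": 0.15}          # inferred skill distribution
p_exploit_given_skill = {"low": 0.05, "medium": 0.40, "high": 0.90}  # one vulnerability on the path
loss_if_compromised = 250_000.0                                   # value of the compromised data set

p_exploit = sum(p * p_exploit_given_skill[s] for s, p in skill_dist.items())
expected_loss = p_exploit * loss_if_compromised

print(f"P(exploit) = {p_exploit:.3f}, expected loss = {expected_loss:,.0f}")
```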

    Artificial intelligence in steam cracking modeling : a deep learning algorithm for detailed effluent prediction

    Chemical processes can benefit tremendously from fast and accurate effluent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning into these fields, substantial economic and environmental gains can be achieved. The bottleneck for high-frequency optimization and process control is often the time necessary to perform the required detailed analyses of, for example, feed and product. To resolve these issues, a framework of four deep learning artificial neural networks (DL ANNs) has been developed for the largest chemicals production process: steam cracking. The proposed methodology allows both a detailed characterization of a naphtha feedstock and a detailed composition of the steam cracker effluent to be determined, based on a limited number of commercial naphtha indices and rapidly accessible process characteristics. The detailed characterization of a naphtha is predicted from three points on the boiling curve and a paraffins, iso-paraffins, olefins, naphthenes, and aromatics (PIONA) characterization. If unavailable, the boiling points are also estimated. Even with estimated boiling points, the developed DL ANN outperforms several established methods such as maximization of Shannon entropy and traditional ANNs. For feedstock reconstruction, a mean absolute error (MAE) of 0.3 wt% is achieved on the test set, while the MAE of the effluent prediction is 0.1 wt%. When combining all networks, using the output of the previous as input to the next, the effluent MAE increases to 0.19 wt%. In addition to the high accuracy of the networks, a major benefit is the negligible computational cost required to obtain the predictions. On a standard Intel i7 processor, predictions are made in the order of milliseconds. Commercial software such as COILSIM1D performs slightly better in terms of accuracy, but the required central processing unit time per reaction is in the order of seconds. This tremendous speed-up and minimal accuracy loss make the presented framework highly suitable for the continuous monitoring of difficult-to-access process parameters and for the envisioned high-frequency real-time optimization (RTO) strategy or process control. Nevertheless, the lack of a fundamental basis implies that fundamental understanding is almost completely lost, which is not always well accepted by the engineering community. In addition, the performance of the developed networks drops significantly for naphthas that are highly dissimilar to those in the training set. (C) 2019 THE AUTHORS. Published by Elsevier LTD on behalf of Chinese Academy of Engineering and Higher Education Press Limited Company
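
    The sketch below shows the chaining idea in schematic form: one feed-forward network maps commercial indices to a detailed feed composition, and its output, together with process conditions, is fed to a second network that predicts the effluent. The layer sizes, feature counts and PyTorch layers are arbitrary placeholders, not the published DL ANN framework.

```python
# Schematic chained-network pipeline: indices -> detailed feed -> effluent.
import torch
import torch.nn as nn

n_indices, n_feed_components, n_effluent_species = 8, 50, 30

feed_net = nn.Sequential(                    # commercial indices -> detailed feed composition
    nn.Linear(n_indices, 64), nn.ReLU(), nn.Linear(64, n_feed_components), nn.Softmax(dim=-1)
)
effluent_net = nn.Sequential(                # detailed feed + process conditions -> effluent
    nn.Linear(n_feed_components + 3, 64), nn.ReLU(), nn.Linear(64, n_effluent_species)
)

indices = torch.randn(1, n_indices)          # e.g. PIONA fractions and boiling points
conditions = torch.randn(1, 3)               # e.g. outlet temperature, dilution, pressure
feed = feed_net(indices)                     # predicted detailed naphtha composition
effluent = effluent_net(torch.cat([feed, conditions], dim=-1))
print(feed.shape, effluent.shape)            # torch.Size([1, 50]) torch.Size([1, 30])
```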

    Electromagnetic and mechanical characterisation of ITER CS-MC conductors affected by transverse cyclic loading, part 1: coupling current loss

    The magnetic field generated by a coil acts on the cable, which results in a transverse force on the strands. This affects the interstrand contact resistances (Rc), the coupling current loss and the current redistribution during field changes. A special cryogenic press has been built to study the mechanical and electrical properties of full-size ITER conductor samples under transverse mechanical loading. The cryogenic press can transmit a variable (cyclic) force of up to 650 kN/m to a conductor section of 400 mm length at 4.2 K. The jacket is partly opened in order to transmit the force directly onto the cable. In addition, a superconducting dipole coil provides the magnetic field required to perform magnetisation measurements using pick-up coils. The various Rc's between strands selected from different positions inside the cable have been studied. The coupling loss time constants (nτ) during and after loading are verified for the Nb3Sn, 45 kA, 10 and 13 T, ITER Model Coil conductors. A summary of the results obtained with up to several tens of full loading cycles is presented. A significant decrease of the cable nτ after several cycles is observed. The values of the nτ's are discussed with respect to the Rc measurements and a multiple time constant (MTC) model