
    Fusing Data Processing in the Construction of Machine Vision Systems in Robotic Complexes

    The development of machine vision systems is based on the analysis of visual information recorded by sensitive matrices. This information is most often distorted by interfering factors represented by a noise component. Common causes of the noise include imperfect sensors, dust and aerosols, the ADCs used, electromagnetic interference, and others. The presence of these noise components reduces the quality of subsequent analysis. To implement systems that can operate in the presence of noise, a new approach has been proposed that allows parallel processing of data obtained in various electromagnetic ranges. The primary area of application of the approach is machine vision systems used in complex robotic cells. The use of additional data obtained by a group of sensors allows the formation of arrays of useful information that provide successful optimization of operations. A set of test data shows the applicability of the proposed approach to combined images in machine vision systems.
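    The parallel processing of data from several electromagnetic ranges described in this abstract can be illustrated with a minimal fusion sketch. All names, the weighted-average fusion rule, and the example weights below are illustrative assumptions, not the paper's actual method:

    ```python
    import numpy as np

    def fuse_channels(images, weights=None):
        """Fuse co-registered single-channel images from different
        electromagnetic ranges by weighted averaging (a simple
        illustrative fusion rule; the paper's method is not specified here)."""
        # Stack the per-range frames into one (n_ranges, H, W) array
        stack = np.stack([img.astype(np.float64) for img in images])
        if weights is None:
            weights = np.ones(len(images))  # equal weight per sensor by default
        weights = np.asarray(weights, dtype=np.float64)
        weights = weights / weights.sum()   # normalize so the output stays in range
        # Weighted sum over the sensor axis
        return np.tensordot(weights, stack, axes=1)

    # Example: fuse a visible-light frame with an infrared frame (toy data)
    visible = np.random.rand(4, 4)
    infrared = np.random.rand(4, 4)
    fused = fuse_channels([visible, infrared], weights=[0.6, 0.4])
    assert fused.shape == (4, 4)
    ```

    Weighted averaging is the simplest fusion rule; it assumes the frames are already co-registered, which in a real robotic cell would require a calibration step between the sensors.
    
    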

    Tactical and psychological features of interrogating suspects with interpreter’s participation

    The work aims to consider issues related to the peculiarities of interrogating suspects with the participation of an interpreter in the investigation of crimes during the preliminary investigation. The specificity of this situation is determined by the fact that the criminal procedural legislation of the Russian Federation provides that a person who does not speak the language of legal proceedings, or does not have a sufficient command of that language, has the right to use the services of an interpreter free of charge. At the same time, the tactical recommendations for interrogation that exist in forensic science were developed for a situation in which the subject of law enforcement and the interrogated person communicate in the same language. In addition, a significant difficulty in interrogation is the terminology itself, related to various spheres of human activity, which is as important for an interpreter as the ability to translate. The method of achieving the stated goal is to compare the tactics typical for situations of monolingualism and of multilingualism among participants in criminal proceedings. The article deals with the organizational, tactical, and psychological features of interrogating a suspect: the characteristics of pre-interrogation situations, interrogation tactics, and the features of presenting evidence in order to obtain truthful testimony. The article shows the significant differences in interrogating a suspect with the participation of an interpreter in the investigation of crimes.

    Control System of Collaborative Robotic Based on the Methods of Contactless Recognition of Human Actions

    Human-robot collaboration is a key concept in modern intelligent manufacturing. Traditional human-robot interfaces are quite difficult to control and require additional operator training. The development of an intuitive and natural user interface is important for unobstructed interaction between human and robot in production. The control system for collaborative robotics described in the work is focused on increasing productivity, ensuring safety and ergonomics, and minimizing the cognitive workload of the operator in the process of human-robot interaction using contactless recognition of human actions. The system uses elements of technical vision to obtain input data from the user in the form of gesture commands. As a set of commands for controlling collaborative robotic complexes and for training the method proposed in the work, we use the actions from the UTD-MHAD database. The gesture recognition method is based on deep learning technology. An artificial neural network extracts the skeleton joints of the human and describes their positions relative to each other and to the center of gravity of the whole skeleton. The resulting descriptors are fed to the input of the classifier, where assignment to a specific class occurs. This approach reduces the error arising from redundancy of the data fed to the input of the neural network.
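    The descriptor described in this abstract, joint positions taken relative to each other and to the skeleton's center of gravity, can be sketched as follows. The function name, the use of the joint mean as the center of gravity, and the choice of pairwise Euclidean distances are all assumptions for illustration; the paper's exact feature construction is not given in the abstract:

    ```python
    import numpy as np

    def skeleton_descriptor(joints):
        """Build a feature vector from skeleton joints: coordinates
        relative to the center of gravity, plus pairwise joint
        distances (hypothetical sketch of the described descriptor).

        joints: (N, 3) array of 3D joint coordinates.
        Returns a 1-D feature vector of length 3*N + N*(N-1)/2.
        """
        joints = np.asarray(joints, dtype=np.float64)
        center = joints.mean(axis=0)        # center of gravity of the skeleton
        relative = joints - center          # each joint relative to the center
        # Pairwise Euclidean distances between joints (upper triangle only,
        # so each pair is counted once)
        diffs = joints[:, None, :] - joints[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        iu = np.triu_indices(len(joints), k=1)
        return np.concatenate([relative.ravel(), dists[iu]])

    # A toy 3-joint "skeleton"
    desc = skeleton_descriptor([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
    assert desc.shape == (3 * 3 + 3,)  # 9 relative coordinates + 3 pairwise distances
    ```

    Such a descriptor is translation-invariant by construction (subtracting the center of gravity removes the skeleton's absolute position), which is one way a fixed-size feature vector can reduce input redundancy before classification.
    
    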
