Human Error Management Paying Emphasis on Decision Making and Social Intelligence -Beyond the Framework of Man-Machine Interface Design-
How a latent error or violation induces a serious accident is reviewed, and a countermeasure is proposed within the framework of decision making, emotional intelligence (EI), and social intelligence (SI) of an organization and its members. It is clarified that EI and SI play an important role in decision making. Violations occur frequently all over the world even though we clearly understand that we should not commit them, and a key to preventing them may lie in enhancing both social intelligence and reliability. Constructing a social structure or system that supports organizational efforts to enhance both social intelligence and reliability would be essential. Traditional safety education holds that attitudes toward safety can be changed by means of education. In spite of this, accidents and scandals occur frequently and never decrease. These problems must be approached on the basis of a full understanding of social intelligence and of bounded rationality in decision making. The social dilemma (we do not necessarily cooperate despite understanding its importance, and we sometimes decide against cooperative behavior; non-cooperation yields a desirable result for an individual, but if all act non-cooperatively, an undesirable result is ultimately induced for all) must be solved in some way, and the transition from a relief (closed) society to a global (reliability) society must be realized as a whole. A new social system in which cooperative relations can be easily and reliably established must be constructed to support such an approach and prevent violation-based accidents.
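The social dilemma defined in this abstract can be made concrete with a standard prisoner's-dilemma payoff matrix. This is a textbook illustration with hypothetical payoff numbers, not material from the paper itself:

```python
# Illustrative prisoner's-dilemma payoffs (hypothetical numbers):
# each player chooses to cooperate ("C") or defect ("D").
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation: good for both
    ("C", "D"): (0, 5),  # the defector free-rides on the cooperator
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: worst collective outcome
}

def best_response(opponent_action):
    """Return the action that maximizes one's own payoff
    against a fixed opponent action."""
    return max(("C", "D"), key=lambda a: PAYOFF[(a, opponent_action)][0])

# Defection is individually optimal whatever the other player does...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection pays each player less than mutual cooperation,
# which is exactly the dilemma the abstract describes.
assert PAYOFF[("D", "D")][0] < PAYOFF[("C", "C")][0]
```

The structure explains why education alone may not suffice: as long as the individual incentive favors non-cooperation, violations remain individually rational.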
Cooperation robots
Nowadays, various robots are built to perform multiple tasks, and having multiple robots
work together on a single task is becoming important. One key requirement for multiple
robots to work together is that a robot must be able to follow another robot. This project is
mainly concerned with the design and construction of robots that can follow a line. It
focuses on building two line-following robots, a leader and a slave. Both robots
follow the line and carry a load. A single robot is limited in its load-handling capacity:
it cannot handle heavy loads or long loads. An easy way to overcome this limitation is
to have a group of mobile robots work together to accomplish an aim that
no single robot can achieve alone.
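The line-following behavior both robots need can be sketched as a proportional controller over a reflectance-sensor error. The two-sensor layout and the gain value below are assumptions for illustration, not details taken from the project:

```python
def line_follow_step(left_sensor, right_sensor, base_speed=0.5, kp=0.8):
    """One control step of a two-sensor line follower.

    Sensors return reflectance in [0, 1]; with a dark line on a light
    floor, a higher reading means that sensor has drifted off the line.
    Returns (left_motor, right_motor) speed commands.
    """
    # Positive error: left sensor is off the line, so steer left
    # by slowing the left wheel and speeding up the right wheel.
    error = left_sensor - right_sensor
    correction = kp * error
    return (base_speed - correction, base_speed + correction)

# Centered on the line: both sensors read the same, drive straight.
assert line_follow_step(0.2, 0.2) == (0.5, 0.5)
```

In a leader/slave pair, the leader would run this loop directly, while the slave could combine the same line error with a range measurement to the leader to maintain spacing while sharing the load.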
Energy Efficient and Reliable ARQ Scheme (ER-ACK) for Mission Critical M2M/IoT Services
Wireless sensor networks (WSNs) are the main infrastructure for machine-to-machine (M2M) communication and the Internet of Things (IoT). Since various sophisticated M2M/IoT services have their own quality-of-service (QoS) requirements, reliable data transmission in WSNs is becoming more important. However, WSNs have strict resource constraints, and the crowded wireless spectrum results in a high collision probability. Therefore, a more efficient data delivery scheme that minimizes both transmission delay and energy consumption is required. This paper proposes an energy-efficient and reliable ARQ scheme, called energy efficient and reliable ACK (ER-ACK), that minimizes transmission delay and energy consumption at the same time. The proposed scheme has three advantages over legacy ARQ schemes such as ACK, NACK, and implicit ACK (I-ACK): it consumes less energy than ACK, has a smaller transmission delay than NACK, and prevents the duplicated-retransmission problem of I-ACK. In addition, resource-considered reliability (RCR) is suggested to quantify the improvement of the proposed scheme, and mathematical analyses of the transmission delay and energy consumption are also presented. The simulation results show that the ER-ACK scheme achieves high RCR by significantly reducing transmission delay and energy consumption.
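The trade-off the abstract describes can be illustrated with textbook ARQ accounting: under an independent loss probability p, a stop-and-wait sender needs 1/(1-p) data transmissions per delivered packet on average, and the schemes differ mainly in how many control frames they add. The energy units below are hypothetical and this is standard ARQ arithmetic, not the authors' ER-ACK model:

```python
def expected_transmissions(p_loss):
    """Mean data transmissions per delivered packet under
    independent loss probability p_loss (stop-and-wait ARQ)."""
    return 1.0 / (1.0 - p_loss)

def ack_scheme_energy(p_loss, n_packets, e_data=1.0, e_ctrl=0.2):
    """Per-packet ACK: one control frame answers every data frame
    (hypothetical per-frame energy units e_data, e_ctrl)."""
    tx = expected_transmissions(p_loss) * n_packets
    return tx * e_data + tx * e_ctrl

def nack_scheme_energy(p_loss, n_packets, e_data=1.0, e_ctrl=0.2):
    """NACK: control frames are sent only for the expected losses,
    at the cost of the receiver waiting to detect a gap (more delay)."""
    tx = expected_transmissions(p_loss) * n_packets
    lost = tx - n_packets
    return tx * e_data + lost * e_ctrl

# With 10% loss over 100 packets, NACK spends less energy on
# control traffic than per-packet ACK, matching the abstract's premise.
assert nack_scheme_energy(0.1, 100) < ack_scheme_energy(0.1, 100)
```

ER-ACK, per the abstract, aims between these extremes: below ACK's control-frame energy while avoiding NACK's detection delay and I-ACK's duplicated retransmissions.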
Considering Human Aspects on Strategies for Designing and Managing Distributed Human Computation
A human computation system can be viewed as a distributed system in which the
processors are humans, called workers. Such systems harness the cognitive power
of a group of workers connected to the Internet to execute relatively simple
tasks, whose solutions, once grouped, solve a problem that systems equipped
with only machines could not solve satisfactorily. Examples of such systems are
Amazon Mechanical Turk and the Zooniverse platform. A human computation
application comprises a group of tasks, each of which can be performed by one
worker. Tasks might have dependencies on one another. In this study, we
propose a theoretical framework to analyze this type of application from a
distributed systems point of view. Our framework is established on three
dimensions that represent different perspectives in which human computation
applications can be approached: quality-of-service requirements, design and
management strategies, and human aspects. Using this framework, we review
human computation from the perspective of programmers seeking to improve the
design of human computation applications and managers seeking to increase the
effectiveness of human computation infrastructures in running such
applications. In doing so, besides integrating and organizing what has been
done in this direction, we also put into perspective the fact that the human
aspects of the workers in such systems introduce new challenges in terms of,
for example, task assignment, dependency management, and fault prevention and
tolerance. We discuss how these challenges relate to distributed systems and
other areas of knowledge.
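The dependency management challenge mentioned above can be sketched as a scheduler that dispatches a task to a worker only after its prerequisites finish. This is an illustrative toy using Kahn's topological sort, not a mechanism from the paper:

```python
from collections import deque

def ready_order(deps):
    """deps maps task -> set of prerequisite tasks.
    Returns an order in which each task becomes assignable to a
    worker only after all its prerequisites are done."""
    remaining = {t: set(d) for t, d in deps.items()}
    dependents = {t: set() for t in remaining}
    for task, prereqs in remaining.items():
        for p in prereqs:
            dependents[p].add(task)
    queue = deque(t for t, prereqs in remaining.items() if not prereqs)
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for follower in dependents[task]:
            remaining[follower].discard(task)
            if not remaining[follower]:
                queue.append(follower)  # all prerequisites finished
    if len(order) != len(dependents):
        raise ValueError("cyclic dependencies: no valid schedule")
    return order

# Hypothetical application: two labeling tasks feed an aggregation task.
tasks = {"label_a": set(), "label_b": set(),
         "aggregate": {"label_a", "label_b"}}
order = ready_order(tasks)
assert order.index("aggregate") > order.index("label_a")
```

Human aspects enter exactly here: unlike machine processors, workers may abandon or botch a task, so a real assignment strategy must also handle re-dispatch and answer aggregation.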
NeuTM: A Neural Network-based Framework for Traffic Matrix Prediction in SDN
This paper presents NeuTM, a framework for network Traffic Matrix (TM)
prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM
RNNs). TM prediction is defined as the problem of estimating the future network
traffic matrix from previously observed network traffic data. It is
widely used in network planning, resource management and network security. Long
Short-Term Memory (LSTM) is a specific recurrent neural network (RNN)
architecture that is well-suited to learn from data and classify or predict
time series with time lags of unknown size. LSTMs have been shown to model
long-range dependencies more accurately than conventional RNNs. NeuTM is an
LSTM-RNN-based framework for predicting TMs in large networks. By validating our
framework on real-world data from the GÉANT network, we show that our model
converges quickly and gives state-of-the-art TM prediction performance.
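The prediction problem as defined here, estimating the next traffic matrix from a window of past ones, reduces to supervised sequence learning. The sketch below builds that windowed dataset on synthetic data and fits a linear least-squares baseline standing in for the LSTM; it illustrates the problem framing only, not NeuTM's model:

```python
import numpy as np

def make_windows(tm_series, window):
    """tm_series: array of shape (T, n_flows), one flattened traffic
    matrix per time step. Returns (X, y) where each X[i] holds
    `window` consecutive matrices and y[i] is the matrix that follows."""
    X = np.stack([tm_series[i:i + window].ravel()
                  for i in range(len(tm_series) - window)])
    y = tm_series[window:]
    return X, y

rng = np.random.default_rng(0)
T, n_flows, window = 200, 9, 5        # e.g. a 3-node network -> 9 flows
# Synthetic, roughly diurnal traffic (a stand-in for real TM data).
t = np.arange(T)[:, None]
series = (1.0 + 0.5 * np.sin(2 * np.pi * t / 24)
          + 0.05 * rng.standard_normal((T, n_flows)))

X, y = make_windows(series, window)
# Linear least-squares baseline: predict the next TM from the window.
W, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ W
assert pred.shape == y.shape == (T - window, n_flows)
```

An LSTM would consume the same (X, y) pairs but keep the window as a sequence, letting it capture the long-range temporal dependencies the abstract credits it with.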
Unclassified information list, 12-16 September 1966
Book and document information list - astrophysics, atmospherics, biology, nuclear physics, missile technology, navigation, electronics, chemistry, materials, mathematics, and other topics.