A survey and tutorial of electromagnetic radiation and reduction in mobile communication systems
This paper provides a survey and tutorial of electromagnetic (EM) radiation exposure and reduction in mobile communication systems. EM radiation exposure has received a fair share of interest in the literature; however, this work is one of the first to compile the most interesting results and ideas related to EM exposure in mobile communication systems and to present possible ways of reducing it. We provide a comprehensive survey of the existing literature and also offer a tutorial on dosimetry, metrics, and international projects, as well as on the guidelines and limits on exposure to EM radiation in mobile communication systems. Based on this survey, and given that EM radiation exposure is closely linked with specific absorption rate (SAR) and transmit power usage, we propose possible techniques for reducing EM radiation exposure in mobile communication systems by exploring known concepts related to SAR and transmit power reduction in mobile systems. Thus, this paper serves as an introductory guide to EM radiation exposure in mobile communication systems and provides insights toward the design of future low-EM-exposure mobile communication networks.
Electromagnetic emission-aware schedulers for the uplink of OFDM wireless communication systems
The popularity and convergence of wireless communications have resulted in continuous network upgrades to support the increasing demand for bandwidth. However, given that wireless communication systems operate on radio-frequency waves, the health effects of electromagnetic (EM) emission from these systems are an increasing concern due to the ubiquity of mobile communication devices. To address these concerns, we propose two schemes (offline and online) for minimizing the EM emission of users in the uplink of OFDM systems while maintaining an acceptable quality of service. We formulate our offline EM reduction scheme as a convex optimization problem and solve it through water-filling. This is based on the assumption that the long-term channel state information of all the users is known. Given that, in practice, long-term channel state information of all the users is not always available, we propose our online EM emission reduction scheme, which is based on minimizing the instantaneous transmit energy per bit of each user. Simulation results show that both our proposed schemes significantly reduce EM emission when compared to the benchmark classic greedy spectral-efficiency-based scheme and an energy-efficiency-based scheme. Furthermore, our offline scheme proves to be very robust against channel prediction errors.
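The offline scheme's solution method is classic water-filling over parallel channels. A minimal sketch of that allocation pattern, assuming known gain-to-noise ratios per subcarrier (function and variable names are illustrative, not the paper's notation):

```python
import numpy as np

def water_filling(gains, total_power):
    """Classic water-filling power allocation over parallel channels.

    gains: per-channel gain-to-noise ratios g_k; the allocation is
    p_k = max(mu - 1/g_k, 0), with the water level mu chosen so that
    sum(p_k) equals total_power.
    """
    g = np.asarray(gains, dtype=float)
    inv = np.sort(1.0 / g)                    # inverse gains, best channel first
    # Shrink the active set until every active channel gets positive power.
    for k in range(len(inv), 0, -1):
        mu = (total_power + inv[:k].sum()) / k
        if mu > inv[k - 1]:
            break
    return np.maximum(mu - 1.0 / g, 0.0)

# Example: three subcarriers, unit power budget.
p = water_filling([2.0, 1.0, 0.5], 1.0)
print(p)  # the weakest channel (g = 0.5) receives no power
```

The same routine applies whether the objective is rate maximization or, as in the abstract, EM-emission reduction; only the interpretation of the gains and the budget changes.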
Three years of experience with the STELLA robotic observatory
Since May 2006, the two STELLA robotic telescopes at the Izana observatory in
Tenerife, Spain, delivered an almost uninterrupted stream of scientific data.
To achieve such a high level of autonomous operation, the replacement of all
troubleshooting skills of a regular observer in software was required. Care
must be taken on error handling issues and on robustness of the algorithms
used. In the current paper, we summarize the approaches we followed in the
STELLA observatory.
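Replacing an observer's troubleshooting skills in software amounts to pairing every recoverable fault with an automated recovery routine before retrying. A sketch of that pattern, with names and structure purely illustrative, not STELLA's actual code:

```python
import time

def autonomous_retry(action, recoveries, attempts=3, delay=1.0):
    """Run an action unattended: on a known fault, execute the matching
    recovery routine (re-home, reconnect, ...) and retry, instead of
    waiting for a human observer; unknown faults are escalated."""
    for _ in range(attempts):
        try:
            return action()
        except Exception as exc:
            recover = recoveries.get(type(exc))
            if recover is None:
                raise                      # unknown fault: escalate
            recover()                      # automated troubleshooting step
            time.sleep(delay)
    raise RuntimeError("action failed after %d attempts" % attempts)

# Hypothetical camera action that times out once, then succeeds.
calls = []
def flaky_exposure():
    calls.append(1)
    if len(calls) < 2:
        raise TimeoutError("camera not responding")
    return "frame"

result = autonomous_retry(flaky_exposure, {TimeoutError: lambda: None}, delay=0.0)
print(result)  # "frame" — recovered after one automated retry
```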
Impact of Sugarcane Delivery Schedule on Product Value at Raw Sugar Factories
Conversion to combine harvesters has resulted in Louisiana sugarcane growers delivering a more perishable product to raw sugar factories. Dextran formation increases as the time between harvest and milling is extended. Milling freshly cut sugarcane reduces the formation of dextran and the associated economic losses. One approach available to factories to reduce dextran formation is to extend the harvested-sugarcane delivery schedule to the mill. A simulation model was developed to evaluate alternative delivery schedules at raw sugar factories. Economic losses in product value associated with dextran formation were estimated and compared for various extended delivery schedules.
Keywords: dextran, milling, product value, raw sugar factories, scheduling, sugarcane industry, Crop Production/Industries, Marketing, Production Economics
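The core of such a simulation can be sketched as a Monte Carlo estimate of product-value loss as a function of cut-to-mill delay. The exponential dextran-growth rate and linear loss factor below are illustrative assumptions, not figures from the study:

```python
import random

def mean_loss(max_delay_hours, hourly_growth=0.05, loss_per_unit=0.01, runs=1000):
    """Monte Carlo sketch: average product-value loss for cane whose
    cut-to-mill delay is uniform on [0, max_delay_hours].

    Assumes dextran grows exponentially with delay (rate hourly_growth)
    and that value loss is linear in the dextran level (loss_per_unit).
    """
    random.seed(0)                                  # reproducible draws
    total = 0.0
    for _ in range(runs):
        delay = random.uniform(0.0, max_delay_hours)
        dextran = (1.0 + hourly_growth) ** delay - 1.0
        total += loss_per_unit * dextran
    return total / runs

# Comparing schedules: shorter cut-to-mill delays mean smaller losses.
print(mean_loss(8.0) < mean_loss(24.0))  # True
```

A real model would replace the uniform delay with a schedule-dependent queueing distribution, which is where the alternative delivery schedules enter.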
SANTO: Social Aerial NavigaTion in Outdoors
In recent years, advances in remote connectivity, miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone, or unmanned aerial vehicle, notably the quadrotor. Although until recently they were not used for commercial applications, their inherent potential for tasks where small, intelligent devices are needed is huge. However, while the integrated hardware has advanced exponentially, the software used for these applications has not yet been exploited to the same extent. Recently, this shift has become visible in the improvement of common robotics tasks such as object tracking and autonomous navigation. Moreover, these challenges grow when the dynamic nature of the real world is taken into account, where knowledge of the current environment is constantly changing. These settings are considered in the improvement of human-robot interaction, where the potential use of these devices is clear and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, the behavior of the human brain is simulated by so-called neural networks, such that the computing system performs as similarly as possible to a human. To this end, the system learns by trial and error, which, much like human learning, requires a considerable set of prior experiences for the algorithm to retain the desired behavior. Applying these technologies to human-robot interaction narrows the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes into curating a high-quality dataset, to ensure that the learning process is optimal and no wrong actions are retained.
It is therefore essential to have a development platform in place that enforces these principles throughout the whole process of creating and optimizing the algorithm. In this work, several existing handicaps found in pipelines of this computational scale are exposed, and each is approached independently and simply, so that the proposed solutions can be leveraged by as many workflows as possible. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, it addresses the shortage of integrated development systems for this kind of pipeline, with special attention to those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines. Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for this task is limited. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
Cooperation and Competition when Bidding for Complex Projects: Centralized and Decentralized Perspectives
To successfully complete a complex project, be it a construction of an
airport or of a backbone IT system, agents (companies or individuals) must form
a team having required competences and resources. A team can be formed either
by the project issuer based on individual agents' offers (centralized
formation); or by the agents themselves (decentralized formation) bidding for a
project as a consortium---in that case many feasible teams compete for the
contract. We investigate rational strategies of the agents (what salary should
they ask? with whom should they team up?). We propose concepts to characterize
the stability of the winning teams and study their computational complexity.
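In the centralized setting, the issuer's team-selection step is essentially a weighted set-cover problem over the agents' offers. A brute-force sketch on a hypothetical instance (agents, competences, and salaries invented for illustration):

```python
from itertools import combinations

# Hypothetical instance: each agent offers a set of competences at an asking salary.
agents = {
    "a": ({"design", "build"}, 5),
    "b": ({"build"}, 2),
    "c": ({"design"}, 2),
    "d": ({"certify"}, 3),
}
required = {"design", "build", "certify"}

def cheapest_team(agents, required):
    """Centralized formation sketch: the issuer picks, by brute force, the
    cheapest subset of agents whose combined competences cover the
    project's requirements (a weighted set-cover instance)."""
    best, best_cost = None, float("inf")
    names = list(agents)
    for r in range(1, len(names) + 1):
        for team in combinations(names, r):
            skills = set().union(*(agents[n][0] for n in team))
            cost = sum(agents[n][1] for n in team)
            if required <= skills and cost < best_cost:
                best, best_cost = team, cost
    return best, best_cost

team, cost = cheapest_team(agents, required)
print(team, cost)  # the three specialists undercut the generalist "a"
```

The decentralized case, where feasible teams bid as consortia and salaries are strategic, is what makes the stability analysis in the paper nontrivial.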
Towards a Mini-App for Smoothed Particle Hydrodynamics at Exascale
The smoothed particle hydrodynamics (SPH) technique is a purely Lagrangian
method, used in numerical simulations of fluids in astrophysics and
computational fluid dynamics, among many other fields. SPH simulations with
detailed physics represent computationally-demanding calculations. The
parallelization of SPH codes is not trivial due to the absence of a structured
grid. Additionally, the performance of the SPH codes can be, in general,
adversely impacted by several factors, such as multiple time-stepping,
long-range interactions, and/or boundary conditions. This work presents
insights into the current performance and functionalities of three SPH codes:
SPHYNX, ChaNGa, and SPH-flow. These codes are the starting point of an
interdisciplinary co-design project, SPH-EXA, for the development of an
Exascale-ready SPH mini-app. To gain such insights, a rotating square patch
test was implemented as a common test simulation for the three SPH codes and
analyzed on two modern HPC systems. Furthermore, to stress the differences with
the codes stemming from the astrophysics community (SPHYNX and ChaNGa), an
additional test case, the Evrard collapse, has also been carried out. This work
extrapolates the common basic SPH features in the three codes for the purpose
of consolidating them into a pure-SPH, Exascale-ready, optimized, mini-app.
Moreover, the outcome of this serves as direct feedback to the parent codes, to
improve their performance and overall scalability.
Comment: 18 pages, 4 figures, 5 tables, 2018 IEEE International Conference on Cluster Computing proceedings for WRAp1
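The basic SPH features shared by the three codes start from a smoothing kernel and a kernel-weighted density sum. A minimal sketch of the standard 3D cubic-spline kernel and the density estimate (not code from SPHYNX, ChaNGa, or SPH-flow):

```python
import math

def cubic_spline_kernel(r, h):
    """Standard M4 cubic-spline SPH smoothing kernel in 3D.

    W(r, h) = (1 / (pi h^3)) * piecewise cubic in q = r / h,
    with compact support q < 2.
    """
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def density(i, positions, masses, h):
    """SPH density at particle i: rho_i = sum_j m_j W(|x_i - x_j|, h).

    The O(N) loop over all particles is the naive version; real codes
    restrict j to neighbours within 2h, which is exactly the step made
    hard by the absence of a structured grid.
    """
    xi = positions[i]
    rho = 0.0
    for xj, mj in zip(positions, masses):
        rho += mj * cubic_spline_kernel(math.dist(xi, xj), h)
    return rho
```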
A Methodology to Engineer and Validate Dynamic Multi-level Multi-agent Based Simulations
This article proposes a methodology to model and simulate complex systems,
based on IRM4MLS, a generic agent-based meta-model able to deal with
multi-level systems. This methodology permits the engineering of dynamic
multi-level agent-based models, to represent complex systems over several
scales and domains of interest. Its goal is to simulate a phenomenon using
dynamically the lightest representation to save computer resources without loss
of information. This methodology is based on two mechanisms: (1) the activation
or deactivation of agents representing different domain parts of the same
phenomenon and (2) the aggregation or disaggregation of agents representing the
same phenomenon at different scales.
Comment: Presented at 3rd International Workshop on Multi-Agent Based Simulation, Valencia, Spain, 5th June 201
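The two mechanisms can be sketched as a toy aggregation/disaggregation of agents, where a coarse agent replaces several fine-scale ones while conserving their total state (the even split on disaggregation is a modeling assumption for the sketch, not part of IRM4MLS):

```python
class Agent:
    """Minimal agent carrying a scalar state (some local quantity of interest)."""
    def __init__(self, state):
        self.state = state
        self.active = True

def aggregate(agents):
    """Mechanism (2), coarsening: deactivate several fine-scale agents and
    replace them with one coarse agent carrying their combined state."""
    for a in agents:
        a.active = False
    return Agent(sum(a.state for a in agents))

def disaggregate(coarse, n):
    """Mechanism (2), refinement: deactivate the coarse agent and restore
    n fine agents, here splitting its state evenly."""
    coarse.active = False
    return [Agent(coarse.state / n) for _ in range(n)]

fine = [Agent(1.0), Agent(2.0), Agent(3.0)]
coarse = aggregate(fine)
print(coarse.state)                 # 6.0 — total state is conserved across levels
back = disaggregate(coarse, 3)
print(sum(a.state for a in back))   # 6.0
```

Mechanism (1), switching between domain representations of the same phenomenon, follows the same activate/deactivate pattern; the saving comes from simulating only the active agents at each step.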
On human motion prediction using recurrent neural networks
Human motion modelling is a classical problem at the intersection of graphics
and computer vision, with applications spanning human-computer interaction,
motion synthesis, and motion prediction for virtual and augmented reality.
Following the success of deep learning methods in several computer vision
tasks, recent work has focused on using deep recurrent neural networks (RNNs)
to model human motion, with the goal of learning time-dependent representations
that perform tasks such as short-term motion prediction and long-term human
motion synthesis. We examine recent work, with a focus on the evaluation
methodologies commonly used in the literature, and show that, surprisingly,
state-of-the-art performance can be achieved by a simple baseline that does not
attempt to model motion at all. We investigate this result, and analyze recent
RNN methods by looking at the architectures, loss functions, and training
procedures used in state-of-the-art approaches. We propose three changes to the
standard RNN models typically used for human motion, which result in a simple
and scalable RNN architecture that obtains state-of-the-art performance on
human motion prediction.
Comment: Accepted at CVPR 1
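The kind of baseline the abstract describes, one that does not attempt to model motion at all, can be sketched as a zero-velocity predictor that simply repeats the last observed pose (the array shapes here are illustrative):

```python
import numpy as np

def zero_velocity_baseline(observed, horizon):
    """Predict that the pose stays frozen at the last observed frame.

    observed: array of shape (T, D) — T past frames of a D-dimensional pose.
    Returns an array of shape (horizon, D) with the last frame repeated.
    """
    last = observed[-1]
    return np.tile(last, (horizon, 1))

# Toy usage: 10 observed frames of a 54-dim pose, predict 25 future frames.
past = np.random.randn(10, 54)
pred = zero_velocity_baseline(past, 25)
print(pred.shape)  # (25, 54)
```

That such a predictor is competitive on short horizons is what motivates the paper's scrutiny of evaluation methodology for RNN-based models.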