Secondary Neutron and Photon Dose in Proton Therapy
Abstract. Background and purpose: The dose due to secondary neutrons and photons in proton therapy was estimated with Monte Carlo simulations. Three existing facilities treating eye and deep-seated tumours were taken into account. The results of the calculations related to eye proton therapy were verified with measurements. Materials and methods: The simulations were performed with the FLUKA code. Neutron fluence was measured inside an Alderson phantom (type ART) with activation techniques. Results: The maximum dose due to secondaries produced in a passive beam delivery system was estimated to be of the order of 10⁻⁴ and 10⁻² Gy per therapy Gy for eye and deep tumour treatments, respectively. In the case of irradiations of deep-seated tumours carried out with an active system, the dose was of the order of 10⁻³ Gy per therapy Gy. Conclusions: The dose due to secondaries depends on the geometry of the beam delivery system and on the energy of the primary beam, and is lower in healthy tissues distant from the target volume.
Swarm SLAM: Challenges and Perspectives
A robot swarm is a decentralized system characterized by locality of sensing and communication, self-organization, and redundancy. These characteristics allow robot swarms to achieve scalability, flexibility and fault tolerance, properties that are especially valuable in the context of simultaneous localization and mapping (SLAM), specifically in unknown environments that evolve over time. So far, research in SLAM has mainly focused on single- and centralized multi-robot systems—i.e., non-swarm systems. While these systems can produce accurate maps, they are typically not scalable, cannot easily adapt to unexpected changes in the environment, and are prone to failure in hostile environments. Swarm SLAM is a promising approach to SLAM as it could leverage the decentralized nature of a robot swarm and achieve scalable, flexible and fault-tolerant exploration and mapping. However, at the moment of writing, swarm SLAM is a rather novel idea and the field lacks definitions, frameworks, and results. In this work, we present the concept of swarm SLAM and its constraints, both from a technical and an economic point of view. In particular, we highlight the main challenges of swarm SLAM for gathering, sharing, and retrieving information. We also discuss the strengths and weaknesses of this approach against traditional multi-robot SLAM. We believe that swarm SLAM will be particularly useful to produce abstract maps such as topological or simple semantic maps and to operate under time or cost constraints.
Complexity Measures: Open Questions and Novel Opportunities in the Automatic Design and Analysis of Robot Swarms
Complexity measures, and information theory metrics in general, have recently been attracting the interest of the multi-agent and robotics communities, owing to their capability of capturing relevant features of robot behaviours while abstracting from implementation details. We believe that theories and tools from complex systems science and information theory may be fruitfully applied in the near future to support the automatic design of robot swarms and the analysis of their dynamics. In this paper, we discuss opportunities and open questions in this scenario.
Controlling Robot Swarm Aggregation through a Minority of Informed Robots
Self-organised aggregation is a well-studied behaviour in swarm robotics, as it is the precondition for the development of more advanced group-level responses. In this paper, we investigate the design of decentralised algorithms for a swarm of heterogeneous robots that self-aggregate over distinct target sites. A previous study has shown that including a number of informed robots as part of the swarm can steer the dynamics of the aggregation process towards a desirable distribution of the swarm between the available aggregation sites. We have replicated the results of that study using a simplified approach: we removed constraints related to the communication protocol of the robots and simplified the control mechanisms regulating the transitions between states of the probabilistic controller. The results show that the performance obtained with the previous, more complex controller can be replicated with our simplified approach, which offers clear advantages in terms of portability to physical robots and in terms of flexibility. That is, our simplified approach can generate self-organised aggregation responses in a larger set of operating conditions than can be achieved with the complex controller.
Comment: Submitted to ANTS 202
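The mechanism described above can be illustrated with a minimal sketch, which is not the paper's controller: uninformed robots leave a site with a probability that decreases with local density, so a site seeded by a committed informed minority acts as an attractor for the rest of the swarm. The swarm sizes, the half-saturation constant `K`, and the two-site layout are illustrative assumptions.

```python
import random
from collections import Counter

K = 15.0  # half-saturation of the density-dependent staying probability (assumed)

def occupancy(robots):
    """Count robots currently aggregated at each site."""
    return Counter(r["site"] for r in robots if r["site"] is not None)

def step(robots, sites=("A", "B")):
    occ = occupancy(robots)
    for r in robots:
        if r["informed"]:
            r["site"] = r["target"]                     # informed: committed to one site
        elif r["site"] is None:
            r["site"] = random.choice(sites + (None,))  # wander, maybe try a site
        else:
            n = occ[r["site"]]
            if random.random() > n / (n + K):           # sparse sites are left more often
                r["site"] = None

random.seed(42)
swarm = [{"site": None, "informed": False, "target": None} for _ in range(80)]
swarm += [{"site": "A", "informed": True, "target": "A"} for _ in range(20)]
for _ in range(300):
    step(swarm)
print(occupancy(swarm))  # site A, seeded by the informed minority, dominates
```

Because the staying probability grows with local density, the 20 informed robots create a positive-feedback bias that pulls most of the 80 uninformed robots to site A.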
A metaheuristic multi-criteria optimisation approach to portfolio selection
Portfolio selection is concerned with selecting, from a universe of assets, the ones in which one wishes to invest and the amount of the investment. Several criteria can be used for portfolio selection, and the resulting approaches can be classified as being either active or passive. The two approaches are thought to be mutually exclusive, but some authors have suggested combining them in a unified framework. In this work, we define a multi-criteria optimisation problem in which the two types of approaches are combined, and we introduce a hybrid metaheuristic that combines local search and quadratic programming to obtain an approximation of the Pareto set. We experimentally analyse this approach on benchmarks from two different instance classes: these classes refer to the same indexes, but they use two different return representations. Results show that this metaheuristic can be effectively used to solve multi-criteria portfolio selection problems. Furthermore, with an experiment on a set of instances coming from a different financial scenario, we show that the results obtained by our metaheuristic are robust with respect to the return representation used.
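The notion of a Pareto-set approximation for bi-criteria portfolio selection can be sketched as follows. This is not the paper's hybrid metaheuristic, only a toy illustration: random long-only weight vectors over a hypothetical 4-asset universe are evaluated on (expected return, variance), and the non-dominated points are kept. The returns `MU` and covariance matrix `COV` are made-up values.

```python
import random

MU = [0.08, 0.12, 0.10, 0.05]            # hypothetical expected returns
COV = [[0.10, 0.02, 0.01, 0.00],         # hypothetical covariance matrix
       [0.02, 0.20, 0.03, 0.01],
       [0.01, 0.03, 0.15, 0.00],
       [0.00, 0.01, 0.00, 0.05]]

def evaluate(w):
    """Expected return and variance of a weight vector w (sums to 1)."""
    ret = sum(wi * mi for wi, mi in zip(w, MU))
    var = sum(w[i] * COV[i][j] * w[j] for i in range(4) for j in range(4))
    return ret, var

def dominates(a, b):
    """a dominates b: return no worse, variance no worse, not identical."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_filter(points):
    """Keep only points not dominated by any other sampled point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

random.seed(0)
samples = []
for _ in range(2000):
    raw = [random.random() for _ in range(4)]
    s = sum(raw)
    samples.append(evaluate([x / s for x in raw]))  # normalise weights to sum to 1
front = pareto_filter(samples)
print(len(front), "non-dominated portfolios found")
```

A metaheuristic would replace the blind sampling with local search moves around current non-dominated solutions, but the dominance filter is the same.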
Temporal task allocation in periodic environments. An approach based on synchronization
In this paper, we study a robot swarm that has to perform task allocation in an environment that features periodic properties. In this environment, tasks appear in different areas following periodic temporal patterns. The swarm has to reallocate its workforce periodically, performing a temporal task allocation that must be synchronized with the environment to be effective.
We tackle temporal task allocation using methods and concepts that we borrow from the signal processing literature. In particular, we propose a distributed temporal task allocation algorithm that synchronizes robots of the swarm with the environment and with each other. In this algorithm, robots use only local information and a simple visual communication protocol based on light blinking. Our results show that a robot swarm that uses the proposed temporal task allocation algorithm performs considerably more tasks than a swarm that uses a greedy algorithm.
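The synchronisation idea can be sketched with a standard Kuramoto-style phase model, which stands in here for the paper's actual algorithm: each robot runs an internal phase oscillator at the environment's period and nudges its phase towards the phases it observes in other robots' blinks. The period, coupling strength `K_COUPLING`, and swarm size are illustrative assumptions.

```python
import math
import random

OMEGA = 2 * math.pi / 50.0   # environment period of 50 time steps (assumed)
K_COUPLING = 0.1             # strength of the phase nudging (assumed)

def step(phases):
    """One Kuramoto update: every robot moves towards the mean observed phase."""
    n = len(phases)
    new = []
    for p in phases:
        coupling = sum(math.sin(q - p) for q in phases) / n
        new.append((p + OMEGA + K_COUPLING * coupling) % (2 * math.pi))
    return new

def coherence(phases):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full synchrony."""
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

random.seed(3)
phases = [random.uniform(0, 2 * math.pi) for _ in range(30)]
r0 = coherence(phases)          # low: phases start out scattered
for _ in range(500):
    phases = step(phases)
r1 = coherence(phases)          # high: swarm has synchronised
```

Once the phases align, each robot can use its own phase as a shared clock to decide when to reallocate itself to the task areas that are about to become active.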
Automatic design of ant-miner mixed attributes for classification rule discovery
Ant-Miner Mixed Attributes (Ant-MinerMA) was inspired by and built on ACOMV, which uses an archive-based pheromone model to cope with mixed attribute types. On the one hand, the use of an archive-based pheromone model significantly improved the runtime of Ant-MinerMA and helped to eliminate the need for a discretisation procedure when dealing with continuous attributes. On the other hand, the graph-based pheromone model showed superiority when dealing with datasets containing a large number of attributes, as the graph helps the algorithm to easily identify good attributes. In this paper, we propose an automatic design framework to incorporate the graph-based model along with the archive-based model in the rule creation process. We compared the automatically designed hybrid algorithm against existing ACO-based algorithms: one using a graph-based pheromone model and one using an archive-based pheromone model. Our results show that the hybrid algorithm improves the predictive quality over both the base archive-based and graph-based algorithms.
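The archive-based pheromone model mentioned above can be sketched in the spirit of ACOR/ACOMV, though this is not Ant-MinerMA itself: a ranked archive of solutions is kept, and new continuous values are sampled from Gaussians centred on archive entries chosen with rank-based weights, so better-ranked solutions guide the search. The toy 1-D objective and the parameters `Q` and `XI` are illustrative assumptions.

```python
import math
import random

ARCHIVE_SIZE = 10
Q = 0.2        # locality of the rank-based selection (assumed)
XI = 0.85      # width multiplier for the sampling Gaussians (assumed)

def objective(x):
    return (x - 3.0) ** 2    # toy minimisation problem, optimum at x = 3

def rank_weights(k):
    """Gaussian rank weights: the best-ranked entries are chosen most often."""
    w = [math.exp(-(i ** 2) / (2 * (Q * k) ** 2)) for i in range(k)]
    total = sum(w)
    return [x / total for x in w]

def sample(archive):
    """Draw a new value from a Gaussian centred on a rank-selected entry."""
    k = len(archive)
    i = random.choices(range(k), weights=rank_weights(k))[0]
    mu = archive[i]
    sigma = XI * sum(abs(a - mu) for a in archive) / (k - 1)
    return random.gauss(mu, sigma)

random.seed(7)
archive = sorted((random.uniform(-10, 10) for _ in range(ARCHIVE_SIZE)),
                 key=objective)
for _ in range(200):
    archive.append(sample(archive))
    archive = sorted(archive, key=objective)[:ARCHIVE_SIZE]  # keep the best
best = archive[0]
```

Because the Gaussian width shrinks as the archive concentrates around good solutions, no discretisation of the continuous domain is needed, which is the property the abstract attributes to the archive-based model.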