Collaborative Evolution of a Dynamic Scenario Model for the Interaction of Critical Infrastructures
ABSTRACT This paper reviews current work on a model of the cascading effects of Critical Infrastructure (CI) failures during disasters. Based upon the contributions of 26 professionals, we have created a reliable model of the interaction among sixteen CIs. An internal CI model can be used as a core part of a number of larger models, each of which is tailored to a specific disaster in a specific location.
Model Based Development of Quality-Aware Software Services
Modelling languages and development frameworks support the functional and structural description of software architectures. Quality-aware applications, however, require languages that express QoS as a first-class concept during architecture design and service composition, and existing tools and infrastructures must be extended with support for modelling, evaluating, managing and monitoring QoS aspects. In addition to its functional behaviour and internal structure, the developer of each service must consider the fulfilment of its quality requirements. If the service is flexible, the output quality depends on both the input quality and the available resources (e.g., amounts of CPU execution time and memory). From a software engineering point of view, modelling quality-aware requirements and architectures requires modelling support for describing quality concepts, support for analysing quality properties (e.g., model checking, consistency of quality constraints, and assembly of quality), and tool support for the transition from quality requirements to quality-aware architectures, and from quality-aware architectures to service run-time infrastructures. Quality management in run-time service infrastructures must support handling quality concepts dynamically. QoS-aware modelling frameworks and QoS-aware runtime management infrastructures must therefore evolve together to achieve their integration.
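The abstract's notion of a "flexible" service, whose output quality depends on both input quality and available resources, can be sketched in a few lines. This is a minimal illustrative model, not an API from any real QoS framework; all names (`QoS`, `Resources`, `flexible_service`, `meets_contract`) and the degradation formulas are assumptions introduced for illustration.

```python
# Hypothetical sketch of a quality-aware "flexible" service: output quality
# is a function of input quality and available resources (CPU time, memory).
# All names and formulas are illustrative, not taken from any real framework.
from dataclasses import dataclass

@dataclass
class QoS:
    accuracy: float     # 0.0 .. 1.0
    latency_ms: float

@dataclass
class Resources:
    cpu_ms: float
    memory_mb: float

def flexible_service(input_qos: QoS, res: Resources) -> QoS:
    """Output quality degrades gracefully with poor input or scarce resources."""
    # Accuracy can never exceed the input's accuracy; scarce CPU lowers it further.
    cpu_factor = min(1.0, res.cpu_ms / 100.0)
    out_accuracy = input_qos.accuracy * cpu_factor
    # Latency grows when memory is constrained (simple illustrative model).
    mem_factor = max(1.0, 256.0 / max(res.memory_mb, 1.0))
    out_latency = input_qos.latency_ms + 10.0 * mem_factor
    return QoS(accuracy=out_accuracy, latency_ms=out_latency)

def meets_contract(q: QoS, min_accuracy: float, max_latency_ms: float) -> bool:
    """Consistency check of a quality constraint against a concrete output."""
    return q.accuracy >= min_accuracy and q.latency_ms <= max_latency_ms

out = flexible_service(QoS(accuracy=0.9, latency_ms=20.0),
                       Resources(cpu_ms=50.0, memory_mb=512.0))
print(meets_contract(out, min_accuracy=0.4, max_latency_ms=50.0))  # → True
```

A `meets_contract`-style predicate is the simplest form of the "analysis of quality properties" the abstract mentions: given a resource budget, one can check statically whether a composed service can still honour its quality contract.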
Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things
The number of connected sensors and devices is expected to increase to billions in the near future. However, centralised cloud-computing data centres present various challenges in meeting the requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput and bandwidth constraints. Edge computing is becoming the standard computing paradigm for latency-sensitive, real-time IoT workloads, since it addresses the aforementioned limitations of centralised cloud-computing models. Such a paradigm relies on bringing computation close to the source of data, which presents serious operational challenges for large-scale cloud-computing providers. In this work, we present an architecture composed of low-cost single-board-computer clusters near to data sources, together with centralised cloud-computing data centres. The proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT workload requirements while preserving scalability. We include an extensive empirical analysis to assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud architectures, and evaluate them through extensive simulation. We finally show that acquisition costs can be drastically reduced while keeping performance levels in data-intensive IoT use cases.
Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R; Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European Union's Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209
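The trade-off this architecture exploits, short network round trips to nearby SBC clusters versus long round trips to a fast, distant cloud, can be sketched with a toy latency model. All figures below (RTTs, service times, replica counts) are assumed for illustration and are not taken from the paper.

```python
# Illustrative latency model: an IoT request is served either by a nearby
# single-board-computer (SBC) cluster or by a distant centralised cloud.
# All numbers are hypothetical, chosen only to show the trade-off.
def end_to_end_latency_ms(network_rtt_ms: float,
                          service_time_ms: float,
                          replicas: int) -> float:
    """Round-trip network latency plus service time shared across replicas."""
    return network_rtt_ms + service_time_ms / replicas

# SBC cluster: short RTT to the edge, slower nodes, but several of them.
edge = end_to_end_latency_ms(network_rtt_ms=5.0, service_time_ms=40.0, replicas=4)
# Cloud: long RTT over the WAN, a single faster server.
cloud = end_to_end_latency_ms(network_rtt_ms=80.0, service_time_ms=10.0, replicas=1)

print(edge, cloud)  # → 15.0 90.0: the edge deployment wins on latency
```

Under these assumed figures the WAN round trip dominates, which is the intuition behind placing micro data centres near the data sources for latency-sensitive workloads.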
Roadmap for Real World Internet applications
This paper emphasises the socioeconomic background required to design the Future Internet so that its services will be accepted by its users and the economic value latent in the technology is realised. It offers an innovative outlook on the sensing aspects of the Future Internet and describes a scenario-based design approach suitable for roadmapping the dynamic deployment of Real World Internet applications. A multifaceted socioeconomic assessment leads to recommendations for technology deployment and for key features of the Future Internet that will globally integrate technologies such as Wireless Sensor and Actuator Networks and Networked Embedded Devices.
Keywords: Real World Internet; Future Internet; Scenario-based Design; Socioeconomics; Business Models; Requirements
A Conceptual Model for Network Decision Support Systems
We introduce the concept of a network DSS (NWDSS) consisting of fluid, heterogeneous nodes of human and machine agents, connected by wireless technology, which may enter and leave the network at unpredictable times yet must also cooperate in decision-making activities. We describe distinguishing properties of the NWDSS and propose a three-tier conceptual model comprising digital infrastructure, transactive memory systems (TMS) and emergent collaborative decision-making. We suggest a decision loop of Sense-Analyze-Adapt-Memory leveraging TMS as a starting point for addressing the agile collaborative requirements of emergent decision-making. Several examples of innovative NWDSS services are presented from Naval Postgraduate School field experiments.
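The Sense-Analyze-Adapt-Memory loop over a transactive memory system can be sketched as follows. This is a minimal interpretation of the abstract's idea, not the authors' implementation; the class and function names, the "who knows what" directory, and the routing policy are all assumptions made for illustration.

```python
# Illustrative sketch of a Sense-Analyze-Adapt-Memory decision loop backed by
# a transactive memory system (TMS): a shared "who knows what" directory.
# All names and the routing policy are hypothetical.
from collections import defaultdict

class TransactiveMemory:
    """Maps topics to the nodes believed to hold expertise on them."""
    def __init__(self):
        self.directory = defaultdict(set)

    def remember(self, topic: str, node: str) -> None:
        self.directory[topic].add(node)

    def who_knows(self, topic: str) -> set:
        return self.directory.get(topic, set())

def decision_loop(observations, tms: TransactiveMemory):
    """One Sense-Analyze-Adapt-Memory pass over incoming observations."""
    decisions = []
    for topic, reading in observations:        # Sense: take in an observation
        experts = tms.who_knows(topic)         # Analyze: consult the TMS
        if experts:
            decisions.append((topic, f"route to {sorted(experts)[0]}"))
        else:                                  # Adapt: no known expert yet
            decisions.append((topic, "handle locally"))
        tms.remember(topic, "local-node")      # Memory: update the TMS
    return decisions

tms = TransactiveMemory()
tms.remember("flood-mapping", "uav-3")
print(decision_loop([("flood-mapping", 0.8), ("triage", 0.5)], tms))
# → [('flood-mapping', 'route to uav-3'), ('triage', 'handle locally')]
```

The point of the sketch is the feedback step: every pass enriches the TMS, so a node that handled a topic locally becomes discoverable to the rest of the fluid network on later passes.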
INTEROPERABILITY FOR MODELING AND SIMULATION IN MARITIME EXTENDED FRAMEWORK
This thesis reports on the most relevant research performed during the years of the Ph.D. at Genova University and within the Simulation Team. The research has been performed according to well-known, recognized M&S standards. The studies on interoperable simulation cover all the environments of the Extended Maritime Framework, namely Sea Surface, Underwater, Air, Coast & Land, Space and Cyber Space. The applications cover both the civil and defence domains. The aim is to demonstrate the potential of M&S applications for the Extended Maritime Framework, applied to innovative unmanned vehicles as well as to traditional assets, human personnel included. A variety of techniques and methodologies have been fruitfully applied in this research, ranging from interoperable simulation, discrete event simulation, stochastic simulation, artificial intelligence and decision support systems to human behaviour modelling.
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
• The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
• The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
• The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion.
The behaviour of the entities may vary over time.
• The systems operate with incomplete information about the environment.
For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers aspects related to languages and programming environments, as well as the analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. One year after the start of the projects, the goal of the workshop is to establish the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.