639 research outputs found
Quality assessment technique for ubiquitous software and middleware
Ubiquitous computing systems are the new paradigm of computing and information systems. The technology-oriented issues of ubiquitous computing have led researchers to pay much more attention to feasibility studies of the technologies than to building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model (ISO/IEC 9126) is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme is directly applicable to ubiquitous computing environments, which differ considerably from conventional software, so considerable attention is being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied to an industrial ubiquitous computing setting. This application revealed that while the approach is sound, some parts need further development in the future.
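As an illustration of the three-level structure referred to above (characteristics, sub-characteristics, metrics), the following Python sketch models a quality characteristic as a weighted hierarchy; all names, weights, and values are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    value: float  # normalized score in [0, 1]

@dataclass
class SubCharacteristic:
    name: str
    metrics: list[Metric] = field(default_factory=list)

    def score(self) -> float:
        # Average the normalized metric values.
        return sum(m.value for m in self.metrics) / len(self.metrics)

@dataclass
class Characteristic:
    name: str
    subs: list[tuple[float, SubCharacteristic]] = field(default_factory=list)

    def score(self) -> float:
        # Weighted sum over sub-characteristics; weights should sum to 1.
        return sum(w * s.score() for w, s in self.subs)

# Hypothetical example: a "reliability" characteristic for a ubiquitous service.
reliability = Characteristic("reliability", [
    (0.6, SubCharacteristic("fault tolerance",
                            [Metric("recovery success rate", 0.92)])),
    (0.4, SubCharacteristic("availability",
                            [Metric("uptime ratio", 0.999)])),
])
print(f"reliability score: {reliability.score():.3f}")
```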
Developing a distributed electronic health-record store for India
The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.
Elastic Highly Available Cloud Computing
High availability and elasticity are two key technical features of cloud computing services. Elasticity ties the provisioning of resources closely to runtime demand. High availability assures that cloud applications are resilient to failures. Existing cloud solutions provide both features at the level of the virtual resource, managing the restart, addition, and removal of virtual machines as needed. These solutions tie applications to a specific design, which is not suitable for many applications, especially virtualized telecommunication applications that are required to meet carrier-grade standards. Carrier-grade applications typically rely on the underlying platform to manage their availability by monitoring heartbeats, executing recoveries, and attempting repairs to bring the system back to normal. Migrating such applications to the cloud can be particularly challenging, especially if the elasticity policies target the application only, without considering the underlying platform contributing to its high availability (HA). In this thesis, a Network Function Virtualization (NFV) framework is introduced, and the challenges and requirements of its use in mobile networks are discussed. In particular, an architecture for NFV framework entities in the virtual environment is proposed. In order to reduce signaling traffic congestion and achieve better performance, a criterion is proposed for bundling multiple functions of the virtualized evolved packet core in a single physical device or a group of adjacent devices. The analysis shows that the proposed grouping can reduce network control traffic by 70 percent. Moreover, a comprehensive framework for the elasticity of highly available applications is proposed that considers both the elastic deployment of the platform and the HA-aware placement of the application's components. The approach is applied to an IP Multimedia Subsystem (IMS) application and demonstrates how, within a matter of seconds, the IMS application can be scaled up while maintaining its HA status.
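To make the interplay between elasticity and HA placement concrete, here is a minimal, hypothetical sketch of a scale-out decision that respects an anti-affinity constraint (two replicas of the same component must not share a host). It illustrates the general idea only; it is not the thesis's algorithm, and all host and component names are invented.

```python
# Hypothetical cluster state: hosts -> component replicas placed on them.
placement = {
    "host-1": ["ims-cscf-1"],
    "host-2": ["ims-cscf-2"],
    "host-3": [],
}

def scale_out(component: str, placement: dict[str, list[str]]) -> str | None:
    """Place one new replica of `component`, enforcing anti-affinity:
    no host may run two replicas of the same component."""
    existing = sum(1 for comps in placement.values()
                   for c in comps if c.startswith(component))
    for host, comps in placement.items():
        if not any(c.startswith(component) for c in comps):
            placement[host].append(f"{component}-{existing + 1}")
            return host
    # No host satisfies the HA constraint: the platform itself must be
    # scaled out (a new host added) before the application can grow.
    return None

# When runtime demand rises, add a replica only where HA placement allows it.
chosen = scale_out("ims-cscf", placement)
print(f"new replica placed on: {chosen}")  # -> host-3
```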
Managing Next Generation Networks (NGNs) based on the Service-Oriented Architecture (SOA): Design, Development and Testing of a message-based Network Management platform for the integration of heterogeneous management systems.
Next Generation Networks (NGNs) aim to provide a unified network
infrastructure to offer multimedia data and telecommunication services
through IP convergence. NGNs utilize multiple broadband, QoS-enabled
transport technologies, creating a converged packet-switched network
infrastructure, where service-related functions are separated from the
transport functions. This requires significant changes in the way
networks are managed to handle the complexity and heterogeneity of
NGNs.
This thesis proposes a Service Oriented Architecture (SOA) based
management framework that integrates heterogeneous management
systems in a loosely coupled manner. The key benefit of the proposed
management architecture is the reduction of complexity through
service and data integration. The thesis also proposes a network
management middleware layer that merges low-level management
functionality with higher-level management operations to resolve the
problem of heterogeneity, as sketched below.
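A minimal sketch of the middleware idea described above: a common facade translates higher-level management operations into calls on heterogeneous low-level systems. The adapter classes and the data they return are hypothetical stubs, for illustration only.

```python
from abc import ABC, abstractmethod

class ManagementAdapter(ABC):
    """Wraps one heterogeneous, low-level management system."""
    @abstractmethod
    def get_alarms(self) -> list[str]: ...

class SnmpAdapter(ManagementAdapter):
    def get_alarms(self) -> list[str]:
        # A real adapter would poll SNMP traps; stubbed here.
        return ["linkDown on if-3"]

class TroubleTicketAdapter(ManagementAdapter):
    def get_alarms(self) -> list[str]:
        # Stub for a trouble ticket system, as used in the testbed.
        return ["ticket #42: degraded QoS"]

class ManagementMiddleware:
    """Loosely coupled integration point: callers depend only on this facade."""
    def __init__(self, adapters: list[ManagementAdapter]):
        self.adapters = adapters

    def list_alarms(self) -> list[str]:
        # Merge low-level data into one higher-level management view.
        return [a for adapter in self.adapters for a in adapter.get_alarms()]

mw = ManagementMiddleware([SnmpAdapter(), TroubleTicketAdapter()])
print(mw.list_alarms())
```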
A prototype was implemented using Web Services and a testbed was
developed using trouble ticket systems as the management application to
demonstrate the functionality of the proposed framework. Test results
show the correct functioning of the system, and confirm that the
proposed framework fulfils the principles behind the SOA philosophy.
Evaluating the resilience and security of boundaryless, evolving socio-technical Systems of Systems
Enhancing Failure Propagation Analysis in Cloud Computing Systems
In order to plan for failure recovery, the designers of cloud systems need to
understand how their system can potentially fail. Unfortunately, analyzing the
failure behavior of such systems can be very difficult and time-consuming, due
to the large volume of events, non-determinism, and reuse of third-party
components. To address these issues, we propose a novel approach that joins
fault injection with anomaly detection to identify the symptoms of failures. We
evaluated the proposed approach in the context of the OpenStack cloud computing
platform. We show that our model can significantly improve the accuracy of
failure analysis in terms of false positives and negatives, with a low
computational cost. (12 pages; the 30th International Symposium on
Software Reliability Engineering, ISSRE 2019.)
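As a toy illustration of the approach's core idea (joining fault injection with anomaly detection), the sketch below compares the event trace of a fault-injected run against a fault-free baseline and flags event types whose frequency deviates. The threshold and event names are hypothetical, not from the paper.

```python
from collections import Counter

def anomalous_events(baseline: list[str], faulty: list[str],
                     threshold: float = 3.0) -> list[str]:
    """Flag event types whose frequency under fault injection deviates
    from the fault-free baseline."""
    base, fault = Counter(baseline), Counter(faulty)
    symptoms = []
    for event in sorted(set(base) | set(fault)):
        b, f = base[event], fault[event]
        if b == 0 and f > 0:
            symptoms.append(event)      # appears only under fault injection
        elif b > 0 and f / b >= threshold:
            symptoms.append(event)      # frequency spike over the baseline
    return symptoms

# Traces as they might be collected from two runs (hypothetical events).
baseline_trace = ["api.request", "api.request", "db.write", "rpc.cast"]
faulty_trace = ["api.request", "rpc.timeout", "rpc.timeout", "db.write"]
print(anomalous_events(baseline_trace, faulty_trace))  # -> ['rpc.timeout']
```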
Availability Incidents in the Telecommunication Domain: A Literature Review
Non-availability incidents in public telecom services may have a widespread impact, such as disruption of internet services, mobile services, and landline communication. This, in turn, may disrupt the lives of consumers and citizens, and the provision of services by commercial and public organizations. These incidents are always analyzed and solved by the provider. In Europe, there is a legal obligation to report the analysis and solution of the incident to the national telecom regulator. However, these reports are highly confidential, and beyond some elementary descriptive statistics, they are not analyzed further. This means that a significant opportunity to draw lessons from these incidents is missed, lessons which could be valuable to other providers and to standardization bodies. In the LINC project, we aim to develop a method to draw lessons from registered non-availability incidents without compromising the confidentiality of those registrations. As a preparation, we have conducted a systematic literature review of non-availability incidents in public telecom services reported in the scientific and professional literature, to see what we can learn from the reported incident models and analysis methods used. In this report, we present an incident analysis taxonomy to establish a common terminological ground among researchers and practitioners.
CoAP Infrastructure for IoT
The Internet of Things (IoT) can be seen as a large-scale network of billions of smart devices. IoT
devices often exchange data in small but numerous messages, which requires IoT services to be more
scalable and reliable than ever. Traditional protocols known from the Web world do not fit well in the
constrained environments these devices operate in. Therefore, many lightweight protocols specialized
for the IoT have been studied, among which the Constrained Application Protocol (CoAP) stands out for
its well-known REST paradigm and easy integration with the existing Web. At the same time, new
paradigms such as Fog Computing have emerged, attempting to avoid the centralized bottleneck in IoT
services by moving computation to the edge of the network. Since a Fog node essentially belongs to a
relatively constrained environment, CoAP fits in well. Among the many attempts at building scalable
and reliable systems, Erlang, a typical concurrency-oriented programming (COP) language, has been
battle-tested in the telecom industry, which has requirements similar to those of the IoT. To explore
the possibility of applying Erlang, and COP in general, to the IoT, this thesis presents ecoap, an
Erlang-based CoAP server/client prototype with a flexible concurrency model that can scale up to an
unconstrained environment like the Cloud and scale down to a constrained environment like an embedded
platform. The flexibility of the presented server renders the same architecture applicable from Fog to
Cloud. To evaluate its performance, the proposed server is compared with a mainstream CoAP
implementation on an Amazon Web Services (AWS) Cloud instance and on a Raspberry Pi 3, representing
the unconstrained and constrained environments respectively. The ecoap server achieves comparable
throughput and lower latency, and in general scales better than the other implementation both in the
Cloud and on the Raspberry Pi. The thesis yields positive results and demonstrates the value of the
Erlang philosophy in the IoT space.
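Since the abstract centres on CoAP's REST-style interaction, here is a minimal CoAP GET request using the Python aiocoap library. Note that the thesis's ecoap prototype is written in Erlang; this sketch is only an independent illustration of the protocol, and coap.me is a public CoAP test server unrelated to the thesis.

```python
import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Create a client context and issue a confirmable GET, REST-style.
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://coap.me/test")
    response = await ctx.request(request).response
    print(f"{response.code}: {response.payload.decode()}")

asyncio.run(main())
```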
- …