241 research outputs found

    Context Aware Adaptable Applications - A global approach

    The requirements of today's (mostly component-based) applications cannot be expressed without a ubiquitous and mobile part, for end users as well as for machine-to-machine (M2M) applications. This evolution calls for context management in order to evaluate the consequences of mobility, and for corresponding mechanisms to adapt, or be adapted to, the new environment. Such applications are then qualified as context-aware applications. The first part of this paper presents an overview of context and its management through application adaptation. It starts with a definition of context and proposes a model for it. It also presents various techniques for adapting applications to the context, from self-adaptation to supervised approaches. The second part is an overview of architectures for adaptable applications. It focuses on platform-based solutions and shows the information flows between application, platform and context. Finally, it proposes a synthesis: a platform for adaptable context-aware applications called Kalimucho. We then present implementation tools for software components and a dataflow model used to implement the Kalimucho platform

    A Client-Server System for Ubiquitous Video Service

    In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users

    INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS

    Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market leverages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions. Elevating the cognitive competencies offered by IIS would improve organizations' strategic positions. With the advent of Deep Learning (DL), IoT, and Edge Computing, IISs have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL to devices with limited computational resources, such as IoT devices. Applying deep learning to IoT devices will turn everyday devices into intelligent interactive assistants. IISs suffer from many challenges that affect their service quality, process quality, and information quality. These challenges affect, in turn, user acceptance in terms of satisfaction, use, and trust. Moreover, Information Systems (IS) research has paid very little attention to IIS development and to the foreseeable contribution of the new paradigms to addressing IIS challenges. Therefore, this research investigates how the employment of new AI paradigms would enhance the overall quality, and consequently the user acceptance, of IIS. The research employs different AI paradigms to develop two different IIS. The first system uses deep learning, edge computing, and IoT to develop scene-aware ridesharing monitoring; it enhances the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real-estate search process by formulating search as a multi-criteria decision problem.
    The system also allows users to filter properties based on their degree of damage, where a deep learning network locates damage in each real-estate image. The system enhances real-estate website service quality by improving flexibility, relevancy, and efficiency. The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to effectively address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfil. It also highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems
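    A multi-criteria search of the kind described above is often realized as a weighted-sum ranking. The sketch below illustrates that general technique; the criteria names, weights, and min-max normalization are illustrative assumptions, not the thesis's actual formulation.

```python
# Hypothetical weighted-sum multi-criteria ranking of real-estate listings.
# Criteria and weights are invented for illustration.

def rank_listings(listings, weights):
    """Score each listing by a normalized weighted sum; best first."""
    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    criteria = list(weights)
    # Normalize each criterion column so different scales are comparable.
    columns = {c: normalize([l[c] for l in listings]) for c in criteria}
    scored = [(sum(weights[c] * columns[c][i] for c in criteria), l["id"])
              for i, l in enumerate(listings)]
    return sorted(scored, reverse=True)

listings = [
    {"id": "A", "price_fit": 0.9, "location_fit": 0.4, "damage_free": 0.8},
    {"id": "B", "price_fit": 0.5, "location_fit": 0.9, "damage_free": 0.6},
]
ranking = rank_listings(listings, {"price_fit": 0.5, "location_fit": 0.3, "damage_free": 0.2})
print(ranking)
```

    Changing the weights re-ranks the same listings, which is what lets such a system reflect a particular user's priorities.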

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape in multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, notably technological issues, user-centred issues and use cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements and technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as national initiative coordinators. Based on the feedback obtained, we identified two types of gaps, namely core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges

    Quality assessment technique for ubiquitous software and middleware

    Ubiquitous computing is the new paradigm for computing and information systems. The technology-oriented issues of ubiquitous computing systems have led researchers to pay much more attention to feasibility studies of the technologies than to building quality-assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model ISO/IEC 9126 is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is very unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software; this raises a significant concern and motivates the reformulation of existing methods and, especially, the elaboration of new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, and in some cases with measures. In addition, this quality model has been applied in an industrial ubiquitous computing setting. This revealed that, while the approach is sound, some parts need further development in the future
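    The three-level scheme mentioned above (characteristics, sub-characteristics, metrics) can be sketched as a small scoring hierarchy in which metric values roll up through weighted averages. The names, values, and weights below are invented for illustration and are not taken from the paper or from ISO/IEC 9126.

```python
# Hypothetical three-level quality model: characteristic -> sub-characteristic
# -> weighted metrics. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float   # measured score in [0, 1]
    weight: float

@dataclass
class SubCharacteristic:
    name: str
    metrics: list

    def score(self):
        # Weighted average of the metric values.
        total = sum(m.weight for m in self.metrics)
        return sum(m.value * m.weight for m in self.metrics) / total

@dataclass
class Characteristic:
    name: str
    subs: list

    def score(self):
        # Plain average of sub-characteristic scores.
        return sum(s.score() for s in self.subs) / len(self.subs)

mobility = SubCharacteristic("mobility", [
    Metric("handover success rate", 0.9, 2.0),
    Metric("reconnection time", 0.6, 1.0),
])
adaptability = Characteristic("adaptability", [mobility])
print(round(adaptability.score(), 2))
```

    The point of the hierarchy is that an abstract quality target ("adaptability") becomes measurable only once it bottoms out in concrete, weighted metrics.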

    Empirical Analysis of Privacy Preservation Models for Cyber Physical Deployments from a Pragmatic Perspective

    Privacy protection in cyber-physical installations is a difficulty that spans several sectors and calls for methods such as encryption, hashing, secure routing, obfuscation, and controlled data exchange. A privacy preservation model for cyber-physical deployments should therefore take into account data privacy, location privacy, temporal privacy, node privacy, route privacy, and other types of privacy. Incorporating these models into a wireless network is computationally expensive and affects quality-of-service (QoS) parameters including end-to-end latency, throughput, energy use, and packet delivery ratio. Network designers must therefore choose the privacy models that have the least negative influence on these QoS characteristics. In practice, designers have applied common privacy models to protect cyber-physical infrastructure without taking into account the interconnection and interfacing limitations of these installations; as a result, even though network security has increased, the network's overall quality of service has dropped. This research examines and analyzes the many state-of-the-art methods for preserving privacy in cyber-physical deployments without compromising their quality-of-service performance, with the aim of lowering the likelihood of such trade-offs. The models are rated according to the level of privacy they provide, end-to-end data transfer delay, energy consumption, and network throughput. The comparison will assist network designers and researchers in selecting the optimal models for their particular deployments, maximizing privacy while maintaining a high degree of service performance.
    Additionally, this study offers a variety of tactics that, used together, can improve performance, and it surveys a range of proven machine learning approaches that networks may consider and examine in order to enhance their privacy performance
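    A rating along the four axes named above (privacy level, end-to-end delay, energy consumption, throughput) can be sketched as a composite score in which benefit criteria count positively and cost criteria are inverted. The model names, values, and weights below are illustrative assumptions, not figures from the review.

```python
# Hypothetical composite rating of privacy models on normalized [0, 1] axes.
# Privacy and throughput are benefits; delay and energy are costs.

def composite_score(m, w_privacy=0.4, w_delay=0.2, w_energy=0.2, w_thr=0.2):
    return (w_privacy * m["privacy"]
            + w_thr * m["throughput"]
            + w_delay * (1.0 - m["delay"])     # lower delay is better
            + w_energy * (1.0 - m["energy"]))  # lower energy is better

models = {
    "onion_routing": {"privacy": 0.9, "delay": 0.8, "energy": 0.7, "throughput": 0.5},
    "k_anonymity":   {"privacy": 0.6, "delay": 0.3, "energy": 0.2, "throughput": 0.8},
}
ranking = sorted(models, key=lambda name: composite_score(models[name]), reverse=True)
print(ranking)
```

    With these (invented) numbers the strongest privacy model does not win, which is exactly the privacy-versus-QoS trade-off the review sets out to expose.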

    Personalised privacy in pervasive and ubiquitous systems

    Our world is edging closer to the realisation of pervasive systems and their integration into our everyday life. While pervasive systems are capable of offering many benefits for everyone, the amount and quality of personal information that becomes available raise concerns about maintaining user privacy, and create a real need to reform existing privacy practices and provide appropriate safeguards for the user of pervasive environments. This thesis presents the PERSOnalised Negotiation, Identity Selection and Management (PersoNISM) system: a comprehensive approach to privacy protection in pervasive environments using context-aware dynamic personalisation and behaviour learning. The aim of the PersoNISM system is twofold: to provide the user with a comprehensive set of privacy-protecting tools and to help them make the best use of these tools according to their privacy needs. The PersoNISM system allows users to: a) configure the terms and conditions of data disclosure through the process of privacy policy negotiation, which addresses the current “take it or leave it” approach; b) use multiple identities to interact with pervasive services to avoid the accumulation of vast amounts of personal information in a single user profile; and c) selectively disclose information based on the type of information, who requests it, under what context, for what purpose and how the information will be treated. The PersoNISM system learns user privacy preferences by monitoring the user's behaviour and uses them to personalise and/or automate the decision-making processes in order to unburden the user from manually controlling these complex mechanisms. The PersoNISM system has been designed, implemented, demonstrated and evaluated during three EU-funded projects
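    The selective-disclosure decision in point c) above amounts to matching a request (information type, requester, context, purpose) against the user's preferences. The sketch below shows one generic way such a rule match could work; the rule format, field names, and examples are assumptions for illustration, not the PersoNISM preference format.

```python
# Hypothetical selective-disclosure check: the first matching preference
# rule wins; "*" (or an absent field) matches anything; default is deny.

def decide(preferences, info_type, requester, context, purpose):
    request = {"info_type": info_type, "requester": requester,
               "context": context, "purpose": purpose}
    for rule in preferences:
        if all(rule.get(k, "*") in ("*", v) for k, v in request.items()):
            return rule["decision"]
    return "deny"  # deny by default when no preference matches

preferences = [
    {"info_type": "location", "requester": "navigation_service",
     "context": "commuting", "purpose": "routing", "decision": "allow"},
    {"info_type": "location", "decision": "deny"},  # catch-all for location
]
print(decide(preferences, "location", "navigation_service", "commuting", "routing"))
print(decide(preferences, "location", "ad_network", "commuting", "marketing"))
```

    Learning user preferences, as the thesis describes, would then correspond to adding or reordering rules like these instead of asking the user on every request.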
