95 research outputs found

    A look at cloud architecture interoperability through standards

    Enabling cloud infrastructures to evolve into a transparent platform while preserving integrity raises interoperability issues: how components are connected needs to be addressed. Interoperability requires standard data models and communication encoding technologies compatible with the existing Internet infrastructure. To reduce vendor lock-in, cloud computing must implement universal strategies regarding standards, interoperability and portability. Open standards are of critical importance and need to be embedded into interoperability solutions. Interoperability is determined at the data level as well as the service level, and the corresponding modelling standards and integration solutions are analysed.

    A Service Late Binding Enabled Solution for Data Integration from Autonomous and Evolving Databases

    Integrating data from autonomous, distributed and heterogeneous data sources to provide a unified view is a common demand for many businesses. Since the data sources may evolve frequently to satisfy their own independent business needs, solutions that use hard-coded queries to integrate participating databases may incur high maintenance costs when evolution occurs. Thus a new solution that can handle database evolution with lower maintenance effort is required. This thesis presents such a solution: Service Late binding Enabled Data Integration (SLEDI), which is set into a framework modeling the essential processes of the data integration activity. It integrates schematically heterogeneous relational databases with decreased maintenance costs for handling database evolution. An algorithm named Information Provision Unit Describing (IPUD) is designed to describe each database as a set of Information Provision Units (IPUs). The IPUs are represented as Directed Acyclic Graph (DAG) structured data instead of hard-coded queries, and are further realized as data services; data integration is thus achieved through service invocations. Furthermore, a set of processes is defined to handle database evolution by automatically identifying and modifying the IPUs affected by the evolution. An extensive evaluation based on a case study is presented. The results show that the schematic heterogeneities defined in this thesis can be resolved by IPUD, except for the relation isomorphism discrepancy. Ten out of thirteen types of schematic database evolution can be handled automatically by the evolution handling processes, as long as the evolution is represented by the designed data model. The computational cost of the automatic evolution handling shows slow linear growth with the number of participating databases. Other characteristics addressed include SLEDI's scalability and its independence from application domain and database model. The descriptive comparison with other data integration approaches shows that although the Data as a Service approach may result in lower performance under some circumstances, it supports better flexibility for integrating data from autonomous and evolving data sources.
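The idea of replacing hard-coded queries with DAG-structured, service-invocable units can be illustrated with a minimal sketch. The node types, operators and data below are invented for illustration and are not the thesis's actual IPUD data model:

```python
# Hypothetical sketch: an information-provision unit as a DAG of relational
# operators evaluated over in-memory rows, invoked like a data service.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Node:
    op: Callable[..., list]                   # relational operator
    inputs: list = field(default_factory=list)

    def evaluate(self) -> list:
        # Evaluate child nodes first, then apply this node's operator (DAG walk).
        return self.op(*[child.evaluate() for child in self.inputs])

def scan(rows):
    return Node(op=lambda: list(rows))

def select(child, pred):
    return Node(op=lambda rows: [r for r in rows if pred(r)], inputs=[child])

def project(child, cols):
    return Node(op=lambda rows: [{c: r[c] for c in cols} for r in rows],
                inputs=[child])

# If a source database renames a column, only the affected node is rewritten;
# callers keep invoking the unit unchanged.
customers = [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]
ipu = project(select(scan(customers), lambda r: r["region"] == "EU"), ["id"])
print(ipu.evaluate())   # [{'id': 1}]
```

Because the query plan is data rather than a string, an evolution-handling process can inspect and rewrite individual nodes instead of re-authoring whole queries.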

    Earth Observation Open Science and Innovation

    geospatial analytics; social observatory; big earth data; open data; citizen science; open innovation; earth system science; crowdsourced geospatial data; science in society; data science

    PRIME: Proactive Inter-Middleware for Global Enterprise Resource Integration

    We present the PRIME software ecosystem, which connects heterogeneous resources from different layers of the Internet of Things and is capable of handling complex interoperability scenarios involving hardware devices, software-based systems and humans.

    UNDERSTANDING USER PERCEPTIONS AND PREFERENCES FOR MASS-MARKET INFORMATION SYSTEMS – LEVERAGING MARKET RESEARCH TECHNIQUES AND EXAMPLES IN PRIVACY-AWARE DESIGN

    With cloud and mobile computing, a new category of software products emerges as mass-market information systems (IS) that address distributed and heterogeneous end-users. Understanding user requirements and the factors that drive user adoption is crucial for the successful design of such systems. IS research has suggested several theories and models to explain user adoption and intentions to use, among them the IS Success Model and the Technology Acceptance Model (TAM). Although these approaches contribute to a theoretical understanding of the adoption and use of IS in mass markets, they are criticized for not being able to drive actionable insights on IS design, as they consider the IT artifact as a black box (i.e., they do not sufficiently address the system's internal characteristics). We argue that IS research needs to embrace market research techniques to understand and empirically assess user preferences and perceptions in order to integrate the "voice of the customer" in a mass-market scenario. More specifically, conjoint analysis (CA), from market research, can add user preference measurements for designing high-utility IS. CA has gained popularity in IS research; however, little guidance is provided for its application in the domain. We aim to support the design of mass-market IS by establishing a reliable understanding of consumers' preferences for multiple factors combining functional, non-functional and economic aspects. The results include a “Framework for Conjoint Analysis Studies in IS” and methodological guidance for applying CA. We apply our findings to the privacy-aware design of mass-market IS and evaluate their implications on user adoption. We contribute to both academia and practice. For academia, we contribute a more nuanced conceptualization of the IT artifact (i.e., system) through a feature-oriented lens and a preference-based approach.
We provide methodological guidelines that support researchers in studying user perceptions and preferences for design variations and extending that to adoption. Moreover, the empirical studies on privacy-aware design contribute to a better understanding of the domain-specific applications of CA for IS design and evaluation, with a nuanced assessment of user preferences for privacy-preserving features. For practice, we propose guidelines for integrating the voice of the customer for successful IS design.
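To make the conjoint-analysis idea concrete, here is a minimal, hypothetical sketch of part-worth estimation for a rating-based study. The attributes (a privacy level and a price level) and the ratings are invented, and a real CA study would use a proper experimental design and regression-based estimation:

```python
# Toy part-worth estimation: with a full-factorial design, the part-worth of
# a level can be read off as the mean rating lift over the baseline level.
# Attributes and ratings below are invented for illustration.

# Each profile: (privacy, price) -> stated rating
profiles = {
    ("basic", "low"): 5.0,
    ("basic", "high"): 3.0,
    ("strict", "low"): 8.0,
    ("strict", "high"): 6.0,
}

def partworth(attr_index: int, level: str, baseline: str) -> float:
    """Mean rating lift of `level` over `baseline` on attribute `attr_index`."""
    lifts = [
        profiles[keys] - profiles[tuple(baseline if i == attr_index else k
                                        for i, k in enumerate(keys))]
        for keys in profiles if keys[attr_index] == level
    ]
    return sum(lifts) / len(lifts)

print(partworth(0, "strict", "basic"))   # 3.0: strict privacy adds utility
print(partworth(1, "high", "low"))       # -2.0: a higher price reduces utility
```

Comparing such part-worths across feature levels is what lets a designer trade a privacy-preserving feature off against, say, a price increase.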

    Complication to managing HIV in relation to HCV in persons seen for routine clinical care in Italy

    Hepatitis C virus (HCV) is a global public health concern comparable to communicable diseases such as HIV, and has been shown to be associated with faster disease progression in people living with HIV (PLWH). The introduction of highly effective direct-acting antivirals (DAAs) in 2015 revolutionised HCV therapy. In 2015 the WHO called for a global strategy for HCV elimination by 2030. Whilst DAAs are recommended for all, HIV/HCV-coinfected individuals may require special consideration. My initial research focused on the role of HCV as an effect modifier for the association between alcohol consumption and risk of severe liver disease (SLD), and on the association between HCV and risk of discontinuation of specific ARV drugs in PLWH. The focus then shifted to a real-world estimate of the prevalence of late HCV presentation and its associated risk of all-cause mortality. I evaluated regional differences in the rate of accessing care with respect to HCV-RNA testing, DAA uptake and achieving sustained virological response (SVR). The data analysis involved two multicentre observational prospective cohorts enrolling PLWH with/without HCV in routine care across Italy. There was no evidence that HCV was an effect measure modifier for the relationship between alcohol consumption and risk of SLD. The rate of ARV discontinuation was similar between HIV/HCV-coinfected and HIV-monoinfected participants, except for darunavir/r, for which the risk of discontinuation was higher in the coinfected. There was weak evidence for an association between late HCV presentation and risk of all-cause mortality. Among people enrolled between 2015 and 2018 in Icona, 90% were HCV-RNA tested, and among those initiating DAA treatment, 88% achieved SVR. HIV/HCV-coinfected individuals receiving care in the South had a 50% (95%CI: 34%–55%; p<0.001) reduced probability of initiating DAA compared to those receiving care in the North and Central regions. Overall, the results indicate that Italy is on course towards meeting the WHO HCV elimination goals in PLWH.

    Epitope mapping of the E2 glycoprotein, including the hypervariable region 1, of the hepatitis C virus genotype 3a, in the context of humoral immune pressure

    The hepatitis C virus (HCV) is an enveloped +ssRNA virus, belonging to the family Flaviviridae. HCV is notable for displaying extraordinary genetic diversity and variability, having seven recognised genotypes and over sixty subtypes. HCV is responsible for the disease known as hepatitis C, which is associated with cirrhosis and hepatocellular carcinoma (HCC). The Global Hepatitis Report released by the World Health Organisation (WHO) in 2017 estimated that viral hepatitis was responsible for 1,340,000 deaths in 2015. The report also estimated that 71,000,000 people have ongoing HCV infections. HCV is largely transmitted via exposure to infected blood, with intravenous drug use accounting for approximately 55% of cases. HCV infections can be categorised as acute or chronic. During chronic HCV infections, antibodies (Abs) are produced against HCV - however, the host Abs are unable to neutralise HCV and only accelerate the evolution of circulating HCV variants. HCV variants resistant to the current generation of host Abs become the dominant variant through selective pressure. The variants of HCV within a host are known as quasispecies. Although the host Ab response is not able to resolve the chronic HCV infection, some Abs can bind to particular HCV variants. These Abs form complexes with virus particles and are known as AAVs (antibody-associated virus). AAVs are detectable in the blood of patients with chronic HCV infections and examination of these AAVs could reveal conserved viral structures and vulnerable HCV epitopes. Twenty genotype 3a serum and plasma samples from patients with chronic HCV infections were obtained from the National Virus Registry Laboratory (NVRL) and from the Molecular Virology Research and Diagnostic Laboratory (MVDRL). 
HCV genotype 3a was chosen for this research project given its prevalence (estimated to account for 17.9% of chronic HCV infections), resistance to treatment, and increased risk of causing severe steatosis and HCC when compared to other genotypes. Building on previous research carried out by the MVDRL, the patient samples were screened for the presence of AAVs. Initially, AAV+ samples were going to be processed and used to generate HCV pseudoparticles (HCVpp). The HCVpp system is a model system that incorporates the E1E2 glycoprotein from HCV into a plasmid. The E1E2 glycoprotein is responsible for HCV entry and infection, meaning the HCVpp can be used in infectivity and Ab neutralisation assays. However, the E1E2 glycoproteins could not be extracted from the AAV+ patient samples. Instead, the IgG from the AAV+ samples was extracted and used for a series of neutralisation experiments on HCV pseudoparticles generated using the HCV H77 isolate. H77 (GenBank: AAB67037.1) is an infectious genotype 1a isolate that has undergone complete genome sequencing. The Abs that showed the greatest neutralisation potential against the H77 pseudoparticles were selected for epitope mapping. The epitope mapping procedure tested the selected Ab samples against a synthesised H77 E2 glycoprotein structure, and characterised the sites where the patient Abs bound to the synthesised E2. This revealed vulnerable epitopes on the HCV E2 glycoprotein. The epitope mapping also revealed a large number of glycosylation sites around the vulnerable epitopes – a phenomenon known as glycan shielding. Glycan shielding is used by a number of viruses (including HCV and HIV) to protect conserved and vulnerable epitopes from Abs. However, strategies are being developed to counter viral glycosylation, including modifications to glycosylation sites and the use of polysaccharides derived from non-mammalian sources as therapeutic agents against glycosylated viruses.

    A new MDA-SOA based framework for intercloud interoperability

    Cloud computing has been one of the most important topics in Information Technology, aiming to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. Such collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and they do not support the intercloud interoperability that would enable collaboration between cloud service providers. Hence, this PhD work addresses the interoperability issue between cloud providers as a challenging research objective. This thesis proposes a new framework which supports inter-cloud interoperability in a heterogeneous cloud computing environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing the different methodologies that have been applied to resolve various problem scenarios related to interoperability led us to exploit Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is a nondeterministic polynomial time (NP-complete) problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agent, Cloud Provider agent, and Job agent. The ABS model is used to evaluate the proposed framework. Funding: Fundação para a Ciência e a Tecnologia (FCT), grant reference SFRH / BD / 33965 / 2009, and the EC 7th Framework Programme under grant agreement n° 604674 FITMAN (http://www.fitman-fi.eu).
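As an illustration of the GA-based scheduling idea (not the thesis's actual scheduler), the following sketch evolves job-to-cloud assignments against a toy cost function; the job loads, cloud prices, and GA parameters are all invented:

```python
# Toy genetic algorithm for job-to-cloud assignment: a chromosome maps each
# job to a cloud index, and fitness is the total cost of the assignment.
import random

JOB_LOAD = [4, 2, 7, 1, 5]        # work units per job (invented)
CLOUD_COST = [1.0, 1.5, 0.8]      # unit price per cloud (invented)

def fitness(chrom):
    """Total cost of an assignment; lower is better."""
    return sum(JOB_LOAD[j] * CLOUD_COST[c] for j, c in enumerate(chrom))

def evolve(pop_size=30, generations=80, seed=7):
    rng = random.Random(seed)
    # Random initial population of assignments.
    pop = [[rng.randrange(len(CLOUD_COST)) for _ in JOB_LOAD]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # occasional mutation
                child[rng.randrange(len(child))] = rng.randrange(len(CLOUD_COST))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))   # tends towards putting jobs on the cheapest cloud
```

A real scheduler would replace the cost function with a QoS- and price-aware model per cloud provider, which is where the NP-completeness the abstract mentions comes from.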

    Artificial intelligence in business-to-business marketing: a bibliometric analysis of current research status, development and future directions

    Purpose – Although the value of AI has been acknowledged by companies, the literature shows challenges concerning AI-enabled B2B marketing innovation, as well as the diversity of roles AI can play in this regard. Accordingly, this study investigates the approaches through which AI can be used to enable B2B marketing innovation. Design/methodology/approach – Applying a bibliometric research method, this study systematically investigates the literature on AI-enabled B2B marketing. It synthesises state-of-the-art knowledge from 221 journal articles published between 1990 and 2021. Findings – Apart from offering specific information regarding the most influential authors and most frequently cited articles, the study categorises the use of AI for innovation in B2B marketing into five domains, identifies the main trends in the literature, and suggests directions for future research. Practical implications – Through the five identified domains, practitioners can assess their current use of AI in terms of conceptualisation capability and technological applications, and identify their future needs in the relevant domains in order to make appropriate decisions on whether to invest in AI. Thus, the research outcomes can help companies realise their digital marketing innovation strategy through AI. Originality/value – While more and more studies acknowledge the potential value of AI in B2B marketing, few attempts have been made to synthesise the literature. The study contributes by 1) obtaining and comparing the most influential works based on a series of analyses; 2) identifying five domains of research into how AI can be used to facilitate B2B marketing innovation; and 3) classifying relevant articles into five time periods in order to identify both past trends and future directions in this specific field.

    Self-adaptive mobile web service discovery framework for dynamic mobile environment

    The advancement in mobile technologies has undoubtedly turned the mobile web service (MWS) into a significant computing resource in a dynamic mobile environment (DME). Discovery is one of the critical stages in the MWS life cycle, identifying the most relevant MWS for a particular task as per the request's context needs. While traditional service discovery frameworks, which assume a static world with a predetermined context, are constrained in a DME, adaptive solutions show potential. Unfortunately, the effectiveness of these frameworks is plagued by three problems. Firstly, the coarse-grained MWS categorization approach fails to deal with the proliferation of functionally similar MWS. Secondly, context models constricted by insufficient expressiveness and inadequate extensibility compound the difficulty of describing the DME, the MWS, and the user's MWS needs. Thirdly, matchmaking requires manual adjustment and disregards the context information that triggers self-adaptation, leading to ineffective and inaccurate discovery of relevant MWS. Therefore, to address these challenges, a self-adaptive MWS discovery framework for the DME is proposed, comprising an enhanced MWS categorization approach, an extensible meta-context ontology model, and a self-adaptive MWS matchmaker. In this research, MWS categorization is achieved by extracting goals and tags from the functional description of each MWS and then subsuming k-means in the modified negative selection algorithm (M-NSA) to create categories that contain similar MWS. The meta-context ontology is designed using the lightweight unified process for ontology building (UPON-Lite) in collaboration with feature-oriented domain analysis (FODA). Self-adaptive MWS matchmaking is achieved by enabling the matchmaker to learn MWS relevance using the M-NSA and to retrieve the most relevant MWS based on the current context of the discovery. The MWS categorization approach was evaluated, and its impact on the effectiveness of the framework was assessed. The meta-context ontology was evaluated using case studies, and its impact on service relevance learning was assessed. The proposed framework was evaluated using a case study and the ProgrammableWeb dataset. It exhibits significant improvements in terms of binary relevance, graded relevance, and statistical significance, with the highest average precision value of 0.9167. This study demonstrates that the proposed framework is accurate and effective for service-based application designers and other MWS clients.
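As a simplified illustration of tag-based matchmaking (the framework's actual M-NSA-based matchmaker and categorization are more sophisticated), services described by functional tags can be ranked by their overlap with the request's context needs; the catalogue below is invented:

```python
# Toy matchmaker: rank services by Jaccard overlap between their functional
# tags and the tags expressing the request's context needs.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

services = {   # invented MWS catalogue: name -> functional tags
    "weather-basic": {"weather", "forecast"},
    "weather-geo":   {"weather", "forecast", "location"},
    "traffic-live":  {"traffic", "location", "realtime"},
}

def discover(context_needs: set, top_k: int = 2):
    ranked = sorted(services,
                    key=lambda s: jaccard(services[s], context_needs),
                    reverse=True)
    return ranked[:top_k]

print(discover({"weather", "location"}))   # ['weather-geo', 'weather-basic']
```

A self-adaptive matchmaker would additionally learn which tags matter in the current context rather than weighting all overlaps equally.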