85 research outputs found

    Identification of top-K influential communities in big networks

    Exploring the Future Shape of Business Intelligence: Mapping Dynamic Capabilities of Information Systems to Business Intelligence Agility

    A major challenge in today’s turbulent environments is to make appropriate decisions to sustainably steer an organization. Business intelligence (BI) systems are often used as a basis for decision making. But achieving agility in BI and coping with dynamic environments is no trivial endeavor, as classical, data-warehouse (DWH)-based BI is primarily used to retrospectively reflect an organization’s performance. Using an exploratory approach, this paper investigates how current trends affect the concept of BI and thus its ability to support adequate decision making. The key focus is to understand dynamic capabilities in the field of information systems (IS) and how they are connected to BI agility. We therefore map dynamic capabilities from the IS literature to agility dimensions of BI. Additionally, we propose a structural model that focuses on DWH-based BI and analyze how current BI-related trends and environmental turbulence affect the way BI is shaped in the future.

    A consumer perspective e-commerce websites evaluation model

    Existing website evaluation methods have weaknesses such as neglecting consumer criteria, being unable to deal with qualitative criteria, and involving complex weight and score calculations. This research aims to develop a hybrid consumer-oriented e-commerce website evaluation model based on the Fuzzy Analytical Hierarchy Process (FAHP) and the Hardmard Method (HM). Four phases were involved in developing the model: requirements identification, empirical study, model construction, and model confirmation. The requirements identification and empirical study phases identified critical web-design criteria and gathered online consumers' preferences. Data collected from 152 Malaysian consumers through online questionnaires were used to identify critical e-commerce website features and their scale of importance. The new evaluation model comprises three components: first, the consumer evaluation criteria, which consist of the principles consumers consider important; second, the evaluation mechanisms, which integrate FAHP and HM and consist of mathematical expressions for handling subjective judgments and new formulas for calculating the weight and score of each criterion; and third, the evaluation procedures, which cover goal establishment, document preparation, and identification of website performance. The model was examined by six experts and applied to four case studies. The results show that the new model is practical, appropriate for evaluating e-commerce websites from the consumers' perspective, and able to calculate weights and scores for qualitative criteria in a simple way. In addition, it assists decision-makers in making decisions in a measured, objective way. The model also contributes new knowledge to the software evaluation field.
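
    The abstract does not reproduce the FAHP weight formulas, so the sketch below shows a generic Buckley-style fuzzy-AHP weight calculation with triangular fuzzy numbers; the criteria names and pairwise judgments are hypothetical, and the HM scoring step is omitted. It is only an illustration of the kind of calculation such a model performs, not the thesis's actual mechanism.

```python
import numpy as np

# Generic Buckley-style fuzzy AHP weight sketch (illustrative only).
# Each pairwise judgment is a triangular fuzzy number (l, m, u).
criteria = ["usability", "security", "content quality"]  # hypothetical criteria
pairwise = np.array([
    [(1, 1, 1),         (2, 3, 4),       (1, 2, 3)],
    [(1/4, 1/3, 1/2),   (1, 1, 1),       (1, 2, 3)],
    [(1/3, 1/2, 1),     (1/3, 1/2, 1),   (1, 1, 1)],
])  # shape: (n, n, 3)

# Fuzzy geometric mean of each row.
geo_mean = np.prod(pairwise, axis=1) ** (1.0 / len(criteria))  # (n, 3)

# Fuzzy weight: r_i multiplied by the inverse of the fuzzy sum of all r_j;
# the inverse of a triangular number reverses the (l, m, u) order.
total = geo_mean.sum(axis=0)
fuzzy_w = geo_mean / total[::-1]

# Defuzzify by centre of area and normalise to crisp weights.
crisp = fuzzy_w.mean(axis=1)
weights = crisp / crisp.sum()

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```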

    SERVICE-BASED AUTOMATION OF SOFTWARE CONSTRUCTION ACTIVITIES

    The reuse of software units, such as classes, components, and services, requires professional knowledge. Today a multiplicity of different software unit technologies, supporting tools, and related activities is used in reuse processes. Each of these reuse elements may come in many variations and may differ in the level and quality of the reuse knowledge required. In such an environment of increasing variation, and therefore an increasing need for knowledge, software engineers must acquire that knowledge to be able to perform software unit reuse activities. Many different reuse activities exist for a software unit; typical knowledge-intensive activities are transformation, integration, and deployment. In addition to the amount of knowledge required for such activities, other difficulties exist. The global industrial environment makes it challenging to identify sources of, and gain access to, knowledge. Typically, such sources (e.g., repositories) are built to search for and retrieve information about software units, not about the reuse activity knowledge required for a specific unit. Furthermore, this knowledge has to be learned and interpreted by inexperienced software engineers, and this interpretation may lead to variations in the reuse result that differ from the result intended by the knowledge creator. This makes it difficult to exchange knowledge between software engineers or global teams. Additionally, the results of reuse activities have to be repeatable and sustainable. In such a scenario, knowledge about software reuse activities has to be conveyed to inexperienced software engineers without the problems mentioned above. The literature shows a lack of techniques to store and subsequently distribute relevant reuse activity knowledge among software engineers. The central aim of this thesis is to enable inexperienced software engineers to use the knowledge required to perform reuse activities without experiencing the aforementioned problems. The reuse activities transformation, integration, and deployment were selected as the foundation for the research. Because they handle a software unit at the construction level, these activities are called Software Construction Activities (SCAcs) throughout the research. To achieve the aim, specialised software construction activity models were created and combined with an abstract software unit model. As a result, different kinds of SCAc knowledge are described and combined with the software unit artefacts needed by the SCAcs. Additionally, management (e.g., the execution of an SCAc) is provided in a service-oriented environment. Because of its focus on reuse activities, an approach that avoids changing the knowledge level of software engineers, and its abstract view on software units and activities, the object of investigation differs from other approaches that aim to solve the problem of insufficient reuse activity knowledge. The research devised novel abstraction models to describe SCAcs as knowledge models related to the relevant information of software units. The models and the targeted environment were created using standard technologies and could therefore be realised easily in a real-world setting. Software engineers were able to perform single SCAcs without having previously acquired the necessary knowledge, and the risk of failed reuse decreases because single activities can be performed.
    The analysis of the research results is based on a case study. An example reuse environment was created and tested in a case study to prove the operational capability of the approach. The main result of the research is a proven concept that enables inexperienced software engineers to reuse software units by reusing SCAcs. The research shows a significant reduction in reuse time and learning effort.

    Advanced Applications Of Big Data Analytics

    Human life is progressing with advancements in technology such as laptops, smartphones, and high-speed communication networks, which reduce the effort of our daily activities. For instance, one can chat, talk, or make video calls with friends instantly using social networking platforms such as Facebook, Twitter, Google+, and WhatsApp, while LinkedIn, Indeed, and similar sites connect employees with potential employers. The number of people using these applications is increasing day by day, and so is the amount of data they generate. Processing such vast amounts of data may require new techniques for gaining valuable insights. Network theory concepts form the core of such techniques designed to uncover valuable insights from large social network datasets. Many interesting problems, such as ranking the top-K nodes and top-K communities that can most effectively diffuse a given message into the network, restaurant recommendations, and friendship recommendations on social networking websites, can be addressed using the concept of network centrality. Network centrality measures such as in-degree centrality, out-degree centrality, eigenvector centrality, Katz broadcast centrality, Katz receive centrality, and PageRank centrality come in handy in solving these problems. In this thesis, we propose different formulae for computing the strength used to identify top-K nodes and communities that can spread viral marketing messages into the network. The strength formulae are based on Katz broadcast centrality, a resolvent matrix measure, and a personalized PageRank measure. Moreover, the effects of intercommunity and intracommunity connectivity on ranking top-K communities are studied. Top-K nodes for spreading a message effectively into the network are determined using the Katz broadcast centrality measure, and the results are compared with the top-K nodes obtained using the degree centrality measure. We also study the effect of varying α on the number of nodes in the search space. In Algorithms 2 and 3, top-K communities are obtained using the resolvent matrix and personalized PageRank measures, and the results of Algorithm 2 are studied by varying the parameter α.
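
    As an illustration of the kind of measure the thesis builds on, the sketch below ranks the top-K nodes of a stand-in network by Katz broadcast centrality, computed from the resolvent (I − αA)⁻¹·1. The graph (networkx's karate club), the damping value, and K are assumptions for illustration and do not reproduce the thesis's strength formulae.

```python
import numpy as np
import networkx as nx

# Rank top-K nodes by Katz broadcast centrality: b = (I - alpha*A)^(-1) * 1,
# with alpha kept below 1/lambda_max so the resolvent series converges.
G = nx.karate_club_graph()                      # stand-in network
A = nx.to_numpy_array(G)
lambda_max = max(abs(np.linalg.eigvals(A)))
alpha = 0.85 / lambda_max                       # damping below the spectral bound

n = A.shape[0]
broadcast = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

K = 5
top_k = np.argsort(broadcast)[::-1][:K]
for node in top_k:
    print(f"node {node}: broadcast centrality {broadcast[node]:.3f}")
```

    Choosing α as a fraction of 1/λ_max is a common convention; values close to the bound weight long walks more heavily, while small values make the ranking approach plain degree centrality.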

    Semantic model for mining e-learning usage with ontology and meaningful learning characteristics

    The use of e-learning in higher education institutions is a necessity in the learning process. E-learning accumulates vast amounts of usage data which could produce new knowledge useful for educators. The demand to gain knowledge from e-learning usage data requires a suitable mechanism to extract the right information. Current models for mining e-learning usage focus on activity usage but ignore action usage. In addition, these models lack the ability to incorporate learning pedagogy, leading to a semantic gap when annotating mined data for the education domain. Another issue is the absence of usage recommendations derived from the results of the data mining task. This research proposes a semantic model for mining e-learning usage with ontology and meaningful learning characteristics. The model starts by preparing data, including activity and action hits. The next step is to calculate meaningful hits, which are categorized into five types: active, cooperative, constructive, authentic, and intentional. The process then applies K-means clustering to group the usage data into three clusters. Lastly, the usage data are mapped into an ontology, and the ontology manager generates the meaningful usage cluster and a usage recommendation. The model was tested on three datasets from distinct courses and evaluated by mapping the results against the student learning outcomes of those courses. The results showed a positive relationship between meaningful hits and learning outcomes, and a positive relationship between the meaningful usage cluster and learning outcomes. It can be concluded that the proposed semantic model is valid at a 95% confidence level. The model is capable of mining and gaining insight into e-learning usage data and of providing usage recommendations.
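
    A minimal sketch of the clustering step described above, assuming the five meaningful-hit counts (active, cooperative, constructive, authentic, intentional) are available per learner; the synthetic data, the scaling step, and the scikit-learn usage are illustrative assumptions, not the model's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Group learners by their five "meaningful hit" counts into three usage
# clusters, as the abstract describes. Data below is fabricated.
rng = np.random.default_rng(42)
hits = rng.poisson(lam=[20, 5, 12, 8, 15], size=(60, 5))  # 60 hypothetical students

X = StandardScaler().fit_transform(hits)        # put hit counts on a common scale
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for cluster_id in range(3):
    members = hits[kmeans.labels_ == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} students, "
          f"mean hits per category {members.mean(axis=0).round(1)}")
```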

    HypertenGene: extracting key hypertension genes from biomedical literature with position and automatically-generated template features

    Background: The genetic factors leading to hypertension have been extensively studied, and large numbers of research papers have been published on the subject. One of hypertension researchers' primary tasks is to locate key hypertension-related genes in abstracts. However, gathering such information with existing tools is not easy: (1) searching for articles often returns far too many hits to browse through; (2) the search results do not highlight the hypertension-related genes discovered in the abstract; (3) even though some text mining services mark up gene names in the abstract, the key genes investigated in a paper are still not distinguished from other genes. To facilitate the information gathering process for hypertension researchers, one solution is to extract the key hypertension-related genes in each abstract. Three major tasks are involved in the construction of this system: (1) gene and hypertension named entity recognition, (2) section categorization, and (3) gene-hypertension relation extraction.
    Results: We first compare the retrieval performance achieved by individually adding template features and position features to the baseline system, and then examine the combination of both. We found that using position features can almost double the original AUC score of the baseline system (0.8140 vs. 0.4936), whereas adding template features yields only a marginal improvement (0.0197). Including both improves the AUC to 0.8184, indicating that these two sets of features are complementary and do not have overlapping effects. We then examine the performance in a different domain, diabetes, and obtain a satisfactory AUC of 0.83.
    Conclusion: Our approach successfully exploits template features to recognize true hypertension-related gene mentions and position features to distinguish key genes from other related genes. Templates are automatically generated and checked by biologists to minimize labor costs. Our approach integrates the advantages of machine learning models and pattern matching. To the best of our knowledge, this is the first systematic study of extracting hypertension-related genes and the first attempt to create a hypertension-gene relation corpus based on the GAD database. Furthermore, our paper proposes and tests novel features for extracting key hypertension genes, such as relative position, section, and template features, which could also be applied to key-gene extraction for other diseases.
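
    A minimal sketch of the evaluation idea, assuming position-style and template-style features per gene mention; the feature names, the synthetic labels, and the logistic regression classifier are assumptions for illustration only and do not reproduce the paper's model or corpus.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Score gene mentions as "key gene" vs. "other gene" from position-style and
# template-style features and report AUC. All data below is fabricated.
rng = np.random.default_rng(0)
n = 500
relative_position = rng.uniform(0, 1, n)        # mention position within the abstract
in_results_section = rng.integers(0, 2, n)      # section feature (0/1)
template_match = rng.integers(0, 2, n)          # fired an auto-generated template?
X = np.column_stack([relative_position, in_results_section, template_match])

# Synthetic labels loosely correlated with the features.
logit = -2 + 1.5 * template_match + 1.0 * in_results_section - 0.5 * relative_position
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```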

    Synthesizing Adaptive Test Strategies from Temporal Logic Specifications

    Constructing good test cases is difficult and time-consuming, especially if the system under test is still under development and its exact behavior is not yet fixed. We propose a new approach to compute test strategies for reactive systems from a given temporal logic specification using formal methods. The computed strategies are guaranteed to reveal certain simple faults in every realization of the specification and for every behavior of the uncontrollable part of the system's environment. The proposed approach supports different assumptions on occurrences of faults (ranging from a single transient fault to a persistent fault) and by default aims at unveiling the weakest one. Based on well-established hypotheses from fault-based testing, we argue that such tests are also sensitive to more complex bugs. Since the specification may not define the system behavior completely, we use reactive synthesis algorithms with partial information. The computed strategies are adaptive test strategies that react to the system's behavior observed at runtime. We work out the underlying theory of adaptive test strategy synthesis and present experiments for a safety-critical component of a real-world satellite system. We demonstrate that our approach can be applied to industrial specifications and that the synthesized test strategies are capable of detecting bugs that are hard to detect with random testing.
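
    The synthesis itself requires a reactive-synthesis tool and a temporal logic specification, but the notion of an adaptive test strategy can be illustrated with a minimal sketch: the next test input is chosen from the outputs observed so far rather than fixed up front. The toy system under test, its injected fault, and the strategy below are assumptions for illustration only.

```python
# Minimal illustration of an *adaptive* test strategy: the next input depends
# on the outputs observed so far, instead of following a fixed input sequence.
# The system under test and the strategy are made up; the paper synthesizes
# such strategies from temporal logic specifications with formal tools.

def system_under_test(state, inp):
    """Toy reactive system with an injected fault in its 'stop' handling."""
    if inp == "reset":
        return "idle", "ack"
    if state == "idle" and inp == "start":
        return "busy", "ack"
    if state == "busy" and inp == "stop":
        return "error", "nack"          # injected fault: should return to 'idle'
    return state, "nack"

def adaptive_strategy(history):
    """Choose the next input based on the outputs observed so far."""
    if not history:
        return "start"
    last_output = history[-1][1]
    return "stop" if last_output == "ack" else "reset"

state, history = "idle", []
for _ in range(4):
    inp = adaptive_strategy(history)
    state, out = system_under_test(state, inp)
    history.append((inp, out))
    print(f"input={inp!r} -> output={out!r}")
```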

    A review of software change impact analysis

    Change impact analysis is required for constantly evolving systems to support the comprehension, implementation, and evaluation of changes. A great deal of research effort has been spent on this subject over the last twenty years, and many approaches have been published. However, no extensive attempt has been made to summarize and review the published approaches as a basis for further research in the area. We therefore present the results of a comprehensive investigation of software change impact analysis, based on a literature review and a taxonomy for impact analysis. The contribution of this review is threefold. First, the approaches proposed for impact analysis are explained with regard to their motivation and methodology, and are classified according to the criteria of the taxonomy to enable comparison and evaluation of the approaches proposed in the literature. Second, we evaluate our taxonomy with respect to how well its classification criteria cover the studied literature. Last, we address and discuss as yet unsolved problems, research areas, and challenges of impact analysis uncovered by our review, to illustrate possible directions for further research.