
    The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka

    It was evident from the literature that the perceived value delivery of the global software engineering industry is low due to various factors. This research therefore examines global software product companies in Sri Lanka to explore the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and to critically evaluate their impact on software product companies, helping to maximise value addition and ultimately assure the sustainability of the industry. An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed-methods approach was employed because the literature alone was inadequate to investigate the problem effectively and formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the disciplines in the targeted organisations, and these were combined with the literature findings as well as the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing the total responses received, 371 responses were considered for data analysis in SPSS 21 at an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After the reliability of the dataset was assured, correlation, multiple regression, and analysis of variance (ANOVA) tests were carried out to meet the research objectives. Five determinants of value addition were identified, along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional, self-organised teams built around value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, the right selection of tools, and the right infrastructure increase value delivery. Conversely, the constraints on value addition are poor interconnection of internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements, and inadequate focus on technology infrastructure. The findings add to the existing body of knowledge on increasing value addition through effective processes, practices, and tools, and on the impact of applying the same inaccurately in the global software engineering industry.
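
    As a sketch of how this analysis sequence could be reproduced outside SPSS, the Python fragment below runs the same steps: an internal consistency (Cronbach's alpha) check, a correlation pass, a multiple regression of value addition on the five determinants, and an ANOVA judged at the 0.05 alpha level. The column names (staffing, delivery_process, tools, governance, infrastructure, value_addition) and the input file are hypothetical placeholders, not the study's actual instrument.

```python
import pandas as pd
import pingouin as pg
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical cleaned survey data: one row per respondent (n = 371).
df = pd.read_csv("survey_responses.csv")

# Internal consistency of the item battery (Cronbach's alpha).
items = df[["staffing", "delivery_process", "tools", "governance",
            "infrastructure"]]
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.3f}, 95% CI = {ci}")

# Correlations between each determinant and value addition.
print(df.corr(numeric_only=True)["value_addition"])

# Multiple regression of value addition on the five determinants,
# followed by an ANOVA table.
model = ols("value_addition ~ staffing + delivery_process + tools"
            " + governance + infrastructure", data=df).fit()
print(model.summary())
print(sm.stats.anova_lm(model, typ=2))
```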

    Annals [...].

    Pedometrics: innovation in the tropics; Legacy data: how to make it useful?; Advances in soil sensing; Pedometric guidelines for systematic soil surveys. Online event. Coordinated by: Waldir de Carvalho Junior, Helena Saraiva Koenow Pinheiro, Ricardo Simão Diniz Dalmolin.

    From wallet to mobile: exploring how mobile payments create customer value in the service experience

    This study explores how mobile proximity payments (MPP) (e.g., Apple Pay) create customer value in the service experience compared to traditional payment methods (e.g., cash and card). The main objectives were, firstly, to understand how customer value manifests as an outcome in the MPP service experience and, secondly, to understand how the customer's activities in the process of using MPP create customer value. To achieve these objectives, a conceptual framework is built upon the Grönroos-Voima Value Model (Grönroos and Voima, 2013) and uses the Theory of Consumption Value (Sheth et al., 1991) to determine the customer value constructs for MPP, complemented with Script theory (Abelson, 1981) to determine the value-creating activities the consumer performs in the process of paying with MPP. The study uses a sequential exploratory mixed-methods design, wherein the first, qualitative stage uses two methods: self-observations (n=200) and semi-structured interviews (n=18). The subsequent, quantitative stage uses an online survey (n=441) and Structural Equation Modelling analysis to further examine the relationships and effects between the value-creating activities and the customer value constructs identified in stage one. The academic contributions include the development of a model of mobile payment services value creation in the service experience, the introduction of the concept of in-use barriers, which occur after adoption and constrain consumers' existing use of MPP, and the demonstration of the importance of the mobile in-hand momentary condition as an antecedent state. Additionally, the customer value perspective of this thesis demonstrates an alternative to the dominant Information Technology approaches to researching mobile payments and broadens the view of technology from purely an object a user interacts with to an object that is immersed in consumers' daily life.
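
    The Structural Equation Modelling stage could be sketched in Python with the semopy package, as below. The thesis's measurement model is not spelled out here, so the constructs, indicators, structural path, and input file in this fragment are illustrative placeholders only.

```python
import pandas as pd
from semopy import Model

# Hypothetical measurement model: two consumption-value constructs
# (after Sheth et al., 1991), each measured by three survey items and
# regressed on a composite score for value-creating activities.
DESC = """
utilitarian_value =~ u1 + u2 + u3
emotional_value =~ e1 + e2 + e3
utilitarian_value ~ activities
emotional_value ~ activities
"""

data = pd.read_csv("mpp_survey.csv")  # placeholder for the n=441 survey
model = Model(DESC)
model.fit(data)
print(model.inspect())  # parameter estimates, standard errors, p-values
```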

    Innovative Hybrid Approaches for Vehicle Routing Problems

    This thesis deals with the efficient resolution of Vehicle Routing Problems (VRPs). The first chapter addresses the archetype of all VRPs: the Capacitated Vehicle Routing Problem (CVRP). Despite having been introduced more than 60 years ago, it remains an extremely challenging problem. In this chapter I design a Fast Iterated-Local-Search Localized Optimization algorithm for the CVRP, shortened to FILO. The simplicity of the CVRP definition allowed me to experiment with advanced local search acceleration and pruning techniques that eventually became the core optimization engine of FILO. FILO was experimentally shown to be extremely scalable and able to solve very large-scale instances of the CVRP in a fraction of the computing time required by existing state-of-the-art methods, while still obtaining solutions of competitive quality. The second chapter deals with an extension of the CVRP called the Extended Single Truck and Trailer Vehicle Routing Problem, or simply XSTTRP. The XSTTRP models a broad class of VRPs in which a single vehicle, composed of a truck and a detachable trailer, has to serve a set of customers with accessibility constraints that make some of them unreachable by the entire vehicle. This problem moves towards VRPs with more realistic constraints and models scenarios such as parcel deliveries in crowded city centers or rural areas, where maneuvering a large vehicle is forbidden or dangerous. The XSTTRP generalizes several well-known VRPs, such as the Multiple Depot VRP and the Location Routing Problem. For its solution I developed a hybrid metaheuristic which combines fast heuristic optimization with a polishing phase based on the resolution of a limited set partitioning problem. Finally, the thesis includes a final chapter aimed at guiding the computational evaluation of new approaches to VRPs proposed by the machine learning community.
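
    At its core, an iterated local search such as FILO alternates perturbation and local search around an incumbent solution. The skeleton below is a generic, didactic Python sketch assuming caller-supplied local_search, perturb, and cost functions; FILO's actual contribution lies in the localized acceleration and pruning techniques layered on top of such a loop, which are not reproduced here.

```python
def iterated_local_search(initial, local_search, perturb, cost, max_iters=1000):
    """Generic iterated-local-search skeleton (didactic sketch, not FILO).

    initial      -- a feasible starting solution
    local_search -- function mapping a solution to a local optimum
    perturb      -- function applying a random shake to a solution
    cost         -- function returning a solution's objective value
    """
    best = current = local_search(initial)
    for _ in range(max_iters):
        candidate = local_search(perturb(current))
        if cost(candidate) < cost(current):
            current = candidate          # move to the improving solution
        if cost(candidate) < cost(best):
            best = candidate             # update the incumbent
    return best
```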

    Developing automated meta-research approaches in the preclinical Alzheimer's disease literature

    Alzheimer’s disease is a devastating neurodegenerative disorder for which there is no cure. A crucial part of the drug development pipeline involves testing therapeutic interventions in animal disease models. However, promising findings in preclinical experiments have not translated into clinical trial success. Reproducibility has often been cited as a major issue affecting biomedical research, where experimental results obtained in one laboratory cannot be replicated in another. By using meta-research (research on research) approaches such as systematic reviews, researchers aim to identify and summarise all available evidence relating to a specific research question. By conducting a meta-analysis, researchers can also combine the results from different experiments statistically to understand the overall effect of an intervention and to explore reasons for the variation seen across different publications. Systematic reviews of the preclinical Alzheimer’s disease literature could inform decision making, encourage research improvement, and identify gaps in the literature to guide future research. However, due to the vast amount of potentially useful evidence from animal models of Alzheimer’s disease, it remains difficult to make sense of and utilise these data effectively. Systematic reviews are common practice within evidence-based medicine, yet their application to preclinical research is often limited by the time and resources required. In this thesis, I develop, build upon, and implement automated meta-research approaches to collect, curate, and evaluate the preclinical Alzheimer’s literature. I searched several biomedical databases to obtain all research relevant to Alzheimer’s disease. I developed a novel deduplication tool to automatically identify and remove duplicate publications found across different databases with minimal human effort. I trained a crowd of reviewers to annotate a subset of the publications identified and used these data to train a machine learning algorithm to screen the remaining publications for relevance. I developed text-mining tools to extract model, intervention, and treatment information from publications, and I improved existing automated tools for extracting reported measures to reduce the risk of bias. Using these tools, I created a categorised database of research in transgenic Alzheimer’s disease animal models and a visual summary of this dataset on an interactive, openly accessible online platform. Using the techniques described, I also identified relevant publications within the categorised dataset to perform systematic reviews of two key outcomes of interest in transgenic Alzheimer’s disease models: (1) synaptic plasticity and transmission in hippocampal slices and (2) motor activity in the open field test. Over 400,000 publications were identified across the biomedical research databases, of which 230,203 were unique. In a performance evaluation across different preclinical datasets, the automated deduplication tool I developed identified over 97% of duplicate citations and had an error rate similar to that of human performance. When evaluated on a test set of publications, the machine learning classifier trained to identify relevant research in transgenic models was highly sensitive (capturing 96.5% of relevant publications) and excluded 87.8% of irrelevant publications.
    Tools to identify the model(s) and outcome measure(s) within the full text of publications may reduce the burden on reviewers and were found to be more sensitive than searching only the title and abstract of citations. Automated tools to assess the reporting of risk-of-bias measures were highly sensitive and could have the potential to monitor research improvement over time. The final dataset of categorised Alzheimer’s disease research contained 22,375 publications, which were then visualised in the interactive web application. Within the application, users can see how many publications report measures to reduce the risk of bias and how many have been classified as using each transgenic model, testing each intervention, and measuring each outcome. Users can also filter to obtain curated lists of relevant research, allowing them to perform systematic reviews at an accelerated pace, with reduced effort required to search across databases and a reduced number of publications to screen for relevance. Both systematic reviews and meta-analyses highlighted failures to report key methodological information within publications. Poor transparency of reporting limited the statistical power available to understand the sources of between-study variation. However, some variables were found to explain a significant proportion of the heterogeneity. The transgenic animal model had a significant impact on results in both reviews. For certain open field test outcomes, the wall colour of the open field arena and the reporting of measures to reduce the risk of bias were found to affect results. For in vitro electrophysiology experiments measuring synaptic plasticity, several electrophysiology parameters, including the magnesium concentration of the recording solution, were found to explain a significant proportion of the heterogeneity. Automated meta-research approaches and curated web platforms summarising preclinical research have the potential to accelerate the conduct of systematic reviews and maximise the potential of existing evidence to inform translation.
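
    As an illustration of the kind of logic behind automated citation deduplication, the fragment below flags two records as duplicates when their publication years match and their normalised titles are near-identical. It is a minimal sketch of the general approach, not the tool developed in the thesis, and the example records are invented.

```python
import re
from difflib import SequenceMatcher

def normalise(title: str) -> str:
    """Lowercase a title and collapse punctuation/whitespace to single spaces."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def is_duplicate(rec_a: dict, rec_b: dict, threshold: float = 0.95) -> bool:
    """Treat two citation records as duplicates when the years match and
    the normalised titles are near-identical."""
    if rec_a.get("year") != rec_b.get("year"):
        return False
    ratio = SequenceMatcher(None, normalise(rec_a["title"]),
                            normalise(rec_b["title"])).ratio()
    return ratio >= threshold

# Invented example records retrieved from two different databases.
a = {"title": "Tau pathology in transgenic mice: a systematic review.", "year": 2020}
b = {"title": "Tau Pathology in Transgenic Mice - A Systematic Review", "year": 2020}
print(is_duplicate(a, b))  # True under this heuristic
```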

    Great expectations: unsupervised inference of suspense, surprise and salience in storytelling

    Stories interest us not because they are a sequence of mundane and predictable events but because they have drama and tension. Crucial to creating dramatic and exciting stories are surprise and suspense. Likewise, certain events are key to the plot and more important than others; this importance is referred to as salience. Inferring suspense, surprise, and salience is highly challenging for computational systems, because all these elements require a strong comprehension of the characters and their motivations, places, changes over time, and the cause and effect of complex interactions. Recent advances in machine learning (often called deep learning) have substantially improved performance on many language-related tasks, including story comprehension and story writing. Most of these systems rely on supervision; that is, huge numbers of people need to tag large quantities of data to tell these systems what to learn. An example would be tagging which events are suspenseful. This is highly inflexible and costly. Instead, the thesis trains a series of deep learning models via only reading stories, a self-supervised (or unsupervised) approach. Narrative theory methods (rules and procedures) are applied to the knowledge built into the deep learning models to directly infer suspense, surprise, and salience in stories. Extensions add memory and external knowledge from story plots and from Wikipedia to infer salience in novels such as Great Expectations and plays such as Macbeth. Other work adapts the models as a planning system for generating new stories. The thesis finds that applying narrative theory to deep learning models can produce inferences that align with those of the typical reader. In follow-up work, these insights could help improve computer models for tasks such as automatic story writing, assistance for writing, and summarising or editing stories. Moreover, the approach of applying narrative theory to the qualities inherent in a system that learns by itself (self-supervised) from reading books, watching videos, or listening to audio is much cheaper and more adaptable to other domains and tasks. Progress in improving self-supervised systems is swift. As such, the thesis's relevance is that applying domain expertise to these systems may be a more productive approach in many areas where machine learning is applied.
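
    One way to make the idea of unsupervised surprise concrete: a self-supervised language model assigns each sentence a surprisal (its total negative log-likelihood), and spikes in surprisal can be read as candidate surprise points in a story. The sketch below uses GPT-2 via the Hugging Face transformers library as a generic illustration; it is not the thesis's model or procedure, and the two-sentence story is invented.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(sentence: str) -> float:
    """Total negative log-likelihood of a sentence under GPT-2 (in nats)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token
    return loss.item() * (ids.size(1) - 1)  # scale up to a sentence total

# Higher values suggest more surprising sentences in context-free terms.
for s in ["Pip visited Miss Havisham in her decaying mansion.",
          "The convict, not Miss Havisham, was his secret benefactor."]:
    print(f"{surprisal(s):8.2f}  {s}")
```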

    The problem of hyperbolic discounting


    The European Spallation Source neutrino super-beam conceptual design report

    A design study, named ESS νSB for European Spallation Source neutrino Super Beam, has been carried out during the years 2018–2022 on how the 5 MW proton linear accelerator of the European Spallation Source, under construction in Lund, Sweden, can be used to produce the world’s most intense long-baseline neutrino beam. The high beam intensity will allow for measuring the neutrino oscillations near the second oscillation maximum, at which the CP violation signal is close to three times higher than at the first maximum, where other experiments measure. This will enable CP violation discovery in the leptonic sector for a wider range of values of the CP-violating phase δCP and, in particular, a higher-precision measurement of δCP. The present Conceptual Design Report describes the results of the design study of the required upgrade of the ESS linac, of the accumulator ring used to compress the linac pulses from 2.86 ms to 1.2 μs, and of the target station, where the 5 MW proton beam is used to produce the intense neutrino beam. It also presents the design of the near detector, which is used to monitor the neutrino beam as well as to measure neutrino cross sections, and of the large underground far detector located 360 km from ESS, where the magnitude of the oscillation appearance of νe from νμ is measured. The physics performance of the ESS νSB research facility has been evaluated, demonstrating that after 10 years of data-taking, leptonic CP violation can be detected with a significance of more than 5 standard deviations over 70% of the range of values that the CP violation phase angle δCP can take, and that δCP can be measured with a standard error of less than 8° irrespective of the measured value of δCP. These results demonstrate the uniquely high physics performance of the proposed ESS νSB research facility.

    Metamodern Strategy: A System Of Multi-Ontological Sense Making

    Multi-ontological sense making in irreducible social systems requires the use of different worldviews to generate contextually appropriate understandings and insights for action in different system states. While models exist for describing complex dynamics in social systems, no frameworks or aids exist to explain the system of worldviews. This dissertation developed a conceptual scheme that will aid multi-ontological sense making in social systems. This conceptual scheme has both theoretical and practical implications for visualizing, understanding, and responding to social systems and, ultimately, to complexity. To develop this new conceptual scheme, a qualitative meta-synthesis approach was adopted to develop theory and to build a framework for classifying management approaches, tools, and techniques to their corresponding worldviews for use in dynamic and complicated social systems. The research design was sequential, with four phases. In phase one, a content analysis of 16 worldviews was conducted to develop a classification framework for worldviews. In phase two, the worldview classification framework was applied to 35 strategy consulting approaches to categorize them by their differing underlying worldviews and to understand their ontological mapping. In phase three, the data were analyzed; the results showed that strategy consulting engagements cast sense making in social systems primarily into three simplified quadrants: the simple, the complex, and the complicated. The results further showed that only the process consulting approaches adopted a multi-dimensional, worldview-driven approach to social systems, an approach that moved beyond the simplified states of the expert, doctor-patient, and emergent approaches to strategy consulting. In phase four, a new theory of sense making was developed: the aspectus system. The aspectus system stresses the importance of segregating sense making activities in social systems into two distinct worldview-driven categories: (a) simplified sense making, which informs and is followed by (b) metamodern sense making. In doing so, the aspectus system separates worldview-driven sense making in social systems into its own domain, emphasizing that social systems must be considered as both complex and complicated and as distinct from other types of systems. The application of the aspectus system in shared sense making was then tested in a thought experiment to demonstrate how it should be applied in practice. The results indicate that a worldview-driven, metamodern approach to multi-ontological sense making in irreducible complex and complicated social systems generates contextually appropriate models for understanding, insights, and actions.