
    On the real world practice of Behaviour Driven Development

    Surveys of industry practice over the last decade suggest that Behaviour Driven Development (BDD) is a popular Agile practice. For example, 19% of respondents to the 14th State of Agile annual survey reported using BDD, placing it in the top 13 practices reported. As well as potential benefits, the adoption of BDD necessarily involves the additional cost of writing and maintaining Gherkin features and scenarios and, if used for acceptance testing, the associated step functions. Yet there is a lack of published literature exploring how BDD is used in practice and the challenges experienced by real-world software development efforts. This gap is significant because without understanding current real-world practice, it is hard to identify opportunities to address and mitigate challenges. To address this gap concerning the challenges of using BDD, this thesis reports on a research project which explored: (a) the challenges of applying agile and undertaking requirements engineering in a real-world context; (b) the challenges of applying BDD specifically; and (c) the application of BDD in open-source projects, to understand challenges in this different context. For this purpose, we progressively conducted two case studies, two series of interviews, four iterations of action research, and an empirical study. The first case study was conducted in an avionics company to discover the challenges of using an agile process in a large-scale safety-critical project environment. Since requirements management was found to be one of the biggest challenges during that case study, we decided to investigate BDD because of its reputation for requirements management. The second case study was conducted in the same company, with the aim of discovering the challenges of using BDD in real life. The case study was complemented with an empirical study of the practice of BDD in open-source projects, taking a study sample from the GitHub open-source collaboration site.
As a result of this PhD research, we were able to discover: (i) the challenges of using an agile process in a large-scale safety-critical organisation, (ii) the current state of BDD in practice, (iii) technical limitations of Gherkin (i.e., the language for writing requirements in BDD), (iv) the challenges of using BDD in a real project, and (v) bad smells in the Gherkin specifications of open-source projects on GitHub. We also present a brief comparison between the theoretical description of BDD and BDD in practice. This research, therefore, presents lessons learned from BDD in practice, and serves as a guide for software practitioners planning to use BDD in their projects.
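To make the terminology above concrete, the sketch below pairs a Gherkin-style scenario with the kind of step functions that BDD tools bind to it. This is an illustrative, self-contained Python sketch; the scenario text, step patterns, and the tiny matcher are hypothetical and not taken from the thesis or from any specific BDD framework such as Cucumber or behave:

```python
import re

# A hypothetical Gherkin scenario of the kind discussed above.
SCENARIO = """\
Given a registered user "alice"
When the user logs in with a valid password
Then the dashboard is shown
"""

# Step functions: each Gherkin step is bound to code via a regex pattern.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'Given a registered user "(\w+)"')
def given_user(ctx, name):
    ctx["users"] = {name}

@step(r'When the user logs in with a valid password')
def when_login(ctx):
    ctx["logged_in"] = True

@step(r'Then the dashboard is shown')
def then_dashboard(ctx):
    assert ctx.get("logged_in")

def run(scenario):
    """Match each scenario line against the registered steps and run it."""
    ctx = {}
    for line in scenario.strip().splitlines():
        for pattern, fn in STEPS:
            m = pattern.fullmatch(line.strip())
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise LookupError(f"no step matches: {line!r}")
    return ctx
```

In real BDD toolchains the matcher is supplied by the framework; maintaining these pattern-to-function bindings as scenarios evolve is precisely the maintenance cost the abstract describes.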

    Proceedings of the 10th International congress on architectural technology (ICAT 2024): architectural technology transformation.

    The profession of architectural technology is influential in the transformation of the built environment regionally, nationally, and internationally. The congress provides a platform for industry, educators, researchers, and the next generation of built environment students and professionals to showcase where their influence is transforming the built environment through novel ideas, businesses, leadership, innovation, digital transformation, research and development, and sustainable, forward-thinking technological and construction assembly design.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This ļ¬fth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different ļ¬elds of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. First Part of this book presents some theoretical advances on DSmT, dealing mainly with modiļ¬ed Proportional Conļ¬‚ict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classiļ¬ers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. 
Because more applications of DSmT have emerged in the past years since the apparition of the fourth book of DSmT in 2015, the second part of this volume is about selected applications of DSmT mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender system, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identiļ¬cation of maritime vessels, fusion of support vector machines (SVM), Silx-Furtif RUST code library for information fusion including PCR rules, and network for ship classiļ¬cation. Finally, the third part presents interesting contributions related to belief functions in general published or presented along the years since 2015. These contributions are related with decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classiļ¬cation, and hybrid techniques mixing deep learning with belief functions as well
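As context for the PCR rules mentioned above, the following sketch implements PCR5 combination of two basic belief assignments in the simplest possible setting: singleton, mutually exclusive hypotheses only. This is an assumed, simplified illustration written for this summary, not code from the volume; the general PCR5/PCR6 rules also handle composite focal elements and vacuous assignments:

```python
def pcr5(m1, m2):
    """PCR5 fusion of two basic belief assignments defined over
    mutually exclusive singleton hypotheses (a simplified case)."""
    hyps = sorted(set(m1) | set(m2))
    # Conjunctive (non-conflicting) part: products on matching hypotheses.
    fused = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}
    for x in hyps:
        for y in hyps:
            if x == y:
                continue
            # The conflicting product m1(x)*m2(y) is redistributed back
            # to x and y proportionally to the masses that produced it.
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0:
                fused[x] += a * a * b / (a + b)
                fused[y] += a * b * b / (a + b)
    return fused
```

For example, with m1 = {A: 0.6, B: 0.4} and m2 = {A: 0.7, B: 0.3}, the conjunctive part gives 0.42 and 0.12, and the conflicting products 0.18 and 0.28 are redistributed so that the fused masses still sum to one.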

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools providing abstraction and automation, development can be extremely costly due to the sheer complexity of the systems and the nuances within them. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof with a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which gives users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems.
Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
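The property-based testing idea mentioned above can be illustrated independently of Cogent. The hand-rolled Python sketch below is a generic illustration with hypothetical names, not Cogent's framework; real tools such as QuickCheck or Hypothesis add shrinking and smarter generators. It checks a functional property of a small program against many random inputs:

```python
import random

def insert_sorted(xs, v):
    """Insert v into a sorted list xs, keeping it sorted (function under test)."""
    for i, x in enumerate(xs):
        if v <= x:
            return xs[:i] + [v] + xs[i:]
    return xs + [v]

def check_property(prop, gen, trials=200, seed=0):
    """Run `prop` on `trials` random inputs; return a counterexample or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        if not prop(*case):
            return case
    return None

def gen_case(rng):
    # Generate a random sorted list and a random value to insert.
    xs = sorted(rng.randint(-50, 50) for _ in range(rng.randint(0, 10)))
    return xs, rng.randint(-50, 50)

def prop_stays_sorted(xs, v):
    # Property: inserting into a sorted list yields the fully sorted result.
    return insert_sorted(xs, v) == sorted(xs + [v])

counterexample = check_property(prop_stays_sorted, gen_case)
```

The appeal for a verification workflow is that the same property later proved formally can first be tested cheaply, catching specification mistakes before any proof effort is spent.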

    The Impact of Participatory Budgeting on Health and Well-Being: A Qualitative Case Study of a Deprived Community in London

    Background Participatory budgeting (PB) is a democratic innovation that enables residents to participate directly and collectively decide how to spend public money in their community. Research demonstrates that PB improves social well-being through governance, citizens' participation, empowerment, and improved democracy. Since 2000, PB has increasingly been used in the UK in community development approaches for improving health and well-being outcomes for people living in deprived communities. Yet little is known about how and why PB may impact health and well-being in deprived communities of the UK. This PhD study sought to explore and explain how the application of PB in the Well London programme impacted the health and well-being of people living in a deprived community in London. Methods The study employed a qualitative case study design adopting the constructivist grounded theory (CGT) methodology of Charmaz (2006) to explore critical themes from interviews with stakeholders of the Well London programme in Haringey Borough. Forty-one stakeholders who engaged in planning, co-designing, co-commissioning and co-delivering, or benefitted from, three interventions commissioned through PB participated in this study between March 2017 and April 2018. Results A cross-case analysis revealed six pathways through which PB improved health, particularly for the underserved. PB maximised participation and meaningful engagement; enhanced direct demand and response to the community's needs; fostered individual and collective ownership; prompted action on the social determinants of health; and enabled creative partnership working. These pathways were moderated by the democratic and flexible approach of the PB ethos, particularly the inclusion of residents' voices in the planning and delivery of the interventions. Residents were motivated to act as agents to change their lives by building positive relationships based on social inclusion and integration.
As a result, residents' self-esteem, self-confidence, self-worth, sense of belonging, and community spirit increased. Residents gained a new zeal and agency to tackle the social determinants of health as they understood them in their lives. Conclusion When done well, PB can promote health and well-being and build more robust and resilient communities through community-centred democratic decision-making. Interventions should aim to increase critical consciousness, health literacy, and the capacity of deprived communities to tackle life-course issues that prevent residents from enjoying good health, and to reduce structural barriers to accessing services or interventions that improve health and reduce inequalities. The outcomes of this study have policy and practice implications for strengthening the design, commissioning, and delivery of health interventions in deprived communities of high-income countries.

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the combined application of synthetic aperture radar (SAR) and deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. SAR is an important active microwave imaging sensor, whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, autonomous driving, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.

    Restoring and valuing global kelp forest ecosystems

    Kelp forests cover ~30% of the world's coastline and are the largest biogenic marine habitat on Earth. Across their distribution, kelp forests are essential for the healthy functioning of marine ecosystems and consequently underpin many of the benefits coastal societies receive from the ocean. Concurrently, rising sea temperatures, overgrazing by marine herbivores, sedimentation, and water pollution have caused kelp forest populations to decline in most regions across the world. Effectively managing the response to these declines will be pivotal to maintaining healthy marine ecosystems and ensuring the benefits they provide are equitably distributed to coastal societies. In Chapter 1, I review how the marine management paradigm has shifted from protection to restoration, as well as the consequences of this shift. Chapter 2 introduces the field of kelp forest restoration and provides a quantitative and qualitative review of 300 years of kelp forest restoration, exploring the genesis of restoration efforts, the lessons we have learned about restoration, and how we can develop the field for the future. Chapter 3 is a direct answer to a question faced while completing Chapter 2. It details the need for a standardized marine restoration reporting framework, the benefits such a framework would provide, the challenges presented by creating one, and the solutions to these problems. Similarly, Chapter 4 is a response to the gaps discovered in Chapter 2. It explores how we can use naturally occurring positive species interactions and synergies with human activities not only to increase the benefits of ecosystem restoration but also to increase the probability that restoration is successful. The decision to restore an ecosystem or not is informed by the values and priorities of the society living in or managing that ecosystem.
Chapter 5 quantifies the fisheries production, nutrient cycling, and carbon sequestration potential of five key genera of globally distributed kelp forests. I conclude the thesis by reviewing the lessons learned and the steps required to advance the field of kelp forest restoration and conservation.

    Data fusion for a multi-scale model of a wheat leaf surface: a unifying approach using a radial basis function partition of unity method

    Realistic digital models of plant leaves are crucial to fluid dynamics simulations of droplets for optimising agrochemical spray technologies. The presence and nature of small surface features (on the order of 100 μm), such as ridges and hairs, have been shown to significantly affect droplet evaporation, and thus the leaf's potential uptake of active ingredients. We show that these microstructures can be captured by implicit radial basis function partition of unity (RBFPU) surface reconstructions from micro-CT scan datasets. However, scanning a whole leaf (20 cm²) at micron resolutions is infeasible due to both extremely large data storage requirements and scanner time constraints. Instead, we micro-CT scan only a small segment of a wheat leaf (4 mm²). We fit an RBFPU implicit surface to this segment, and an explicit RBFPU surface to a lower-resolution laser scan of the whole leaf. Parameterising the leaf using a locally orthogonal coordinate system, we then replicate the now-resolved microstructure many times across a larger, coarser representation of the leaf surface that captures important macroscale features, such as its size, shape, and orientation. The edge of one segment of the microstructure model is blended into its neighbour naturally by the partition of unity method. The result is one implicit surface reconstruction that captures the wheat leaf's features at both the micro- and macro-scales. Comment: 23 pages, 11 figures.
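To illustrate the blending idea behind a partition of unity method, here is a minimal one-dimensional analogue: two overlapping patches each get a local Gaussian RBF interpolant, and compactly supported weight functions, normalised to sum to one, blend them into a single global approximant. The node counts, shape parameter, and weight function are illustrative choices made for this sketch, not those used in the paper:

```python
import numpy as np

def gauss(r, eps=8.0):
    # Gaussian radial basis function with shape parameter eps.
    return np.exp(-(eps * r) ** 2)

def local_rbf(x_nodes, f_nodes, eps=8.0):
    """Fit a Gaussian RBF interpolant on one patch; return an evaluator."""
    A = gauss(np.abs(x_nodes[:, None] - x_nodes[None, :]), eps)
    coef = np.linalg.solve(A, f_nodes)
    return lambda x: gauss(np.abs(x[:, None] - x_nodes[None, :]), eps) @ coef

def bump(x, centre, radius):
    """Compactly supported C^1 weight, zero outside the patch."""
    r = np.clip(np.abs(x - centre) / radius, 0.0, 1.0)
    return (1 - r) ** 2 * (2 * r + 1)

f = np.sin
patches = [(0.3, 0.45), (0.7, 0.45)]   # (centre, radius); overlap on [0.25, 0.75]
fits = []
for c, rad in patches:
    xn = np.linspace(c - rad, c + rad, 12)   # local interpolation nodes
    fits.append(local_rbf(xn, f(xn)))

def rbfpu(x):
    w = np.array([bump(x, c, rad) for c, rad in patches])  # patch weights
    w = w / w.sum(axis=0)                                  # partition of unity
    s = np.array([fit(x) for fit in fits])                 # local interpolants
    return (w * s).sum(axis=0)                             # blended global value

xe = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(rbfpu(xe) - f(xe)))
```

Because each weight vanishes at its patch boundary, neighbouring local fits blend smoothly, which is the mechanism the paper uses to join replicated microstructure segments into one surface.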