1,847 research outputs found

    On the real world practice of Behaviour Driven Development

    Surveys of industry practice over the last decade suggest that Behaviour Driven Development (BDD) is a popular Agile practice. For example, 19% of respondents to the 14th State of Agile annual survey reported using BDD, placing it among the top 13 practices reported. Alongside its potential benefits, adopting BDD necessarily involves the additional cost of writing and maintaining Gherkin features and scenarios and (if used for acceptance testing) the associated step functions. Yet there is a lack of published literature exploring how BDD is used in practice and the challenges experienced by real-world software development efforts. This gap is significant because, without understanding current real-world practice, it is hard to identify opportunities to address and mitigate challenges. To address this research gap, this thesis reports on a research project which explored: (a) the challenges of applying agile and undertaking requirements engineering in a real-world context; (b) the challenges of applying BDD specifically; and (c) the application of BDD in open-source projects, to understand challenges in this different context. For this purpose, we progressively conducted two case studies, two series of interviews, four iterations of action research, and an empirical study. The first case study was conducted in an avionics company to discover the challenges of using an agile process in a large-scale, safety-critical project environment. Since requirements management was found to be one of the biggest challenges during the case study, we decided to investigate BDD because of its reputation for requirements management. The second case study was conducted in the same company with the aim of discovering the challenges of using BDD in real life. The case study was complemented with an empirical study of the practice of BDD in open-source projects, taking a study sample from the GitHub open-source collaboration site.
As a result of this Ph.D. research, we were able to discover: (i) the challenges of using an agile process in a large-scale, safety-critical organisation; (ii) the current state of BDD in practice; (iii) technical limitations of Gherkin (i.e., the language for writing requirements in BDD); (iv) the challenges of using BDD in a real project; and (v) bad smells in the Gherkin specifications of open-source projects on GitHub. We also present a brief comparison between the theoretical description of BDD and BDD in practice. This research therefore presents lessons learned from BDD in practice and serves as a guide for software practitioners planning to use BDD in their projects.
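The cost the abstract mentions — maintaining Gherkin scenarios plus the step functions that bind them to code — can be sketched in a few lines. This is a toy, pure-Python sketch of BDD-style step matching, loosely modelled on tools such as behave or pytest-bdd; the decorator, step patterns, and scenario are all invented for illustration, not any real library's API.

```python
import re

STEPS = {}

def step(pattern):
    """Register a step function under a Gherkin step pattern."""
    def register(fn):
        STEPS[re.compile(pattern)] = fn
        return fn
    return register

@step(r"the stock has (\d+) items")
def given_stock(ctx, count):
    ctx["stock"] = int(count)

@step(r"I buy (\d+) items")
def when_buy(ctx, count):
    ctx["stock"] -= int(count)

@step(r"the stock should have (\d+) items")
def then_stock(ctx, count):
    assert ctx["stock"] == int(count), ctx["stock"]

def run_scenario(lines):
    """Execute Gherkin-like step lines against the registered step functions."""
    ctx = {}
    for line in lines:
        text = line.strip().split(" ", 1)[1]  # drop the Given/When/Then keyword
        for pattern, fn in STEPS.items():
            match = pattern.fullmatch(text)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise LookupError(f"no step matches: {text!r}")
    return ctx

scenario = [
    "Given the stock has 10 items",
    "When I buy 3 items",
    "Then the stock should have 7 items",
]
result = run_scenario(scenario)
```

Every scenario sentence needs a matching step function, which is exactly the maintenance overhead the thesis studies.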

    An Approach to the Specification and Generation of Production Processes Based on Model-Driven Engineering

    In this thesis, we present an approach to production process specification and generation based on the model-driven paradigm, with the goal of increasing the flexibility of factories and responding more efficiently to the challenges that have emerged in the era of Industry 4.0. To formally specify production processes and their variations in the Industry 4.0 environment, we created a novel domain-specific modeling language whose models are machine-readable. The language can be used to model production processes that are independent of any particular production system, enabling process models to be reused across different production systems, as well as processes specific to a single production system. To automatically transform production process models that depend on a specific production system into instructions to be executed by that system's resources, we created an instruction generator. We also created generators for manufacturing documentation, which automatically transform production process models into manufacturing documents of different types. The proposed approach, domain-specific modeling language, and software solution contribute to introducing factories into the digital transformation process. As factories must rapidly adapt to new products and their variations in the era of Industry 4.0, production must be led dynamically and instructions must be sent automatically to factory resources, depending on the products to be created on the shop floor. The proposed approach contributes to creating such a dynamic environment in contemporary factories, as it allows instructions to be generated automatically from process models and sent to resources for execution.
Additionally, as there are numerous different products and product variations, keeping the required manufacturing documentation up to date becomes challenging; with the proposed approach this can be done automatically, significantly reducing process designers' time.
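The core pipeline the abstract describes — a system-independent process model bound to a concrete production system and transformed into resource instructions — can be illustrated with a toy sketch. All names (the model shape, resource identifiers, operations) are hypothetical; this is not the thesis's actual DSL, only the transformation idea.

```python
# A production-system-independent process model (hypothetical shape).
PROCESS_MODEL = {
    "product": "chair",
    "steps": [
        {"op": "cut", "material": "wood"},
        {"op": "drill", "holes": 4},
        {"op": "assemble", "parts": 5},
    ],
}

# Binding of abstract operations to resources of one concrete system.
SYSTEM_BINDING = {"cut": "saw-01", "drill": "drill-02", "assemble": "robot-03"}

def generate_instructions(model, binding):
    """Transform a process model into per-resource instruction strings."""
    instructions = []
    for step in model["steps"]:
        resource = binding[step["op"]]
        params = {k: v for k, v in step.items() if k != "op"}
        instructions.append(f"{resource}: {step['op']} {params}")
    return instructions

instructions = generate_instructions(PROCESS_MODEL, SYSTEM_BINDING)
```

Rebinding the same model to a different `SYSTEM_BINDING` yields instructions for another factory, which is the reuse the language is designed to enable.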

    An Extensible User Interface for Lean 4

    Contemporary proof assistants rely on complex automation and process libraries with millions of lines of code. At these scales, understanding the emergent interactions between components can be a serious challenge. One way of managing complexity, long established in informal practice, is through varying external representations. For instance, algebraic notation facilitates term-based reasoning, whereas geometric diagrams invoke spatial intuition. Objects viewed one way become much simpler than when viewed another. In contrast, modern general-purpose ITP systems usually support only limited, textual representations. Treating this as a problem of human-computer interaction, we aim to demonstrate that presentations - UI elements that store references to the objects they are displaying - are a fruitful way of thinking about ITP interface design. They allow us to make headway on two fronts: introspection of prover internals and support for diagrammatic reasoning. To this end we have built an extensible user interface for the Lean 4 prover with an associated ProofWidgets 4 library of presentation-based UI components. We demonstrate the system with several examples, including type information popups, structured traces, contextual suggestions, a display for algebraic reasoning, and visualizations of red-black trees. Our interface is already part of the core Lean distribution.
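The key idea of a "presentation" — a UI element that keeps a reference to the object it displays, so the same object can be shown textually or diagrammatically — can be sketched outside Lean. This is a hedged Python sketch of the concept only; the class and renderers are invented and are not the ProofWidgets 4 API.

```python
class Presentation:
    """A UI element that stores a reference to the object it displays."""

    def __init__(self, obj, render):
        self.obj = obj          # reference to the underlying object
        self.render = render    # how this presentation displays it

    def show(self):
        return self.render(self.obj)

# Two presentations of the same term: textual and tree-shaped.
term = ("add", 1, 2)

textual = Presentation(term, lambda t: f"{t[0]}({t[1]}, {t[2]})")
tree = Presentation(term, lambda t: {"node": t[0], "children": [t[1], t[2]]})

# Both views still point at the same underlying object, so the UI can
# offer object-specific actions regardless of which view is on screen.
assert textual.obj is tree.obj
```

Because the presentation retains the object itself rather than just its rendered text, switching representations needs no re-parsing of the display.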

    Stress detection in lifelog data for improved personalized lifelog retrieval system

    Stress can be categorized into acute and chronic types, with acute stress having short-term positive effects in managing hazardous situations, while chronic stress can adversely impact mental health. In a biological context, stress elicits a physiological response indicative of the fight-or-flight mechanism, accompanied by measurable changes in physiological signals such as blood volume pulse (BVP), galvanic skin response (GSR), and skin temperature (TEMP). While clinical-grade devices have traditionally been used to measure these signals, recent advancements in sensor technology enable their capture using consumer-grade wearable devices, providing opportunities for research in acute stress detection. Despite these advancements, there has been limited focus on utilizing low-resolution data obtained from sensor technology for early stress detection and evaluating stress detection models under real-world conditions. Moreover, the potential of physiological signals to infer mental stress information remains largely unexplored in lifelog retrieval systems. This thesis addresses these gaps through empirical investigations and explores the potential of utilizing physiological signals for stress detection and their integration within the state-of-the-art (SOTA) lifelog retrieval system. The main contributions of this thesis are as follows. Firstly, statistical analyses are conducted to investigate the feasibility of using low-resolution data for stress detection and emphasize the superiority of subject-dependent models over subject-independent models, thereby proposing the optimal approach to training stress detection models with low-resolution data. Secondly, longitudinal stress lifelog data is collected to evaluate stress detection models in real-world settings. It is proposed that training lifelog models on physiological signals in real-world settings is crucial to avoid detection inaccuracies caused by differences between laboratory and free-living conditions. 
Finally, a state-of-the-art interactive lifelog retrieval system called LifeSeeker is developed, incorporating a stress-moment filter function. Experimental results demonstrate that integrating this function improves the overall performance of the system in both interactive and non-interactive modes. In summary, this thesis contributes to the understanding of stress detection applied in real-world settings and showcases the potential of integrating stress information to enhance personalized lifelog retrieval performance.
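The kind of windowed feature extraction used for stress detection on wearable signals such as galvanic skin response (GSR) can be sketched briefly. This is a hedged illustration with synthetic numbers and a hypothetical threshold rule, not the thesis's models; the per-subject baseline echoes its finding that subject-dependent models outperform subject-independent ones.

```python
from statistics import mean, stdev

def window_features(signal, size):
    """Split a signal into fixed-size windows and compute mean/std features."""
    windows = [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]
    return [(mean(w), stdev(w)) for w in windows]

def detect_stress(features, baseline_mean, k=1.5):
    """Flag windows whose mean GSR rises well above a per-subject baseline."""
    return [m > k * baseline_mean for m, _ in features]

# Synthetic GSR trace: a calm window followed by an elevated one.
gsr = [0.31, 0.30, 0.32, 0.31, 0.78, 0.85, 0.80, 0.82]
features = window_features(gsr, size=4)
flags = detect_stress(features, baseline_mean=0.31)
```

A retrieval system can then use the flagged windows as a stress-moment filter over lifelog entries recorded in the same time spans.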

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and internet broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today's home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become extensions of critical networks and services; hackers can gain access to corporate data by compromising devices attached to broadband routers. These challenges underline the importance of home-based Unified Threat Management (UTM) systems: a unified threat management framework developed specifically for home and small networks is needed to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS is able to provide 99.99% accuracy with 56.83% memory improvements. IPS stands out as the most resource-intensive UTM service; SUTMS successfully reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified in SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS). SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
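The flow-analysis idea — using cheap flow statistics to pre-filter traffic before handing it to a heavier IDS engine — can be sketched with a toy rule. This is not the actual SUTMS implementation; the rule (flag flows to rarely-seen destination ports) and the sample flows are invented for illustration.

```python
from collections import Counter

def flag_anomalous_flows(flows, min_count=2):
    """Flag flows whose destination port occurs fewer than min_count times.

    flows: list of (source_host, destination_port) tuples.
    """
    port_counts = Counter(port for _, port in flows)
    return [(host, port) for host, port in flows if port_counts[port] < min_count]

flows = [
    ("10.0.0.5", 443), ("10.0.0.6", 443), ("10.0.0.7", 443),
    ("10.0.0.5", 53),  ("10.0.0.9", 53),
    ("10.0.0.8", 31337),  # a single flow to an unusual port
]
anomalies = flag_anomalous_flows(flows)
```

Only the flagged flows would need deep inspection, which is one way a flow-detection module can reduce IDS overhead as the abstract describes.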

    Exploring the Experience of Implementing a Whole School Peer Led Emotional Self-Regulation Programme.

    The aim of this qualitative study was to explore the implementation of a whole-school, peer-led emotional self-regulation breathing practice within primary schools. It is thought that two in five young people score above thresholds for emotional difficulties (Deighton et al., 2019), and the Department for Education recognises that schools can provide an environment where interventions can be used effectively to support wellbeing, implemented as part of a preventative approach (DfE, 2017). This research used semi-structured interviews and focus groups to gather the views and experiences of adults and children involved in the implementation of Take 5, a peer-led emotional self-regulation breathing practice, aiming to further understand what supports and what hinders implementation across a school. Adult participants (n=3) engaged in semi-structured interviews, and child participants (n=14) took part across three focus groups. Transcripts were analysed using reflexive thematic analysis (Braun & Clarke, 2022). The findings provide further understanding of how schools can implement and successfully embed a peer-led, whole-school programme. Recommendations for future research are discussed, along with the implications of the findings for Educational Psychology practice.

    Improving the parliamentary legislative process through the introduction of digital technologies from the legal domain

    Currently, organizations approach the improvement of their processes with a predominant focus on digitizing and automating them. From the perspective of organizations such as parliaments, process automation has been addressed, yet little has been explored in the legislative domain. On the other hand, in the legal field, the use of IT solutions for legal services has also emerged. It is in this context that this investigation presents a new approach to improving the process of drafting laws in the parliamentary sphere, starting from the definition of a domain-specific language that represents the essential concepts of laws and their relationships. The investigation carried out a systematic literature review of publications on legal ontologies as support for legal processes and activities, together with a comparative analysis of different legal ontologies. This literature review made it possible to identify concrete problems in law-production activities, which are subject to various types of errors, and allowed the definition of a reference model to represent laws in a more rigorous and explicit way, providing greater systematization and quality in the processes of law production. The analysis of legal ontologies allowed us to define this domain-specific language as a tool to support these processes, in particular processes that range from the submission of law initiatives to their subsequent authorship and approval.
Using Eclipse and Xtext technologies, the authors created the LegalStudio tool, which implements the LegalLanguage language with syntax highlighting, error checking, and an integrated editor, in which normative-writing activities can be improved and made less error-prone compared with traditional processes. An evaluation of the LegalStudio prototype for supporting the editing and validation of normative texts was carried out, highlighting its main actors, its functionalities, and aspects of the general appreciation of the tool.
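The error checking a legislative DSL editor can run over normative text can be illustrated with one toy validation pass. The rule (articles must be numbered consecutively from 1) and the draft text are invented for illustration and are not LegalStudio's actual checks.

```python
import re

def check_article_numbering(text):
    """Return error messages for articles that are out of sequence."""
    numbers = [int(n) for n in re.findall(r"Article (\d+)\.", text)]
    errors = []
    for position, number in enumerate(numbers, start=1):
        if number != position:
            errors.append(f"expected Article {position}, found Article {number}")
    return errors

draft = """Article 1. Scope of this law.
Article 2. Definitions.
Article 4. Sanctions."""

errors = check_article_numbering(draft)
```

An editor built on a grammar, as with Xtext, gets such diagnostics from the parsed model rather than from regular expressions, but the user-facing effect is the same: structural errors are flagged while the text is being written.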

    Informing Sustainable Standards in 'The Circular Economy' utilising technological and data solutions

    In our world of make, use, and throw away, we are now doing more damage to the planet than good, and this mindset has become unsustainable. One of the solutions to this problem is the 'Circular Economy' (CE). The CE replaces the concept of end-of-life production with restoration of natural systems, innovative design that designs out waste, and keeping products and materials in circulation for as long as possible. This research will use data science and statistical information to provide a solid foundation (framework) for standards developers to frame the development of standards for the CE. The research will extend the current CE model by injecting innovative ideas into areas of the CE process: data analysis, restriction of harmful chemicals to remove them from the supply chain, research into Local Value Creation (LVC), and research into Sustainable Development in the CE. The research will investigate how Radio Frequency Identification (RFID) tagging of products and materials provides a realistic way to trace products and materials in a CE management system. It will also expand the knowledge on digitization in standards development by analyzing key data streams connected to the CE in order to inform the standards development community of the need to develop a standard on the CE. This research will use a mixed methodology, combining quantitative methods (data analysis) and qualitative data (case studies), as detailed in Chapter 3 (Methodology). The data collected from the literature review will drive four main sections and four research questions in Chapter 4.
This research will analyse, through case studies and research papers, the uptake of circular thinking in China and at the Ellen MacArthur Foundation, and use the outcomes, positive or negative, to show practical applications for this research. The objective of this research is to provide a framework for a European or international standard, filling a gap, since no such standard currently exists at the European or international level that addresses the CE. A framework with inclusions from the research will form a usable output. This research will inform, or be of interest to, the standards development community, data scientists, Circular Economy practitioners, and environmental regulators. The aim is to provide a framework standard using the underlying data and statistical information needed to develop a new standard on the Circular Economy. Once a standard is developed and published, it can be used by any organisation or group of organisations, country, or individual wishing to manage, internally and collectively, their activities in order to transition to the CE and the Sustainable Development Goal of responsible consumption and production. This research has produced a framework from which sustainable standards can be developed. The data acquired from RFID tags embedded in products allows manufacturers to control and analyse the materials in their products, specifically hazardous chemicals. This data can also be used to track the product through the supply chain and on through its product life cycle. The data gathered in the product example in this thesis tracks the potential use of hazardous chemicals in the product; this is important information for end-of-life decisions to be made about the product. The data can then be used to develop requirements and testing regimes for circular economy standards.
Having identified some of the main areas of future activity in the CE, this research into the circular economy, data science, and standards development will continue to provoke research in the CE for the foreseeable future.
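The RFID-based traceability described above — a tag resolving to a bill of materials that is screened against restricted substances for end-of-life decisions — can be sketched as follows. The tag IDs, compositions, and restricted-substance list are invented for illustration, not data from the thesis.

```python
# Substances restricted for circular-economy screening (illustrative list).
RESTRICTED = {"lead", "cadmium", "mercury"}

# Registry mapping RFID tag IDs to material composition by mass fraction.
TAG_REGISTRY = {
    "RFID-0001": {"steel": 0.70, "plastic": 0.28, "lead": 0.02},
    "RFID-0002": {"aluminium": 0.60, "glass": 0.40},
}

def screen_product(tag_id):
    """Return the restricted substances present in a tagged product."""
    composition = TAG_REGISTRY[tag_id]
    return sorted(s for s in composition if s in RESTRICTED)

flagged = {tag: screen_product(tag) for tag in TAG_REGISTRY}
```

A standard could then specify both the data a tag must resolve to and the screening regime applied at each stage of the supply chain.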

    The Role of Synthetic Data in Improving Supervised Learning Methods: The Case of Land Use/Land Cover Classification

    A thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management. In remote sensing, Land Use/Land Cover (LULC) maps constitute important assets for various applications, promoting environmental sustainability and good resource management. However, their production continues to be a challenging task. Various factors contribute to the difficulty of generating accurate, timely updated LULC maps, whether via automatic or photo-interpreted LULC mapping. Data preprocessing, a crucial step for any Machine Learning task, is particularly important in the remote sensing domain due to the overwhelming amount of raw, unlabeled data continuously gathered from multiple remote sensing missions. However, a significant part of the state-of-the-art focuses on scenarios with full access to labeled training data with relatively balanced class distributions. This thesis focuses on the challenges found in automatic LULC classification tasks, specifically in data preprocessing. We focus on the development of novel Active Learning (AL) and imbalanced learning techniques to improve ML performance in situations with limited training data and/or the existence of rare classes. We also show that many of the contributions presented are successful not only in remote sensing problems, but also in various other multidisciplinary classification problems. The work presented in this thesis used open-access datasets to test the contributions made in imbalanced learning and AL. All the data pulling, preprocessing, and experiments are made available at https://github.com/joaopfonseca/publications. The algorithmic implementations are available in the Python package ml-research at https://github.com/joaopfonseca/ml-research.
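The simplest form of imbalanced-learning preprocessing mentioned above is oversampling the rare classes. This is a minimal sketch of random oversampling in pure Python, a stand-in for the thesis's more elaborate synthetic-data generators; the tiny LULC-flavoured sample data is invented for illustration.

```python
import random

def oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until every class matches the majority."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        for x in xs + extra:
            out_x.append(x)
            out_y.append(y)
    return out_x, out_y

# A toy imbalanced dataset: three "water" pixels, one rare "urban" pixel.
X = [[0.1], [0.2], [0.3], [0.9]]
y = ["water", "water", "water", "urban"]
X_bal, y_bal = oversample(X, y)
```

Duplicating samples only rebalances the class frequencies; methods such as SMOTE instead interpolate new synthetic samples, which is closer to the synthetic-data theme of the thesis.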

    Heterogeneous Acceleration for 5G New Radio Channel Modelling Using FPGAs and GPUs

    The abstract is in the attachment.