On the real world practice of Behaviour Driven Development
Surveys of industry practice over the last decade suggest that Behaviour Driven Development is a popular Agile practice. For example, 19% of respondents to the 14th State of Agile annual survey reported using BDD, placing it among the top 13 practices reported. As well as potential benefits, the adoption of BDD necessarily involves an additional cost of writing and maintaining Gherkin features and scenarios and (if used for acceptance testing) the associated step functions. Yet there is a lack of published literature exploring how BDD is used in practice and the challenges experienced by real-world software development efforts. This gap is significant because, without understanding current real-world practice, it is hard to identify opportunities to address and mitigate challenges. To address this research gap concerning the challenges of using BDD, this thesis reports on a research project which explored: (a) the challenges of applying agile and undertaking requirements engineering in a real-world context; (b) the challenges of applying BDD specifically; and (c) the application of BDD in open-source projects, to understand challenges in this different context.
For this purpose, we progressively conducted two case studies, two series of interviews, four iterations of action research, and an empirical study. The first case study was conducted in an avionics company to discover the challenges of using an agile process in a large-scale, safety-critical project environment. Since requirements management was found to be one of the biggest challenges during the case study, we decided to investigate BDD because of its reputation for requirements management. The second case study was conducted in the same company, with the aim of discovering the challenges of using BDD in real life. The case study was complemented with an empirical study of the practice of BDD in open-source projects, taking a study sample from the GitHub open-source collaboration site.
As a result of this Ph.D. research, we were able to discover: (i) the challenges of using an agile process in a large-scale safety-critical organisation; (ii) the current state of BDD in practice; (iii) technical limitations of Gherkin (i.e., the language for writing requirements in BDD); (iv) the challenges of using BDD in a real project; and (v) bad smells in the Gherkin specifications of open-source projects on GitHub. We also present a brief comparison between the theoretical description of BDD and BDD in practice. This research, therefore, presents lessons learned from BDD in practice and serves as a guide for software practitioners planning to use BDD in their projects.
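To make the "cost of step functions" concrete, the following is a minimal, self-contained sketch (not from the thesis) of how BDD frameworks such as behave or Cucumber bind Gherkin step lines to step functions via patterns; the scenario and step names are invented for illustration.

```python
import re

# Registry of (pattern, function) pairs, as maintained by a BDD framework.
STEPS = []

def step(pattern):
    """Register a step function for a Gherkin step line."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"Given a stack with (\d+) items")
def given_stack(ctx, n):
    ctx["stack"] = list(range(int(n)))

@step(r"When I pop one item")
def when_pop(ctx):
    ctx["popped"] = ctx["stack"].pop()

@step(r"Then the stack has (\d+) items")
def then_size(ctx, n):
    assert len(ctx["stack"]) == int(n)

def run_scenario(lines):
    """Match each Gherkin line to a registered step and execute it."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            m = pattern.fullmatch(line)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise LookupError(f"no step matches: {line}")
    return ctx

ctx = run_scenario([
    "Given a stack with 3 items",
    "When I pop one item",
    "Then the stack has 2 items",
])
```

Every new Gherkin phrasing requires a matching step function, which is exactly the maintenance cost the abstract refers to.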
Configuration Management of Distributed Systems over Unreliable and Hostile Networks
Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication. In the Hidden Master architecture, communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
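A rough sketch of the asynchronous master-to-agent flow described above: the master signs a command and deposits it at an intermediate node, and the agent later fetches and verifies it, so agent and master never connect directly. All names and the wire format are invented, and HMAC with a shared key stands in for the asymmetric signatures a real deployment would use to protect the master secret key.

```python
import hashlib
import hmac
import json

MASTER_KEY = b"master-secret"           # held only by the master in a real system

def master_publish(dropbox, command):
    """Master signs a command and deposits it at the intermediate node."""
    payload = json.dumps(command).encode()
    sig = hmac.new(MASTER_KEY, payload, hashlib.sha256).hexdigest()
    dropbox.append({"payload": payload, "sig": sig})

def agent_poll(dropbox, verify_key):
    """Agent asynchronously pulls commands and verifies their signatures."""
    accepted = []
    while dropbox:
        msg = dropbox.pop(0)
        expected = hmac.new(verify_key, msg["payload"], hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, msg["sig"]):
            accepted.append(json.loads(msg["payload"]))
    return accepted

dropbox = []                            # stands in for an intermediate relay node
master_publish(dropbox, {"resource": "file", "path": "/etc/motd", "state": "present"})
commands = agent_poll(dropbox, MASTER_KEY)
```

Because the agent only ever talks to the intermediate node, the architecture tolerates an unstable topology: the drop box can be any reachable store, and the master can stay hidden.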
Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools in high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without stable topology due to the asynchronous nature of the Hidden Master architecture.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of the organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, both for defining new configurations in unexpected situations using the base resources and for abstracting those using standard language features, and that such a system seems easy to learn.
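An illustrative sketch (not the thesis prototype) of an idempotent base resource written in an imperative general-purpose language: applying it twice leaves the system in the same state as applying it once, and the return value reports whether anything changed.

```python
import os
import tempfile

def ensure_file(path, content):
    """Ensure `path` exists with exactly `content`; return True if changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False            # already converged: nothing to do
    with open(path, "w") as f:
        f.write(content)
    return True                         # state was changed to converge

path = os.path.join(tempfile.mkdtemp(), "motd")
first = ensure_file(path, "hello\n")    # creates the file
second = ensure_file(path, "hello\n")   # no-op on the second run
```

Ordinary language features (functions, loops, modules) can then compose such resources into higher-level abstractions, which is the property the case studies evaluated.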
Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
Pristup specifikaciji i generisanju proizvodnih procesa zasnovan na inženjerstvu voÄ‘enom modelima (An Approach to the Specification and Generation of Production Processes Based on Model-Driven Engineering)
In this thesis, we present an approach to production process specification and generation based on the model-driven paradigm, with the goal of increasing the flexibility of factories and responding more efficiently to the challenges that have emerged in the era of Industry 4.0. To formally specify production processes and their variations in the Industry 4.0 environment, we created a novel domain-specific modeling language whose models are machine-readable. The created language can be used to model production processes that are independent of any production system, enabling process models to be reused across different production systems, as well as process models tailored to a specific production system. To automatically transform production process models dependent on a specific production system into instructions to be executed by production system resources, we created an instruction generator. We also created generators for manufacturing documentation, which automatically transform production process models into manufacturing documents of different types. The proposed approach, domain-specific modeling language, and software solution contribute to introducing factories to the digital transformation process. As factories must rapidly adapt to new products and their variations in the era of Industry 4.0, production must be dynamically led and instructions must be automatically sent to factory resources, depending on the products to be created on the shop floor. The proposed approach contributes to the creation of such a dynamic environment in contemporary factories, as it allows instructions to be generated automatically from process models and sent to resources for execution.
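A hypothetical sketch of the transformation described above: a machine-readable process model (here, plain dicts standing in for DSL models) plus a binding to one production system's resources yields concrete per-resource instructions. The product, operations, and resource names are invented for illustration.

```python
# A system-independent process model, as a DSL instance might look after parsing.
PROCESS_MODEL = {
    "product": "valve-A",
    "steps": [
        {"operation": "mill", "part": "body"},
        {"operation": "drill", "part": "body"},
        {"operation": "assemble", "part": "seal"},
    ],
}

# Binding of abstract operations to the resources of one specific system.
SYSTEM_BINDING = {"mill": "cnc-01", "drill": "cnc-01", "assemble": "robot-02"}

def generate_instructions(model, binding):
    """Transform a process model into ordered per-resource instruction lists."""
    instructions = {}
    for seq, step in enumerate(model["steps"], start=1):
        resource = binding[step["operation"]]
        instructions.setdefault(resource, []).append(
            f"{seq}: {step['operation']} {step['part']} for {model['product']}"
        )
    return instructions

instructions = generate_instructions(PROCESS_MODEL, SYSTEM_BINDING)
```

Retargeting the same model to another factory only requires a different binding, which is the reuse property the language is designed for; a documentation generator would walk the same model and emit documents instead of instructions.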
Additionally, as there are numerous different products and their variations, keeping the required manufacturing documentation up to date becomes challenging; with the proposed approach this can be done automatically, significantly reducing process designers' time.
A clinical decision support system for detecting and mitigating potentially inappropriate medications
Background: Medication errors are a leading cause of preventable harm to patients. In older adults, the impact of ageing on the therapeutic effectiveness and safety of drugs is a significant concern, especially for those over 65. Consequently, certain medications called Potentially Inappropriate Medications (PIMs) can be dangerous in the elderly and should be avoided. Tackling PIMs by health professionals and patients can be time-consuming and error-prone, as the criteria underlying the definition of PIMs are complex and subject to frequent updates. Moreover, the criteria are not available in a representation that health systems can interpret and reason with directly.
Objectives: This thesis aims to demonstrate the feasibility of using an ontology/rule-based approach in a clinical knowledge base to identify potentially inappropriate medications (PIMs). In addition, it shows how constraint solvers can be used effectively to suggest alternative medications and administration schedules that resolve or minimise the undesirable side effects of PIMs.
Methodology: To address these objectives, we propose a novel integrated approach using formal rules to represent the PIMs criteria and inference engines to perform the reasoning presented in the context of a Clinical Decision Support System (CDSS). The approach aims to detect, solve, or minimise undesirable side-effects of PIMs through an ontology (knowledge base) and inference engines incorporating multiple reasoning approaches.
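A deliberately simplified sketch of the rule-based detection step: each rule encodes one criterion and is checked against a patient record and their medication list. The rules here are loosely modelled on Beers-style criteria, but the drug names, classes, and thresholds are illustrative only and are not clinical guidance.

```python
# Each rule: a name, an applicability predicate, and suggested advice.
RULES = [
    {
        "name": "benzodiazepine in a patient over 65",
        "applies": lambda p, d: p["age"] > 65 and d["class"] == "benzodiazepine",
        "advice": "consider a non-pharmacological alternative",
    },
    {
        "name": "NSAID with chronic kidney disease",
        "applies": lambda p, d: "CKD" in p["conditions"] and d["class"] == "NSAID",
        "advice": "consider an alternative analgesic",
    },
]

def detect_pims(patient, medications):
    """Return (drug, rule name, advice) for every rule an active drug triggers."""
    findings = []
    for drug in medications:
        for rule in RULES:
            if rule["applies"](patient, drug):
                findings.append((drug["name"], rule["name"], rule["advice"]))
    return findings

patient = {"age": 72, "conditions": {"CKD"}}
meds = [{"name": "diazepam", "class": "benzodiazepine"},
        {"name": "ibuprofen", "class": "NSAID"}]
findings = detect_pims(patient, meds)
```

In the thesis's approach the criteria live in an ontology and the matching is done by an inference engine, which keeps the knowledge base updatable without changing code; the sketch above only conveys the shape of the reasoning.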
Contributions: The main contribution lies in the framework to formalise PIMs, including the steps required to define guideline requisites, creating inference rules that detect inappropriate medications and propose alternative drugs. No formalisation of the selected guideline (the Beers Criteria) can be found in the literature, and hence this thesis provides a novel ontology for it. Moreover, our process of minimising undesirable side effects offers a novel approach that enhances and optimises the drug rescheduling process, providing a more accurate way to minimise the effect of drug interactions in clinical practice.
Towards a centralized multicore automotive system
Today's automotive systems are inundated with embedded electronics to host chassis, powertrain, infotainment, advanced driver assistance systems, and other modern vehicle functions. As many as 100 embedded microcontrollers execute hundreds of millions of lines of code in a single vehicle. To control the increasing complexity in vehicle electronics and services, automakers are planning to consolidate different on-board automotive functions as software tasks on centralized multicore hardware platforms. However, these vehicle software services have different and contrasting timing, safety, and security requirements. Existing vehicle operating systems are ill-equipped to provide all the required service guarantees on a single machine. A centralized automotive system aims to tackle this by assigning software tasks to multiple criticality domains or levels according to their consequences of failure, or international safety standards like ISO 26262. This research investigates several emerging challenges in time-critical systems for a centralized multicore automotive platform and proposes a novel vehicle operating system framework to address them.
This thesis first introduces an integrated vehicle management system (VMS), called DriveOS™, for a PC-class multicore hardware platform. Its separation kernel design enables temporal and spatial isolation among critical and non-critical vehicle services in different domains on the same machine. Time- and safety-critical vehicle functions are implemented in a sandboxed real-time operating system (OS) domain, and non-critical software is developed in a sandboxed general-purpose OS (e.g., Linux, Android) domain. To leverage the advantages of model-driven vehicle function development, DriveOS provides a multi-domain application framework in Simulink. This thesis also presents a real-time task pipeline scheduling algorithm for multiprocessors that supports communication between connected vehicle services with end-to-end guarantees. The benefits and performance of the overall automotive system framework are demonstrated with hardware-in-the-loop testing using real-world applications, car datasets, and simulated benchmarks, and with an early-stage deployment in a production-grade luxury electric vehicle.
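To illustrate what an end-to-end guarantee for a task pipeline means, here is a crude schedulability-style check (not the DriveOS algorithm): it upper-bounds the worst-case end-to-end latency of a periodic sensing-to-actuation pipeline and compares it against a deadline. The stages, periods, execution times, and the 150 ms requirement are all invented for the example.

```python
def pipeline_latency(stages):
    """Pessimistic end-to-end latency bound: sum of each stage's period + WCET.

    Assumes each stage may wait up to one full period before its input is
    sampled, a deliberately conservative but safe bound for asynchronous
    periodic stages.
    """
    return sum(s["period_ms"] + s["wcet_ms"] for s in stages)

# A hypothetical sensing -> planning -> actuation pipeline.
pipeline = [
    {"name": "camera",   "period_ms": 33, "wcet_ms": 5},
    {"name": "planner",  "period_ms": 50, "wcet_ms": 12},
    {"name": "actuator", "period_ms": 10, "wcet_ms": 2},
]

latency = pipeline_latency(pipeline)
meets_deadline = latency <= 150     # hypothetical end-to-end requirement (ms)
```

A real pipeline scheduler would tighten this bound by co-scheduling stages across cores; the point of the sketch is only that end-to-end guarantees are stated and checked per pipeline, not per task.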
Fictocritical Cyberfeminism: A Paralogical Model for Post-Internet Communication
This dissertation positions the understudied and experimental writing practice of fictocriticism as an analog for the convergent and indeterminate nature of "post-Internet" communication, as well as a cyberfeminist technology for interfering and intervening in metanarratives of technoscience and technocapitalism that structure contemporary media. Significant theoretical valences are established between twentieth-century literary works of fictocriticism and the hybrid and ephemeral modes of writing endemic to emergent, twenty-first-century forms of networked communication such as social media. Through a critical theoretical understanding of paralogy, or that countercultural logic of deploying language outside legitimate discourses, involving various tactics of multivocity, mimesis, and metagraphy, fictocriticism is explored as a self-referencing linguistic machine which exists intentionally to occupy those liminal territories "somewhere in among/between criticism, autobiography and fiction" (Hunter qtd. in Kerr 1996). Additionally, as a writing practice that originated in Canada and yet remains marginal to national and international literary scholarship, this dissertation elevates the origins and ongoing relevance of fictocriticism by mapping its shared aims and concerns onto proximal discourses of post-structuralism, cyberfeminism, network ecology, media art, the avant-garde, glitch feminism, and radical self-authorship in online environments. Theorized in such a matrix, I argue that fictocriticism represents a capacious framework for writing and reading media that embodies the self-reflexive politics of second-order cybernetic theory while disrupting the rhetoric of technoscientific and neoliberal economic forces with speech acts of calculated incoherence.
Additionally, through the inclusion of my own fictocritical writing as works of research-creation that interpolate the more traditional chapters and subchapters, I theorize and demonstrate praxis of this distinctively indeterminate form of criticism to empirically and meaningfully juxtapose different modes of knowing and speaking about entangled matters of language, bodies, and technologies. In its conclusion, this dissertation contends that the "creative paranoia" engendered by fictocritical cyberfeminism in both print and digital media environments offers a pathway towards a more paralogical media literacy that can transform the terms and expectations of our future media ecology.
The Modernization Process of a Data Pipeline
Data plays an integral part in a company's decision-making. Therefore, decision-makers must have the right data available at the right time. Data volumes grow constantly, and new data is continuously needed for analytical purposes. Many companies use data warehouses to store data in an easy-to-use format for reporting and analytics. The challenge with data warehousing is displaying data using one unified structure. The source data is often gathered from many systems that are structured in various ways.
A process called extract, transform, and load (ETL) or extract, load, and transform (ELT) is used to load data into the data warehouse. This thesis describes the modernization process of one such pipeline. The previous solution, which used an on-premises Teradata platform for computation and SQL stored procedures for the transformation logic, is replaced by a new solution. The goal of the new solution is a process that uses modern tools, is scalable, and follows programming best practices. The cloud-based Databricks platform is used for computation, and dbt is used as the transformation tool. Lastly, a comparison is made between the new and old solutions, and their benefits and drawbacks are discussed.
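A minimal illustration of the extract-transform-load steps described above, using plain Python in place of Databricks and dbt; the two source systems, their rows, and the unified warehouse schema are invented for the example.

```python
def extract():
    """Gather rows from two differently structured source systems."""
    crm = [{"cust_id": 1, "full_name": "Ada Lovelace"}]
    erp = [{"customer_number": "2", "name": "Alan Turing"}]
    return crm, erp

def transform(crm, erp):
    """Map both source structures onto one unified warehouse structure."""
    unified = [{"customer_id": r["cust_id"], "name": r["full_name"]} for r in crm]
    unified += [{"customer_id": int(r["customer_number"]), "name": r["name"]}
                for r in erp]
    return unified

def load(warehouse, rows):
    """Append the transformed rows to the warehouse table."""
    warehouse.setdefault("dim_customer", []).extend(rows)

warehouse = {}
crm, erp = extract()
load(warehouse, transform(crm, erp))
```

In the ELT variant the raw rows are loaded first and the transform runs inside the warehouse platform, which is the pattern the Databricks + dbt solution follows.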
A BIM-GIS Integrated Information Model Using Semantic Web and RDF Graph Databases
In recent years, 3D virtual indoor and outdoor urban modelling has become an essential geospatial information framework for civil and engineering applications such as emergency response, evacuation planning, and facility management. Multi-sourced and multi-scale 3D urban models are in high demand among architects, engineers, and construction professionals to achieve these tasks and provide relevant information to decision support systems. Spatial modelling technologies such as Building Information Modelling (BIM) and Geographical Information Systems (GIS) are frequently used to meet such demands. However, sharing data and information between these two domains is still challenging, and existing semantic or syntactic strategies for inter-communication between BIM and GIS do not fully support rich semantic and geometric information exchange from BIM to GIS or vice versa. This research study proposes a novel approach for integrating BIM and GIS using semantic web technologies and Resource Description Framework (RDF) graph databases. The suggested solution's originality and novelty come from combining the advantages of integrating BIM and GIS models into a semantically unified data model using a semantic framework and ontology engineering approaches. The new model is named the Integrated Geospatial Information Model (IGIM). It is constructed in three stages. The first stage generates BIMRDF and GISRDF graphs from BIM and GIS datasets. Then graph integration of the BIM and GIS semantic models creates IGIMRDF. Lastly, the information from the unified IGIMRDF graph is filtered using a graph query language and graph data analytics tools. The linkage between BIMRDF and GISRDF is completed through SPARQL endpoints, defined by queries using elements and entity classes with similar or complementary information from properties, relationships, and geometries obtained in an ontology-matching process during model construction.
The resulting model (or sub-model) can be managed in a graph database system and used in the backend as a data tier serving web services that feed a front-tier, domain-oriented application. A case study was designed, developed, and tested using the semantic integrated information model to validate the newly proposed solution, its architecture, and its performance.
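A tiny illustration of the graph-linkage idea: BIM and GIS facts as (subject, predicate, object) triples merged into one graph, then filtered with a SPARQL-like pattern match. All identifiers and predicates are invented; a real implementation would use an RDF store and SPARQL rather than this toy matcher.

```python
BIM_RDF = [
    ("bim:Wall_12", "rdf:type", "bim:Wall"),
    ("bim:Wall_12", "bim:partOf", "bim:Building_3"),
]
GIS_RDF = [
    ("gis:Parcel_7", "rdf:type", "gis:Parcel"),
    ("bim:Building_3", "gis:locatedIn", "gis:Parcel_7"),
]

IGIM_RDF = BIM_RDF + GIS_RDF            # the integrated, unified graph

def match(graph, pattern):
    """Return triples matching a pattern; None acts as a wildcard variable."""
    return [t for t in graph
            if all(p is None or p == v for p, v in zip(pattern, t))]

# "Which parcel contains the building that Wall_12 is part of?"
building = match(IGIM_RDF, ("bim:Wall_12", "bim:partOf", None))[0][2]
parcel = match(IGIM_RDF, (building, "gis:locatedIn", None))[0][2]
```

The cross-domain answer only exists because the shared identifier `bim:Building_3` links the two sub-graphs, which is the role the ontology-matching step plays in the IGIM construction.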
Efficient concurrent data structure access parallelism techniques for increasing scalability
Multi-core processors have revolutionised the way data structures are designed by bringing parallelism to mainstream computing. Key to exploiting the hardware parallelism available in multi-core processors are concurrent data structures. However, some concurrent data structure abstractions are inherently sequential and incapable of harnessing the parallel performance of multi-core processors. Designing and implementing concurrent data structures to harness hardware parallelism is challenging due to the requirements of correctness, efficiency, and practicability under various application constraints. In this thesis, our research contribution is towards improving concurrent data structure access parallelism to increase data structure performance. We propose new design frameworks that improve the access parallelism of existing concurrent data structure designs, as well as new concurrent data structure designs with significant performance improvements. To give an insight into the interplay between hardware and concurrent data structure access parallelism, we give a detailed analysis and model the performance scalability with varying parallelism.

In the first part of the thesis, we focus on data structure semantic relaxation. By relaxing the semantics of a data structure, a bigger design space, one that allows weaker synchronization and more useful parallelism, is unveiled. Investigating new data structure designs capable of trading semantics for better performance in a monotonic way is a major challenge in the area. We address this challenge algorithmically in this part of the thesis. We present an efficient, lock-free, concurrent data structure design framework for out-of-order semantic relaxation. We introduce a new two-dimensional algorithmic design that uses multiple instances of a given data structure to improve access parallelism.
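The multiple-instance idea can be sketched as follows: a "queue" built from several sub-queues, where each operation picks one instance, so threads contend on different sub-queues and strict FIFO order is traded for access parallelism. This is a sequential toy (no atomics) invented to show the semantics, not the lock-free framework itself.

```python
import random

class RelaxedQueue:
    """Out-of-order relaxed FIFO queue built from multiple sub-queue instances."""

    def __init__(self, width, seed=0):
        self.subqueues = [[] for _ in range(width)]
        self.rng = random.Random(seed)      # stands in for per-thread choice

    def enqueue(self, item):
        # Each operation targets one instance, spreading contention.
        self.rng.choice(self.subqueues).append(item)

    def dequeue(self):
        candidates = [q for q in self.subqueues if q]
        if not candidates:
            raise IndexError("empty")
        # FIFO holds within a sub-queue only; global order is relaxed.
        return self.rng.choice(candidates).pop(0)

q = RelaxedQueue(width=4)
for i in range(8):
    q.enqueue(i)
out = [q.dequeue() for _ in range(8)]
```

Every enqueued item is eventually dequeued, but an item may overtake up to roughly `width` earlier items; the "two-dimensional" design in the thesis additionally bounds how far order can be violated.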
In the second part of the thesis, we propose an efficient priority queue that improves access parallelism by reducing the number of synchronization points for each operation. Priority queues are fundamental abstract data types, often used to manage limited resources in parallel systems. Typical parallel priority queue implementations are based on heaps or skip lists. In recent literature, skip lists have been shown to be the most efficient design choice for implementing priority queues. Though numerous intricate implementations of skip-list-based queues have been proposed in the literature, their performance is constrained by the high number of global atomic updates per operation and the high memory consumption, both of which are proportional to the number of sub-lists in the queue. In this part of the thesis, we propose an alternative approach for designing lock-free linearizable priority queues that significantly improves memory efficiency and throughput performance by reducing the number of global atomic updates and the memory consumption compared to skip-list-based queues. To achieve this, our new design combines two structures, a search tree and a linked list, forming what we call a Tree Search List Queue (TSLQueue). Subsequently, we analyse and introduce a model for lock-free concurrent data structure access parallelism. The major impediment to scaling concurrent data structures is memory contention when accessing shared data structure access points, which leads to thread serialisation and hinders parallelism. Aiming to address this challenge, a significant amount of work in the literature has proposed multi-access techniques that improve concurrent data structure parallelism. However, there is little work on analysing and modelling the execution behaviour of concurrent multi-access data structures, especially in a shared memory setting.
In this part of the thesis, we analyse and model the general execution behaviour of concurrent multi-access data structures in the shared memory setting. We study and analyse the behaviour of the two popular random access patterns, shared (remote) and exclusive (local) access, and the behaviour of the two atomic primitives most commonly used for designing lock-free data structures: Compare-and-Swap and Fetch-and-Add.
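To contrast the two primitives named above, here is a small emulation (Python has no hardware atomics on plain integers, so a lock stands in for the hardware guarantee): Compare-and-Swap can fail under contention and must be retried, while Fetch-and-Add always completes in one step, which is why their scaling behaviour differs.

```python
import threading

class AtomicInt:
    """Lock-emulated atomic integer exposing CAS and FAA semantics."""

    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        """Install `new` only if the current value is `expected`."""
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False                # caller must re-read and retry

    def fetch_and_add(self, delta):
        """Unconditionally add `delta`, returning the previous value."""
        with self._lock:
            old = self.value
            self.value += delta
            return old

counter = AtomicInt(0)
prev = counter.fetch_and_add(1)         # always succeeds; prev == 0
ok = counter.compare_and_swap(1, 10)    # succeeds: value was 1
retry = counter.compare_and_swap(1, 20) # fails: value is now 10
```

Under heavy contention, CAS loops waste work on failed attempts while FAA makes progress on every call, which is one of the execution behaviours such a model has to capture.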
- β¦