
    High School Students’ Expulsion Reintegration Following Alternative School: A Phenomenological Study

    The purpose of this transcendental phenomenological study was to understand the lived experience of expelled ninth- through twelfth-grade high school students and their perception of support when returning to their home campus in Pecan Canyon School District (a pseudonym). This qualitative study examined existing research on exclusionary discipline and documents the expulsion and reentry processes the students faced when leaving and returning to their home campuses. The research, deeply rooted in Moustakas's approach to transcendental phenomenology, produced a descriptive account of the participants' lived experience. Transcendental phenomenology is an attitude toward approaching lived experience. Short-term expulsion was the removal of a student for a period not to exceed the remainder of the current school quarter or semester. Permanent expulsion was the removal of a student for the remainder of the current school year but not beyond one full school term. The research analyzed the 2022–2023 academic school year. The study consisted of three African American female and seven African American male high school students ranging in age from 15 to 19. The selected participants attended a discipline-based alternative program before returning to their home school. Data were collected through student interviews, an informal, interactive process using open-ended questions; electronic journal questions, which allowed students to share their experiences in a relaxed environment; and a focus group in which four returning students shared their experiences. The research revealed the following six themes: frustrating alternative school experiences, no relationship with peers, no support from teachers and counselors in transition, support from family members, coping strategies, and changes in behavior.

    Mitigating the Event and Effect of Energy Holes in Multi-hop Wireless Sensor Networks Using an Ultra-Low Power Wake-up Receiver and an Energy Scheduling Technique

    This research presents an algorithm for extending network lifetime in multi-hop wireless sensor networks (WSNs). WSNs face energy-hole issues around sink nodes because large amounts of data are forwarded through the nearby sensor nodes. The limited power supply of the nodes limits the lifetime of the network, which makes energy efficiency crucial. Multi-hop communication has been proposed as an efficient strategy, but its power consumption remains a research challenge. In this study, an algorithm is developed to mitigate energy holes around the sink nodes by using a modified ultra-low-power wake-up receiver and an energy scheduling technique. Efficient power scheduling reduces the power consumption of the relay node, and when the residual power of a sensor node falls below a defined threshold, the power emitters charge the node to eliminate energy-hole problems. The modified wake-up receiver improves sensor sensitivity while staying within the micro-power budget. This study's simulations showed that the developed RF energy harvesting algorithm outperformed previous work, achieving a 30% improvement in average charged energy (AEC), a 0.41% improvement in average harvested energy (AEH), an 8.39% improvement in the number of energy transmitters, an 8.59% improvement in throughput, and a 0.19 decrease in outage probability compared to the existing network lifetime enhancement of multi-hop wireless sensor networks by RF energy harvesting algorithm. Overall, the enhanced power efficiency technique significantly improves the performance of WSNs.
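
    As a rough illustration of the threshold-based recharging idea described above, the following Python sketch simulates relay nodes near a sink that are recharged by energy emitters whenever their residual energy drops below a threshold, while an ultra-low-power wake-up receiver keeps idle-listening costs small. All names, energy values, and the charging rule are illustrative assumptions, not the paper's algorithm or parameters.

```python
# Hypothetical sketch of threshold-triggered recharging for relay nodes near a sink.
# Names and numbers (RelayNode, THRESHOLD_J, ...) are illustrative assumptions,
# not the algorithm or parameters from the paper.
import random

THRESHOLD_J = 0.2      # residual-energy threshold that triggers RF charging (assumed)
CHARGE_J = 0.5         # energy delivered by an emitter per charging event (assumed)
TX_COST_J = 0.05       # cost to relay one packet (assumed)
WAKEUP_COST_J = 0.001  # listening cost of the ultra-low-power wake-up receiver (assumed)

class RelayNode:
    def __init__(self, node_id, energy=1.0):
        self.node_id = node_id
        self.energy = energy
        self.dead = False

    def relay(self, packets):
        """Forward packets toward the sink; the wake-up receiver avoids idle listening."""
        if self.dead:
            return 0
        self.energy -= WAKEUP_COST_J + packets * TX_COST_J
        if self.energy <= 0:
            self.dead = True          # an energy hole would form here without charging
            self.energy = 0
            return 0
        return packets

def schedule_charging(nodes):
    """Recharge any relay whose residual energy fell below the threshold."""
    for n in nodes:
        if not n.dead and n.energy < THRESHOLD_J:
            n.energy += CHARGE_J

def simulate(rounds=200, n_relays=5):
    nodes = [RelayNode(i) for i in range(n_relays)]
    delivered = 0
    for _ in range(rounds):
        for n in nodes:
            delivered += n.relay(packets=random.randint(1, 3))
        schedule_charging(nodes)
    alive = sum(not n.dead for n in nodes)
    return delivered, alive

if __name__ == "__main__":
    delivered, alive = simulate()
    print(f"packets delivered: {delivered}, relays still alive: {alive}")
```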

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Efficient abstraction of clock synchronization at the operating system level

    Distributed embedded systems are emerging and gaining importance in various domains, including industrial control applications where time determinism – and hence network clock synchronization – is fundamental. In modern applications, moreover, this core functionality is required by many different software components, from the OS kernel and radio stack up to applications. An abstraction layer devoted to handling time therefore needs to be introduced, and to encapsulate time corrections at the lowest possible level, this layer should take the form of a timer device driver offering a Virtual Clock to the entire system. In this paper we show that doing so introduces a nonlinearity in the dynamics of the clock, and we design a controller based on feedback linearization to handle the issue. To put the idea to work, we extend the Miosix RTOS with a generic interface that allows implementing virtual clocks, including the newly designed controller, which we call FLOPSYNC-3 after its ancestor. We also introduce the resulting virtual clock into the TDMH [20] real-time wireless mesh protocol.
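
    To make the Virtual Clock abstraction concrete, the sketch below layers a corrected clock over a raw hardware timer and updates its rate and offset at each synchronization point. It uses a simple proportional correction purely for illustration; it is not the FLOPSYNC-3 feedback-linearization controller described in the paper, and all names and gains are assumptions.

```python
# Minimal sketch of a virtual clock layered over a raw hardware timer, with a
# rate correction updated at each synchronization point. Illustrative proportional
# correction only, NOT the FLOPSYNC-3 controller from the paper.

class VirtualClock:
    def __init__(self, k_p=0.5):
        self.rate = 1.0          # estimated ratio of reference time to local time
        self.offset = 0.0        # corrected time at the last resync
        self.last_raw = 0.0      # raw hardware time at the last resync
        self.k_p = k_p           # proportional gain (assumed value)

    def now(self, raw):
        """Map a raw hardware timestamp to corrected (network) time."""
        return self.offset + self.rate * (raw - self.last_raw)

    def resync(self, raw, reference_time):
        """Update rate and offset from the error observed at a sync message."""
        error = reference_time - self.now(raw)       # how far the virtual clock drifted
        self.offset = self.now(raw) + self.k_p * error
        self.rate += self.k_p * error / max(raw - self.last_raw, 1e-9)
        self.last_raw = raw

# Example: local oscillator runs 1% fast; resync every 10 reference seconds.
vc = VirtualClock()
for k in range(1, 6):
    raw = k * 10.0 * 1.01        # raw hardware time (1% drift)
    vc.resync(raw, reference_time=k * 10.0)
    print(f"sync {k}: corrected now = {vc.now(raw):.4f}")
```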

    Re-imagining an African ethical value-based social contract: in the context of using “Ubuntugogy” as an African framework

    South Africa is a democratic country. However, some missiology scholars, citizens, and leaders see it as one of the more violent countries in comparison to others. On the one hand, some see it as a country that does not abide by its own laws; on the other hand, others see it as a morally depraved country. What causes this negative situation? One of the answers could be political freedom and political transformation, which came after a long period of struggle against the unfair conditions of the past apartheid era. As a result, issues of social imbalance are exacerbated by an influx of refugees and a resultant increase in population. Creating a more tangible rationale for social contracts that South African citizens and neighboring countries would be attracted to commit to is indeed a huge task. A missiological examination uncovers, however, that despite all the cultural differences and frictions prevalent in societies, some people are genuinely able to share a set of basic values that arguably could form the core of the sought-for moral direction needed in South Africa. Thus, combining missiological trends (Mangayi & Baron, 2020) and the ideas of Ubuntugogy (Bangura, 2005) could be functional in bringing home to the next generation the notion of henceforth living by the precepts of such a moral direction and social contract. It is therefore this article's intention to use missiological, philosophical, theological, and African knowledge systems anchored in Ubuntugogy (Bangura, 2005) to illuminate social contract theories for decent social change. Writing from a southern African context, I propose Ubuntugogy because its deep commitment to community, character, and hospitality has much to contribute to interculturality in the important missiological mandate.

    SCALING UP TASK EXECUTION ON RESOURCE-CONSTRAINED SYSTEMS

    The ubiquity of machine learning tasks on embedded systems with constrained resources has made efficient execution of neural networks on these systems under CPU, memory, and energy constraints increasingly important. Unlike high-end computing systems, where resources are abundant and reliable, resource-constrained systems have only limited computational capability, limited memory, and a limited energy supply. This dissertation focuses on how to take full advantage of the limited resources of these systems in order to improve task execution efficiency across different stages of the execution pipeline. While the existing literature primarily aims to solve the problem by shrinking the model size according to the resource constraints, this dissertation aims to improve the execution efficiency for a given set of tasks in the following two ways. First, we propose SmartON, the first batteryless active event detection system that considers both the event arrival pattern and the harvested energy to determine when the system should wake up and what the duty cycle should be. Second, we propose Antler, which exploits the affinity between all pairs of tasks in a multitask inference system to construct a compact graph representation of the task set for a given overall size budget. To support these algorithmic proposals, we propose the following hardware solutions: a controllable capacitor array that can expand the system's energy storage on the fly, and a FRAM array that can accommodate multiple neural networks running on one system.
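
    The following sketch illustrates, in broad strokes, the kind of affinity-driven grouping that Antler's description suggests: tasks with high pairwise affinity are greedily merged so that shared parameters keep the total footprint within a size budget. The affinity scores, size model, and merging rule below are hypothetical stand-ins, not Antler's actual construction.

```python
# Hedged sketch of grouping tasks by pairwise affinity under a size budget.
# Sizes, affinities, and the sharing model are illustrative assumptions.

# Hypothetical per-task model sizes (KB) and pairwise affinity in [0, 1],
# where higher affinity means more layers/parameters could be shared.
task_size = {"audio": 120, "gesture": 100, "vibration": 90}
affinity = {("audio", "gesture"): 0.7, ("audio", "vibration"): 0.2,
            ("gesture", "vibration"): 0.6}

def merged_size(a, b, aff):
    """Assume shared parameters shrink a merged pair by a factor tied to affinity."""
    return (task_size[a] + task_size[b]) * (1.0 - 0.5 * aff)

def plan(budget_kb):
    """Greedily merge the highest-affinity pair while the result stays under budget."""
    groups = {t: task_size[t] for t in task_size}
    for (a, b), aff in sorted(affinity.items(), key=lambda kv: -kv[1]):
        if a in groups and b in groups:
            new_size = merged_size(a, b, aff)
            total_if_merged = sum(groups.values()) - groups[a] - groups[b] + new_size
            if total_if_merged <= budget_kb:
                del groups[a], groups[b]
                groups[f"{a}+{b}"] = new_size
    return groups

print(plan(budget_kb=250))   # e.g. {'vibration': 90, 'audio+gesture': 143.0} -> 233 KB
```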

    Robust and Listening-Efficient Contention Resolution

    This paper shows how to achieve contention resolution on a shared communication channel using only a small number of channel accesses -- both for listening and sending -- and the resulting algorithm is resistant to adversarial noise. The shared channel operates over a sequence of synchronized time slots, and in any slot agents may attempt to broadcast a packet. An agent's broadcast succeeds if no other agent broadcasts during that slot. If two or more agents broadcast in the same slot, then the broadcasts collide and all of them fail. An agent listening on the channel during a slot receives ternary feedback, learning whether that slot had silence, a successful broadcast, or a collision. Agents are (adversarially) injected into the system over time. The goal is to coordinate the agents so that each is able to successfully broadcast its packet. A contention-resolution protocol is measured both in terms of its throughput and the number of slots during which an agent broadcasts or listens. Most prior work assumes that listening is free and only tries to minimize the number of broadcasts. This paper answers two foundational questions. First, is constant throughput achievable when using polylogarithmic channel accesses per agent, both for listening and broadcasting? Second, is constant throughput still achievable when an adversary jams some slots by broadcasting noise in them? Specifically, for N packets arriving over time and J jammed slots, we give an algorithm that, with high probability in N + J, guarantees Θ(1) throughput and achieves on average O(polylog(N + J)) channel accesses against an adaptive adversary. We also have per-agent high-probability guarantees on the number of channel accesses -- either O(polylog(N + J)) or O((J + 1) · polylog(N)), depending on how quickly the adversary can react to what is being broadcast.
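
    The toy simulation below models the slotted channel and ternary feedback described above. The agents use plain binary exponential backoff, which is only meant to make the channel model concrete; it is not the listening-efficient, jamming-resistant protocol the paper constructs, and all parameters are assumptions.

```python
# Toy simulation of a slotted shared channel with ternary feedback (silence /
# success / collision) and adversarially jammed slots. Agents use binary
# exponential backoff here purely for illustration; this is NOT the paper's protocol.
import random

class Agent:
    def __init__(self):
        self.window = 1          # current backoff window (in slots)
        self.wait = 0            # slots to wait before the next broadcast attempt
        self.done = False

def run(num_agents=50, jammed_slots=frozenset(), max_slots=10_000):
    agents = [Agent() for _ in range(num_agents)]
    for slot in range(max_slots):
        senders = [a for a in agents if not a.done and a.wait == 0]
        jammed = slot in jammed_slots
        if len(senders) == 1 and not jammed:
            senders[0].done = True                 # feedback: success
        else:
            for a in senders:                      # feedback: collision (or jamming noise)
                a.window *= 2
                a.wait = random.randrange(a.window)
        for a in agents:
            if not a.done and a.wait > 0:
                a.wait -= 1
        if all(a.done for a in agents):
            return slot + 1                        # slots needed to drain all packets
    return max_slots

print("slots used:", run(jammed_slots=frozenset(range(0, 100, 3))))
```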

    Trade-Off Exploration for Acceleration of Continuous Integration

    Continuous Integration (CI) is a popular software development practice that allows developers to quickly verify modifications to their projects. To cope with the ever-increasing demand for faster software releases, CI acceleration approaches have been proposed to expedite the feedback that CI provides. However, adoption of CI acceleration is not without cost. The trade-off in duration and trustworthiness of a CI acceleration approach determines the practicality of the CI acceleration process. Indeed, if a CI acceleration approach takes longer to prime than to run the accelerated build, the benefits of acceleration are unlikely to outweigh the costs. Moreover, CI acceleration techniques may mislabel change sets (e.g., a build labelled as failing that passes in an unaccelerated setting or vice versa) or produce results that are inconsistent with an unaccelerated build (e.g., the underlying reason for failure does not match with the unaccelerated build). These inconsistencies call into question the trustworthiness of CI acceleration products. We first evaluate the time trade-off of two CI acceleration products — one based on program analysis (PA) and the other on machine learning (ML). After replaying the CI process of 100,000 builds spanning ten open-source projects, we find that the priming costs (i.e., the extra time spent preparing for acceleration) of the program analysis product are substantially less than that of the machine learning product (e.g., average project-wise median cost difference of 148.25 percentage points). Furthermore, the program analysis product generally provides more time savings than the machine learning product (e.g., average project-wise median savings improvement of 5.03 percentage points). Given their deterministic nature, and our observations about priming costs and benefits, we recommend that organizations consider the adoption of program analysis based acceleration. Next, we study the trustworthiness of the same PA and ML CI acceleration products. We re-execute 50 failing builds from ten open-source projects in non-accelerated (baseline), program analysis accelerated, and machine learning accelerated settings. We find that when applied to known failing builds, program analysis accelerated builds more often (43.83 percentage point difference across ten projects) align with the non-accelerated build results. Accordingly, we conclude that while there is still room for improvement for both CI acceleration products, the selected program analysis product currently provides a more trustworthy signal of build outcomes than the machine learning product. Finally, we propose a mutation testing approach to systematically evaluate the trustworthiness of CI acceleration. We apply our approach to the deterministic PA-based CI acceleration product and uncover issues that hinder its trustworthiness. Our analysis consists of three parts: we first study how often the same build in accelerated and unaccelerated CI settings produce different mutation testing outcomes. We call mutants with different outcomes in the two settings “gap mutants”. Next, we study the code locations where gap mutants appear. Finally, we inspect gap mutants to understand why acceleration causes them to survive. Our analysis of ten thriving open-source projects uncovers 2,237 gap mutants. 
We find that: (1) the gap in mutation outcomes between accelerated and unaccelerated settings varies from 0.11% to 23.50%; (2) 88.95% of gap mutants can be mapped to specific source code functions and classes using the dependency representation of the studied CI acceleration product; (3) 69% of gap mutants survive CI acceleration due to deterministic reasons that can be classified into six fault patterns. Our results show that deterministic CI acceleration suffers from trustworthiness limitations, and they highlight ways in which trustworthiness could be improved in a pragmatic manner. This thesis demonstrates that CI acceleration techniques, whether PA- or ML-based, present time trade-offs and can reduce software build trustworthiness. Our findings lead us to encourage users of CI acceleration to carefully weigh both the time costs and the trustworthiness of their chosen acceleration technique. This study also demonstrates that the following changes to PA-based CI acceleration approaches would improve their trustworthiness: (1) depending on the size and complexity of the codebase, it may be necessary to manually refine the dependency graph, especially by concentrating on class properties, global variables, and constructor components; and (2) solutions should be added to detect and bypass flaky tests during CI acceleration to minimize the impact of flakiness.
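
    As a minimal sketch of how the gap-mutant analysis could be expressed, the snippet below diffs per-mutant outcomes between an unaccelerated (baseline) run and an accelerated run and reports the mutants whose outcomes change. The data model, field names, and example data are hypothetical and are not taken from the studied CI acceleration product.

```python
# Illustrative sketch of identifying "gap mutants": mutants whose kill/survive
# outcome differs between unaccelerated and accelerated CI runs. All names and
# example data are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class MutantResult:
    mutant_id: str
    location: str       # e.g. "pkg/module.py::ClassName.method" (hypothetical)
    outcome: str        # "killed" or "survived"

def find_gap_mutants(baseline, accelerated):
    """Return mutants whose outcome changes between the two CI settings."""
    accel_by_id = {r.mutant_id: r for r in accelerated}
    gaps = []
    for base in baseline:
        accel = accel_by_id.get(base.mutant_id)
        if accel and accel.outcome != base.outcome:
            gaps.append((base.mutant_id, base.location, base.outcome, accel.outcome))
    return gaps

# Hypothetical example data.
baseline = [MutantResult("m1", "core/config.py::Config.__init__", "killed"),
            MutantResult("m2", "core/utils.py::normalize", "killed")]
accelerated = [MutantResult("m1", "core/config.py::Config.__init__", "survived"),
               MutantResult("m2", "core/utils.py::normalize", "killed")]

for gap in find_gap_mutants(baseline, accelerated):
    print("gap mutant:", gap)   # m1 is killed in the baseline but survives under acceleration
```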