22 research outputs found

    Critical Thinking

    Get PDF
    Women are the foundation of society and the bearers and teachers of the next generation. We comprise half of the world's population, and thus should be equal with men at every step. For a society to function healthily, logical, analytical thinking (devoid of emotional drivers and personal pride) is crucial. Women should therefore be encouraged to realize the benefits of being more curious and analytical about the decisions that are made for them by the men in their lives. If women remain a mass of silent sufferers instead of individual thinkers who challenge the status quo, then the lack of societal respect for and protection of women's personal decision-making in regard to unwanted pregnancies, style of dress, the right to choose their partners, and so on will continue. Herein lies the relevance of critical thinking, which will allow them to become experts in all fields.

    Conflict Management

    Get PDF
    Conflict happens all the time. It is part of everyday life and can happen to anyone, in any age group. Many of us face some form of conflict on a daily basis. It could be something as simple as what to eat for breakfast, or much more complicated, like an argument between two coworkers. People have different preferences, habits, and opinions, and sometimes those differences create conflict. People often feel reluctant to get involved in a conflict situation. Unfortunately, conflicts are rarely self-healing. Because of the increasing diversity of life, we are seeing more conflict than ever before. Conflict can be a positive or a negative experience; what makes the difference is the ability to deal with and resolve conflict, which is important in the present world.

    Programming the Adapteva Epiphany 64-core Network-on-chip Coprocessor

    Full text link
    In the construction of exascale computing systems, energy efficiency and power consumption are two of the major challenges. Low-power, high-performance embedded systems are of increasing interest as building blocks for large-scale high-performance systems. However, extracting maximum performance out of such systems presents many challenges: various aspects, from the hardware architecture to the programming models used, need to be explored. The Epiphany architecture integrates low-power RISC cores on a 2D mesh network and promises up to 70 GFLOPS/Watt of processing efficiency. However, with just 32 KB of memory per eCore for storing both data and code, and only low-level inter-core communication support, programming the Epiphany system presents several challenges. In this paper we evaluate the performance of the Epiphany system for a variety of basic compute and communication operations. Guided by this data, we explore strategies for implementing scientific applications on memory-constrained, low-powered devices such as the Epiphany. With future systems expected to house thousands of cores in a single chip, the merits of such architectures as a path to exascale are compared to those of other competing systems. Comment: 14 pages, submitted to IJHPCA Journal special edition

    GRAPHENE BASED MATERIALS FOR SUPERCAPACITORS

    Get PDF
    The adoption of more environmentally friendly means of harnessing and storing energy, while minimizing harmful effects on the environment, is becoming more significant. Supercapacitors are becoming a favored means of energy storage owing to their higher-surface-area electrodes and thinner dielectrics. For greater capacitance, a suitable material must have high porosity. One such material is carbon, most notably graphene, with superior electrical properties, chemical stability, and high surface area. This review focuses on the mechanisms for storing energy, the types of materials used in supercapacitors, and the applications and scope of supercapacitor research and development.
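    The link between electrode area, dielectric thickness, and stored energy that motivates this review can be made explicit with the ideal parallel-plate relations (a textbook sketch, not taken from the paper itself):

```latex
C = \frac{\varepsilon A}{d}, \qquad E = \tfrac{1}{2}\, C V^2
```

    Capacitance $C$ grows with electrode surface area $A$ and shrinks with dielectric thickness $d$, so a high-porosity, high-surface-area carbon such as graphene raises $C$, and with it the stored energy $E$, at a given operating voltage $V$.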

    Immune boosting by B.1.1.529 (Omicron) depends on previous SARS-CoV-2 exposure

    Get PDF
    The Omicron, or Pango lineage B.1.1.529, variant of SARS-CoV-2 carries multiple spike mutations conferring high transmissibility and partial neutralizing antibody (nAb) escape. Vaccinated individuals show protection from severe disease, often attributed to primed cellular immunity. We investigated T and B cell immunity against B.1.1.529 in triple mRNA-vaccinated healthcare workers (HCWs) with different SARS-CoV-2 infection histories. B and T cell immunity against previous variants of concern was enhanced in triple-vaccinated individuals, but the magnitude of T and B cell responses against the B.1.1.529 spike protein was reduced. Immune imprinting by infection with the earlier B.1.1.7 (Alpha) variant resulted in less durable binding antibody against B.1.1.529. Previously infection-naïve HCWs who became infected during the B.1.1.529 wave showed enhanced immunity against earlier variants but reduced nAb potency and T cell responses against B.1.1.529 itself. Previous Wuhan Hu-1 infection abrogated T cell recognition and any enhanced cross-reactive neutralizing immunity on infection with B.1.1.529.

    Quantitative, multiplexed, targeted proteomics for ascertaining variant specific SARS-CoV-2 antibody response

    Get PDF
    Determining the protection an individual has against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants of concern (VoCs) is crucial for future immune surveillance, vaccine development, and understanding of the changing immune response. We devised an assay, complementary to current ELISA-based serology, using multiplexed, baited, targeted proteomics for direct detection of multiple proteins in the SARS-CoV-2 anti-spike antibody immunocomplex. Serum from individuals collected after infection or after first- and second-dose vaccination demonstrates this approach and shows concordance with existing serology and neutralization. Our assays show altered responses of both immunoglobulins and complement to the Alpha (B.1.1.7), Beta (B.1.351), and Delta (B.1.617.1) VoCs and a reduced response to Omicron (B.1.1.529). We were able to identify individuals who had prior infection, and observed that C1q is closely associated with IgG1 (r > 0.82) and may better reflect neutralization against VoCs. Analyzing additional immunoproteins beyond immunoglobulin G (IgG) provides important information for our understanding of the response to infection and vaccination.

    Developing Scientific Software for Low-power System-on-Chip Processors: Optimising for Energy

    Get PDF
    Energy consumption has been identified as the major bottleneck in the push to increase the scale of current High Performance Computing (HPC) systems. Consequently, there has been an increased effort to investigate the suitability of low-power hardware for HPC. Low-power system-on-chips (LPSoCs), which are widely used in a mobile and embedded context, typically integrate multicore Central Processing Units (CPUs) and accelerators on a single chip, offering high floating point capabilities while consuming little power. While there are merits to using such low-power systems for scientific computing, there are a number of challenges in using them efficiently. This thesis considers three issues: i) development of applications which are able to use all the LPSoC processing elements effectively; ii) measurement, understanding and modelling of the energy usage of an application executing on such platforms; iii) strategies for deciding the optimal partitioning of an application's workload between the different processing elements in order to minimise energy-to-solution. Each of these issues is investigated in the context of three applications: two core computational science kernels, namely matrix multiplication as an exemplar of dense linear algebra and stencil computation as an exemplar of grid-based numerical methods, and the complex block tridiagonal benchmark from the multizone NAS parallel benchmark suite. To study the challenges associated with the development of scientific software for LPSoCs, two fundamentally different systems are considered: the Epiphany-IV Network-on-Chip (NoC) and the NVIDIA Tegra systems. The former was a Kickstarter project which aimed to design an LPSoC that could scale to over 4096 cores with a peak performance in excess of 5 trillion single-precision floating point operations per second (TFLOP/s) while operating at an energy efficiency of 70 GFLOP/s per Watt.
    By contrast, the latter is a product range from the multinational company NVIDIA that combines its popular Graphics Processing Unit (GPU) technology with a general-purpose ARM processor in a mass-market LPSoC. This thesis reports the implementation of both the matrix multiplication and stencil kernels on both systems, comparing their performance, energy usage, and the programming challenges associated with developing code for these systems to those on conventional systems. In order to analyse the energy efficiency of applications running on an LPSoC, the ability to measure its energy usage is crucial. However, very few platforms have internal sensors which provide details of energy usage, and when they do, measurements obtained using such sensors are usually low-resolution and intrusive. This thesis presents a high-resolution, non-intrusive energy measurement framework, along with an Application Programming Interface (API) which enables an application to obtain real-time measurement of its energy usage at the function level. Based on these measurements, a simple energy usage model is proposed to describe the energy usage as a function of how the workload is partitioned between the different computing devices. This model predicts the conditions under which energy minimisation occurs when using all available computing devices. This prediction is tested and demonstrated for the matrix multiplication and stencil kernels. Given access to high-resolution, real-time energy measurements and a model describing energy usage as a function of how an application is partitioned between the available computing devices, this thesis explores various strategies for runtime energy tuning.
    Different scenarios are considered: offline pre-tuning; tuning based on estimates gained from solving a small fraction of the complete problem; and tuning based on iteratively solving fractions of the entire problem a small number of times, with the expectation that the final solution involves many repetitions of this. The applicability of these strategies to the model kernels is discussed and tested.

    Programming the Adapteva Epiphany 64-core network-on-chip coprocessor

    No full text
    Energy efficiency is the primary impediment on the path to exascale computing. Consequently, the high-performance computing community is increasingly interested in low-power, high-performance embedded systems as building blocks for large-scale high-performance systems. The Adapteva Epiphany architecture integrates low-power RISC cores on a 2D mesh network and promises up to 70 GFLOPS/Watt of theoretical performance. However, with just 32 KB of memory per eCore for storing both data and code, programming the Epiphany system presents significant challenges. In this paper we evaluate the performance of a 64-core Epiphany system with a variety of basic compute and communication micro-benchmarks. Further, we implemented two well-known application kernels: a 5-point star-shaped heat stencil, with a peak performance of 65.2 GFLOPS, and matrix multiplication, with 65.3 GFLOPS, in single precision across 64 Epiphany cores. We discuss strategies for implementing high-performance computing application kernels on such memory-constrained low-power devices and compare the Epiphany with competing low-power systems. With future Epiphany revisions expected to house thousands of cores on a single chip, understanding the merits of such an architecture is of prime importance to the exascale initiative. This work is supported in part by the Australian Research Council Discovery Project DP0987773.