
    Ubiquitous robust communications for emergency response using multi-operator heterogeneous networks

    A number of disasters in various parts of the world have caused extensive loss of life, severe damage to property and the environment, and tremendous shock to the survivors. For relief and mitigation operations, emergency responders are immediately dispatched to the disaster areas, and ubiquitous, robust communications during emergency response operations are of paramount importance. Nevertheless, various reports have highlighted that after many devastating events the technologies in use failed to support mission-critical communications, resulting in further loss of life. Inefficiencies of the communications currently used for emergency response include a lack of technology interoperability between different jurisdictions and high vulnerability due to their centralized infrastructure. In this article, we propose a flexible network architecture that provides a common networking platform on which heterogeneous multi-operator networks can interoperate in case of emergency. A wireless mesh network is the main part of the proposed architecture and provides a back-up network during emergencies. We first describe the shortcomings and limitations of current technologies, and then address the applications and functionalities a future emergency response network should support. Furthermore, we describe the requirements for a flexible, secure, robust, and QoS-aware emergency response multi-operator architecture and suggest several schemes that our proposed architecture can adopt to meet those requirements. In addition, we suggest several methods for re-tasking communication means owned by independent individuals to provide support during emergencies. To investigate the feasibility of multimedia transmission over a wireless mesh network, we measured the performance of a video streaming application in a real wireless metropolitan multi-radio mesh network, showing that the mesh network can meet the requirements for high-quality video transmission.
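
    The feasibility question above reduces to whether the mesh can sustain a video-rate UDP flow with acceptable loss. As a rough single-link illustration, and not the authors' actual testbed or tooling, the following Python sketch emulates a constant-bitrate stream between two nodes and reports loss and goodput; the packet size, bitrate, port, and address are illustrative assumptions.

        import socket
        import struct
        import time

        PKT_SIZE = 1316        # payload bytes per packet (7 x 188-byte MPEG-TS cells)
        RATE_BPS = 4_000_000   # assumed target video bitrate: 4 Mbit/s
        DURATION = 10          # seconds to stream

        def sender(host="192.168.1.20", port=5004):
            """Send a paced UDP stream emulating constant-bitrate video."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            interval = PKT_SIZE * 8 / RATE_BPS   # seconds between packets
            seq, end = 0, time.time() + DURATION
            while time.time() < end:
                header = struct.pack("!I", seq)
                sock.sendto(header + b"\x00" * (PKT_SIZE - 4), (host, port))
                seq += 1
                time.sleep(interval)

        def receiver(port=5004):
            """Count received packets and infer loss from sequence numbers."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("", port))
            sock.settimeout(5)                   # stop after 5 s of silence
            got, last_seq = 0, -1
            try:
                while True:
                    data, _ = sock.recvfrom(PKT_SIZE)
                    got += 1
                    last_seq = max(last_seq, struct.unpack("!I", data[:4])[0])
            except socket.timeout:
                pass
            sent = last_seq + 1
            loss = 100.0 * (sent - got) / sent if sent else 0.0
            mbps = got * PKT_SIZE * 8 / DURATION / 1e6
            print(f"received {got}/{sent} packets, loss {loss:.2f}%, "
                  f"goodput {mbps:.2f} Mbit/s")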

    A conserved myotubularin-related phosphatase regulates autophagy by maintaining autophagic flux

    Macroautophagy (autophagy) targets cytoplasmic cargoes to the lysosome for degradation. Like all vesicle trafficking, autophagy relies on phosphoinositide identity, concentration, and localization to execute multiple steps in this catabolic process. Here, we screen for phosphoinositide phosphatases that influence autophagy in Drosophila and identify CG3530. CG3530 is homologous to the human MTMR6 subfamily of myotubularin-related 3-phosphatases, and therefore, we named it dMtmr6. dMtmr6, which is required for development and viability in Drosophila, functions as a regulator of autophagic flux in multiple Drosophila cell types. The MTMR6 family member MTMR8 has a similar function in autophagy in higher animal cells. Decreased dMtmr6 and MTMR8 function results in autophagic vesicle accumulation and influences endolysosomal homeostasis.

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    The Use of Thought Experiments in Teaching Physics to Upper Secondary-Level Students: Two examples from the theory of relativity

    The present study focuses on the way thought experiments (TEs) can be used as didactical tools in teaching physics to upper secondary-level students. A qualitative study was designed to investigate to what extent the TEs called 'Einstein's elevator' and 'Einstein's train' can function as tools in teaching basic concepts of the theory of relativity to upper secondary-level students. The TEs were used in the form in which they are presented by Einstein himself and by Landau and Rumer in books popularizing theories of physics. The research sample consisted of 40 Greek students, divided into 11 groups of three to four students each. The findings of this study reveal that the use of TEs in teaching the theory of relativity can help students conceive of situations that refer to a world beyond their everyday experience and develop syllogisms in accordance with the theory. In this way, students can grasp physics laws and principles that demand a high degree of abstract thinking, such as the principle of equivalence and the consequences of the constancy of the speed of light for the concepts of time and space. © 2013 Taylor and Francis Group, LLC
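
    For reference, the core of the 'Einstein's train' TE is the light-clock argument for time dilation; the compressed derivation below is a standard textbook rendering added for illustration, not quoted from the study. A light pulse bouncing vertically over a height L ticks once per \Delta t_0 = 2L/c in the train's frame; seen from the platform, the pulse traces a zig-zag, so

        c\,\Delta t \;=\; 2\sqrt{L^{2} + (v\,\Delta t/2)^{2}} \quad\Longrightarrow\quad \Delta t \;=\; \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} \;=\; \frac{\Delta t_{0}}{\sqrt{1 - v^{2}/c^{2}}},

    i.e. the moving clock is observed to run slow by the Lorentz factor.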

    From Earth to Heaven: Using 'Newton's Cannon' Thought Experiment for Teaching Satellite Physics

    Thought Experiments are powerful tools in both scientific thinking and in the teaching of science. In this study, the historical Thought Experiment (TE) 'Newton's Cannon' was used as a tool to teach concepts relating to the motion of satellites to students at upper secondary level. The research instruments were: (a) a teaching-interview designed and implemented according to the Teaching Experiment methodology and (b) an open-ended questionnaire administered to students 2 weeks after the teaching-interview. The sample consisted of forty students divided into eleven groups. The teaching and learning processes which occurred during the teaching-interview were recorded and analyzed. The findings of the present study show that the use of the TE helped students to mentally construct a physical system far removed from their everyday experience (i.e. they had to imagine themselves as observers in a context in which the whole Earth was visible) and to draw conclusions about phenomena within this system. Specifically, students managed (1) to conclude that if an object is appropriately launched, it may be placed in an orbit around the Earth, and to support this conclusion with the necessary arguments, and (2) to realize that the same laws of physics describe, on the one hand, the motion of the Moon around the Earth (and the motion of other celestial bodies as well) and, on the other hand, the motion of 'terrestrial' objects (i.e. objects on the Earth, such as a tennis ball). The main difficulties students encountered stemmed from their idea that there is no gravity in a vacuum (i.e. the region outside the Earth's atmosphere) and from their everyday experience, according to which it is impossible for a projectile to move continuously parallel to the ground. © 2013 Springer Science+Business Media Dordrecht
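
    The quantitative heart of the TE, the launch speed at which the projectile's fall exactly matches the curvature of the Earth, follows from one line; this is the standard textbook calculation, added here for illustration rather than reproduced from the paper. For a circular orbit just above the surface, gravity supplies the centripetal force:

        \frac{v^{2}}{R} = g \quad\Longrightarrow\quad v = \sqrt{gR} \approx \sqrt{9.8 \times 6.37\times 10^{6}\ \mathrm{m^{2}/s^{2}}} \approx 7.9\ \mathrm{km/s}.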

    The 'Heisenberg's Microscope' as an Example of Using Thought Experiments in Teaching Physics Theories to Students of the Upper Secondary School

    In this work an attempt is made to explore the possible value of using Thought Experiments (TEs) in teaching physics to upper secondary education students. Specifically, a qualitative research project was designed to investigate the extent to which the Thought Experiment (TE) called 'Heisenberg's Microscope', as transformed by Gamow for the public in his book Mr. Tompkins in Paperback, can function as a tool in the teaching of the 'uncertainty principle'. The research sample consisted of 40 Greek students, in 11 groups of 3-4 students each. The findings of this study reveal that the use of this TE yields positive results in teaching the uncertainty principle. Based on the TE, students were able (i) to derive a formula of the uncertainty principle, (ii) to explain that the uncertainty principle is a general principle in nature and not a result of the imperfection of experimental devices, and (iii) to argue that it is impossible to determine the trajectory of a particle as a mathematical line. © 2010 Springer Science+Business Media B.V.
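
    The formula students can extract from the TE is the standard heuristic of the Heisenberg microscope, sketched here for illustration (this is the textbook argument, not a transcript of the teaching material). A microscope with aperture half-angle \varepsilon resolves the particle's position only to about the diffraction limit, while the scattered photon transfers an uncontrollable recoil within the same aperture:

        \Delta x \sim \frac{\lambda}{\sin\varepsilon}, \qquad \Delta p_x \sim \frac{h}{\lambda}\,\sin\varepsilon \quad\Longrightarrow\quad \Delta x\,\Delta p_x \sim h.

    Shortening \lambda sharpens the image but increases the recoil, so the product cannot be driven below roughly Planck's constant.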

    Scientific explanations in Greek upper secondary physics textbooks

    In this study, the structure of the scientific explanations included in the physics textbooks of upper secondary schools in Greece was analyzed. In the scientific explanations of specific phenomena found in the sample textbooks, the explanandum is a logical consequence of the explanans, which in all cases includes at least one scientific law (and/or principle, model or rule) previously presented, as well as statements concerning a specific case or specific conditions. The same structure is also followed in most of the cases in which the textbook authors explain regularities (i.e. laws, rules) as consequences of one or more general laws or principles of physics. Finally, a number of the physics laws and principles presented in the textbooks are not deduced as consequences of other, more general laws but are formulated axiomatically or derived inductively, with the authors arguing for their validity. Since the scientific explanations in the textbooks studied were found to have structures similar to those of the explanations in internationally known textbooks, the findings of the present work may be of interest not only to science educators in Greece but also to the community of science educators in other countries. © 2017 Informa UK Limited, trading as Taylor & Francis Group
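
    The structure described here is essentially the classical deductive-nomological schema, which can be written compactly (a standard rendering added for illustration, not notation taken from the article):

        \underbrace{L_1, \ldots, L_n}_{\text{laws, principles, models}}\;,\;\; \underbrace{C_1, \ldots, C_k}_{\text{specific conditions}} \;\;\vdash\;\; \underbrace{E}_{\text{explanandum}}

    An explanation of a regularity simply replaces E with a less general law derived from the L_i.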

    Big spatial and spatio-temporal data analytics systems

    We are living in the era of Big Data, and spatial and spatio-temporal data are no exception. Mobile apps, cars, GPS devices, ships, airplanes, medical devices, IoT devices, etc. generate explosive amounts of data with spatial and temporal characteristics. Social networking systems also generate and store vast amounts of geo-located information, such as geo-located tweets or captured mobile users' locations. Managing this huge volume of spatial and spatio-temporal data requires parallel and distributed frameworks. For this reason, modeling, storing, querying and analyzing big spatial and spatio-temporal data in distributed environments is an active research area with many interesting challenges. In recent years, many spatial and spatio-temporal analytics systems have emerged. This paper provides a comparative overview of such systems based on a set of characteristics (data types, indexing, partitioning techniques, distributed processing, query language, visualization and case studies of applications). We present selected systems (the most promising and/or most popular ones), considering their acceptance in the research and advanced applications communities. More specifically, we present two systems handling spatial data only (SpatialHadoop and GeoSpark) and two systems that can also handle spatio-temporal data (ST-Hadoop and STARK), and compare their characteristics and capabilities. Moreover, we also briefly present other recent and emerging spatial and spatio-temporal analytics systems with interesting characteristics. The paper closes with our conclusions arising from our investigation of the rather new, though quite large, world of ecosystems supporting the management of big spatial and spatio-temporal data. © Springer-Verlag GmbH Germany, part of Springer Nature 2021
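
    A common thread in all four systems is spatial partitioning: the data are split by location so that a query touches only the relevant partitions rather than the whole dataset. The following single-machine Python sketch illustrates uniform-grid partitioning and a pruned range query; the bounding box, grid size, and point format are illustrative assumptions, and the real systems distribute such partitions across a cluster and also offer R-tree or Quadtree partitioners.

        from collections import defaultdict

        # Illustrative domain: points (id, x, y) inside a known bounding box.
        MIN_X, MIN_Y, MAX_X, MAX_Y = 0.0, 0.0, 100.0, 100.0
        GRID = 8   # an 8 x 8 uniform grid of partitions

        def cell_of(x, y):
            """Map a point to the grid cell (partition) responsible for it."""
            cx = min(int((x - MIN_X) / (MAX_X - MIN_X) * GRID), GRID - 1)
            cy = min(int((y - MIN_Y) / (MAX_Y - MIN_Y) * GRID), GRID - 1)
            return cx, cy

        def partition(points):
            """Group points by cell; each group would become one distributed task."""
            parts = defaultdict(list)
            for pid, x, y in points:
                parts[cell_of(x, y)].append((pid, x, y))
            return parts

        def range_query(parts, x1, y1, x2, y2):
            """Scan only the cells overlapping the query window."""
            (cx1, cy1), (cx2, cy2) = cell_of(x1, y1), cell_of(x2, y2)
            return [pid
                    for cx in range(cx1, cx2 + 1)
                    for cy in range(cy1, cy2 + 1)
                    for pid, x, y in parts.get((cx, cy), [])
                    if x1 <= x <= x2 and y1 <= y <= y2]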

    A Partitioning GPU-based Algorithm for Processing the k Nearest-Neighbor Query

    The k Nearest-Neighbor (k-NN) query is a common spatial query that appears in several big data applications. Typically, GPU devices have much larger numbers of processing cores than CPUs and faster device memory than the main memory accessed by CPUs, thus providing higher computing power. We propose and implement a new GPU-based partitioning algorithm for the k-NN query, using the CUDA runtime API. Thanks to partitioning, this algorithm avoids calculating distances for the whole dataset. Using synthetic and real datasets, we present an extensive experimental performance comparison against six existing algorithms, all of which calculate distances for the whole in-memory dataset. This comparison shows that the new algorithm excels in all the conducted experiments and outperforms these six algorithms. © 2020 ACM
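
    The paper's CUDA implementation is not reproduced here, but the pruning idea behind partition-based k-NN fits in a few lines of sequential Python (the grid size, unit-square domain, and helper names are illustrative assumptions; a GPU version would evaluate many cells and queries in parallel). Cells are examined in growing rings around the query point, and the search stops once the k-th best distance is no larger than the minimum distance to any unvisited ring:

        import math
        from collections import defaultdict

        GRID, SIDE = 32, 1.0       # 32 x 32 grid over the unit square (illustrative)

        def cell(x, y):
            """Grid cell containing (x, y), clamped to the domain."""
            return (min(int(x / SIDE * GRID), GRID - 1),
                    min(int(y / SIDE * GRID), GRID - 1))

        def build_index(points):
            """Hash every point into its grid cell."""
            idx = defaultdict(list)
            for p in points:
                idx[cell(*p)].append(p)
            return idx

        def knn(idx, q, k):
            """Ring-by-ring search; prunes cells that cannot hold a closer point."""
            qc, best = cell(*q), []            # best holds (distance, point) pairs
            for r in range(GRID):
                for cx in range(qc[0] - r, qc[0] + r + 1):
                    for cy in range(qc[1] - r, qc[1] + r + 1):
                        if max(abs(cx - qc[0]), abs(cy - qc[1])) != r:
                            continue           # visit only the ring's border cells
                        for p in idx.get((cx, cy), []):
                            best.append((math.dist(q, p), p))
                best = sorted(best)[:k]
                # Any point in ring r+1 is at least r * (SIDE / GRID) away from q.
                if len(best) == k and best[-1][0] <= r * (SIDE / GRID):
                    break
            return [p for _, p in best]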

    GPU-aided edge computing for processing the k nearest-neighbor query on SSD-resident data

    Edge computing aims at improving performance by storing and processing data closer to their source. The k Nearest-Neighbor (k-NN) query is a common spatial query in several applications; for example, it can be used for distance-based classification of a group of points against a big reference dataset to derive the dominating feature class. Typically, GPU devices have much larger numbers of processing cores than CPUs and faster device memory than the main memory accessed by CPUs, thus providing higher computing power. However, since device and/or main memory may not be able to host an entire reference dataset, the use of secondary storage is inevitable, and Solid State Disks (SSDs) can be used to store such datasets. In this paper, we propose an architecture for a distributed edge-computing environment in which large-scale processing of the k-NN query is accomplished by executing an efficient algorithm on its (GPU- and SSD-enabled) edge nodes. We also propose a new algorithm for this purpose: a GPU-based partitioning algorithm for processing the k-NN query on big reference data stored on SSDs. We implement this algorithm on a GPU-enabled edge-computing device hosting reference data on an SSD. Using synthetic datasets, we present an extensive experimental performance comparison of the new algorithm against two existing algorithms (working on memory-resident data) proposed by other researchers and two (working on SSD-resident data) recently proposed by us. The new algorithm excels in all the conducted experiments and outperforms its competitors. © 2021 Elsevier B.V.
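
    As a minimal illustration of the out-of-core side of this design (not the authors' algorithm, which additionally partitions the data and offloads distance computation to the GPU), the host-side Python sketch below streams SSD-resident reference points in fixed-size chunks while maintaining a bounded top-k heap; the on-disk layout, chunk size, and function names are assumptions:

        import heapq
        import math
        import struct

        CHUNK = 100_000        # points per SSD read (illustrative)
        POINT_FMT = "<2f"      # assumed layout: little-endian float32 x, y
        PSIZE = struct.calcsize(POINT_FMT)

        def knn_from_ssd(path, q, k):
            """Stream the reference file chunk by chunk, keeping a running
            top-k heap so memory use stays bounded for any dataset size."""
            heap = []                              # max-heap via negated distances
            with open(path, "rb") as f:
                while True:
                    buf = f.read(CHUNK * PSIZE)
                    if not buf:
                        break
                    for x, y in struct.iter_unpack(POINT_FMT, buf):
                        d = math.dist(q, (x, y))
                        if len(heap) < k:
                            heapq.heappush(heap, (-d, x, y))
                        elif d < -heap[0][0]:
                            heapq.heapreplace(heap, (-d, x, y))
            return sorted((-nd, (x, y)) for nd, x, y in heap)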