
    REMOTE MOBILE SCREEN (RMS): AN APPROACH FOR SECURE BYOD ENVIRONMENTS

    Bring Your Own Device (BYOD) is a policy where employees use their own personal mobile devices to perform work-related tasks. Enterprises reduce their costs since they do not have to purchase and support the mobile devices. BYOD increases job satisfaction and productivity among employees, as they can choose which device to use and do not need to carry two or more devices. However, BYOD policies create an insecure environment, as the corporate network is extended and becomes harder to protect from attacks. In this scenario, corporate information can be leaked, personal and corporate spaces are not separated, it becomes difficult to enforce security policies on the devices, and employees are worried about their privacy. Consequently, a secure BYOD environment must achieve the following goals: space isolation, corporate data protection, security policy enforcement, true space isolation, non-intrusiveness, and low resource consumption. We found that none of the currently available solutions achieve all of these goals. We developed Remote Mobile Screen (RMS), a framework that meets all the goals for a secure BYOD environment. To achieve this, the enterprise provides the employee with a Virtual Machine (VM) running a mobile operating system, which is located in the enterprise network and to which the employee connects using the mobile device. We provide an implementation of RMS using commonly available software for an x86 architecture. We address RMS challenges related to compatibility, scalability and latency. For the first challenge, we show that at least 90.2% of the productivity applications from Google Play can be installed on an x86 architecture, while at least 80.4% run normally. For the second challenge, we deployed our implementation on a high-performance server and ran up to 596 VMs using 256 GB of RAM. Further, we show that the number of VMs is proportional to the available RAM. For the third challenge, we used our implementation on GENI and concluded that an application latency of 150 milliseconds can be achieved. Adviser: Byrav Ramamurthy
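    As a rough illustration of the scalability result above, the reported figures (596 VMs on 256 GB of RAM) imply roughly 0.43 GB per VM. The short Python sketch below, written for this summary, turns that ratio into a capacity estimate for other server sizes; the per-VM footprint is derived from the reported numbers, while the host-overhead reserve is a hypothetical parameter introduced here.

        # Back-of-the-envelope capacity estimate from the reported RMS
        # figures: 596 VMs on 256 GB of RAM, with VM count assumed (as in
        # the abstract) to grow linearly with available RAM.
        GB_PER_VM = 256 / 596  # ~0.43 GB per VM, derived from the report

        def estimate_max_vms(ram_gb: float, host_overhead_gb: float = 8.0) -> int:
            """Estimate how many RMS-style VMs fit into ram_gb of RAM,
            reserving a hypothetical host_overhead_gb for the hypervisor."""
            usable = max(ram_gb - host_overhead_gb, 0.0)
            return int(usable // GB_PER_VM)

        for ram in (64, 128, 256, 512):
            print(f"{ram:>4} GB RAM -> ~{estimate_max_vms(ram)} VMs")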

    Analyzing epigenomic data in a large-scale context

    While large amounts of epigenomic data are publicly available, their retrieval in a form suitable for downstream analysis is a bottleneck in current research. In a typical analysis, users are required to download huge files that span the entire genome, even if they are only interested in a small subset (e.g., promoter regions) or an aggregation thereof. Moreover, complex operations on genome-level data are not always feasible on a local computer due to resource limitations. The DeepBlue Epigenomic Data Server mitigates this issue by providing a robust server that affords a powerful API for searching, filtering, transforming, aggregating, enriching, and downloading data from several epigenomic consortia. Furthermore, its main component implements data storage and manipulation methods that scale with the increasing amount of epigenetic data, making it an ideal resource for researchers who seek to integrate epigenomic data into their analysis workflows. This work also presents companion tools that utilize the DeepBlue API to enable users not proficient in scripting or programming languages to analyze epigenomic data in a user-friendly way: (i) an R/Bioconductor package that integrates DeepBlue into the R analysis workflow. The extracted data are automatically converted into suitable R data structures for downstream analysis and visualization within the Bioconductor framework; (ii) a web portal that enables users to search, select, filter, and download the epigenomic data available in the DeepBlue Server. This interface provides elements, such as data tables, grids, and data selections, developed to empower users to find the required epigenomic data in a straightforward interface; (iii) DIVE, a web data analysis tool that allows researchers to perform large-scale epigenomic data analysis in a programming-free environment. DIVE enables users to compare their datasets to the datasets available in the DeepBlue Server in an intuitive interface, which summarizes the comparison of hundreds of datasets in a simple chart. Given the large amount of data available in DIVE, methods are provided that suggest the most similar datasets for comparative analysis. Furthermore, these tools are integrated and capable of sharing results among themselves, creating a powerful large-scale epigenomic data analysis environment. The DeepBlue Epigenomic Data Server and its ecosystem were well received by the International Human Epigenome Consortium and have already attracted much attention from the epigenomic research community, with currently 160 registered users and more than three million anonymous workflow processing requests since their release.
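    To give a concrete sense of the DeepBlue API, the Python sketch below queries the server over XML-RPC. The endpoint URL, the command names (echo, list_experiments), the parameter order, and the anonymous_key user key are assumptions based on DeepBlue's public documentation and should be verified against the current API reference before use.

        import xmlrpc.client

        URL = "http://deepblue.mpi-inf.mpg.de/xmlrpc"  # assumed public endpoint
        USER_KEY = "anonymous_key"                     # assumed anonymous-access key

        server = xmlrpc.client.ServerProxy(URL, allow_none=True)

        # Sanity check: 'echo' should return a greeting for the user key.
        status, message = server.echo(USER_KEY)
        print(status, message)

        # List H3K4me3 peak experiments for human (hg19); empty strings are
        # assumed to leave a filter unconstrained, per DeepBlue's convention.
        status, experiments = server.list_experiments(
            "hg19",      # genome assembly
            "peaks",     # data type
            "H3K4me3",   # epigenetic mark
            "",          # biosource (unconstrained)
            "",          # sample (unconstrained)
            "ChIP-seq",  # technique
            "",          # project (unconstrained)
            USER_KEY,
        )
        if status == "okay":
            for exp_id, name in experiments[:5]:
                print(exp_id, name)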

    Factors shaping the evolution of electronic documentation systems

    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when one asks how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information-intensive environments
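    One ingredient of such a Knowledge Base Management System is a control structure for deductive reasoning. The toy Python sketch below forward-chains over invented documentation-related facts and rules; it is purely illustrative and stands in for the far richer reasoning machinery the abstract envisions.

        # Minimal forward-chaining deduction over an invented rule base.
        # Each rule maps a set of premise facts to a derivable conclusion.
        RULES = [
            ({"document_is_versioned", "document_is_approved"},
             "document_is_releasable"),
            ({"document_is_releasable", "reviewer_assigned"},
             "document_enters_review"),
        ]

        def forward_chain(facts: set[str]) -> set[str]:
            """Apply rules repeatedly until no new facts can be deduced."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in RULES:
                    if premises <= derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        print(forward_chain({"document_is_versioned", "document_is_approved",
                             "reviewer_assigned"}))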

    Satellite Networks: Architectures, Applications, and Technologies

    Global satellite networks are moving to the forefront of efforts to enhance national and global information infrastructures, owing to communication satellites' unique networking characteristics, so a workshop was organized to assess the progress made to date and to chart the future. The workshop provided a forum to assess the current state of the art, identify key issues, and highlight emerging trends in next-generation architectures, data protocol development, communication interoperability, and applications. Assembled here are presentations covering overviews, the state of the art in research, development, deployment, and applications, and future trends in satellite networks

    Mobile Car Rental System

    This paper gives an overview of "M-commerce" approaches that allow car-rental companies in Malaysia to upgrade and optimize their business processes. First it addresses issues found in the traditional car rental sector, such as slow turnaround and the large amount of time needed to make a reservation. It then describes how these issues are solved using new mobile technology that provides true mobility, instant information, and better customer service compared with traditional manual and wired online car rental businesses. To achieve a positive result, the author applied a prototyping methodology throughout this project, which quickly provides a system for the users to interact with. The successful result at the end of this project proved that mobile technology can easily and efficiently be applied to businesses that provide services, such as car rental companies. However, drawbacks were also found, namely the small memory, limited functionality, and limited content of mobile devices, which can be eliminated in the future by new developments in wireless devices and communication infrastructures. Finally, this paper highlights the result of the research and development activity carried out to develop the new system, which will benefit car-rental companies by increasing the number of customers, reducing operating costs, and ultimately making them more competitive in a fast-moving world

    Proceedings of the Second International Mobile Satellite Conference (IMSC 1990)

    Presented here are the proceedings of the Second International Mobile Satellite Conference (IMSC), held June 17-20, 1990 in Ottawa, Canada. Topics covered include future mobile satellite communications concepts, aeronautical applications, modulation and coding, propagation and experimental systems, mobile terminal equipment, network architecture and control, regulatory and policy considerations, vehicle antennas, and speech compression

    VenCode – a versatile entry code for post-DNA delivery identification of target cells

    ABSTRACT: The human body is made up of hundreds, perhaps thousands, of distinct cell types and states, most of which are currently inaccessible genetically. Genetic accessibility carries significant diagnostic and therapeutic potential, as it allows the selective delivery of genetic messages or cures to cells. Research in model organisms has shown that the activity of a single regulatory element (RE) is seldom cell-type specific, limiting its usage in genetic systems designed to restrict gene expression after delivery to cells. Intersectional genetic approaches can theoretically increase the number of genetically accessible cells, but the scope and safety of these approaches in humans have not been systematically assessed, primarily due to the lack of suitably thorough RE activity databases and of methods to explore them. A typical intersectional method acts like an AND logic gate, converting the input of two or more active REs into a single synthetic output that becomes unique for that cell. Here, we systematically assessed the intersectional genetics landscape of the human organism using a curated subset of cells from a large RE usage atlas obtained by Cap Analysis of Gene Expression sequencing (CAGE-seq) of thousands of primary and cancer cells (the FANTOM5 consortium atlas). We developed heuristics and algorithms to retrieve AND-gate intersections and quality-rank them intra- and interindividually. We find that more than 90% of the 154 primary cell types surveyed can be distinguished from each other with as few as 3 to 4 active REs, with quantifiable safety and robustness. We call these minimal intersections of active REs with cell-type diagnostic potential "Versatile Entry Codes" (VEnCodes). Each of the 158 cancer cell types surveyed could also be distinguished from the healthy primary cell types with small VEnCodes, most of which were highly robust to intra- and interindividual variation. Finally, we provide methods for the cross-validation of CAGE-seq-derived VEnCodes and for the extraction of VEnCodes from pooled single-cell sequencing data. Our work provides a systematic view of the intersectional genetics landscape in humans and demonstrates the potential of these approaches for future gene delivery technologies
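    The AND-gate search at the heart of this approach can be illustrated compactly. The Python sketch below uses an invented RE activity matrix and a brute-force combination search to find a minimal RE set that is jointly active only in the target cell type; the actual work relies on dedicated heuristics and quality ranking that this sketch omits.

        from itertools import combinations

        # Toy RE activity matrix: cell type -> set of active REs (invented).
        ACTIVITY = {
            "neuron":     {"RE1", "RE2", "RE5"},
            "hepatocyte": {"RE1", "RE3", "RE5"},
            "T_cell":     {"RE2", "RE3", "RE4"},
        }

        def find_vencode(target: str, max_size: int = 4):
            """Return the smallest AND-gate combination of REs that is
            active together only in the target cell type."""
            active = sorted(ACTIVITY[target])
            others = [res for cell, res in ACTIVITY.items() if cell != target]
            for k in range(1, max_size + 1):
                for combo in combinations(active, k):
                    # Unique if no other cell type has every RE in combo active.
                    if not any(set(combo) <= res for res in others):
                        return combo
            return None

        print(find_vencode("neuron"))  # ('RE1', 'RE2'): joint activity unique to neurons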

    Epigenetics of Reprogramming to Induced Pluripotency

    Reprogramming to induced pluripotent stem cells (iPSCs) proceeds in a stepwise manner with reprogramming factor binding, transcription, and chromatin states changing during transitions. Evidence is emerging that epigenetic priming events early in the process may be critical for pluripotency induction later. Chromatin and its regulators are important controllers of reprogramming, and reprogramming factor levels, stoichiometry, and extracellular conditions influence the outcome. The rapid progress in characterizing reprogramming is benefiting applications of iPSCs and is already enabling the rational design of novel reprogramming factor cocktails. However, recent studies have also uncovered an epigenetic instability of the X chromosome in human iPSCs that warrants careful consideration

    World Alzheimer report 2016: improving healthcare for people living with dementia: coverage, quality and costs now and in the future

    The World Alzheimer Report 2016, Improving healthcare for people living with dementia: Coverage, quality and costs now and in the future, reviews research evidence on the elements of healthcare for people with dementia, and, using economic modelling, suggests how it should be improved and made more efficient. The report argues that current dementia healthcare services are over-specialised, and that a rebalancing is required with a more prominent role for primary and community care. This would increase capacity, limit the increased costs associated with scaling up coverage of care, and, coupled with the introduction of care pathways and case management, improve the coordination and integration of care. Modelling of the costs of care pathways was carried out in Canada, China, Indonesia, Mexico, South Africa, South Korea and Switzerland, to estimate the costs of dementia healthcare under different assumptions regarding delivery systems. The report was researched and authored by Prof Martin Prince, Ms Adelina Comas-Herrera, Prof Martin Knapp, Dr MaĂ«lenn Guerchet and Ms Maria Karagiannidou from The Global Observatory for Ageing and Dementia Care, King’s College London and the Personal Social Services Research Unit (PSSRU), London School of Economics and Political Science
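    The kind of delivery-system comparison behind the report's modelling can be sketched in miniature. In the Python fragment below, a specialist-led pathway is compared with a rebalanced, primary-care-led pathway; every number is an invented placeholder for illustration and bears no relation to the report's actual estimates.

        # Toy cost model for a dementia care pathway. All parameters are
        # invented placeholders, not figures from the World Alzheimer Report.
        def annual_cost(cases: int, coverage: float, unit_cost: float) -> float:
            """Yearly cost of serving a coverage share of cases at a
            given unit cost per case."""
            return cases * coverage * unit_cost

        CASES = 100_000  # hypothetical national dementia caseload

        # Specialist-led pathway: lower coverage, higher per-case cost.
        specialist = annual_cost(CASES, coverage=0.50, unit_cost=2_000.0)

        # Primary-care-led pathway: broader coverage at a lower per-case
        # cost, plus a case-management overhead for each covered case.
        primary = (annual_cost(CASES, coverage=0.80, unit_cost=900.0)
                   + CASES * 0.80 * 150.0)

        print(f"specialist-led:   {specialist:,.0f}")
        print(f"primary-care-led: {primary:,.0f}")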