4,077 research outputs found

    Ubiquitous Interoperable Emergency Response System

    In the United States, there is an emergency dispatch for fire department services more than once every second - 31,854,000 incidents in 2012. While large scale disasters present enormous response complexity, even the most common emergencies require a better way to communicate information between personnel. Through real-time location and status updates using integrated sensors, this system can significantly decrease emergency response times and improve the overall effectiveness of emergency responses. Aside from face-to-face communication, radio transmissions are the most common medium for transferring information during emergency incidents. However, this type of information sharing is riddled with issues that are nearly impossible to overcome on a scene. Poor sound quality, the failure to hear transmissions, the inability to reach a radio microphone, and the transient nature of radio messages illustrate just a few of the problems. Proprietary and closed systems that collect and present response data have been implemented, but lack interoperability and do not provide a full array of necessary services. Furthermore, the software and hardware that run the systems are generally poorly designed for emergency response scenarios. Pervasive devices, which can transmit data without human interaction, and software using open communication standards designed for multiple platforms and form factors are two essential components. This thesis explores the issues, history, design, and implementation of a ubiquitous interoperable emergency response system by taking advantage of the latest in hardware and software, including Google Glass, Android powered mobile devices, and a cloud-based architecture that can automatically scale to 7 billion requests per day.
Implementing this pervasive system, which transcends physical barriers by allowing disparate devices to communicate and operate harmoniously without human interaction, is a step towards a practical solution for emergency response management.

    Strategies for including cloud-computing into an engineering modeling workflow

    With the advent of cloud computing, high-end computing, networking, and storage resources are available on-demand at a relatively low price point. Internet applications in the consumer and increasingly in the enterprise space are making use of these resources to upgrade existing applications and build new ones. This is made possible by building decentralized applications that can be integrated with one another through web-enabled application programming interfaces (APIs). However, in the fields of engineering and computational science, cloud computing resources have been utilized primarily to augment existing high-performance computing hardware, and engineering model integrations still occur through software libraries. In this research, a novel approach is proposed where engineering models are constructed as independent services that publish web-enabled APIs. To enable this, the engineering models are built as stateless microservices that each solve a single computational problem. Composite services are then built utilizing these independent component models, much like in the consumer application space. Interactions between component models are orchestrated by a federation management system. This proposed approach is then demonstrated by disaggregating an existing monolithic model for a cookstove into a set of component models. The component models are then reintegrated and compared with the original model for computational accuracy and run-time. Additionally, a novel engineering workflow is proposed that reuses computational data by constructing reduced-order models (ROMs). This framework is evaluated empirically for a number of producers and consumers of engineering models based on computation and data synchronization aspects. The framework is also evaluated by simulating an engineering design workflow with multiple producers and consumers at various stages during the design process.
Finally, concepts from the federated system of models and ROMs are combined to propose the concept of a hybrid model (information artefact). The hybrid model is a web-enabled microservice that encapsulates information from multiple engineering models at varying fidelities, and responds to queries based on the best available information. Rules for the construction of hybrid models have been proposed and evaluated in the context of engineering workflows.
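The component-model idea can be illustrated with a minimal sketch. Everything here is hypothetical and assumed for illustration only, including the model names, constants, and interfaces: each engineering model is a stateless function with a JSON-style request/response contract, and a thin federation layer wires component outputs into downstream inputs (in the thesis these would be web-enabled microservice APIs rather than in-process calls).

```python
def combustion_model(request):
    """Stateless component model: fuel mass (kg) -> heat released (MJ).

    Assumed constant: lower heating value of wood, roughly 16 MJ/kg.
    """
    return {"heat_mj": request["fuel_kg"] * 16.0}


def heat_transfer_model(request):
    """Stateless component model: heat (MJ) -> water temperature rise (K).

    Assumed: 2 kg of water, c_p = 4.186 kJ/(kg K), 30% transfer efficiency.
    """
    usable_kj = request["heat_mj"] * 1000.0 * 0.30
    return {"delta_t_k": usable_kj / (2.0 * 4.186)}


def federation(request):
    """Orchestrator: routes one service's response into the next request."""
    heat = combustion_model({"fuel_kg": request["fuel_kg"]})
    return heat_transfer_model({"heat_mj": heat["heat_mj"]})


# Composite query: how much does 50 g of fuel heat the water?
result = federation({"fuel_kg": 0.05})
```

Because each component is stateless, it can be replicated, swapped for a higher-fidelity variant, or cached behind the same contract, which is what makes the federated composition possible.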

    Profile of investigative capacities that determine factors to investigate in the universities of Peru

    Developing research in the Peruvian university context is an urgent need. To develop research processes, a profile of students' research capacity is required. The purpose of this study was therefore to determine the profile of investigative capacities that derive from factors to investigate in universities. The study follows a quantitative, cross-sectional approach, with a sample of 303 university students. Two instruments were used: an investigative skills scale and a questionnaire on factors that influence investigative skills, with construct validity established by the KMO test (0.623 and 0.706, respectively). Regarding the profiles, 32.3% of respondents identified with the reflective inquiry investigative capacity profile, followed by 26.7% with the generic conceptualization investigative capacity profile; 27.7% were related to the specific cognitive investigative abilities profile and, finally, 13.2% of those evaluated identified with the active cognitive construction investigative abilities profile. It is concluded that students have investigative skills at different progressive levels, with characteristics that depend on the influence of factors in the training process.

    ChimpCheck: Property-Based Randomized Test Generation for Interactive Apps

    We consider the problem of generating relevant execution traces to test rich interactive applications. Rich interactive applications, such as apps on mobile platforms, are complex stateful and often distributed systems where sufficiently exercising the app with user-interaction (UI) event sequences to expose defects is both hard and time-consuming. In particular, there is a fundamental tension between brute-force random UI exercising tools, which are fully-automated but offer low relevance, and UI test scripts, which are manual but offer high relevance. In this paper, we consider a middle way: enabling a seamless fusion of scripted and randomized UI testing. This fusion is prototyped in a testing tool called ChimpCheck for programming, generating, and executing property-based randomized test cases for Android apps. Our approach realizes this fusion by offering a high-level, embedded domain-specific language for defining custom generators of simulated user-interaction event sequences. What follows is a combinator library built on industrial strength frameworks for property-based testing (ScalaCheck) and Android testing (Android JUnit and Espresso) to implement property-based randomized testing for Android development. Driven by real, reported issues in open source Android apps, we show, through case studies, how ChimpCheck enables expressing effective testing patterns in a compact manner. Comment: 20 pages, 21 figures, Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward! 2017).
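The fusion of scripted and randomized UI testing can be sketched with a few combinators. This is not ChimpCheck's actual Scala DSL; it is a hypothetical Python analogue showing the core idea: deterministic scripted steps compose with randomized choice points into a single generator of user-interaction event traces.

```python
import random

# Leaf generators: each returns a function from an RNG to a list of UI events.
def click(view_id):
    return lambda rng: [("click", view_id)]

def type_text(view_id, text):
    return lambda rng: [("type", view_id, text)]

# Combinators over generators.
def seq(*gens):
    # Run sub-generators in order, concatenating their event traces.
    return lambda rng: [e for g in gens for e in g(rng)]

def one_of(*gens):
    # Randomized choice point: pick one sub-generator at random.
    return lambda rng: rng.choice(gens)(rng)

def repeat(gen, n):
    # Run the same generator n times.
    return lambda rng: [e for _ in range(n) for e in gen(rng)]

# Scripted prefix (log in), then a randomized exploration suffix.
trace_gen = seq(
    type_text("username", "alice"),
    click("login_button"),
    repeat(one_of(click("menu"), click("back"), click("refresh")), 5),
)

rng = random.Random(42)  # seeded, so failing traces can be replayed
trace = trace_gen(rng)
```

A property-based runner would feed many such traces to the app under test and shrink any failing trace; seeding the RNG keeps failures reproducible.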

    Security enhanced sentence similarity computing model based on convolutional neural network

    Deep learning models show great advantages in various fields. However, researchers have focused on improving model accuracy while overlooking security considerations. The problem of manipulating a deep learning model's judgment with attack examples, thereby affecting system decision-making, has gradually been exposed. To improve the security of sentence similarity analysis models, we propose a convolutional neural network model based on an attention mechanism. First, the mutual information between sentences is correlated by attention weighting. Then, it is fed into an improved convolutional neural network. In addition, we add attack examples to the input, generated by the firefly algorithm. In an attack example, we replace words in the sentence to some extent, which yields adversarial data with a large semantic change but only a slight change in sentence structure. To a certain extent, adding attack examples increases the model's ability to identify adversarial data and improves its robustness. Experimental results show that the accuracy, recall rate, and F1 value of the model are superior to those of other baseline models. This work was supported in part by the Major Scientific and Technological Projects of China National Petroleum Corporation (CNPC) under Grant ZD2019-183-006, in part by the Shandong Provincial Natural Science Foundation, China, under Grant ZR2020MF006, in part by the Fundamental Research Funds for the Central Universities of China University of Petroleum (East China) under Grant 20CX05017A, and in part by the Open Foundation of State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) under Grant SKLNST-2021-1-17. Postprint (author's final draft).
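The adversarial-example construction described above amounts to constrained word substitution. The synonym table and function below are hypothetical, for illustration only (the paper searches for substitutions with a firefly algorithm rather than a fixed table): a few words are replaced so the surface structure barely changes while the token sequence, and hence the model's input, shifts.

```python
# Hypothetical near-synonym table; an attack would derive candidates from
# word embeddings or an optimization loop (here, a firefly-style search).
SYNONYMS = {"quick": "fast", "happy": "glad", "car": "automobile"}

def perturb(sentence, max_swaps=2):
    """Replace up to max_swaps words with near-synonyms, left to right.

    The bound on swaps models the paper's constraint that sentence
    structure changes only slightly.
    """
    out, swaps = [], 0
    for word in sentence.split():
        if word in SYNONYMS and swaps < max_swaps:
            out.append(SYNONYMS[word])
            swaps += 1
        else:
            out.append(word)
    return " ".join(out)

adv = perturb("the quick car is happy")
```

Training on such perturbed inputs alongside the originals is what gives the model its robustness to adversarial data.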

    What Makes Digital Technology? A Categorization Based on Purpose

    Digital technology (DT) is creating and shaping today’s world. Building on its identity and history of technology research, the Information Systems discipline is at the forefront of understanding the nature of DT and related phenomena. Understanding the nature of DT requires understanding its purposes. Because of the growing number of DTs, these purposes are diversifying, and further examination is needed. To that end, we followed an organizational systematics paradigm and present a taxonomic theory for DT that enables its classification through its diverse purposes. The taxonomic theory comprises a multi-layer taxonomy of DT and purpose-related archetypes, which we inferred from a sample of 92 real-world DTs. In our empirical evaluation, we assessed the reliability, validity, and usefulness of the taxonomy and archetypes. The taxonomic theory exceeds existing technology classifications by being the first that (1) has been rigorously developed, (2) considers the nature of DT, (3) is sufficiently concrete to reflect the diverse purposes of DT, and (4) is sufficiently abstract to be persistent. Our findings add to the descriptive knowledge on DT, advance our understanding of the diverse purposes of DT, and lay the groundwork for further theorizing. Our work also supports practitioners in managing and designing DTs.

    Detection and Measurement of Sales Cannibalization in Information Technology Markets

    Characteristic features of Information Technology (IT), such as its intrinsic modularity and distinctive cost structure, incentivize IT vendors to implement growth strategies based on launching variants of a basic offering. These variants are by design substitutable to some degree and may contend for the same customers instead of winning new ones from competitors or from an expansion of the market. They may thus generate intra-organizational sales diversion, i.e., sales cannibalization. The occurrence of cannibalization between two offerings must be verified (the detection problem) and quantified (the measurement problem), before the offering with cannibalistic potential is introduced into the market (ex-ante estimation) and/or afterwards (ex-post estimation). In IT markets, both detection and measurement of cannibalization are challenging. The dynamics of technological innovation featured in these markets may namely alter, hide, or confound cannibalization effects. To address these research problems, we elaborated novel methodologies for the detection and measurement of cannibalization in IT markets and applied them to four exemplary case studies. We employed both quantitative and qualitative methodologies, thus implementing a mixed-method multi-case research design. The first case study focuses on product cannibalization in the context of continuous product innovation. We investigated demand interrelationships among Apple handheld devices by means of econometric models with exogenous structural breaks (i.e., whose date of occurrence is given a priori). In particular, we estimated how sales of the iPod line of portable music players were affected by new-product launches within the iPod line itself and by the introduction of iPhone smartphones and iPad tablets. We could find evidence of expansion in total line revenues, driven by iPod line extensions, and inter-categorical cannibalization, due to iPhones and iPads Mini.
The second empirical application tackles platform cannibalization, when a platform provider becomes complementor of an innovative third party platform thus competing with its own proprietary one. We ascertained whether the diffusion of GPS-enabled smartphones and navigation apps affected sales of portable navigation devices. Using a unit-root test with endogenous breaks (i.e., whose date of occurrence is estimated), we identified a negative shift in the sales of the two leaders in the navigation market and dated it at the third quarter of 2008, when the iOS and Android mobile ecosystems were introduced. Later launches of their own navigation apps did not significantly affect these manufacturers’ sales further. The third case study addresses channel cannibalization. We explored the channel adoption decision of organizational buyers of business software applications, in light of the rising popularity of online sales channels in consumer markets. We constructed a qualitative channel adoption model which takes into account the relevant drivers and barriers of channel adoption, their interdependences, and the buying process phases. Our findings suggest that, in the enterprise software market, online channels will not cannibalize offline ones unless some typical characteristics of enterprise software applications change. The fourth case study deals with business model cannibalization, the organizational decision to cannibalize an existent business model for a more innovative one. We examined the transition of two enterprise software vendors from on-premise to on-demand software delivery. Relying on a mixed-method research approach, built on the quantitative and qualitative methodologies from the previous case studies, we identified the transition milestones and assessed their impact on financial performances. The cannibalization between on-premise and on-demand is also the scenario for an illustrative simulation study of the cannibalization.
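The endogenous break-dating step can be sketched in a deliberately simplified form. The series and function below are illustrative only (the dissertation applies a unit-root test with endogenous breaks to actual sales data): here a single level shift is dated by choosing the split point that minimizes the two-segment sum of squared errors.

```python
def date_break(series):
    """Return the index where a single mean shift most likely occurs.

    Tries every split point and keeps the one whose two segments,
    each fitted with its own mean, leave the least residual variance.
    """
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    best_t, best_cost = None, float("inf")
    for t in range(1, len(series)):
        cost = sse(series[:t]) + sse(series[t:])
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Illustrative quarterly sales with a level shift after the 7th observation
# (standing in for the Q3 2008 shift estimated in the study).
sales = [100, 102, 99, 101, 100, 103, 101, 80, 78, 81, 79, 77]
break_at = date_break(sales)
```

The estimated break date is the index returned, i.e., the first observation of the post-shift regime; a formal test would additionally check the shift's statistical significance against the unit-root alternative.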

    Advances in Information Security and Privacy

    With the recent pandemic emergency, many people have shifted to remote working and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has increased dramatically, and we can observe a significant rise in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is reached by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.

    Recent Advances in Image Restoration with Applications to Real World Problems

    In the past few decades, imaging hardware has improved tremendously in terms of resolution, enabling the widespread use of images in many diverse applications on Earth and in planetary missions. However, practical issues associated with image acquisition still affect image quality. Some of these issues, such as blurring, measurement noise, mosaicing artifacts, and low spatial or spectral resolution, can seriously affect the accuracy of the aforementioned applications. This book intends to provide the reader with a glimpse of the latest developments and recent advances in image restoration, including image super-resolution, image fusion to enhance spatial, spectral, and temporal resolution, and the generation of synthetic images using deep learning techniques. Some practical applications are also included.