177 research outputs found

    Optimization in React.js: Methods, Tools, and Techniques to Improve Performance of Modern Web Applications

    Get PDF
    The complexity of modern web applications leads to increased performance demands, which must be met to succeed in a competitive market. For example, the complexity and number of commonly used libraries lead to slower page loading times. Furthermore, the number of slow Document Object Model (DOM) operations increases in single-page applications (SPA), which are currently trending. While the leading JavaScript library, React.js, includes multiple optimization methods, such as the virtual DOM, the responsibility for further optimization is left to developers. This paper examines which techniques, methods, and tools can be utilized in the React.js ecosystem to increase the performance of a web application. The research method used is a literature analysis. As its result, the current best optimization techniques, methods, and tools are listed, such as decreasing the number of re-renders through best practices of React.js state management and memoization, decreasing loading times, and handling compute-intensive tasks with multithreading and memoization. Furthermore, the overview of the optimization methods in React.js illustrates different techniques. However, some methods require a more custom implementation, and thus general concepts such as the HTML5 Web Workers API and the Webpack module bundler are reviewed. Since the presented examples are general and the performance benefits of combinations of the reviewed methods are not predictable, performance profiling through browser developer tools and the React.js Profiler component is also introduced. Performance profiling can be used to gain an understanding of the need for optimization and to analyse the actual benefits gained from utilizing the optimization techniques, methods, and tools.
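
    As a rough illustration of the re-render reduction techniques surveyed above (React.js state management and memoization), the sketch below combines React.memo, useMemo, and useCallback so that a child component only re-renders when its own inputs change. The component and prop names are illustrative assumptions, not examples from the paper.

        import React, { useCallback, useMemo, useState } from "react";

        // Hypothetical memoized child: re-renders only when its props change.
        type RowProps = { label: string; onSelect: (label: string) => void };
        const Row = React.memo(function Row({ label, onSelect }: RowProps) {
          return <li onClick={() => onSelect(label)}>{label}</li>;
        });

        function FilteredList({ items }: { items: string[] }) {
          const [query, setQuery] = useState("");

          // useMemo caches the filtering work between renders unless
          // `items` or `query` actually change.
          const visible = useMemo(
            () => items.filter((i) => i.toLowerCase().includes(query.toLowerCase())),
            [items, query]
          );

          // useCallback keeps the handler reference stable so React.memo on Row
          // is not defeated by a new function identity on every render.
          const handleSelect = useCallback((label: string) => {
            console.log("selected", label);
          }, []);

          return (
            <>
              <input value={query} onChange={(e) => setQuery(e.target.value)} />
              <ul>
                {visible.map((label) => (
                  <Row key={label} label={label} onSelect={handleSelect} />
                ))}
              </ul>
            </>
          );
        }

        export default FilteredList;

    Whether such memoization actually pays off in a given application should still be verified with the React.js Profiler or browser developer tools, as the abstract recommends.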

    Extending the distributed computing infrastructure of the CMS experiment with HPC resources

    Get PDF
    Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments. Recently, the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware used, HPC installations can differ greatly in their setup. In order to integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. For processing, access to primary data and metadata as well as access to the software is required. At Grid sites, all of this is achieved via a number of services provided by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in processing campaigns.

    A Survey of Performance Optimization for Mobile Applications

    Get PDF
    Nowadays there is a mobile application for almost everything a user may think of, ranging from paying bills and gathering information to playing games and watching movies. In order to ensure user satisfaction and the success of applications, it is important to provide highly performant applications. This is particularly important for resource-constrained systems such as mobile devices, where non-functional performance characteristics, such as energy and memory consumption, play an important role in user satisfaction. This paper provides a comprehensive survey of non-functional performance optimization for Android applications. We collected 155 unique publications, published between 2008 and 2020, that focus on the optimization of non-functional performance of mobile applications. We target our search at four performance characteristics in particular: responsiveness, launch time, memory consumption, and energy consumption. For each performance characteristic, we categorize optimization approaches based on the method used in the corresponding publications. Furthermore, we identify research gaps in the literature for future work.

    Combining Cloud and Mobile Computing for Machine Learning

    Full text link
    Although the computing power of mobile devices is increasing, machine learning models are also growing in size. This trend creates problems for mobile devices due to limitations like their memory capacity and battery life. While many services, like ChatGPT and Midjourney, run all the inferences in the cloud, we believe a flexible and fine-grained task distribution is more desirable. In this work, we consider model segmentation as a solution to improving the user experience, dividing the computation between mobile devices and the cloud in a way that offloads the compute-heavy portion of the model while minimizing the data transfer required. We show that the division not only reduces the wait time for users but can also be fine-tuned to optimize the workloads of the cloud. To achieve that, we design a scheduler that collects information about network quality, client device capability, and job requirements, making decisions to achieve consistent performance across a range of devices while reducing the work the cloud needs to perform. Comment: Ruiqi Xu and Tianchi Zhang contributed equally to this work.
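
    The split-point decision described above can be sketched as a simple latency model: for each candidate cut in the model, estimate on-device compute time, cloud compute time, and the time to transfer the intermediate activation, then pick the cut with the lowest total. The interfaces, field names, and cost model below are illustrative assumptions, not the authors' scheduler.

        // Hypothetical per-layer profile of a segmented model.
        interface LayerProfile {
          deviceMs: number;    // estimated on-device compute time for this layer
          cloudMs: number;     // estimated cloud compute time for this layer
          outputBytes: number; // size of the activation this layer produces
        }

        // Hypothetical snapshot of current network conditions.
        interface Conditions {
          uplinkBytesPerMs: number; // measured upload throughput
          rttMs: number;            // round-trip latency to the cloud
        }

        // Choose the layer index after which to offload the rest of the model.
        // cut = 0 sends everything to the cloud; cut = layers.length keeps
        // everything on the device (so nothing is transferred).
        function chooseSplit(
          layers: LayerProfile[],
          net: Conditions,
          inputBytes: number
        ): number {
          let bestCut = 0;
          let bestLatency = Infinity;

          for (let cut = 0; cut <= layers.length; cut++) {
            const deviceTime = layers.slice(0, cut).reduce((s, l) => s + l.deviceMs, 0);
            const cloudTime = layers.slice(cut).reduce((s, l) => s + l.cloudMs, 0);
            // Payload is the raw input when nothing runs on the device,
            // otherwise the activation produced at the cut.
            const payload = cut === 0 ? inputBytes : layers[cut - 1].outputBytes;
            const transferTime =
              cut === layers.length ? 0 : net.rttMs + payload / net.uplinkBytesPerMs;

            const total = deviceTime + transferTime + cloudTime;
            if (total < bestLatency) {
              bestLatency = total;
              bestCut = cut;
            }
          }
          return bestCut;
        }

    The scheduler described in the paper additionally takes cloud workload into account when making its decisions, which this single-request sketch omits.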

    Visualization of Simulations of a Robot Operated Car Park System

    Get PDF
    The aim of the MA thesis "Visualization of Simulations of a Robot Operated Car Park System" (Estonian: "Robotiseeritud parkimissüsteemi simulatsioonide visualiseerimine") is to identify the best practices and technologies for presenting the effectiveness of a robot operated car park system's underlying algorithm to potential customers. A web application demonstrating the work of the robots is built using the identified technologies and best practices. At the end of the thesis, the user experience provided by the application and its compliance with the requirements set for the tool are evaluated.

    Inkjet bioprinting and 3D culture of human MSC-laden binary starPEG-heparin hydrogels for cartilage tissue engineering

    Get PDF
    Articular cartilage is a highly specialized, hierarchically organized tissue covering the articular surfaces of diarthrodial joints; it absorbs and distributes forces upon mechanical loading and enables low-friction movement between opposing bone ends. Despite a strong resilience towards mechanical stress, once damaged, cartilage is generally not regenerated, due to the limited repair potential of the resident cells (chondrocytes) and the local absence of blood vessels and nerves. Eventually, this may lead to osteoarthritis, a chronic degenerative disorder of the synovial joints with a strongly growing prevalence worldwide. Modern regenerative therapies that aim to rebuild cartilage tissue in vivo and in vitro using chondrocyte- and stem cell-based methods are still not able to produce tissue constructs with the biomechanical properties and organization required for long-term repair. Therefore, cartilage tissue engineering seeks new ways to solve these problems. In this regard, the application of hydrogel-based scaffolding materials as artificial matrix environments that support the chondrogenesis of embedded cells, and the implementation of appropriate biofabrication techniques that help to reconstitute the zonal structure of articular cartilage, are considered promising strategies for sophisticated cartilage regeneration approaches. In this thesis, a modular starPEG-heparin hydrogel platform serving as a cell-instructive scaffold was combined with a custom-designed 3D inkjet bioprinting method, with the intention of developing a printable 3D in vitro culture system that promotes the chondrogenic differentiation of human mesenchymal stromal cells (hMSC) in printed cell-laden hydrogels with layered architectures, in order to fabricate cartilage-like tissue constructs with hierarchical organization. Firstly, the successful bioprinting of horizontally and vertically structured, cell-free and cell-laden hydrogel scaffolds with layer thicknesses in the range of the superficial zone, the thinnest articular cartilage layer, is demonstrated. The long-term integrity of the printed constructs and the cellular functionality of the printed cells, which generally had a high viability after the printing process, are shown by a successful PDGF-BB-mediated hMSC migration assay in a printed multilayered hydrogel construct over a culture period of 4 weeks. Secondly, when the established printing procedures were applied for the chondrogenic differentiation of hMSCs, the printed cell-laden constructs showed a limited potential for in vitro chondrogenesis, as indicated by weaker immunostaining for cartilage-specific markers compared to cast hydrogel controls. To increase the post-printing cell density and thereby tackle the limited printable cell concentration, which was regarded as the primary reason for the impaired performance of the printed scaffolds, different culture medium and hydrogel compositions were tested to stimulate 3D cell proliferation. However, a significant increase in 3D cell number could not be achieved, which ultimately shifted the further focus to cast hMSC-laden starPEG-heparin hydrogels. Thirdly, the chondrogenic differentiation of hMSCs in cast hydrogels proved successful, as indicated by a uniform deposition of cartilage-specific ECM molecules comparable with the outcomes of scaffold-free MSC micromass cultures used as a reference system. However, quantitative analysis of the biochemical and physical properties of the engineered hydrogel constructs still yielded significantly lower values than native articular cartilage tissue. Fourthly, in order to improve these properties and to enhance chondrogenesis in starPEG-heparin hydrogels, a dual strategy was followed. In the first part, specific externally supplied stimulatory cues, including a triple growth factor supply strategy and macromolecular crowding, were applied. In the second part, intrinsic properties of the modular hydrogel system, such as the crosslinking degree, the enzymatic degradability, and the heparin content, were systematically and independently altered. While the external cues showed no supportive benefit for chondrogenic differentiation, reducing the heparin content of the hydrogel proved to be a key trigger, resulting in significantly increased cartilage-like ECM deposition and gel stiffness in engineered constructs with low or no heparin content. In conclusion, this work yielded important experience with regard to the application of inkjet bioprinting for hMSC-based cartilage tissue engineering approaches. Furthermore, the obtained data provide valuable insights into the interaction of MSCs with a surrounding hydrogel-based microenvironment, which can be used for the further development of chondrosupportive scaffolding materials that may facilitate the fabrication of cartilage-like tissue constructs.

    PRELOADING : A Transformative Approach to Flood Preparation and Relief

    Get PDF
    For islands and coastal cities, the body of water that nourishes the land can easily become a leading source of threat. Natural disasters are mostly unpredictable and often have a devastating impact on life and property. Although intense tsunami inundations rarely occur, annually recurring flooding caused by rising water levels and coastal inundation is a common problem. People are often repeatedly trapped in flooded homes or are forced to evacuate quickly to inadequate temporary shelters. Two common approaches to the flood threat are to build permanent barriers or to physically distance people from the water. However, as the water is essential to the livelihood of islands and coastal cities, these approaches often create more harm than good, destroying the normally beneficial relationship between the people and the water. Damage to homes and the destruction of communities are often inevitable, and thus require large amounts of material and time for post-disaster reconstruction. Since external resources are expensive and difficult to transport during times of need, the lack of an immediate internal response to sudden natural disasters can cause severe delays in the disaster relief process and hinder the future redevelopment of the community. Consequently, the urgent issue is how to incorporate flood readiness into the built environment. How do we prepare ourselves for the occurrence and recurrence of flooding in coastal cities? This thesis proposes that, in designing for disasters, the architect's objective should be to design buildings that can respond to, recover from, and be resilient against water inundation. This thesis investigates a new strategy for flood protection and relief within the context of Port Alberni, British Columbia. The aim is to establish interconnected relationships between pre- and post-disaster buildings, materials, and resources. This means designing existing architecture in public space to contain the material and programmatic capacity to partially withstand flooding and strategically transform into spaces for flood relief; these, in turn, contribute to the rebuilding of a resilient community. Daily public interactions with these architectural elements can also preload residents with disaster awareness and knowledge for disaster relief. The design aims to reduce the gap between the urgent need for shelter and the speed of reaction to flood events, while at the same time creating an architectural syntax that constructs place and brings people back to the water.

    Hardware Acceleration in Image Stitching: GPU vs FPGA

    Get PDF
    Image stitching is a process in which two or more images with an overlapping field of view are combined. This process is commonly used to increase the field of view or image quality of a system. While it is not particularly demanding for modern personal computers, hardware acceleration is often required to achieve real-time performance in low-power image stitching solutions. In this thesis, two separate hardware-accelerated image stitching solutions are developed and compared. One solution is accelerated using a Xilinx Zynq UltraScale+ ZU3EG FPGA and the other using an Nvidia RTX 2070 Super GPU. The image stitching solutions implemented in this work increase the system's field of view and cover the end-to-end process of feature detection, image registration, and image mixing. The latency, resource utilization, and power consumption of the accelerated portions of each system are compared, and each system's tradeoffs and use cases are considered.
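
    As a rough sketch of only the final image-mixing stage mentioned above (feature detection and registration are omitted), the code below linearly blends two already-registered grayscale images across their horizontal overlap. The image layout and the blend weighting are assumptions for illustration, not the implementation benchmarked in this thesis.

        // Grayscale image stored row-major in a flat array.
        interface GrayImage {
          width: number;
          height: number;
          data: Uint8ClampedArray; // length === width * height
        }

        // Blend two already-registered images of equal height. `overlap` is the
        // number of columns shared by the right edge of `left` and the left edge
        // of `right`; the weight ramps linearly from pure left to pure right.
        function blendHorizontal(left: GrayImage, right: GrayImage, overlap: number): GrayImage {
          if (left.height !== right.height) throw new Error("height mismatch");
          const width = left.width + right.width - overlap;
          const out = new Uint8ClampedArray(width * left.height);

          for (let y = 0; y < left.height; y++) {
            for (let x = 0; x < width; x++) {
              const rx = x - (left.width - overlap); // x in right-image coordinates
              let value: number;
              if (x < left.width - overlap) {
                value = left.data[y * left.width + x]; // left image only
              } else if (x >= left.width) {
                value = right.data[y * right.width + rx]; // right image only
              } else {
                const t = rx / overlap; // 0..1 across the overlap region
                value =
                  (1 - t) * left.data[y * left.width + x] +
                  t * right.data[y * right.width + rx];
              }
              out[y * width + x] = value; // Uint8ClampedArray rounds and clamps
            }
          }
          return { width, height: left.height, data: out };
        }

    The per-pixel inner loop is the kind of data-parallel work that maps naturally onto both GPU threads and FPGA pipelines, which is why stages like this benefit from hardware acceleration.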

    ACUTA Journal of Telecommunications in Higher Education

    Get PDF
    In This Issue: Abundance of Services at IU; Customer Relations and Technology: Practical Solutions from Two Campuses; FSU Converges Support to Follow Technology; Service Catalogs and the Value of Just 12 Minutes; Essential Telephone Skills; Email Services: Beginning of the End?; Institutional Excellence Award Interview; President's Message; From the Executive Director