
    How Digital Natives Learn and Thrive in the Digital Age: Evidence from an Emerging Economy

    As a generation of ‘digital natives,’ secondary students born between 2002 and 2010 have various approaches to acquiring digital knowledge. Digital literacy and resilience are crucial for them to navigate the digital world as much as the real world; however, these remain under-researched subjects, especially in developing countries. In Vietnam, the education system has put considerable effort into teaching students these skills to promote quality education as part of the United Nations-defined Sustainable Development Goal 4 (SDG4). This issue has proven especially salient amid the COVID-19 pandemic lockdowns, which obliged most schools to switch to online forms of teaching. This study, which utilizes a dataset of 1,061 Vietnamese students taken from the United Nations Educational, Scientific, and Cultural Organization (UNESCO)’s “Digital Kids Asia Pacific (DKAP)” project, employs Bayesian statistics to explore the relationship between the students’ backgrounds and their digital abilities. Results show that economic status and parents’ level of education are positively correlated with digital literacy. Students from urban schools have only a slightly higher level of digital literacy than their rural counterparts, suggesting that school location may not be a defining explanatory element in the variation of digital literacy and resilience among Vietnamese students. Students’ digital literacy and, especially, resilience are also associated with their gender. Moreover, the more digitally literate students are, the more likely they are to be digitally resilient. In line with SDG4, i.e., Quality Education, it is advisable for schools, and especially parents, to invest seriously in creating a safe, educational environment that enhances digital literacy among students.
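    The study's Bayesian approach to relating background covariates to digital-ability scores can be illustrated with a minimal sketch. The code below is not the authors' model; it fits a conjugate Bayesian linear regression (Gaussian prior, Gaussian noise) to synthetic data in which a single background covariate drives a literacy score, and reports the posterior mean of the weights. The prior precision `alpha` and noise variance `sigma2` are illustrative assumptions.

```python
import numpy as np

def bayesian_linear_posterior(X, y, alpha=1.0, sigma2=1.0):
    """Posterior mean and covariance of regression weights under a
    N(0, alpha^-1 I) prior and Gaussian noise with variance sigma2."""
    d = X.shape[1]
    precision = alpha * np.eye(d) + (X.T @ X) / sigma2
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

# Toy data: a "literacy score" rising with one background covariate.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = 0.8 * x[:, 0] + rng.normal(scale=0.5, size=200)
X = np.hstack([np.ones((200, 1)), x])   # intercept + covariate
mean, cov = bayesian_linear_posterior(X, y)
```

    With 200 observations, the posterior mean of the slope should land close to the generating value of 0.8, and the posterior covariance quantifies the remaining uncertainty, which is the kind of evidence the abstract summarizes as a positive correlation.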

    A framework for interrogating social media images to reveal an emergent archive of war

    The visual image has long been central to how war is seen, contested and legitimised, remembered and forgotten. Archives are pivotal to these ends, as is their ownership and access, from state and other official repositories through to the countless photographs scattered and hidden from a collective understanding of what war looks like in individual collections and dusty attics. With the advent and rapid development of social media, however, the amateur and the professional, the illicit and the sanctioned, the personal and the official, and the past and the present all seem to inhabit the same connected and chaotic space. However, to even begin to render intelligible the complexity, scale and volume of what war looks like in social media archives is a considerable task, given the limitations of any traditional human-based method of collection and analysis. We thus propose the production of a series of ‘snapshots’, using computer-aided extraction and identification techniques to offer an experimental way into conceiving a new imaginary of war. We were particularly interested in testing whether twentieth-century wars, initially captured via pre-digital means, had become more ‘settled’ over time in terms of their remediated presence today through their visual representations and connections on social media, compared with wars fought in digital media ecologies (i.e. those fought and initially represented amidst the volume and pervasiveness of social media images). To this end, we developed a framework for automatically extracting and analysing war images that appear in social media, using both the features of the images themselves and the text and metadata associated with each image. The framework utilises a workflow comprising four core stages: (1) information retrieval, (2) data pre-processing, (3) feature extraction, and (4) machine learning. Our corpus was drawn from the social media platforms Facebook and Flickr.
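    The four-stage workflow can be sketched as a simple pipeline. The stage functions and item fields below are illustrative placeholders, not the authors' implementation; each stand-in shows the shape of the stage it names.

```python
# Minimal sketch of the four-stage workflow: retrieval -> pre-processing
# -> feature extraction -> machine learning. All data is hypothetical.
def retrieve(query):
    # Stage 1: information retrieval -- stand-in for platform API calls.
    return [{"text": "Archive photo, Normandy 1944 ", "tags": ["war", "ww2"]},
            {"text": "  drone footage, 2016", "tags": ["war"]}]

def preprocess(items):
    # Stage 2: data pre-processing -- normalise text, drop empty records.
    return [{**it, "text": it["text"].strip().lower()}
            for it in items if it["text"].strip()]

def extract_features(items):
    # Stage 3: feature extraction -- here, trivial bag-of-words counts.
    return [{"tokens": it["text"].split(), "n_tags": len(it["tags"])}
            for it in items]

def learn(features):
    # Stage 4: machine learning -- stand-in for clustering/classification.
    return sorted(features, key=lambda f: f["n_tags"], reverse=True)

ranked = learn(extract_features(preprocess(retrieve("war images"))))
```

    In the real framework each stage would be far richer (API harvesting, image deduplication, visual feature extractors, trained models), but the data-flow contract between stages is the same.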

    Online Load Balancing for Network Functions Virtualization

    Network Functions Virtualization (NFV) aims to help service providers deploy various services in a more agile and cost-effective way. However, the softwarization and cloudification of network functions can result in severe congestion and low network performance. In this paper, we propose a solution to address this issue. We analyze and solve the online load balancing problem using multipath routing in NFV to optimize network performance in response to dynamic changes in user demands. In particular, we first formulate the load balancing problem as a mixed integer linear program for achieving the optimal solution. We then develop the ORBIT algorithm, which solves the online load balancing problem. The performance guarantee of ORBIT is analytically proved in comparison with the optimal offline solution. Experiment results on real-world datasets show that ORBIT performs very well at distributing the traffic of each service demand across multiple paths without knowledge of future demands, especially under high-load conditions.
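    The online setting the abstract describes can be illustrated with a much simpler baseline than ORBIT: each demand arrives one at a time and is placed greedily on the currently least-loaded of its candidate paths, with no knowledge of future demands. This sketch shows the problem structure only; it is not the ORBIT algorithm, whose competitive guarantee comes from a more sophisticated scheme.

```python
# Hedged sketch of online multipath load balancing: place each arriving
# demand on the least-loaded candidate path. Paths and demands are toy data.
def assign_online(demands, paths):
    load = {p: 0.0 for p in paths}
    placement = []
    for size, candidates in demands:
        best = min(candidates, key=lambda p: load[p])  # least-loaded path
        load[best] += size
        placement.append(best)
    return load, placement

paths = ["p1", "p2", "p3"]
# (demand size, candidate paths available to that demand)
demands = [(4, ["p1", "p2"]), (2, ["p2", "p3"]), (3, ["p1", "p3"])]
load, placement = assign_online(demands, paths)
```

    An offline optimum, seeing all demands in advance, could balance loads more evenly; the gap between the two is exactly what a competitive analysis like ORBIT's bounds.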

    Learning Models over Relational Data using Sparse Tensors and Functional Dependencies

    Integrated solutions for analytics over relational databases are of great practical importance as they avoid the costly repeated loop data scientists have to deal with on a daily basis: select features from data residing in relational databases using feature extraction queries involving joins, projections, and aggregations; export the training dataset defined by such queries; convert this dataset into the format of an external learning tool; and train the desired model using this tool. These integrated solutions are also fertile ground for theoretically fundamental and challenging problems at the intersection of relational and statistical data models. This article introduces a unified framework for training and evaluating a class of statistical learning models over relational databases. This class includes ridge linear regression, polynomial regression, factorization machines, and principal component analysis. We show that, by synergizing key tools from database theory, such as schema information, query structure, functional dependencies, and recent advances in query evaluation algorithms, with tools from linear algebra, such as tensor and matrix operations, one can formulate relational analytics problems and design efficient (query and data) structure-aware algorithms to solve them. This theoretical development informed the design and implementation of the AC/DC system for structure-aware learning. We benchmark the performance of AC/DC against R, MADlib, libFM, and TensorFlow. For typical retail forecasting and advertisement planning applications, AC/DC can learn polynomial regression models and factorization machines with at least the same accuracy as its competitors and up to three orders of magnitude faster than its competitors whenever they do not run out of memory, exceed a 24-hour timeout, or encounter internal design limitations.
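    A key idea behind such in-database learning is that models like ridge regression depend on the data only through aggregates, chiefly X^T X and X^T y, which a system like AC/DC can compute directly over the join without ever materializing the training matrix. The sketch below shows only the final step, solving ridge regression from those sufficient statistics; here they are formed from a plain array rather than pushed into a query plan.

```python
import numpy as np

# Sketch: ridge regression recovered purely from the aggregates X^T X and
# X^T y. The data and regularization strength are illustrative.
def ridge_from_aggregates(XtX, Xty, lam):
    d = XtX.shape[0]
    return np.linalg.solve(XtX + lam * np.eye(d), Xty)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=500)

w = ridge_from_aggregates(X.T @ X, X.T @ y, lam=1e-3)
```

    Because the aggregates are small (d-by-d and d-by-1) regardless of how many rows the join produces, computing them inside the database is what yields the large speedups the benchmarks report.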

    An efficient parallel method for mining frequent closed sequential patterns

    Mining frequent closed sequential patterns (FCSPs) has attracted a great deal of research attention because it is an important task in sequence mining. Recently, many studies have focused on mining frequent closed sequential patterns because such patterns have proved to be more efficient and compact than frequent sequential patterns, while the full information of the frequent sequential patterns can still be extracted from them. In this paper, we propose an efficient parallel approach called parallel dynamic bit vector frequent closed sequential patterns (pDBV-FCSP), using a multi-core processor architecture, for mining FCSPs from large databases. pDBV-FCSP divides the search space to reduce the required storage space and performs closure checking of prefix sequences early to reduce the execution time of mining. This approach overcomes the problems of parallel mining such as the overhead of communication, synchronization, and data replication. It also addresses load balancing of the workload between the processors with a dynamic mechanism that redistributes work when some processes run out of it, minimizing idle CPU time.
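    The bit-vector representation behind approaches like pDBV-FCSP encodes each item's occurrence positions within a sequence as a bitmap, so that checking whether one item follows another reduces to cheap bit operations. The sketch below is a simplified illustration for a length-2 pattern <a, b>, not the paper's algorithm: it counts how many sequences contain some occurrence of b strictly after the first occurrence of a.

```python
# Illustrative bit-vector support counting for the sequence pattern <a, b>.
# Each item's positions in a sequence are packed into an integer bitmap.
def positions(seq, item):
    bits = 0
    for i, x in enumerate(seq):
        if x == item:
            bits |= 1 << i
    return bits

def supports_pair(seq, a, b):
    pa, pb = positions(seq, a), positions(seq, b)
    if pa == 0:
        return False
    first_a = (pa & -pa).bit_length() - 1   # index of a's first occurrence
    return (pb >> (first_a + 1)) != 0       # any b strictly after it?

db = [list("abcb"), list("bac"), list("abab")]
support = sum(supports_pair(s, "a", "b") for s in db)
```

    Extending a prefix pattern then amounts to shifting and AND-ing bitmaps rather than rescanning sequences, which is what makes the representation fast and easy to parallelize across partitions of the search space.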

    Spartan Daily, May 4, 2009

    Volume 132, Issue 49
    https://scholarworks.sjsu.edu/spartandaily/10586/thumbnail.jp

    Regional assessment of groundwater recharge in the lower Mekong Basin

    Groundwater recharge remains almost totally unknown across the Mekong River Basin, hindering the evaluation of groundwater potential for irrigation. A regional regression model was developed to map groundwater recharge across the Lower Mekong Basin, where agricultural water demand is increasing, especially during the dry season. The model was calibrated with baseflow computed using the local-minimum flow separation method applied to streamflow recorded in 65 unregulated sub-catchments since 1951. Our results, in agreement with previous local studies, indicate that spatial variations in groundwater recharge are predominantly controlled by climate (rainfall and evapotranspiration), while aquifer characteristics seem to play a secondary role at this regional scale. While this analysis suggests large scope for expanding agricultural groundwater use, the map derived from this study provides a simple way to assess the limits of groundwater-fed irrigation development. Further data measurements to capture local variations in hydrogeology will be required to refine the evaluation of recharge rates and support practical implementations.
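    The local-minimum flow separation method the calibration relies on can be sketched simply: identify the local minima of a daily streamflow series, linearly interpolate between them, and cap the result at the total flow to obtain a baseflow line. The implementation below is a rough illustration of that idea on toy data, not the exact procedure or parameters used in the study.

```python
import numpy as np

# Rough sketch of local-minimum baseflow separation on a toy daily series.
def local_minimum_baseflow(q):
    q = np.asarray(q, dtype=float)
    idx = [0] + [i for i in range(1, len(q) - 1)
                 if q[i] <= q[i - 1] and q[i] <= q[i + 1]] + [len(q) - 1]
    base = np.interp(np.arange(len(q)), idx, q[idx])
    return np.minimum(base, q)   # baseflow cannot exceed total streamflow

q = [5, 4, 9, 7, 6, 10, 6, 5, 8]
bf = local_minimum_baseflow(q)
```

    Summing the separated baseflow over a water year gives the recharge estimate that the regional regression model is then calibrated against.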

    A Review of the Open Educational Resources (OER) Movement: Achievements, Challenges, and New Opportunities

    Examines the state of the foundation's efforts to improve educational opportunities worldwide through universal access to and use of high-quality academic content.