
    Optimal Posted Prices for Online Cloud Resource Allocation

    We study online resource allocation in a cloud computing platform through a posted pricing mechanism: the cloud provider publishes a unit price for each resource type, which may vary over time; upon arrival at the cloud system, a cloud user either accepts the current prices and rents resources to execute its job, or declines the prices and does not run its job there. We design pricing functions based on the current resource utilization ratios, across a wide array of demand-supply relationships and resource occupation durations, and prove worst-case competitive ratios of the pricing functions in terms of social welfare. In the basic case of a single-type, non-recycled resource (i.e., allocated resources are not later released for reuse), we prove that our pricing function design is optimal, in that any other pricing function can only lead to a worse competitive ratio. Insights obtained from the basic cases are then used to generalize the pricing functions to more realistic cloud systems with multiple types of resources, where a job occupies allocated resources for a number of time slots until completion, at which time the resources are returned to the cloud resource pool.
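    The utilization-dependent pricing idea can be illustrated with a small sketch. The exponential form below is a common choice in the online posted-pricing literature and is an assumption for illustration, not necessarily the paper's exact design; `p_min` and `p_max` are hypothetical lower and upper bounds on users' per-unit valuations.

```python
def posted_price(utilization, p_min=1.0, p_max=16.0):
    """Posted unit price as a function of current utilization in [0, 1].

    The price is low when the resource is mostly idle (so low-value jobs
    are admitted) and rises exponentially toward p_max as the resource
    fills up, reserving the last units for high-value jobs.
    """
    assert 0.0 <= utilization <= 1.0
    return p_min * (p_max / p_min) ** utilization

def user_takes_price(per_unit_value, utilization):
    """A myopic user rents only if its per-unit value covers the posted price."""
    return per_unit_value >= posted_price(utilization)
```

    Under this form, a user whose per-unit value is 4.0 is admitted at half utilization (price 4.0) but priced out once the resource is nearly full.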

    A comparative study on data science and information science: From the perspective of job market demands in China

    With the development of big data, data science positions are in high demand in the job market. Since information science and data science greatly overlap and share similar concerns, this paper compares them from the perspective of job market demands in China. We crawled 2,680 recruitment posts related to data science and information science, then conducted a comparative study of the two domains with respect to skills, salary, and clusters of position responsibilities. The results showed that the two domains place different emphasis on skills, qualification standards, and application areas.
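    The skills comparison described above can be sketched as a keyword-frequency count over post texts. The post snippets and skill keywords below are hypothetical stand-ins, not the crawled corpus:

```python
from collections import Counter

# Hypothetical recruitment-post snippets (not the study's crawled data).
data_science_posts = ["python sql machine-learning", "python spark statistics"]
info_science_posts = ["information-retrieval sql archives", "cataloging sql python"]

def skill_frequencies(posts):
    """Count how often each whitespace-separated skill keyword appears."""
    return Counter(word for post in posts for word in post.split())

ds_skills = skill_frequencies(data_science_posts)
is_skills = skill_frequencies(info_science_posts)

# Skills demanded in both domains vs. only one of them.
shared_skills = set(ds_skills) & set(is_skills)
ds_only_skills = set(ds_skills) - set(is_skills)
```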

    Emerging adults’ use of communication technologies with their siblings: associations with sibling relationship quality

    Master of Science, School of Family Studies and Human Services, Melinda S. Markham.
    Informed by the Couple and Family Technology (CFT) framework, the present study aimed to examine how the use of different communication modalities is associated with sibling relationship quality in emerging adulthood. The four communication modalities were face-to-face communication, synchronous communication technologies, asynchronous communication technologies, and social media. The sample consisted of 275 emerging adults aged 18 to 29 who had a living, biological sibling. Results of a hierarchical multiple regression revealed that frequency of face-to-face communication was negatively associated with sibling relationship quality throughout all steps. In addition, geographic distance moderated the relationship between face-to-face communication and sibling relationship quality: the closer the siblings lived to each other, the stronger the negative relationship became. Two further moderation effects emerged in this study. First, gender dyad moderated the relationship between asynchronous communication frequency and sibling relationship quality: as the frequency of asynchronous communication increased, sister-sister pairs reported significantly lower relationship quality than brother-brother and mixed-gender pairs. Second, gender dyad moderated the relationship between frequency of social media use and sibling relationship quality: for brother-brother and mixed-gender pairs, the frequency of social media use was negatively related to sibling relationship quality, whereas for sister-sister pairs it was positively associated with sibling relationship quality.
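    The moderation analysis described above can be sketched as an ordinary least-squares fit with an interaction term. The data here are simulated under assumed effect sizes purely to show the design matrix; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 275  # sample size matching the study

face_to_face = rng.uniform(0, 7, n)     # contacts per week (simulated)
distance_km = rng.uniform(0, 1000, n)   # geographic distance (simulated)

# Simulated outcome: a negative main effect of face-to-face contact that
# weakens at greater distance (assumed effect sizes, for illustration only).
quality = (5.0 - 0.3 * face_to_face
           + 0.0002 * face_to_face * distance_km
           + rng.normal(0.0, 0.1, n))

# Design matrix: intercept, main effects, and the moderation (interaction) term.
X = np.column_stack([np.ones(n), face_to_face, distance_km,
                     face_to_face * distance_km])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
```

    A nonzero interaction coefficient (the last entry of `beta`) is what "geographic distance moderates the relationship" means in regression terms.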

    Towards Robust Graph Incremental Learning on Evolving Graphs

    Incremental learning is a machine learning approach that involves training a model on a sequence of tasks rather than all tasks at once. This ability to learn incrementally from a stream of tasks is crucial for many real-world applications. However, incremental learning is a challenging problem on graph-structured data, as many graph-related problems involve prediction tasks for each individual node, known as Node-wise Graph Incremental Learning (NGIL). This introduces non-independent and non-identically distributed characteristics into the sample data generation process, making it difficult to maintain the performance of the model as new tasks are added. In this paper, we focus on the inductive NGIL problem, which accounts for the evolution of graph structure (structural shift) induced by emerging tasks. We provide a formal formulation and analysis of the problem, and propose a novel regularization-based technique called Structural-Shift-Risk-Mitigation (SSRM) to mitigate the impact of structural shift on catastrophic forgetting in inductive NGIL. We show that structural shift can induce a shift in the input distribution for existing tasks, which in turn increases the risk of catastrophic forgetting. Through comprehensive empirical studies on several benchmark datasets, we demonstrate that SSRM is flexible and easy to adapt, improving the performance of state-of-the-art GNN incremental learning frameworks in the inductive setting.
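    The core idea, penalizing drift between the model's representations of old-task nodes before and after the graph structure shifts, can be sketched with a linear-kernel maximum mean discrepancy (MMD) penalty added to the task loss. This is a generic stand-in for illustration, not SSRM's exact regularizer; `lam` is an assumed weight:

```python
import numpy as np

def mmd_linear(x, y):
    """Linear-kernel maximum mean discrepancy between two embedding matrices.

    Zero when the two sets of node embeddings have identical means, i.e.
    no first-order distribution shift between the old and new structure.
    """
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def regularized_loss(task_loss, old_embeddings, new_embeddings, lam=0.1):
    """Task loss plus a shift-mitigation penalty in the spirit of SSRM."""
    return task_loss + lam * mmd_linear(old_embeddings, new_embeddings)
```

    When the embedding distributions match, the penalty vanishes and only the task loss remains; the more the structural shift moves old-task embeddings, the larger the total loss.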

    Unsupervised Chunking with Hierarchical RNN

    In Natural Language Processing (NLP), predicting linguistic structures, such as parsing and chunking, has mostly relied on manual annotations of syntactic structures. This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner. We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions. Our approach involves a two-stage training process: pretraining with an unsupervised parser and finetuning on downstream NLP tasks. Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points. Further, finetuning with downstream tasks yields an additional performance improvement. Interestingly, we observe that the emergence of the chunking structure is transient during the neural model's downstream-task training. This study contributes to the advancement of unsupervised syntactic structure discovery and opens avenues for further research in linguistic theory.
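    Chunking groups adjacent words into flat, non-overlapping phrases, and the phrase F1 metric scores predicted chunk spans against gold spans. A minimal sketch of that evaluation, assuming chunks are encoded as binary begin-chunk markers over a sentence (a simplification that ignores chunk types):

```python
def chunk_spans(begin_tags):
    """Convert begin-chunk markers (1 = a new chunk starts at this word)
    into a set of (start, end) half-open spans covering the sentence."""
    spans, start = [], 0
    for i in range(1, len(begin_tags)):
        if begin_tags[i] == 1:
            spans.append((start, i))
            start = i
    spans.append((start, len(begin_tags)))
    return set(spans)

def phrase_f1(gold_tags, pred_tags):
    """Span-level F1: a predicted chunk counts only if its boundaries
    match a gold chunk exactly, as in CoNLL-style evaluation."""
    gold, pred = chunk_spans(gold_tags), chunk_spans(pred_tags)
    tp = len(gold & pred)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

    Exact boundary matching makes the metric strict: shifting a single chunk boundary can invalidate both spans it touches.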