105 research outputs found

    A Critical Review Of Post-Secondary Education Writing During A 21st Century Education Revolution

    Educational materials are effective instruments that convey information and report new discoveries made by researchers in specific areas of academia. Higher education, like other educational institutions, relies on instructional materials to inform its practice of educating adult learners. In post-secondary education, developmental English programs are tasked with meeting the needs of dynamic populations, so there is a continuous need for research in this area to support its changing landscape. However, the majority of scholarly thought in this area centers on K-12 reading and writing. This scarcity presents a problem for the post-secondary community. This study uses qualitative content analysis to examine peer-reviewed journals from 2003-2017, developmental education websites, and a government-issued document directed toward reforming post-secondary developmental education programs. These highly relevant sources help educators find the informational support needed to apply best practices for student success. Developmental education serves the purpose of addressing literacy gaps for students transitioning to college-level work. The findings illuminate the dearth of material offered to developmental educators. This study suggests that the field of literacy research is fragmented and highlights an apparent blind spot in scholarly literature with regard to English writing instruction. This poses a quandary for post-secondary literacy researchers in the 21st century and establishes the need for the literacy research community to commit future scholarship toward equipping college educators who teach writing to underprepared adult learners.

    Toward Data Efficient Online Sequential Learning

    Can machines optimally take sequential decisions over time? For decades, researchers have sought an answer to this question, with the ultimate goal of unlocking the potential of artificial general intelligence (AGI) for a better and more sustainable society. Many sectors would be boosted by machines able to take efficient sequential decisions over time: consider real-world applications such as personalized systems in entertainment (content recommendation) and healthcare (personalized therapy), smart cities (traffic control, flooding prevention), robotics (control and planning), and so on. However, letting machines take proper decisions in real life is a highly challenging task, owing to the uncertainty behind such decisions (uncertainty about the actual reward, the context, the environment, etc.). A viable solution is to learn by experience (i.e., by trial and error), letting machines uncover the uncertainty while taking decisions and refining their strategies accordingly. Such refinement, however, is usually highly data-hungry (data-inefficient), requiring a large amount of application-specific data and leading to very slow learning processes, hence very slow convergence to optimal strategies (the curse of dimensionality). Luckily, data is usually intrinsically structured. Identifying and exploiting such structure substantially improves the data-efficiency of sequential learning algorithms. This is the key hypothesis underpinning the research in this thesis, in which novel structural learning methodologies are proposed for decision-making problems such as recommendation systems (RS), multi-armed bandits (MAB), and reinforcement learning (RL), with the ultimate goal of making the learning process more data-efficient. Specifically, we tackle this goal from the perspective of modelling the problem structure as graphs, embedding tools from graph signal processing into decision learning theory. As a first step, we study the application of graph-clustering techniques to RS, in which the curse of dimensionality is addressed by grouping data into clusters via graph clustering. Next, we exploit spectral graph structure for MAB problems, which represent online learning problems in which a key challenge is to learn the unknown bandit vector sequentially. Exploiting a smoothness prior (i.e., a bandit vector that is smooth on a given underpinning graph), we study the Laplacian-regularized estimator and provide both empirical evidence and theoretical analysis of the benefits of exploiting the graph structure in MABs. We then focus on the theoretical understanding of the Laplacian-regularized estimator, deriving an upper bound on its estimation error which illustrates how the alignment between the data and the graph structure, as well as the graph spectrum, affects estimation accuracy. We then move to RL problems, focusing on the specific problem of learning a proper representation of the state-action space (the representation learning problem). Motivated by the fact that a good representation should be informative of the value function, we seek a learning algorithm able to preserve continuity between the value function and the representation space. Showing that state values are intrinsically correlated with the state transition dynamics and the diffusion of the reward on the MDP graph, we build a new loss function based on a newly defined diffusion distance and propose a novel method to learn state representations with this desirable property. In summary, this thesis addresses important online sequential learning problems both theoretically and empirically by leveraging the intrinsic data structure, showing the gains of the proposed solutions toward more data-efficient sequential learning strategies.
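    The Laplacian-regularized estimator mentioned above admits a compact closed form. The following is a minimal sketch, not the thesis implementation: it assumes the graph is given by a known adjacency matrix W, one noisy reward is observed per pulled arm, and the regularization weight alpha is a hypothetical choice; the smoothness penalty lets rewards of unobserved arms be interpolated from their graph neighbours.

```python
# Minimal sketch of a Laplacian-regularized least-squares estimate of a bandit
# vector assumed smooth on a known graph (assumptions: known adjacency W,
# one noisy reward per pulled arm, hypothetical regularization weight alpha).
import numpy as np

def laplacian_regularized_estimate(W, pulled_arms, rewards, alpha=1.0):
    """Estimate theta from rewards y_t = theta[a_t] + noise, penalizing
    non-smoothness alpha * theta^T L theta on the graph."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
    A = np.zeros((len(pulled_arms), n))       # selection matrix: row t picks arm a_t
    A[np.arange(len(pulled_arms)), pulled_arms] = 1.0
    # Closed-form ridge-like solution: (A^T A + alpha * L)^{-1} A^T y
    theta_hat = np.linalg.solve(A.T @ A + alpha * L, A.T @ np.asarray(rewards))
    return theta_hat

# Toy usage: a path graph where neighbouring arms have similar expected rewards.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
theta_hat = laplacian_regularized_estimate(W, pulled_arms=[0, 2, 4],
                                            rewards=[0.1, 0.5, 0.9])
print(theta_hat)  # unobserved arms 1 and 3 are interpolated via graph smoothness
```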

    Designing hybridization: alternative education strategies for fostering innovation in communication design for the territory

    Within the broad context of design studies, Communication Design for the Territory stands as a hybrid discipline constantly interfacing with other fields of knowledge. It takes the territorial theme as its specific dimension, aiming to generate communication systems capable of reading the stratifications of places. From an educational perspective, teaching activities are closely linked to research and can take on different levels of complexity: from the various forms of cartographic translation to the design of sophisticated transmedia digital systems. In the wake of COVID-19, this discipline has come to terms with a profoundly changed scenario in terms of limited access to physical space and the emergence of new technologies for remote access. In this unique context, we propose a pedagogical strategy that focuses on the hybridization of communication artifacts with the aim of fostering design experimentation. As a creative tool, hybridization leads to the design of innovative systems by strategically combining the characteristics of different artifacts to achieve specific communication goals. By experimenting with these creative strategies, students are led to reflect critically on the features of existing communication artifacts and to explore original designs that deliberately combine different media, contents, and communication languages in innovative ways. Through hybridization, methods for producing territorial knowledge become more effective, combining the skills and knowledge embodied in multiple subject areas. The paper presents the experience developed in the teaching laboratories of the DCxT (Communication Design for the Territory) research group of the Design Department of Politecnico di Milano. The teaching experience highlights how hybridization strategies can increase effectiveness in learning about territorial specificities, in acquiring critical knowledge about communication systems, and in developing innovation strategies that allow designers to influence the evolution of traditional communication models.

    Population Descent: A Natural-Selection Based Hyper-Parameter Tuning Framework

    First-order gradient descent has been the basis of the most successful optimization algorithms ever implemented. On supervised learning problems with very high dimensionality, such as neural network optimization, it is almost always the algorithm of choice, mainly due to its memory and computational efficiency. However, it is a classical result in optimization that gradient descent converges to local minima on non-convex functions. Even more importantly, in certain high-dimensional cases, escaping the plateaus of large saddle points becomes intractable. Black-box optimization methods, on the other hand, are not sensitive to the local structure of a loss function's landscape but suffer from the curse of dimensionality. Memetic algorithms aim to combine the benefits of both. Inspired by this, we present Population Descent, a memetic algorithm focused on hyperparameter optimization. We show that an adaptive m-elitist selection approach combined with a normalized-fitness-based randomization scheme outperforms more complex state-of-the-art algorithms by up to 13% on common benchmark tasks.
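    As an illustration of the kind of memetic loop the abstract describes, rather than the paper's reference implementation, the sketch below combines m-elitist selection with a normalized-fitness-based randomization over a one-dimensional hyperparameter (a learning rate). The evaluate callback, which would train a model briefly and return a validation loss, is a hypothetical stand-in.

```python
# Minimal sketch of a memetic hyperparameter-tuning loop with m-elitist
# selection and normalized-fitness-based randomization (illustrative only).
import numpy as np

def population_descent(evaluate, pop_size=10, m_elite=3, generations=20,
                       lr_bounds=(1e-4, 1e-1), seed=0):
    rng = np.random.default_rng(seed)
    # Population of candidate hyperparameters (here: a single learning rate).
    pop = rng.uniform(*lr_bounds, size=pop_size)
    for _ in range(generations):
        losses = np.array([evaluate(lr) for lr in pop])
        fitness = -losses
        # Normalize fitness to [0, 1]: fitter candidates get smaller perturbations.
        norm = (fitness - fitness.min()) / (fitness.max() - fitness.min() + 1e-12)
        order = np.argsort(losses)                 # ascending loss, best first
        elites = pop[order[:m_elite]]              # m-elitist selection: keep the best
        children = []
        for idx in order[m_elite:]:
            parent = rng.choice(elites)
            scale = (1.0 - norm[idx]) * 0.5        # weaker candidates explore more
            children.append(np.clip(parent * np.exp(scale * rng.normal()),
                                    *lr_bounds))
        pop = np.concatenate([elites, np.array(children)])
    return pop[np.argmin([evaluate(lr) for lr in pop])]

# Toy usage: pretend the "best" learning rate for some model is 0.01.
best_lr = population_descent(lambda lr: (np.log10(lr) + 2.0) ** 2)
print(best_lr)
```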

    Cyber Threat Intelligence based Holistic Risk Quantification and Management


    PROCEEDINGS 5th PLATE Conference

    The 5th international PLATE conference (Product Lifetimes and the Environment) addressed product lifetimes in the context of sustainability. The PLATE conference, which has been running since 2015, has established a solid network of researchers around its core theme. The topic has come to the forefront of current (political, scientific & societal) debates due to its interconnectedness with a number of recent prominent movements, such as the circular economy, eco-design, and collaborative consumption. For the 2023 edition of the conference, we encouraged researchers to propose how to extend, widen, or critically re-construct thematic sessions for the PLATE conference, and the paper call was constructed based on these proposals. The 5th PLATE conference featured 171 paper presentations and 238 participants from 14 different countries. Besides the paper sessions, we organized workshops and REPAIR exhibitions.

    Managing Event-Driven Applications in Heterogeneous Fog Infrastructures

    The steady increase in digitalization propelled by the Internet of Things (IoT) has led to a deluge of generated data at an unprecedented pace. The resulting promise of data-driven decision-making is a major innovation driver in a myriad of industries. Based on the widely used event processing paradigm, event-driven applications make it possible to analyze data in the form of event streams and to extract relevant information in a timely manner. Most recently, graphical flow-based approaches in no-code event processing systems have been introduced to significantly lower technological entry barriers. This empowers non-technical citizen technologists to create event-driven applications composed of multiple interconnected event-driven processing services. Still, today's event-driven applications are focused on centralized cloud deployments that come with inevitable drawbacks, especially in the context of IoT scenarios that require fast results, are limited by the available bandwidth, or are bound by regulations in terms of privacy and security. Despite recent advances in the area of fog computing, which mitigate these shortcomings by extending the cloud and moving certain processing closer to the event source, these approaches are hardly established in existing systems. Inherent fog computing characteristics, especially the heterogeneity of resources, alongside novel application management demands, particularly geo-distribution and dynamic adaptation, pose challenges that are currently insufficiently addressed and hinder the transition to a next generation of no-code event processing systems. The contributions of this thesis enable citizen technologists to manage event-driven applications in heterogeneous fog infrastructures along the application life cycle. To this end, an approach for holistic application management is proposed that abstracts citizen technologists from the underlying technicalities. This allows present event processing systems to evolve and advances the democratization of event-driven application management in fog computing. The individual contributions of this thesis are summarized as follows:
    1. A model, manifested in a geo-distributed system architecture, to semantically describe characteristics specific to node resources, event-driven applications, and their management, blending application-centric and infrastructure-centric realms.
    2. Concepts for the geo-distributed deployment and operation of event-driven applications, alongside strategies for flexible event stream management.
    3. A methodology to support the evolution of event-driven applications, including methods to dynamically reconfigure, migrate, and offload individual event-driven processing services at run-time.
    The contributions are introduced, applied, and evaluated along two scenarios from the manufacturing and logistics domains.
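    To make the event processing paradigm referred to above concrete, the following sketch (not the thesis system) chains two event-driven processing services over an IoT event stream, first filtering and then enriching events. The event fields and thresholds are illustrative assumptions; in a fog deployment, each stage could be placed on a different node.

```python
# Minimal sketch of a pipeline of interconnected event-driven processing
# services over an event stream (field names and thresholds are assumptions).
from typing import Any, Dict, Iterable, Iterator

Event = Dict[str, Any]

def temperature_filter(events: Iterable[Event], threshold: float = 80.0) -> Iterator[Event]:
    """Processing service 1: forward only events exceeding a temperature threshold."""
    for event in events:
        if event.get("temperature", 0.0) > threshold:
            yield event

def enrich_with_severity(events: Iterable[Event]) -> Iterator[Event]:
    """Processing service 2: enrich each forwarded event with a derived severity field."""
    for event in events:
        yield {**event, "severity": "critical" if event["temperature"] > 95.0 else "warning"}

# Toy stream from a hypothetical machine sensor; in a fog infrastructure the two
# services could run on an edge gateway and a cloud node, respectively.
stream = [{"machine": "press-1", "temperature": t} for t in (72.0, 88.5, 97.2)]
for alert in enrich_with_severity(temperature_filter(stream)):
    print(alert)
```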

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have achieved better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable, i.e., understandable for humans. Explainable AI is receiving substantial interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.

    Assuming Data Integrity and Empirical Evidence to The Contrary

    Background: Not all survey respondents apply their minds or understand the posed questions; some therefore provide answers that lack coherence, which threatens the integrity of the research. Casual inspection and limited research on the 10-item Big Five Inventory (BFI-10), included in the dataset of the World Values Survey (WVS), suggested that random responses may be common. Objective: To specify the percentage of cases in the BFI-10 that include incoherent or contradictory responses and to test the extent to which removing these cases improves the quality of the dataset. Method: The South African WVS data on the BFI-10, which measures the Big Five personality traits (B5P), were used (N=3 531). Cases with incoherent or contradictory responses were removed, and the cases in the cleaned-up dataset were then analysed for their theoretical validity. Results: Only 1 612 (45.7%) of the cases were free of incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as had been envisaged. The test for common method bias was negative. Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to properly assess the psychometric properties of instruments before using them, particularly in cross-cultural settings.
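    One way to operationalize the removal of incoherent or contradictory responses is sketched below under stated assumptions, not as the authors' actual procedure: each Big Five trait in the BFI-10 is measured by one positively keyed and one reverse-keyed item, so giving the same extreme answer to both items of a pair (e.g., 5 and 5 on a 1-5 scale) is contradictory. The column names (q1..q10) and the item pairing used here are assumptions.

```python
# Minimal sketch of flagging contradictory BFI-10 responses (item pairing and
# column names are assumptions, not the study's actual coding).
import pandas as pd

# Hypothetical mapping: (positively keyed item, reverse-keyed item) per trait.
TRAIT_PAIRS = {
    "extraversion":      ("q6", "q1"),
    "agreeableness":     ("q2", "q7"),
    "conscientiousness": ("q8", "q3"),
    "neuroticism":       ("q4", "q9"),
    "openness":          ("q10", "q5"),
}

def flag_contradictions(df: pd.DataFrame, low: int = 1, high: int = 5) -> pd.Series:
    """Return True for respondents giving the same extreme answer to a
    positively keyed and a reverse-keyed item of the same trait."""
    contradictory = pd.Series(False, index=df.index)
    for pos_item, rev_item in TRAIT_PAIRS.values():
        same_extreme = ((df[pos_item] == high) & (df[rev_item] == high)) | \
                       ((df[pos_item] == low) & (df[rev_item] == low))
        contradictory |= same_extreme
    return contradictory

# Toy usage: respondent 0 answers 5 to every item (contradictory), respondent 1 does not.
toy = pd.DataFrame({f"q{i}": [5, 2] for i in range(1, 11)})
cleaned = toy[~flag_contradictions(toy)]
print(len(cleaned))  # -> 1
```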