
    On the treatment of uncertainty in Innovation Projects

    The treatment of uncertainty in innovation projects is a critical aspect that must be addressed to improve project outcomes. This thesis focuses on identifying, measuring, and managing uncertainty in innovation projects, emphasizing perspectives from innovation, risk management, and decision-making. The problematic aspects identified in the literature review include long incubation periods, standardized rules and procedures, non-existent markets and market unfamiliarity, fuzziness in the fuzzy front end, team-based dynamic shifting capability, and selecting the right project leader. The research gap identified in the existing literature is the absence of a unified framework or toolbox that comprehensively addresses uncertainty in innovation projects. This thesis aims to fill this gap by proposing a unified toolbox to treat uncertainty effectively. The analytical direction of the research involves identifying the areas of uncertainty, measuring their impact on project outcomes, and developing a toolbox to manage and mitigate them. The research methodology adopted for this study is a qualitative, multiple-case-study approach. Two European Union projects, RESPONDRONE and ASSISTANCE, are selected for the case study analysis. Thematic analysis is employed to derive meaningful insights and patterns from the data gathered during the research. From the thematic analysis of the selected cases, five key themes are identified that significantly impact the uncertainty treatment of radical innovation projects: technology and innovation, communication and collaboration, adaptive project management, stakeholder engagement, and risk management. Each theme significantly affects uncertainty treatment in the four critical areas of uncertainty: market, technological, organizational, and resource. These observations steer the study to examine the treatment of uncertainty in innovation projects through the lens of the existing literature. An impact assessment flowchart is developed, and a unified toolbox is proposed for better uncertainty treatment by putting things into different perspectives. This thesis concludes that the uncertainty paradigm in radical innovation projects is complex and nuanced. Rather than trying to pinpoint every aspect of it, a better approach for a project team is to understand the common areas of uncertainty generation, measure the impact of an unexpected event as soon as possible, and equip itself with a unified toolbox that provides the flexibility to use any tool necessary based on the context of the uncertainty.

    The Prevalence of Depression and Anxiety Among the University Graduates in Bangladesh: How Far Does it Affect the Society?

    The symptoms of anxiety and depression among university students are common throughout the globe, as they negatively affect one's social, economic, and academic life. Although students in developing and low-income countries have a greater tendency to experience depression and anxiety, the extent and pattern of the problem are largely unknown. This paper focuses on exploring the various patterns of depression among recent graduates of Shahjalal University of Science & Technology, Sylhet, Bangladesh, who experienced depression and anxiety throughout their academic life. This is an exploratory study in which in-depth interview methods have been used for data collection. The study involved field research and is based on primary data as well as secondary data drawn from books, articles, newspapers, archival document reviews, and other online sources to define concepts and relevant terms. The thematic analysis method, applied through coding processes, has been employed to analyze the collected data. The findings reveal that most students, especially female students, suffered from depression and anxiety in their academic life, most likely due to educational, social, personal, and family-related issues. The study also reveals a high tendency toward suicide, involvement in illegal activities, and failure to attain academic goals among depressed students. These findings are expected to provide guidelines and policy strategies suited to the nature of these problems, and to serve as a basis for future researchers, academics, and policymakers working to reduce this social problem.

    Long Movie Clip Classification with State-Space Video Models

    Most modern video recognition models are designed to operate on short video clips (e.g., 5-10s in length). Because of this, it is challenging to apply such models to long movie understanding tasks, which typically require sophisticated long-range temporal reasoning capabilities. The recently introduced video transformers partially address this issue by using long-range temporal self-attention. However, due to the quadratic cost of self-attention, such models are often costly and impractical to use. Instead, we propose ViS4mer, an efficient long-range video model that combines the strengths of self-attention and the recently introduced structured state-space sequence (S4) layer. Our model uses a standard Transformer encoder for short-range spatiotemporal feature extraction, and a multi-scale temporal S4 decoder for subsequent long-range temporal reasoning. By progressively reducing the spatiotemporal feature resolution and channel dimension at each decoder layer, ViS4mer learns complex long-range spatiotemporal dependencies in a video. Furthermore, ViS4mer is 2.63× faster and requires 8× less GPU memory than the corresponding pure self-attention-based model. Additionally, ViS4mer achieves state-of-the-art results in 7 out of 9 long-form movie video classification tasks on the LVU benchmark. Furthermore, we also show that our approach successfully generalizes to other domains, achieving competitive results on the Breakfast and the COIN procedural activity datasets. The code will be made publicly available.
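    To make the decoder design concrete, the following is a minimal PyTorch sketch of the multi-scale temporal decoder idea: progressively halving the sequence length and channel width between temporal mixing layers. This is a sketch under stated assumptions, not the released implementation; in particular, S4LayerStub substitutes a cheap depthwise temporal convolution for a true S4 layer, and all module names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class S4LayerStub(nn.Module):
    """Simplified stand-in for a structured state-space (S4) layer.
    Uses a depthwise 1-D convolution over time as a cheap proxy for
    the long convolution kernel a real S4 layer would compute."""
    def __init__(self, dim, kernel_size=31):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):            # x: (batch, time, dim)
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm(x + y)      # residual + layer norm

class MultiScaleDecoderBlock(nn.Module):
    """One decoder stage: temporal mixing, then halve the sequence
    length and the channel dimension, as a stand-in for the paper's
    progressive resolution/channel reduction."""
    def __init__(self, dim):
        super().__init__()
        self.s4 = S4LayerStub(dim)
        self.pool = nn.AvgPool1d(2)
        self.proj = nn.Linear(dim, dim // 2)

    def forward(self, x):
        x = self.s4(x)
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)  # halve time axis
        return self.proj(x)                               # halve channels

class ViS4merSketch(nn.Module):
    def __init__(self, dim=512, depth=3, num_classes=10):
        super().__init__()
        self.blocks = nn.Sequential(
            *[MultiScaleDecoderBlock(dim // 2**i) for i in range(depth)])
        self.head = nn.Linear(dim // 2**depth, num_classes)

    def forward(self, feats):        # feats: per-frame encoder features
        x = self.blocks(feats)
        return self.head(x.mean(dim=1))  # pool over remaining time steps

# e.g., 1024 frame-level features of width 512 -> class logits
logits = ViS4merSketch()(torch.randn(2, 1024, 512))
```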

    Generalizing the Negative Binomial-Lindley Model for Accounting Subpopulation Heterogeneity in Crash Data Analysis

    Crash data are often highly dispersed; they may also include a large number of zero observations or have a long tail. The traditional negative binomial (NB) model cannot model such data properly. To overcome this issue, the negative binomial-Lindley (NB-L) model has been proposed as an alternative to the NB for analyzing data with these characteristics. Research studies have shown that the NB-L model performs better than the NB when data include numerous zero observations or have a long tail. In addition, crash data are often collected from sites with different spatial or temporal characteristics, so it is not unusual to assume that crash data are drawn from multiple subpopulations. Finite mixture models are powerful tools that can account for underlying subpopulations and capture population heterogeneity. This thesis first documented the derivations and characteristics of the finite mixture NB-L (FMNB-L) model for analyzing data generated from heterogeneous subpopulations with many zero observations and a long tail. The application of the model was demonstrated with a simulation study to identify subpopulations. The FMNB-L model was then used to analyze Texas four-lane freeway crashes. These data had unique characteristics: they were highly dispersed and included many locations with a very large number of crashes as well as a significant number of locations with zero crashes. Multiple goodness-of-fit metrics were used to compare the FMNB-L model with the NB, NB-L, and finite mixture NB models. The FMNB-L identified two subpopulations in the dataset, and the results showed a significantly better fit for the FMNB-L compared to the other analyzed models. In addition, differences in various temporal and spatial factors result in variations of model coefficients among different groups of observations; a grouped random parameters model is one strategy to account for such unobserved heterogeneity. This thesis therefore also proposed the derivation and application of a grouped random parameters negative binomial-Lindley (G-RPNB-L) model to account for unobserved heterogeneity in crash data with many zero observations. First, a simulation study was designed to illustrate the proposed model and showed its ability to correctly estimate the coefficients. Then, an empirical dataset from Maine was used to demonstrate the model's application. It was found that the impact of the weather variables “Days with precipitation greater than 1.0 inch” and “Days with temperature less than 32°F” varied across Maine counties. The proposed model was also compared with the NB, NB-L, and grouped random parameters NB (G-RPNB) models using different goodness-of-fit metrics; the G-RPNB-L model showed a superior fit compared to the other models.
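    For intuition about the data-generating process, the following sketch (illustrative, not from the thesis) simulates a two-component finite mixture NB-L process in Python. It relies on two standard facts: a Lindley(θ) variable is a two-component gamma mixture, and an NB draw can be generated as a gamma-Poisson mixture. All parameter values are made up to produce many zeros and a long tail.

```python
import numpy as np

rng = np.random.default_rng(0)

def rlindley(theta, size, rng):
    """Sample Lindley(theta) via its known two-component gamma mixture:
    Gamma(1, 1/theta) w.p. theta/(theta+1), else Gamma(2, 1/theta)."""
    shape = np.where(rng.random(size) < theta / (theta + 1), 1.0, 2.0)
    return rng.gamma(shape, 1.0 / theta, size)

def rnb_lindley(mu, phi, theta, size, rng):
    """NB-Lindley draw: multiply the NB mean by a Lindley frailty term,
    then sample NB(mean=mu*eps, dispersion=phi) as gamma-Poisson."""
    eps = rlindley(theta, size, rng)
    lam = rng.gamma(phi, mu * eps / phi, size)  # NB via gamma-Poisson
    return rng.poisson(lam)

# Two-component finite mixture (FMNB-L): low-mean vs high-mean sites.
# Both components are drawn in full, then selected per site; fine for
# an illustration, wasteful for real simulation studies.
n = 10_000
z = rng.random(n) < 0.6                      # mixture weights (0.6, 0.4)
y = np.where(z,
             rnb_lindley(mu=0.5, phi=1.2, theta=2.0, size=n, rng=rng),
             rnb_lindley(mu=6.0, phi=0.8, theta=1.5, size=n, rng=rng))
print(f"zeros: {(y == 0).mean():.2%}, max: {y.max()}")  # many zeros, long tail
```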

    Deep Learning Models for Predicting Phenotypic Traits and Diseases from Omics Data

    Computational analysis of high-throughput omics data, such as gene expression, copy number alterations, and DNA methylation (DNAm), has become popular in disease studies in recent decades because such analyses can be very helpful for predicting whether a patient has a certain disease or one of its subtypes. However, due to the high-dimensional nature of these data sets, with hundreds of thousands of variables and very small numbers of samples, traditional machine learning approaches, such as support vector machines (SVMs) and random forests, have limitations in analyzing them efficiently. In this chapter, we review progress in applying deep learning algorithms to such biological questions, focusing on available software tools and public data sources for these tasks. In particular, we present case studies using deep neural network (DNN) models for classifying molecular subtypes of breast cancer, and DNN-based regression models that account for interindividual variation in triglyceride concentrations measured at different visits using DNAm profiles of peripheral blood samples. We show that integrating multi-omics profiles into DNN-based learning methods can improve the prediction of molecular subtypes of breast cancer, and we demonstrate the superiority of our proposed DNN models over the SVM model for predicting triglyceride concentrations.
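    As a concrete, hypothetical illustration of the p >> n setting the chapter discusses, the following PyTorch sketch builds a small funnel-shaped DNN classifier with heavy dropout for subtype prediction. The feature count, labels, and training loop are stand-ins, not the chapter's actual models or data; multi-omics integration would simply widen the input by concatenating modalities.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 20,000 expression features, 200 samples,
# 4 molecular subtypes. A narrow funnel with heavy dropout is a common
# choice when features vastly outnumber samples, as in omics data.
n_features, n_classes = 20_000, 4

model = nn.Sequential(
    nn.Linear(n_features, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 64), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(64, n_classes),              # subtype logits
)

x = torch.randn(200, n_features)           # stand-in omics matrix
y = torch.randint(0, n_classes, (200,))    # stand-in subtype labels
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                         # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```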

    A Simple LLM Framework for Long-Range Video Question-Answering

    We present LLoVi, a language-based framework for long-range video question-answering (LVQA). Unlike prior long-range video understanding methods, which are often costly and require specialized long-range video modeling designs (e.g., memory queues, state-space layers, etc.), our approach uses a frame/clip-level visual captioner (e.g., BLIP2, LaViLa, LLaVA) coupled with a Large Language Model (GPT-3.5, GPT-4), leading to a simple yet surprisingly effective LVQA framework. Specifically, we decompose the short- and long-range modeling aspects of LVQA into two stages. First, we use a short-term visual captioner to generate textual descriptions of short video clips (0.5-8s in length) densely sampled from a long input video. Afterward, an LLM aggregates the densely extracted short-term captions to perform the long-range temporal reasoning needed to understand the whole video and answer a question. To analyze what makes our simple framework so effective, we thoroughly evaluate various components of our system. Our empirical analysis reveals that the choice of the visual captioner and LLM is critical for good LVQA performance. Furthermore, we show that a specialized prompt that asks the LLM first to summarize the noisy short-term visual captions and then answer a given input question leads to a significant LVQA performance boost. On EgoSchema, which is best known as a very long-form video question-answering benchmark, our method achieves 50.3% accuracy, outperforming the previous best-performing approach by 18.1% (absolute gain). In addition, our approach outperforms the previous state-of-the-art by 4.1% and 3.1% on NeXT-QA and IntentQA, respectively. We also extend LLoVi to grounded LVQA and show that it outperforms all prior methods on the NeXT-GQA dataset. We will release our code at https://github.com/CeeZh/LLoVi.
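    The two-stage structure is simple enough to outline in code. The following is a minimal sketch of the control flow only: caption_clip and call_llm are hypothetical stand-ins for a visual captioner (e.g., LaViLa) and an LLM endpoint, and the prompts merely illustrate the summarize-then-answer pattern the abstract credits with a performance boost.

```python
from typing import Callable, List

def answer_long_video(clips: List[object],
                      question: str,
                      caption_clip: Callable[[object], str],
                      call_llm: Callable[[str], str]) -> str:
    # Stage 1: dense short-term captions for 0.5-8s clips sampled
    # from the long input video.
    captions = [caption_clip(c) for c in clips]
    # Stage 2a: ask the LLM to summarize the noisy captions first --
    # the specialized prompt variant described in the abstract.
    summary = call_llm(
        "Summarize what happens in this video, given these clip "
        "captions in order:\n" + "\n".join(captions))
    # Stage 2b: answer the question from the cleaned-up summary.
    return call_llm(f"Video summary: {summary}\n"
                    f"Question: {question}\nAnswer concisely:")
```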

    Roles of Stakeholders Towards Project Success: A Conceptual Study

    Stakeholders play significant roles in project success. They ensure clear communication of project goals, contribute to decision-making, and demonstrate commitment, increasing the likelihood of successful outcomes. They also act as advocates within their organizations, generating buy-in and support. The main purpose of this paper is to identify and discuss the roles of stakeholders in project success. The paper is conceptual in nature and draws on literature published between 2007 and 2023 in a wide range of journals. After scrutinizing this literature, the paper arrives at a number of findings. The findings imply that stakeholder involvement in a project is crucial for its success and sustainability, and that stakeholders play a significant role in ensuring project performance. Project managers need to acquire stakeholder management skills to address the communication requirements of stakeholders, which is important for the success of the project. The paper recommends that policymakers, practitioners, and academics manage expectations and strike a balance among stakeholders.

    FDTD Analysis Fiber Optic SPR Biosensor for DNA Hybridization: A Numerical Demonstration with Graphene

    This article presents the design and finite-difference time-domain (FDTD) analysis of a fiber optic surface plasmon resonance (SPR) biosensor for biomedical applications, especially DNA-DNA hybridization. The fiber cladding at the middle portion is constructed with the proposed hybrid of gold (Au), graphene, and a sensing medium. The sensor recognizes the adsorption of DNA biomolecules onto the PBS saline sensing medium using the attenuated total reflection (ATR) technique. The refractive index (RI) varies with the adsorption of different concentrations of biomolecules. The results show that the sensitivity with a monolayer of graphene is improved by up to 40% compared with the bare gold layer. Owing to the increased adsorption capability of DNA molecules on graphene, sensitivity increases compared to the conventional gold thin-film SPR biosensor. Numerical analysis shows that the variation of the SPR angle for mismatched DNA strands is quite negligible, whereas that for complementary DNA strands is considerable, which is essential for proper detection of DNA hybridization. Finally, the effect of inserting the graphene layer on the electric field distribution is analyzed with the FDTD technique using Lumerical FDTD Solutions software.
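    For context on why the SPR angle tracks the refractive index of the sensing medium, the following sketch evaluates the textbook ATR resonance condition rather than the article's FDTD model; the gold permittivity and core index are typical illustrative values, not values taken from the paper.

```python
import numpy as np

# Illustrative SPR-angle calculation from the standard coupling
# condition n_core * sin(theta) = Re(n_spp). Assumptions: eps_au is a
# typical gold permittivity near 633 nm; n_core is the fiber core index.
eps_au = -11.7 + 1.2j
n_core = 1.50

def spr_angle_deg(n_sense):
    eps_s = n_sense**2
    n_spp = np.sqrt(eps_au * eps_s / (eps_au + eps_s)).real  # SPP index
    return np.degrees(np.arcsin(n_spp / n_core))

# RI rises as DNA adsorbs onto the sensing layer -> resonance angle shifts
for n in (1.33, 1.34):
    print(f"n_s = {n}: theta_SPR ~ {spr_angle_deg(n):.2f} deg")
```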

    COVID-19 fake news detection model on social media data using machine learning techniques

    Social media sites like Instagram, Twitter, and Facebook have become indispensable parts of the daily routine. These sites are powerful instruments for spreading news, photographs, and other sorts of information. However, since the emergence of the COVID-19 pandemic in December 2019, many articles and headlines concerning the epidemic have surfaced on social media, which is frequently used to disseminate fraudulent material or information. This disinformation may confuse consumers and cause worry, and its widespread dissemination is hard to counter. As a result, it is critical to develop a model for recognizing fake news in the news stream. The dataset used for classification in this work is a synthesis of COVID-19-related news from numerous social media and news sources. Features are retrieved from the unstructured textual data gathered from a variety of sources. Then, to eliminate the computational burden of analyzing all of the features in the dataset, feature selection is performed. Finally, multiple cutting-edge machine learning models, Support Vector Machine (SVM), Naïve Bayes (NB), and Decision Tree (DT), are trained to categorize the COVID-19-related dataset. These algorithms are evaluated using several measures: accuracy, precision, recall, and F1 score. The Decision Tree algorithm reported the highest accuracy of 100%, compared to 98.7% for the Support Vector Machine and 96.3% for Naïve Bayes.
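    A minimal scikit-learn sketch of such a pipeline is shown below: text features, feature selection, and the three classifiers named above. The TF-IDF features, chi-squared selector, and toy data are assumptions for illustration; the paper's exact features and dataset are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Toy headlines standing in for the COVID-19 news dataset.
texts = ["vaccine contains microchips", "who reports new case counts",
         "drinking bleach cures covid", "masks reduce transmission"]
labels = [1, 0, 1, 0]                       # 1 = fake, 0 = real

for clf in (LinearSVC(), MultinomialNB(), DecisionTreeClassifier()):
    pipe = make_pipeline(TfidfVectorizer(),
                         SelectKBest(chi2, k=10),  # drop weak features
                         clf)
    pipe.fit(texts, labels)
    preds = pipe.predict(texts)             # in-sample, illustration only
    print(clf.__class__.__name__)
    print(classification_report(labels, preds, zero_division=0))
```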