13,328 research outputs found

    Advanced analytical methods for fraud detection: a systematic literature review

    The developments of the digital era demand new ways of producing goods and rendering services. This fast-paced evolution forces companies to change, and auditors must keep up with the constant transformation. Given the dynamic dimensions of data, it is important to seize the opportunity to add value to companies, and the need for more robust fraud-detection methods is evident. This thesis investigates the use of advanced analytical methods for fraud detection through an analysis of the existing literature on the topic. Both a systematic literature review and a bibliometric approach are applied to the most appropriate database to measure scientific production and current trends. The study aims to contribute to the academic research conducted so far by centralizing the existing information on this topic.

    When Giant Language Brains Just Aren't Enough! Domain Pizzazz with Knowledge Sparkle Dust

    Large language models (LLMs) have significantly advanced the field of natural language processing, with GPT models at the forefront. While their remarkable performance spans a range of tasks, adapting LLMs to real-world business scenarios still poses challenges warranting further investigation. This paper presents an empirical analysis aimed at bridging the gap in adapting LLMs to practical use cases. We select insurance question answering (QA) as a case study because of the reasoning it demands. For this task we design a new model based on LLMs that are empowered by additional knowledge extracted from insurance policy rulebooks and DBpedia. The additional knowledge helps the LLMs understand new insurance concepts for domain adaptation. Preliminary results on two QA datasets show that knowledge enhancement significantly improves the reasoning ability of GPT-3.5 (55.80% and 57.83% accuracy). The analysis also indicates that existing public knowledge bases such as DBpedia are beneficial for knowledge enhancement. Our findings reveal that the inherent complexity of business scenarios often necessitates incorporating domain-specific knowledge and external resources for effective problem-solving.
    Comment: Ongoing work on adapting LLMs to business scenarios
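    The knowledge-enhancement step this abstract describes can be sketched as a simple retrieve-then-prompt pipeline: domain terms found in a question are looked up in a knowledge base (standing in for the insurance rulebooks and DBpedia), and the retrieved definitions are prepended to the prompt sent to the LLM. All names and data below are illustrative assumptions, not the paper's code.

```python
# Hypothetical sketch of knowledge-enhanced prompting for insurance QA.
# The tiny in-memory dict stands in for policy rulebooks / DBpedia lookups.

KNOWLEDGE_BASE = {
    "deductible": "The amount the policyholder pays out of pocket before the insurer pays a claim.",
    "premium": "The periodic amount paid to keep an insurance policy active.",
}

def retrieve_definitions(question: str, kb: dict) -> list:
    """Return definitions for every knowledge-base term mentioned in the question."""
    q = question.lower()
    return [f"{term}: {defn}" for term, defn in kb.items() if term in q]

def build_augmented_prompt(question: str, kb: dict) -> str:
    """Prepend retrieved domain knowledge to the question before calling the LLM."""
    context = retrieve_definitions(question, kb)
    header = "Background knowledge:\n" + "\n".join(context) if context else ""
    return f"{header}\n\nQuestion: {question}\nAnswer:".strip()

prompt = build_augmented_prompt("Is the deductible waived for glass damage?", KNOWLEDGE_BASE)
```

    The augmented prompt now carries the definition of "deductible" as explicit context, which is the general mechanism by which external knowledge helps an LLM reason about unfamiliar domain concepts.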

    Designing an On-Demand Dynamic Crowdshipping Model and Evaluating its Ability to Serve Local Retail Delivery in New York City

    Nowadays city mobility is challenging, mainly in populated metropolitan areas. Growing commute demands, an increase in the number of for-hire vehicles, an enormous escalation in intra-city deliveries, and limited infrastructure (road capacities) all contribute to mobility challenges. These challenges typically have significant impacts on residents’ quality of life, particularly from an economic and environmental perspective. Decision-makers have to optimize transportation resources to minimize system externalities, especially in large-scale metropolitan areas. This thesis focuses on the intra-city mobility problems experienced by travelers (in the form of congestion and imbalanced taxi resources) and businesses (in the form of last-mile delivery), while taking into consideration a measurement of potential adoption by citizens (in the form of a survey). To find solutions to this mobility problem, this dissertation proposes three distinct and complementary methodological studies. First, taxi demand is predicted by employing a deep learning approach that leverages Long Short-Term Memory (LSTM) neural networks, trained over publicly available New York City taxi trip data. Taxi pickup data are binned based on geospatial and temporal informational tags, which are then clustered using a technique inspired by Principal Component Analysis. The spatiotemporal distribution of taxi pickup demand is studied within short-term periods (the next hour) as well as long-term periods (the next 48 hours) within each data cluster. The performance and robustness of the LSTM model are evaluated through a comparison with Adaptive Boosting Regression and Decision Tree Regression models fitted to the same datasets. In the second study, an On-Demand Dynamic Crowdshipping system is designed to utilize excess transport capacity to serve parcel-delivery tasks and passengers collectively.
    This method is general and could be extended to all types of public transportation modes, depending on the availability of data. The system is evaluated in a New York City case study to assess the impacts of crowdshipping (using taxis as carriers) on trip cost, vehicle miles traveled, and travel behavior. Finally, a Stated Preference (SP) survey is presented, designed to collect information about people’s willingness to participate in a crowdshipping system. The survey is analyzed to determine the essential attributes and to evaluate the likelihood of individuals participating in the service, either as requesters or as carriers. It collects information on the preferences and important attributes of New York citizens, describing which segments of the population are willing to participate in a crowdshipping system. While the transportation problems are complex and approximations had to be made within the studies, this dissertation provides a comprehensive way to model and understand the potential impact of efficiently utilizing existing resources in transportation systems. It offers insights to decision-makers and academics about potential areas of opportunity and methodologies for optimizing the transportation systems of densely populated areas. The dissertation presents methods that can optimize taxi distribution based on demand and optimize costs for retail delivery, while providing additional income for individuals. It also provides valuable insights for decision-makers on collecting population opinion about the service and analyzing the likelihood of participation. The analysis provides an initial foundation for future modeling and assessment of crowdshipping.
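    The preprocessing step the abstract describes, binning taxi pickups by spatial cell and hour to produce the count sequences an LSTM would be trained on, can be sketched as follows. The grid resolution and the pickup records are assumptions for illustration, not the dissertation's actual data or code.

```python
# Illustrative sketch of spatiotemporal binning of taxi pickups.
# Each pickup (lat, lon, timestamp) is mapped to a (grid cell, hour) key;
# the resulting counts form the demand series for short-term forecasting.
from collections import Counter
from datetime import datetime

CELL = 0.01  # assumed grid resolution in degrees (~1 km in latitude)

def bin_pickup(lat, lon, ts):
    """Map one pickup record to a (spatial cell, hour-of-day) key."""
    t = datetime.fromisoformat(ts)
    cell = (int(lat // CELL), int(lon // CELL))
    hour = t.replace(minute=0, second=0, microsecond=0)
    return cell, hour

pickups = [
    (40.7580, -73.9855, "2015-06-01T08:15:00"),  # Times Square area
    (40.7583, -73.9851, "2015-06-01T08:40:00"),  # same cell, same hour
    (40.7128, -74.0060, "2015-06-01T09:05:00"),  # downtown, next hour
]
demand = Counter(bin_pickup(*p) for p in pickups)
# demand maps each (cell, hour) bin to a pickup count; per-cell counts
# ordered by hour give the input sequences for LSTM training.
```

    Sorting each cell's counts by hour yields exactly the kind of per-cluster demand series on which an LSTM (or the AdaBoost and Decision Tree baselines mentioned in the abstract) would be fitted.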

    Underinvestment in On-the-Job Training?

    [Excerpt] A growing number of commentators are pointing to employer-sponsored on-the-job training (OJT) as a critical ingredient in a nation's competitiveness. American employers appear to devote less time and fewer resources to the training of entry-level blue-collar, clerical, and service employees than employers in Germany and Japan (Limprecht and Hayes 1982, Mincer and Higuchi 1988, Koike 1984, Noll et al. 1984, Wiederhold-Fritz 1985). In the United States, only 33 percent of workers with 1 to 5 years of tenure report having received skill-improvement training from their current employer (Hollenbeck and Wilkie 1985). Analyzing 1982 NLS-Youth data, Parsons (1985) reports that only 34 to 40 percent of the young workers in clerical, operative, service, and laborer jobs reported that it was "very true" that "the skills [I am] learning would be valuable in getting a better job." The payoffs to getting jobs that offer training appear to be very high, however. In Parsons's study, having a high-learning job rather than a no-learning job in 1979 increased a male youth's 1982 wage rate by 13.7 percent. While the 1980 job had no such effect, the 1981 job raised wages by 7.2 percent when it was a high-learning job rather than a no-learning job.

    Reverse mortgage: a neural network approach for pricing and risk assessment

    Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management.
    Population aging and low precautionary savings rates have put European public social systems under strain. As a result, governments have strongly encouraged home-ownership among seniors as a viable means of enhancing income streams and welfare, and the market has responded with equity-release instruments for pensioners. These products are most common in North America, where the elderly are less reluctant to express their desire to transform housing into wealth. Still, southern European countries present large home-ownership rates and an aging, low-income population that may well unlock future demand. Although housing is a highly illiquid asset and barriers such as emotional attachment and the inconvenience of moving exist, relatively new approaches to monetizing homes have undergone major developments in the recent literature. In particular, this study is mainly concerned with the risk and profitability analysis of reverse mortgage schemes through actuarial and deep learning techniques, in an attempt to conceive a framework that fully encompasses the valuation needs of companies willing to commercialize home-equity-based products.

    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.