3,915 research outputs found

    The detection of fraudulent financial statements using textual and financial data

    Fraudulent financial statements inhibit markets from allocating resources efficiently and induce considerable economic costs; market participants therefore strive to identify them. Reliable automated fraud detection systems based on publicly available data may help to allocate audit resources more effectively. This study examines how quantitative data (financials) and corporate narratives can both be used to identify accounting fraud (proxied by the SEC’s AAERs). The detection models are built on a sound foundation from fraud theory, highlighting how accounting fraud is carried out and discussing why companies engage in fraudulent alteration of financial records. The study relies on a comprehensive methodological approach to create the detection model: the design process is divided into eight design questions and three enhancing questions, shedding light on important issues during model creation, improvement, and testing. The corporate narratives are analysed using multi-word phrases, including an extensive language standardisation that captures narrative peculiarities more precisely and partly addresses context. These narrative clues are enriched with financial predictors that proved successful in previous studies, so that the annual report is captured in its full breadth. The results indicate a reliable and robust detection performance over a timeframe of 15 years. Furthermore, they suggest that text-based predictors are superior to financial ratios, and that a combination of both is required to achieve the best possible results. Moreover, text-based predictors are found to vary considerably over time, which underscores the importance of updating fraud detection systems frequently. The achieved detection performance was, on average, slightly higher than that of comparable approaches.
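A minimal sketch of how such a combined model might assemble its inputs is shown below. The bigram vocabulary, the two financial ratios, and the function names are illustrative assumptions, not the study's actual implementation; the point is only that multi-word-phrase counts and financial predictors end up in one feature vector for a downstream classifier.

```python
from collections import Counter

def bigram_counts(narrative):
    """Count multi-word phrases (here: simple bigrams) in a narrative section."""
    tokens = narrative.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def build_features(narrative, financial_ratios, vocab):
    """Combine text-based predictors with financial predictors in one vector."""
    counts = bigram_counts(narrative)
    text_features = [counts.get(phrase, 0) for phrase in vocab]
    # financial predictors, e.g. accrual or leverage ratios from the balance sheet
    return text_features + list(financial_ratios)

# illustrative phrase vocabulary and two hypothetical financial ratios
vocab = [("revenue", "growth"), ("going", "concern")]
features = build_features(
    "strong revenue growth despite going concern doubts",
    [0.42, 1.8],
    vocab,
)
print(features)  # [1, 1, 0.42, 1.8]
```

Any standard classifier can then be fit on such vectors for fraud versus non-fraud firm-years.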

    A Corpus Driven Computational Intelligence Framework for Deception Detection in Financial Text

    Financial fraud rampages onwards, seemingly uncontained: the annual cost of fraud in the UK is estimated to be as high as £193bn a year [1]. From a data science perspective, and hitherto less explored, this thesis demonstrates how linguistic features can drive data mining algorithms to aid in unravelling fraud. To this end, the spotlight is turned on Financial Statement Fraud (FSF), known to be the costliest type of fraud [2]. A new corpus of 6.3 million words is composed of 102 annual reports/10-Ks (narrative sections) from firms formally indicted for FSF, juxtaposed with 306 non-fraud firms of similar size and industrial grouping. Unlike other similar studies, this thesis takes a wide-angled view and extracts a range of features of different categories from the corpus. These linguistic correlates of deception are uncovered using a variety of techniques and tools. Corpus linguistics methodology is applied to extract keywords and to examine linguistic structure. N-grams are extracted to draw out collocations. Readability measurement in financial text is advanced through new indices that probe the text at a deeper level. Cognitive and perceptual processes are also picked out. Tone, intention, and liquidity are gauged using customised word lists. Linguistic ratios are derived from grammatical constructs and word categories. An attempt is also made to determine ‘what’ was said as opposed to ‘how’. Further, a new module is developed to condense synonyms into concepts. Lastly, frequency counts of keywords unearthed in a previous content-analysis study of financial narrative are also used. These features then drive machine learning based classification and clustering algorithms to determine whether they aid in discriminating fraud from non-fraud firms. The battery of models built typically exceeds a classification accuracy of 70%. The above process is amalgamated into a framework.
    The process outlined, driven by empirical data, demonstrates in a practical way how linguistic analysis can aid fraud detection, and constitutes a unique contribution to deception detection studies.
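The kind of feature extraction described above can be sketched as follows. This is a toy illustration under stated assumptions: the word list, the single readability proxy (average words per sentence), and the function name are invented for this example, whereas the thesis derives many richer indices and customised lists.

```python
import re

# illustrative custom word list; the thesis uses far richer, customised lists
NEGATIVE_WORDS = {"loss", "decline", "litigation"}

def linguistic_features(text):
    """Derive simple readability and tone features from a 10-K narrative."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        # a crude readability proxy: average words per sentence
        "avg_sentence_length": len(words) / len(sentences),
        # tone gauged against the custom word list
        "negative_ratio": sum(w in NEGATIVE_WORDS for w in words) / len(words),
    }

feats = linguistic_features("Revenue fell sharply. Litigation caused a loss.")
print(feats)
```

Feature dictionaries like this, computed per report, are what the classification and clustering algorithms would consume.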

    News – European Union


    Fraudulent Financial Statements and the importance of Red Flags

    Master's thesis, University of Macedonia, Thessaloniki, 2019. Prior research has been conducted to determine the importance of red flags using different samples; Gullkvist & Jokipii (2013), for instance, examined the perceived importance of red flags across fraudulent financial reporting and misappropriation of assets. The purpose of this study is to cover the gap in the literature on the importance of red flags among different sample groups. To that end, a literature review on financial statement fraud is first presented, so as to develop a deep understanding of the field, and a quantitative study using questionnaires was then carried out. Data analysis revealed the ten most important red flags. It also showed that the correlation between only a few red flags and demographic characteristics is statistically significant; in general, the demographic characteristics of the respondents, who currently work at auditing companies in the Netherlands, do not differentiate the answers regarding the importance of red flags.
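A ranking like the study's "top 10 red flags" can be produced by averaging questionnaire ratings per flag. The sketch below uses hypothetical flags and scores, not the study's survey data.

```python
from statistics import mean

# hypothetical 1-5 importance ratings from questionnaire respondents;
# the actual red flags and ratings come from the study's survey
responses = {
    "management override of internal controls": [5, 4, 5, 5],
    "significant related-party transactions":   [4, 5, 4, 4],
    "frequent changes of external auditor":     [3, 2, 4, 3],
}

# rank red flags by mean rated importance, as a top-N list would report them
ranked = sorted(responses, key=lambda flag: mean(responses[flag]), reverse=True)
print(ranked)
```

Demographic effects would then be checked with correlation or group-comparison tests on the same ratings.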

    Health Policy Newsletter September 2006 Vol. 19, No. 3


    The Relationship between Accounting Frauds and Economic Fluctuations: A Case of Project Based Organizations in UAE

    Purpose: The purpose of this study is to provide a better understanding of the rate of accounting fraud (misappropriation of assets, fraudulent financial reporting) and how it occurs in different business cycles (economic fluctuations). Further, it aims to explore the relationship between the factors influencing economic fluctuations and the level of accounting fraud. Design/methodology/approach: A qualitative research design based on semi-structured interviews was used with a group of internal controllers and external auditors from the Big Four auditing companies, in addition to other leading and certified audit offices in the UAE, to identify how the factors influencing GDP fluctuations could affect the degree of accounting fraud. Findings: The GDP components that influence economic fluctuations are associated with the rate of accounting fraud, especially fraudulent financial reporting. Economic factors including GDP, unemployment, and inflation are very important to the steadiness of an economy: a probable drop in GDP, a rise in the unemployment level, or high rates of inflation positively influence the occurrence of accounting fraud. Research limitations/implications: Business cycles and economic fluctuations have many roots, such as variations in trading strategy, warfare, and inflation caused by governmental finance or fears; these, however, represent non-economic data that are hard to rationalise with economic theory. Typical macroeconomic theory tends to explain business cycles in terms of errors and emphasises correcting those errors, either by active policy or by supporting a separate strategy (Raudino, 2016). It is therefore very hard to capture all the factors influencing economic fluctuations. However, most studies in the literature consider GDP the most important factor, in addition to inflation and unemployment.
    Practical implications: This study contributes to both economic and accounting research by providing findings from an investigation of the elements driving economic fluctuations and of how these fluctuations might impact the rate of accounting fraud (AF), with implications for economists, financiers, shareholders, stakeholders, stockholders, and external auditors of auditing companies, offering additional insight into audit risk in PBOs at different stages of GDP. Originality/value: This study contributes to both economic and accounting research through exploratory research into GDP fluctuations, inflation, unemployment, and the rate of accounting fraud.

    I Was Just Doing My Job! Evolution, Corruption, and Public Relations in Interviews with Government Whistleblowers

    This paper addresses public sector communication by exploring the role of government whistleblowers. It argues for the need to reconnect voices by creating platforms from which whistleblowers can speak, without fear of retribution, for the betterment of society. The paper presents 13 in-depth interviews with whistleblowers who worked for governmental entities in the United States or as contractors to U.S. government entities. The goal was to understand their stories: why they blew the whistle, how they blew the whistle, how whistleblowing affected their relationships with their employers, what role public relations executives and practitioners played in their whistleblowing experience, and how public relations executives and practitioners could interact more productively with whistleblowers. Four of the five theories examined explained some of the dynamics of whistleblowing: resource dependence perspective explained the role of upper management in relying on wrongdoing; normalization of corruption theory explained attempts to conscript new employees into corrupt practices; justice theory explained the sense of betrayal felt by employees who tried to correct wrongdoing; and relationship management further explained the negative impact of retaliation on the relationships between whistleblowers and their employers. However, evolutionary theory explained all aspects of whistleblowing in terms of Darwinian natural selection.

    Integrated Reporting in practice: Insights on Intellectual Capital and Big Data

    Integrated reporting is gaining momentum as a paradigm that redefines the traditional reporting boundary. Promoted by the International Integrated Reporting Council (IIRC), it is tracing a new path for corporate reporting by combining financial and non-financial information in a single document, the integrated report (IR). This report aims to convey how the company’s strategy, governance, performance, and prospects lead to the creation of value in the short, medium, and long term. The IR is prepared in accordance with the principle-based guidance of the International Integrated Reporting Framework, which provides impetus for an interconnected approach to corporate reporting and includes six forms of capital in portraying the value creation process: financial, manufactured, intellectual, human, social and relationship, and natural capitals. Integrated reporting and the IR provide new scope for research on both intellectual capital (IC) and Big Data. More specifically, the IR entangles three intangible, non-financial capitals in its value creation story that have traditionally been named IC. Further, its preparation requires a huge amount of strategic, operational, and performance information to be gathered and processed. In this regard, the IIRC has urged companies to adopt Big Data as a single, combined information architecture that assists in implementing integrated reporting. In 2018, it also launched an initiative aimed at collecting early experiences with Big Data from companies that issue IRs. The overall purpose of this thesis is to unveil how subjects involved in IR preparation (i.e. IR preparers) deal with IC and Big Data while engaging in integrated reporting. In particular, this thesis inspects integrated reporting in practice from three different but interrelated perspectives to gain insights on IC and Big Data.
    It is a collection of three papers that respectively address: 1) the ontology of IC (first paper); 2) the performativity of IC (second paper); and 3) the epistemic authority of Big Data (third paper). Assuming an insider viewpoint, this thesis draws on in-depth interviews with company members to explore the process of IR preparation. It adopts a critical/interpretative methodological approach that gives access to the IR preparers’ ideas and experiences; this approach also makes it possible to gain knowledge and understanding of the flourishing integrated reporting process by unpacking the procedures underlying it. The first paper sheds new light on the subjective nature of IC ontology, which emerges in the function that IR preparers assign to IC in the process of IR preparation. The study shows that integrated thinking helps develop a unique idea of how IC exists in the process of value creation; it is the first study to empirically investigate IC ontology in the integrated reporting context. The second paper reveals that IC definition, classification, and valuation stimulate ongoing interaction among various actors. Sketches, matrices, and maps inspired by the International Integrated Reporting Framework are pivotal in defining concepts and categories of IC and its connection to value creation. The paper enriches the scant research that examines the performativity of IC in the context of corporate external reporting. Finally, the third paper offers exploratory insights into the extent to which corporate members might rely on Big Data while preparing their IR. It suggests that the epistemic authority of Big Data might stem both from the energy that the company devotes to exploiting Big Data and from the identification of prospective information. The study contributes to the nascent literature on Big Data in corporate reporting by focusing on the process of IR preparation.

    Parsing the Plagiary Scandals in History and Law

    [Excerpt] “In 2002 the history of History was scandal. The narrative started when a Pulitzer Prize-winning professor was caught foisting bogus Vietnam War exploits as background for classroom discussion. His fantasy lapse prefaced a more serious irregularity: the author of a Bancroft Prize-winning book was accused of falsifying key research documents, and the award was rescinded. The year reached a crescendo with two plagiarism cases “that shook the history profession to its core.” Stephen Ambrose and Doris Kearns Goodwin were “crossover” celebrities: esteemed academics, Pulitzer winners, with careers embellished by public intellectual reputations. The media nurtured a Greek tragedy: two superstars entangled in the labyrinth of the worst-case academic curse, accusations that they copied without attribution. Their careers dangled on the idiosyncratic slope of paraphrasing, with its reefs of echoes, mirroring, recycling, borrowing, etc. As the Ambrose-Kearns Goodwin imbroglio ignited critique from the History community, a sequel engulfed Harvard Law School. Alan Dershowitz, Charles Ogletree, and Laurence Tribe were implicated in plagiarism allegations; the latter two were ensnared on the paraphrase slope. The New York Times headline anticipated a new media frenzy: When Plagiarism’s Shadow Falls on Admired Scholars. Questioned after the first two incidents, the President of Harvard said: “If you had a third one then I would have said, ‘Okay, you get to say this is a special thing, a focused problem at the Law School.’” There was no follow-up comment after the Tribe accusation. The occurrence of similar plagiarism packages in two disciplines within an overlapping time frame justifies an inquiry. The following case studies of six accusation narratives identify a congeries of shared issues, subsuming a crossfire of contention over definition, culpability, and sanction.
    While the survey connects core History-Law commonalities, each case is defined by its own distinctive cluster of signifiers. The primary source for the explication of each signifier cluster is the media of newspapers, trade journals, television, and the internet. The media presence is the Article’s motif: each case study summarizes a media construct of a slice of the plagiarism debate. By the author’s decree the debate is restricted to “pure” plagiarism: the appropriation of another’s text without attribution. The survey is conducted in chronological order, beginning with History. Ward Churchill’s sui generis smutch from plagiarism continues to agitate media coverage. His argument that a dismissal by the University of Colorado for academic misconduct would constitute cover for a First Amendment-protected essay on 9/11 adds further challenge to the plagiary abyss. This Article concludes with up-to-date coverage of the Churchill narrative.

    Annual Report 2019-2020
