Multimedia Forensic Analysis of TikTok Application Using National Institute of Justice (NIJ) Method
The advancement of technology, especially mobile devices such as smartphones, has had a significant impact on human life, particularly during the COVID-19 pandemic, which accelerated online activity on social media platforms such as TikTok. TikTok is a highly popular platform centred on short videos and images, often accompanied by music, but this popularity has also opened opportunities for misuse, including the spread of false information and defamation. To address this issue, this research applies mobile forensic analysis together with Error Level Analysis (ELA) to collect and authenticate digital evidence related to crimes on TikTok. Digital evidence is acquired from mobile devices using MOBILedit Forensic Express Pro and authenticated with ELA through tools such as FotoForensics and Forensically, supplemented by manual examination. The investigation follows the National Institute of Justice (NIJ) methodology, comprising ten stages of mobile forensic investigation, including scenario creation, identification, collection, examination, and analysis. The research yields manipulated digital evidence from TikTok, primarily concerning upload times, and ELA reveals signs of manipulation in the recovered images, demonstrating that these techniques can help uncover digital crimes and misuse on the platform.
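ELA rests on a simple observation: when a JPEG is re-saved at a known quality, regions that were edited or pasted in tend to recompress with a different error level than the untouched rest of the image. As a minimal sketch of the comparison step only (real ELA operates on decoded JPEG pixel data; the function names here are illustrative, not from the research):

```python
def error_levels(original, resaved):
    """Per-pixel absolute difference between an image's pixel values and
    those of a re-saved (recompressed) copy."""
    return [abs(a - b) for a, b in zip(original, resaved)]

def flag_suspicious(levels, factor=3.0):
    """Flag pixels whose error level far exceeds the mean: in ELA, edited
    regions often stand out from the image's overall compression error."""
    mean = sum(levels) / len(levels)
    return [i for i, e in enumerate(levels) if e > factor * mean]

# Toy example: eight pixels recompress with error 1, one with error 9.
levels = error_levels([10] * 9, [11] * 8 + [19])
suspects = flag_suspicious(levels)  # the last pixel stands out
```

Tools such as FotoForensics render these error levels as a brightness map so an analyst can judge the flagged regions visually rather than by threshold alone.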
Fast Machine Learning Method with Vector Embedding on Orthonormal Basis and Spectral Transform
This paper presents a novel fast machine learning method that leverages two
techniques: Vector Embedding on Orthonormal Basis (VEOB) and Spectral Transform
(ST). The VEOB converts the original data encoding into a vector embedding with
coordinates projected onto orthonormal bases. The Singular Value Decomposition
(SVD) technique is used to calculate the vector basis and projection
coordinates, leading to an enhanced distance measurement in the embedding space
and facilitating data compression by preserving the projection vectors
associated with the largest singular values. ST, on the other hand, transforms
a sequence of vectors into spectral space. By applying the Discrete Cosine
Transform (DCT) and selecting the most significant components, it streamlines
the handling of lengthy vector sequences. The paper provides examples of word
embedding, text chunk embedding, and image embedding, implemented in the Julia
language with a vector database. It also investigates unsupervised learning and
supervised learning using this method, along with strategies for handling large
data volumes.
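As a rough illustration of the two techniques (the paper's implementation is in the Julia language; this sketch is in Python with NumPy, and the function names are mine, not the paper's):

```python
import numpy as np

def veob_embed(data, k):
    """Vector Embedding on Orthonormal Basis (sketch): project each row of
    `data` onto the top-k right singular vectors from an SVD -- an
    orthonormal basis -- keeping only the coordinates tied to the largest
    singular values, which is where the compression comes from."""
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    basis = vt[:k]                # top-k orthonormal basis vectors
    return data @ basis.T         # projection coordinates in embedding space

def dct_ii(x):
    """Plain DCT-II of a 1-D sequence (written out to avoid a SciPy
    dependency)."""
    n = len(x)
    k = np.arange(n)
    return np.array([np.sum(x * np.cos(np.pi * (k + 0.5) * m / n))
                     for m in range(n)])

def spectral_transform(seq, m):
    """Spectral Transform (sketch): DCT each coordinate track of a vector
    sequence and keep the m lowest-frequency components, shrinking a long
    sequence to a fixed-size summary."""
    seq = np.asarray(seq, dtype=float)
    return np.stack([dct_ii(seq[:, j])[:m] for j in range(seq.shape[1])],
                    axis=1)
```

Truncating to the top-k singular directions is what enables the enhanced distance measurement and compression the abstract describes; truncating the DCT plays the analogous role for long sequences.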
A framework for securing email entrances and mitigating phishing impersonation attacks
Email is used every day for communication, and many countries and
organisations rely on it for official correspondence; it is highly valued for
confidential conversations and transactions in day-to-day business. The
frequent use of this channel and the quality of the information it carries
have attracted cyber attackers. Many existing techniques mitigate attacks on
email, but they focus largely on email content and behaviour rather than on
securing the entrances to mailboxes, composition, and settings. This work
aims to protect users' email composition and settings so that attackers
cannot use an account once it is hacked or hijacked, and cannot set up
forwarding from the victim's account to another address, which would silently
stop the user from receiving email. A secure code is applied to the
composition send button to curtail insider impersonation attacks and to
secure applications left open on public and private devices.
Detecting Fraud in Bankrupt Municipalities Using Benford's Law
This thesis explores whether fraud or mismanagement in municipal governments can be detected in advance of bankruptcy through financial statement analysis using Benford's Law. Benford's Law states that the distribution of first digits in real-world observations is not uniform; instead, numbers with lower first digits (1, 2, …) occur more frequently than those with higher first digits (…, 8, 9). If a data set does not follow Benford's distribution, it is likely that the data has been manipulated, and this widespread phenomenon has been used as a tool to detect anomalies in data sets. The annual financial statements of Jefferson County, Vallejo City, and Orange County were analyzed. All the data sets showed overall nonconformity to Benford's Law, indicating the possibility of fraud. I find that Benford's Law, had it been applied in real time to those financial statements, would have detected that something was amiss; that would have been valuable because each of those jurisdictions subsequently went bankrupt. This paper demonstrates that Benford's Law may in some cases be useful as an early indicator of possible fraud in municipal governments' financial data.
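Benford's Law gives expected first-digit frequencies P(d) = log10(1 + 1/d), so conformity can be checked by comparing observed digit counts against these proportions. A minimal sketch in Python (the chi-square statistic is one common goodness-of-fit choice; the thesis may use other measures, and the helper names are mine):

```python
import math
from collections import Counter

# Benford's expected first-digit frequencies: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading non-zero digit of a number, or None if there is none."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0]) if s and s[0].isdigit() else None

def chi_square_vs_benford(values):
    """Chi-square statistic of observed first digits against Benford's
    expected counts; large values suggest possible manipulation."""
    digits = [d for d in map(first_digit, values) if d]
    counts = Counter(digits)
    n = len(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in BENFORD.items())
```

Logarithmically spread data (e.g. line items spanning several orders of magnitude) yields a small statistic, while a suspiciously uniform or clustered ledger yields a large one; the appropriate rejection threshold comes from the chi-square distribution with 8 degrees of freedom.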
First Digit Phenomenon in Number Generation Under Uncertainty: Through the Lens of Benford’s Law
Decision making under uncertainty has been investigated by looking for regularities arising from the application of heuristics (Tversky & Kahneman, 1974). Contemporary society demands that we estimate numbers when making decisions, for instance the value of an item, so regularities in the numbers people generate could help us understand how humans deal with unknown situations. Recent research (e.g., Burns, 2009) suggests that people may spontaneously exhibit a stronger bias towards smaller leading digits (e.g., 1, 2) that approximates Benford's law, a well-established phenomenon of the first digits of naturally occurring datasets; it may therefore also represent a potential regularity in how people produce unknown numbers. The present study investigated the conditions under which the first digit phenomenon might occur under uncertainty by examining the degree of fit to Benford's law across various forms of numerical responses and, more importantly, by testing existing speculations about why people might show such a bias when generating unknown values. The key elements of the designs were the statements of the numerical questions and simple visual displays for estimation. As expected, the first digit phenomenon was stronger when generating non-arbitrary numbers than arbitrary numbers. The critical findings were the extension of Benford's law to the estimation tasks, with a peak at digit 5; the continued failure of the recognition hypothesis as a reliable explanation; and supporting evidence for the Integration Hypothesis, which emphasises the role of integrating multiple pieces of information in the occurrence of the first digit phenomenon in number generation.
Building on and extending the results of previous research, the outcomes of this project can assist in understanding: 1) how numerical responses to unknown questions inform theories of numerical cognition and decision making, and 2) how the pattern of leading digits generated by humans might offer implications for the practice of applying Benford's law in fraud detection.
A forensic acquisition and analysis system for IaaS
Cloud computing is a promising next-generation computing paradigm that offers significant economic benefits to both commercial and public entities, and provides accessibility, simplicity, and portability for its customers. Due to the unique combination of characteristics that cloud computing introduces (including on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service), digital investigations face various technical, legal, and organizational challenges in keeping up with developments in the field. A wide variety of issues need to be resolved in order to perform a proper digital investigation in the cloud environment. This paper examines the challenges in cloud forensics identified in the current research literature, explores the existing proposals and technical solutions addressed in that research, and highlights the open problems that need further effort. The analysis of the literature finds that it would be difficult, if not impossible, to perform an investigation and discovery in the cloud environment without relying on cloud service providers (CSPs); dependence on the CSPs is therefore ranked as the greatest challenge when investigators need to acquire evidence in a timely yet forensically sound manner from cloud systems. Thus, a fully independent model that requires no intervention or cooperation from the cloud provider is proposed. This model provides a different approach to a forensic acquisition and analysis system (FAAS) in an Infrastructure as a Service model. FAAS seeks to provide a richer and more complete set of admissible evidence than what current CSPs provide, with no requirement for CSP involvement or modification to the CSP's underlying architecture.
Exploring the Law of Numbers: Evidence from China's Real Estate
The renowned proverb "numbers do not lie" underscores the reliability and
insight that lie beneath numbers, a concept of undisputed importance,
especially in economics and finance. Despite the success of Benford's Law in
first-digit analysis, its scope is not comprehensive enough to decipher the
laws of numbers more broadly. This paper delves into these laws by taking the
financial statements of China's real estate sector as a representative case,
quantitatively studying not only the first digit but also two further
dimensions of numbers: frequency and length. The research outcomes go beyond
mere reservations about data manipulation and open the door to discussions of
number diversity and the delineation of usage insights. This study carries
both economic significance and the capacity to foster a deeper comprehension
of numerical phenomena.
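The three dimensions studied here (first digit, digit frequency, and number length) can each be profiled from a list of reported figures. A minimal sketch in Python (the function name and normalisation choices are mine, not the paper's):

```python
from collections import Counter

def number_profile(values):
    """Profile a list of reported figures along three dimensions:
    leading digit, overall digit frequency, and number length
    (significant digits, ignoring sign, decimal point, leading zeros)."""
    strings = [str(abs(v)).replace(".", "").lstrip("0") for v in values]
    strings = [s for s in strings if s]  # drop exact zeros
    return {
        "first_digit": Counter(s[0] for s in strings),
        "digit_frequency": Counter(ch for s in strings for ch in s),
        "length": Counter(len(s) for s in strings),
    }
```

Comparing each of the three resulting distributions against a benchmark (Benford for first digits; empirical baselines for frequency and length) is one way to operationalise the multi-dimensional analysis the abstract describes.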
Applying Benford’s law to detect accounting data manipulation in the banking industry
In this paper, we take a glimpse at the dark side of bank accounting statements by using a mathematical law established by Benford in 1938 to detect data manipulation. We shed the spotlight on the healthy, failed, and bailed-out banks in the global financial crisis and test whether a set of balance sheet and income statement variables used by regulators to rate the performance and soundness of banks were manipulated in the years prior to and during the crisis. We find that banks utilise loan loss provisions to manipulate earnings and income upwards throughout the examined periods. Together with loan loss provisions, problem banks resort to downward manipulation of the allowance for loan losses and non-performing loans in order to push earnings upwards. We also provide evidence that manipulation is more prevalent in problem banks, which manage income and earnings to conceal their financial difficulties. Moreover, manipulation strengthens in the crisis period and expands to affect regulatory capital. Overall, banks utilise data manipulation without resorting to eye-catching manipulation strategies that might attract scrutiny from regulators. Benford’s Law appears to be a suitable tool for assessing the quality of accounting information and for discovering irregularities in bank accounting data.