64 research outputs found
Juridical Review of Crimes in Electronic Systems Involving Virtual Accounts (Tinjauan Yuridis Kejahatan di dalam Sistem Elektronik pada Rekening Virtual)
A virtual account is an electronic application that connects to a wide computer network over the internet. Because it is highly vulnerable to the threat of online crime, virtual accounts must provide security for customers. This research aims to identify crimes committed within electronic systems, analyse the application of data protection law, and examine the legal sanctions for perpetrators of account burglary. The research method is normative juridical. The results show that the electronic systems currently in force in Indonesia are Mobile Banking, SMS Banking and Internet Banking. Data protection in banking is regulated in Article 40(1) of Law No. 10 of 1998 concerning Banking, which obliges banks to keep information regarding depositors and their deposits confidential. Although the laws and regulations strictly regulate the protection of customer data, in practice customer personal data is still widely misused by irresponsible parties. Prohibitions for perpetrators of virtual account burglary are regulated in Article 31 Paragraphs 1, 2, 3 and 4 of Law No. 19 of 2016 concerning Information and Electronic Transactions. The conclusion is that data protection in banking is governed by Article 40(1) of Law No. 10 of 1998 concerning Banking, namely that "banks are obliged to keep confidential information regarding depositors and their deposits".
BIO-INSPIRED MOTION PERCEPTION: FROM GANGLION CELLS TO AUTONOMOUS VEHICLES
Animals are remarkable at navigation, even in extreme situations. Through motion perception, animals compute their own movement (egomotion) and detect other objects (prey, predators, obstacles) and their motions in the environment. Analogous to animals, artificial systems such as robots also need to know where they are relative to the scene structure and to segment obstacles in order to avoid collisions. Even though substantial progress has been made in the development of artificial visual systems, they still struggle to achieve robust and generalizable solutions. To this end, I propose a bio-inspired framework that narrows the gap between natural and artificial systems.
The standard approaches in robot motion perception seek to reconstruct a three-dimensional model of the scene and then use this model to estimate egomotion and object segmentation. However, the scene reconstruction process is data-heavy and computationally expensive, and it fails in high-speed and dynamic scenarios. In contrast, biological visual systems excel in these difficult situations by extracting only the minimal information sufficient for motion perception tasks. Throughout this thesis, I derive minimalist/purposive ideas from biological processes and develop mathematical solutions for robot motion perception problems.
In this thesis, I develop a full range of solutions that utilize bio-inspired motion representations and learning approaches for motion perception tasks. In particular, I focus on egomotion estimation and motion segmentation. I make four main contributions: 1. I introduce NFlowNet, a neural network that estimates normal flow (bio-inspired motion filters); normal flow estimation opens a new avenue for solving egomotion in a robust, qualitative framework. 2. Utilizing normal flow, I propose the DiffPoseNet framework, which estimates egomotion by formulating the qualitative constraint in a differentiable optimization layer, allowing end-to-end learning. 3. Further, utilizing a neuromorphic event camera, a retina-inspired vision sensor, I develop 0-MMS, a model-based optimization approach that employs event spikes to segment the scene into multiple moving parts in high-speed, dynamic-lighting scenarios. 4. To improve the precision of event-based motion perception across time, I develop SpikeMS, a novel bio-inspired learning approach that fully capitalizes on the rich temporal information in event spikes.
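As background on what "normal flow" means here: under the brightness-constancy constraint, only the flow component along the local image gradient is directly observable at each pixel, and it can be computed without solving the aperture problem. A minimal NumPy sketch of that computation (my own illustration, with invented names; this is not NFlowNet itself, which learns this quantity with a neural network):

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-8):
    """Per-pixel normal flow: the component of optical flow along the
    image gradient, obtained directly from the brightness-constancy
    constraint Ix*u + Iy*v + It = 0."""
    grad_mag = np.sqrt(Ix**2 + Iy**2) + eps
    speed = -It / grad_mag                  # signed speed along the gradient
    nx, ny = Ix / grad_mag, Iy / grad_mag   # unit gradient direction
    return speed * nx, speed * ny

# Toy example: a horizontal intensity ramp translating rightward.
Ix = np.full((4, 4), 1.0)   # gradient of 1 intensity unit per pixel in x
Iy = np.zeros((4, 4))
It = np.full((4, 4), -2.0)  # brightness change consistent with u = 2 px/frame
u_n, v_n = normal_flow(Ix, Iy, It)
# u_n is approximately 2 everywhere; v_n is 0 (no vertical gradient).
```

Note that only the gradient-aligned component is recovered; any flow perpendicular to the gradient is invisible, which is exactly why the thesis pairs normal flow with qualitative egomotion constraints rather than full optical flow.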
Marketing Analysis - Walesby Forest Outdoor Adventure Activities Centre
This document contains details of the consulting work done to develop the future market strategy for the company Walesby Forest Outdoor Adventure Activity Centre. A thorough investigation of the adventure activity market was undertaken in order to reach any conclusions, and rich literature in the fields of corporate branding, corporate re-branding and market strategy was used to support them. The assignment had some challenges. The main challenge was to continue to build on the existing core customer base, mainly coming from scouts and cubs, and schools, while also looking at ways to expand into the corporate market at the same time. The Walesby Forest brand is currently focussed on its core markets, and hence catering to the corporate market required an in-depth look at the issue of re-branding. This paper examines every aspect of Walesby Forest's portfolio, its competitors and the consumer's interests in order to arrive at the future market strategy.
Some fixed point theorems for pseudo ordered sets
In this paper, it is shown that for an isotone map f on a pseudo ordered set A, the set of all fixed points of f inherits the properties of A, namely, completeness, chain-completeness and weakly chain-completeness, as in the case of posets
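Stated a little more formally (notation mine, following standard usage for pseudo ordered sets, i.e. trellises; the paper's exact hypotheses may differ):

```latex
Let $(A,\trianglelefteq)$ be a pseudo ordered set and $f\colon A\to A$ an
isotone map, i.e.\ $x \trianglelefteq y \implies f(x) \trianglelefteq f(y)$.
Write $\operatorname{Fix}(f)=\{x\in A : f(x)=x\}$, ordered by the
restriction of $\trianglelefteq$. The result asserts:
\[
  A \text{ complete (resp.\ chain-complete, weakly chain-complete)}
  \;\Longrightarrow\;
  \operatorname{Fix}(f) \text{ complete (resp.\ chain-complete, weakly
  chain-complete)},
\]
mirroring, for pseudo ordered sets, the Knaster--Tarski-style results
known for posets.
```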
Examining the Influence of Personality and Multimodal Behavior on Hireability Impressions
While personality traits have traditionally been modeled as behavioral constructs, we take the novel step of positing job hireability as a personality construct. To this end, we examine correlates among personality and hireability measures on the First Impressions Candidate Screening dataset. Modeling hireability as both a discrete and a continuous variable, and the big-five OCEAN personality traits as predictors, we utilize (a) multimodal behavioral cues, and (b) personality trait estimates obtained via these cues, for hireability prediction (HP). For each of the text, audio and visual modalities, HP via (b) is found to be more effective than via (a). Superior results are also achieved when hireability is modeled as a continuous rather than a categorical variable. Interestingly, eye and bodily visual cues perform comparably to facial cues for predicting personality and hireability. Explanatory analyses reveal that multimodal behaviors impact personality and hireability impressions: e.g., Conscientiousness impressions are impacted by the use of positive adjectives (verbal behavior) and eye movements (non-verbal behavior), confirming prior observations.
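The two-stage pipeline in (b), behavioral cues feeding OCEAN trait estimates which in turn predict a continuous hireability score, can be sketched on synthetic data. Every name and number below is hypothetical and purely illustrative of the structure, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): multimodal behavioral cues mapped to OCEAN estimates.
cues = rng.normal(size=(100, 6))        # 100 candidates, 6 synthetic cue features
W_traits = rng.normal(size=(6, 5))      # stand-in cue -> trait mapping
traits = cues @ W_traits                # columns play the role of O, C, E, A, N

# Stage 2: regress a continuous hireability score on the trait estimates.
true_w = np.array([0.2, 0.9, 0.3, 0.1, -0.2])   # invented ground-truth weights
hireability = traits @ true_w                    # noise-free synthetic scores
w_hat, *_ = np.linalg.lstsq(traits, hireability, rcond=None)
# With noise-free data and full-rank traits, w_hat recovers true_w.
```

The point of the sketch is only the composition of the two stages; the paper's actual models operate on learned text, audio and visual representations rather than a linear map.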
Efficient Labelling of Affective Video Datasets via Few-Shot & Multi-Task Contrastive Learning
Whilst deep learning techniques have achieved excellent emotion prediction, they still require large amounts of labelled training data, which are (a) onerous and tedious to compile, and (b) prone to errors and biases. We propose Multi-Task Contrastive Learning for Affect Representation (MT-CLAR) for few-shot affect inference. MT-CLAR combines multi-task learning with a Siamese network trained via contrastive learning to infer from a pair of expressive facial images (a) the (dis)similarity between the facial expressions, and (b) the difference in valence and arousal levels of the two faces. We further extend the image-based MT-CLAR framework to automated video labelling where, given one or a few labelled video frames (termed the support-set), MT-CLAR labels the remainder of the video for valence and arousal. Experiments are performed on the AFEW-VA dataset with multiple support-set configurations; moreover, supervised learning on representations learnt via MT-CLAR is used for valence, arousal and categorical emotion prediction on the AffectNet and AFEW-VA datasets. The results show that valence and arousal predictions via MT-CLAR are comparable to the state-of-the-art (SOTA), and that we significantly outperform SOTA with a support-set approximately 6% the size of the video dataset.
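A Siamese network trained via contrastive learning, as used in MT-CLAR, rests on a pairwise loss that pulls embeddings of similar pairs together and pushes dissimilar pairs at least a margin apart. A minimal NumPy sketch of that classic loss (illustrative only; MT-CLAR's actual architecture, multi-task heads and loss details are in the paper):

```python
import numpy as np

def contrastive_loss(z1, z2, similar, margin=1.0):
    """Pairwise contrastive loss: for a similar pair (similar == 1) the
    loss is the squared embedding distance; for a dissimilar pair it is
    the squared shortfall from the margin, zero once the pair is far
    enough apart."""
    d = np.linalg.norm(z1 - z2, axis=-1)
    return np.where(similar == 1, d**2, np.maximum(0.0, margin - d)**2)

# Toy embeddings for two face pairs (2-D for readability).
z_a = np.array([[0.0, 0.0], [0.0, 0.0]])
z_b = np.array([[0.1, 0.0], [2.0, 0.0]])
labels = np.array([1, 0])   # first pair similar, second dissimilar
loss = contrastive_loss(z_a, z_b, labels)
# First pair: small squared distance (0.01). Second pair: already beyond
# the margin, so its loss is 0.
```

In the few-shot labelling setting, the learned pairwise distances are what let a handful of support-set frames anchor predictions for the rest of the video.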
A Weakly Supervised Approach to Emotion-change Prediction and Improved Mood Inference
Whilst a majority of affective computing research focuses on inferring emotions, examining mood or understanding the mood-emotion interplay has received significantly less attention. Building on prior work, we (a) deduce and incorporate emotion-change (Δ) information for inferring mood, without resorting to annotated labels, and (b) attempt mood prediction for long-duration video clips, in alignment with the characterisation of mood. We generate the emotion-change (Δ) labels via metric learning from a pre-trained Siamese Network, and use these in addition to mood labels for mood classification. Experiments evaluating unimodal (training only using mood labels) vs multimodal (training using mood plus Δ labels) models show that mood prediction benefits from the incorporation of emotion-change information, emphasising the importance of modelling the mood-emotion interplay for effective mood inference. Comment: 9 pages, 3 figures, 6 tables, published in IEEE International Conference on Affective Computing and Intelligent Interaction.
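The weak-labelling step described above, deriving emotion-change labels from a pre-trained network's embedding distances rather than from human annotation, can be sketched as follows. The embeddings and threshold here are stand-ins of my own, not the paper's values or its metric-learning procedure:

```python
import numpy as np

def emotion_change_labels(frame_embeddings, threshold=0.5):
    """Weak emotion-change labels from consecutive-frame embedding
    distances (the embeddings standing in for a pre-trained Siamese
    network's output): 1 where the expression shifts noticeably between
    frames, 0 where it stays stable."""
    d = np.linalg.norm(np.diff(frame_embeddings, axis=0), axis=1)
    return (d > threshold).astype(int)

# Three frames: a tiny drift, then a large expression change.
emb = np.array([[0.0, 0.0],
                [0.05, 0.0],
                [1.0, 0.0]])
labels = emotion_change_labels(emb)   # one label per consecutive frame pair
```

These pseudo-labels can then supplement the mood annotations as the extra training signal in the multimodal setting the abstract describes.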