On Collaborative Predictive Blacklisting
Collaborative predictive blacklisting (CPB) enables forecasting of future attack
sources based on logs and alerts contributed by multiple organizations.
However, research on CPB has focused only on increasing the number of
predicted attacks, without considering the impact on false positives and false
negatives. Moreover, sharing alerts is often hindered by confidentiality,
trust, and liability issues, which motivates the need for privacy-preserving
approaches to the problem. In this paper, we present a
measurement study of state-of-the-art CPB techniques, aiming to shed light on
the actual impact of collaboration. To this end, we reproduce and measure two
systems: a non-privacy-friendly one that uses a trusted coordinating party with
access to all alerts (Soldo et al., 2010) and a peer-to-peer one using
privacy-preserving data sharing (Freudiger et al., 2015). We show that, while
collaboration boosts the number of predicted attacks, it also yields high false
positives, ultimately leading to poor accuracy. This motivates us to present a
hybrid approach, using a semi-trusted central entity, aiming to increase
utility from collaboration while, at the same time, limiting information
disclosure and false positives. This leads to a better trade-off between true
and false positive rates, while also addressing privacy concerns.
Comment: A preliminary version of this paper appears in ACM SIGCOMM's Computer
Communication Review (Volume 48, Issue 5, October 2018). This is the full
version.
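As an illustration of the kind of time-series prediction the measured systems build on, the sketch below scores attack sources with an exponentially weighted moving average (EWMA) over daily alert counts and blacklists the top-k. It is a minimal stand-in, not the paper's method: the function names, the EWMA choice, and the parameters are illustrative assumptions.

```python
from collections import defaultdict

def ewma_scores(daily_logs, alpha=0.9):
    """Score each attack source with an exponentially weighted moving
    average over daily alert counts (a deliberate simplification of the
    time-series models compared in the paper)."""
    scores = defaultdict(float)
    for day in daily_logs:          # oldest day first
        for src in scores:          # decay old evidence
            scores[src] *= alpha
        for src, count in day.items():
            scores[src] += (1 - alpha) * count
    return scores

def predict_blacklist(daily_logs, k=2):
    """Blacklist the k highest-scoring sources for the next time window."""
    scores = ewma_scores(daily_logs)
    return [ip for ip, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

def hit_rate(predicted, next_day_attackers):
    """Fraction of blacklisted IPs that actually attack next -- the flip
    side of this number is the false-positive problem the paper measures."""
    hits = sum(1 for ip in predicted if ip in next_day_attackers)
    return hits / len(predicted) if predicted else 0.0
```

Collaboration in this sketch would amount to merging the `daily_logs` of several organizations before scoring, which is exactly where the paper finds the false-positive rate inflating.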
Controlled Data Sharing for Collaborative Predictive Blacklisting
Although sharing data across organizations is often advocated as a promising
way to enhance cybersecurity, collaborative initiatives are rarely put into
practice owing to confidentiality, trust, and liability challenges. In this
paper, we investigate whether collaborative threat mitigation can be realized
via a controlled data sharing approach, whereby organizations make informed
decisions as to whether or not, and how much, to share. Using appropriate
cryptographic tools, entities can estimate the benefits of collaboration and
agree on what to share in a privacy-preserving way, without having to disclose
their datasets. We focus on collaborative predictive blacklisting, i.e.,
forecasting attack sources based on one's logs and those contributed by other
organizations. We study the impact of different sharing strategies by
experimenting on a real-world dataset of two billion suspicious IP addresses
collected from DShield over two months. We find that controlled data sharing
yields up to 105% accuracy improvement on average, while also reducing the
false positive rate.
Comment: A preliminary version of this paper appears in DIMVA 2015. This is
the full version. arXiv admin note: substantial text overlap with
arXiv:1403.212
Combating Robocalls to Enhance Trust in Converged Telephony
Telephone scams are on the rise, and without effective countermeasures there is no end in sight. The number of scam/spam calls people receive is increasing every day. YouMail estimates that June 2021 saw 4.4 billion robocalls in the United States, and the Federal Trade Commission (FTC) phone complaint portal receives millions of complaints about such fraudulent and unwanted calls each year. Voice scams have become such a serious problem that people often no longer pick up calls from unknown callers. In several widely reported scams, the telephony channel is either used directly to reach potential victims or used to monetize scams that are advertised online, as in the case of tech support scams. The vision of this research is to bring trust back to the telephony channel. We believe this can be done by stopping unwanted and fraudulent calls and by leveraging smartphones to offer a novel interaction model that enhances trust in voice interactions. Thus, our research explores defenses against unwanted calls, including blacklisting known fraudulent callers, detecting robocalls in the presence of caller ID spoofing, and proposing a novel virtual assistant that can stop more sophisticated robocalls without user intervention.
We first explore phone blacklists to stop unwanted calls based on the caller ID received when a call arrives. We study how to automatically build blacklists from multiple data sources and evaluate the effectiveness of such blacklists in stopping current robocalls. We also use the insights gained from this process to improve detection of more sophisticated robocalls and to harden our defense system against malicious callers who use techniques such as caller ID spoofing.
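A minimal sketch of the blacklist-aggregation idea, assuming hypothetical feed data (the actual data sources and thresholds are studied in the thesis): numbers reported by multiple independent sources are safer to block than those appearing in a single noisy feed.

```python
from collections import Counter

def build_blacklist(sources, min_reports=2):
    """Aggregate caller IDs across multiple data feeds (e.g., complaint
    portals, honeypot logs -- feed contents here are illustrative) and
    blacklist numbers reported by at least `min_reports` independent
    feeds, reducing the false positives a single noisy feed would cause."""
    votes = Counter()
    for feed in sources:
        for number in set(feed):   # at most one vote per feed
            votes[number] += 1
    return {n for n, v in votes.items() if v >= min_reports}

def is_blocked(caller_id, blacklist):
    """Caller-ID lookup at call time. Note: trivially evaded by caller ID
    spoofing, which motivates the virtual-assistant defenses below."""
    return caller_id in blacklist
```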
To address the threat model where the caller ID is spoofed, we introduce the notion of a virtual assistant. To this end, we developed a smartphone-based app named RobocallGuard, which can pick up calls from unknown callers on behalf of the user and detect and filter out unwanted calls. We conduct a user study showing that users are comfortable with a virtual assistant stopping unwanted calls on their behalf; moreover, most users reported that such a virtual assistant is beneficial to them. Finally, we expand our threat model and introduce RobocallGuardPlus, which can effectively block targeted robocalls. RobocallGuardPlus also picks up calls from unknown callers on behalf of the callee and engages in a natural conversation with the caller, using a combination of NLP-based machine learning models to determine whether the caller is a human or a robocaller. To the best of our knowledge, we are the first to develop a defense system that interacts with the caller and detects robocalls even when robocallers use caller ID spoofing and voice activity detection to bypass the defense mechanism. Our security analysis shows that such a system is capable of stopping the more sophisticated robocallers that might emerge in the near future. With these contributions, we believe we can bring trust back to the telephony channel and provide a better call experience for everyone.
Ph.D.
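The conversational screening step can be caricatured in a few lines. The keyword rule below is a toy stand-in for RobocallGuardPlus's NLP models (the real system uses learned models, not hand-written rules, and all names here are illustrative): a prerecorded robocall typically talks over the assistant's challenge question instead of answering it.

```python
def assistant_screen(reply_to_challenge):
    """Toy classifier for the virtual-assistant idea: the assistant asks
    the caller to state the purpose of the call, then checks whether the
    transcribed reply is actually responsive. A keyword rule stands in
    for the NLP-based models described in the thesis."""
    reply = reply_to_challenge.lower()
    responsive_markers = ("calling about", "calling to", "regarding", "this is")
    return "human" if any(m in reply for m in responsive_markers) else "robocall"
```

A prerecorded pitch that ignores the question would be classified as "robocall" and dropped before the phone ever rings, which is the interaction model the thesis evaluates in its user study.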