I am a Ph.D. student at the University of Wisconsin-Madison majoring in Computer Sciences. My research interests lie at the intersection of Machine Learning and Security. I work with Prof. Somesh Jha and Prof. Kassem Fawaz at MADS&P, and with Prof. Earlence Fernandes.
I completed my undergraduate degree at the Indian Institute of Technology Delhi, majoring in Electrical Engineering with a minor in Computer Science.
Ph.D. in Computer Sciences, 2019 - Present
University of Wisconsin-Madison
B.Tech. in Electrical Engineering, 2014 - 2018
Indian Institute of Technology Delhi
Content scanning systems use perceptual hashing algorithms to scan user content for illegal material. Client-side deployment of such scanning has been proposed but is criticized for its potential for misuse. Our research experimentally characterizes how an attacker can repurpose these systems for physical surveillance, and finds a tension: the more robustly a system detects illegal material, the more useful it becomes as a surveillance tool.
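For intuition, here is a minimal sketch of the matching step such scanners perform. Everything below (the toy average-hash function, the `MATCH_THRESHOLD` value, the blocklist format) is an illustrative assumption, not the algorithms analyzed in the paper:

```python
import numpy as np
from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit hash
MATCH_THRESHOLD = 10   # illustrative Hamming-distance cutoff

def average_hash(path: str) -> np.ndarray:
    """Toy perceptual hash: downscale, grayscale, threshold at the mean.
    Small edits flip few bits, so near-duplicates stay close in Hamming
    distance -- the robustness property that scanning relies on."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = np.asarray(img, dtype=np.float64)
    return (pixels > pixels.mean()).flatten()

def flagged(path: str, blocklist: list[np.ndarray]) -> bool:
    """Report a match if the image's hash lands near any blocklisted hash."""
    h = average_hash(path)
    return any(int(np.count_nonzero(h != bad)) <= MATCH_THRESHOLD
               for bad in blocklist)
```

The surveillance risk follows from this design: whoever controls the blocklist can add hashes of arbitrary targets, and the more robust the hash, the more reliably those targets are matched.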
We propose a systems-oriented defense against voice-based confusion attacks, which exploit design flaws in commercial voice assistants such as Amazon Alexa and Google Home. Our defense, SkillFence, uses information from counterpart apps and websites to infer a user's intent and ensure that only the intended skills execute in response to voice commands. Experiments with real user data as well as synthetic and organic speech show that SkillFence secures 90.83% of skills with a false acceptance rate of 19.83%.
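As a rough sketch of the gating idea, the snippet below keeps only skills whose developer has a counterpart the user already trusts; the `Skill` record layout, the `developer_domain` field, and the exact-string matching are hypothetical stand-ins for the signals SkillFence actually extracts:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    skill_id: str
    invocation_name: str
    developer_domain: str  # website controlled by the skill's developer

def permitted_skills(command: str, candidates: list[Skill],
                     trusted_domains: set[str]) -> list[Skill]:
    """Keep spoken-name matches whose developer also owns a counterpart
    (app or website) the user already uses; block everything else, so a
    confusingly named squatter skill cannot hijack the command."""
    spoken = [s for s in candidates if s.invocation_name in command.lower()]
    return [s for s in spoken if s.developer_domain in trusted_domains]
```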
Disjoint Deepfake Detection (D3) is an adversarially robust deepfake detector that combats the imperceptible adversarial perturbations attackers add to deepfakes to evade detection. D3 uses an ensemble of models trained over disjoint subsets of the frequency spectrum, and we show that this construction reduces the dimensionality of the input subspace in which adversarial deepfakes lie. Empirically, D3 significantly outperforms existing ensemble defenses against both white-box and black-box attacks.
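Here is a minimal sketch of the disjoint-spectrum idea, assuming a random partition of frequency components (the paper's actual partitioning strategy and model details may differ):

```python
import numpy as np

def frequency_masks(h: int, w: int, n_models: int, seed: int = 0) -> list[np.ndarray]:
    """Randomly partition the 2-D frequency grid into disjoint subsets,
    producing one boolean mask per ensemble member."""
    rng = np.random.default_rng(seed)
    assignment = rng.integers(0, n_models, size=(h, w))
    return [assignment == k for k in range(n_models)]

def restrict_to_subset(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out every frequency component outside one member's subset, so
    that member classifies using only its slice of the spectrum."""
    spectrum = np.fft.fft2(image, axes=(0, 1))
    spectrum = spectrum * (mask if image.ndim == 2 else mask[..., None])
    return np.real(np.fft.ifft2(spectrum, axes=(0, 1)))
```

Intuitively, a perturbation confined to one member's subset is invisible to the other members, which is the intuition behind the reduced-dimensionality result.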
We propose a new method for generating physical adversarial examples against camera-based computer vision that are invisible to the human eye. Rather than modifying the victim object with visible artifacts, our method modifies the light that illuminates the object: an attacker emits a modulated light signal that adversarially illuminates a scene and causes targeted misclassifications in a state-of-the-art ImageNet deep learning model. We demonstrate the attack through a range of simulations and physical experiments with LEDs, achieving targeted attack success rates of up to 84%.
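The core optimization can be sketched in a few lines. The function below (hypothetical name, additive per-channel lighting model) optimizes a three-parameter light shift instead of per-pixel noise; the real attack instead drives a time-modulated LED and the camera's capture process:

```python
import torch
import torch.nn.functional as F

def adversarial_illumination(model, image, target_class: int,
                             steps: int = 200, lr: float = 0.05):
    """Optimize a low-dimensional lighting change -- one intensity shift per
    color channel -- so the re-lit scene is classified as `target_class`.
    `image` is a (3, H, W) tensor in [0, 1]; `model` maps a batch to logits."""
    delta = torch.zeros(3, 1, 1, requires_grad=True)   # per-channel light shift
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        lit = torch.clamp(image + delta, 0.0, 1.0)     # re-illuminated scene
        loss = F.cross_entropy(model(lit.unsqueeze(0)), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return delta.detach()
```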