Projects
A tool for detecting AI/ML model weaknesses through bill-of-materials-based analysis. I authored the original submission to the U.S. Army’s xTech Scalable AI competition, where it ranked 4th and was awarded a $2M SBIR contract.
Co-authored SISA (Sharded, Isolated, Sliced, and Aggregated training), a technique that speeds up machine unlearning by up to 4.63× while providing exact unlearning: a deleted point provably has no influence on the resulting model. Published at the IEEE Symposium on Security and Privacy (Oakland) 2021.
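The core mechanism behind SISA can be sketched in a few lines: partition the training set into disjoint shards, train one constituent model per shard, aggregate by majority vote, and serve a deletion request by retraining only the shard that held the deleted point. The sketch below is illustrative, not the paper's implementation; the per-class-centroid "model" and the `SISAEnsemble` name are stand-ins chosen to keep the example self-contained.

```python
import numpy as np

class SISAEnsemble:
    """Minimal sketch of the SISA idea: training data is split into
    disjoint shards, one constituent model is trained per shard, and
    predictions are aggregated by majority vote. Honoring a deletion
    request only requires retraining the single shard that held the
    deleted point."""

    def __init__(self, n_shards):
        self.n_shards = n_shards
        self.models = [None] * n_shards

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        # Round-robin shard assignment; SISA permits any disjoint split.
        self.assignment = np.arange(len(self.X)) % self.n_shards
        for s in range(self.n_shards):
            self._train_shard(s)

    def _train_shard(self, s):
        # Toy constituent model: per-class centroids over this shard only.
        mask = self.assignment == s
        Xs, ys = self.X[mask], self.y[mask]
        self.models[s] = {c: Xs[ys == c].mean(axis=0) for c in np.unique(ys)}

    def predict(self, x):
        # Each shard votes for its nearest-centroid class; majority wins.
        votes = [min(m, key=lambda c: np.linalg.norm(x - m[c]))
                 for m in self.models if m]
        vals, counts = np.unique(votes, return_counts=True)
        return vals[np.argmax(counts)]

    def unlearn(self, i):
        # Drop point i and retrain ONLY its shard; the other shards
        # never saw the point, so their models already exclude it.
        s = self.assignment[i]
        keep = np.arange(len(self.X)) != i
        self.X, self.y = self.X[keep], self.y[keep]
        self.assignment = self.assignment[keep]
        self._train_shard(s)
```

Because unlearning touches one shard instead of the full dataset, retraining cost drops roughly in proportion to the number of shards, which is where the measured speed-up comes from.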
Maintainer of CleverHans, a Python library providing reference implementations of adversarial attacks and defenses for machine learning models. Widely used in ML security research.
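The simplest attack in the adversarial-examples family CleverHans covers is the Fast Gradient Sign Method, and its gist fits in a framework-free sketch. Below is a minimal NumPy illustration against a logistic-regression victim; the weights and input are made-up values, and this is an illustration of the attack, not the library's own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: push the input eps along the sign of
    the loss gradient, an L-infinity-bounded step that increases the
    cross-entropy loss as fast as possible to first order."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w             # d(cross-entropy) / dx
    return x + eps * np.sign(grad_x)

# Illustrative victim weights and a correctly classified input.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1.0
x_adv = fgsm(x, y, w, b, eps=0.5)
```

The perturbed input `x_adv` stays within an L-infinity ball of radius `eps` around `x` yet flips the model's prediction; in the library the same idea is applied to arbitrary differentiable models via framework autodiff rather than a hand-written gradient.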
Contributed to BastionLab, an open-source framework for confidential data science. It lets data owners make datasets available for analysis without exposing the raw data, using trusted execution environments.
Contributed to BlindBox, a solution for deploying AI models inside confidential enclaves so that neither the model provider nor the infrastructure operator can access user data.
Contributed to AICert, a tool for certifying the provenance and integrity of AI models using hardware-based attestation from confidential computing environments.