News
2025
October 2025
🎉 Two exciting papers posted to arXiv this month!
Frequency-Aware Model Parameter Explorer - Introducing a novel attribution method for improving explainability in AI models. This work represents a significant step forward in understanding how neural networks process information through frequency-domain analysis. Read the paper →
Survival at Any Cost? - Our paper explores critical questions about LLM decision-making under pressure and moral trade-offs. Read the paper →
January 2025
🚀 Started as Research Intern at ZEISS Lab × Medical University of Vienna, working remotely with an international team on frequency-based explainability methods for AI systems.
2024
September 2024
📊 Reached 40+ citations on Google Scholar with h-index of 4. Grateful for the research community’s engagement with our work!
Stay tuned for more updates on AI safety, explainability, and alignment research!