“Art is long, and Time is fleeting.” — Longfellow
I’m Alireza Mohamadi, an independent researcher who recently completed a B.Sc. in CE. Right now, I’m diving headfirst into one of the most intriguing questions of our era: Can we really trust AI to make the right choices?
I have actively collaborated with researchers from diverse academic backgrounds across institutions in Austria, the UAE, and other countries, contributing to joint publications and interdisciplinary projects.
I’m currently obsessed with AI safety and AI alignment. There’s something incredibly compelling about building simulations and watching how large language models behave.
My latest work involves creating survival scenario simulations (like DECIDE-SIM) where LLM agents face resource scarcity and moral trade-offs. It’s like a psychology lab, but for AI. Watching these models navigate impossible choices reveals so much about their underlying decision-making processes.
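To give a flavor of what such a simulation looks like, here is a minimal toy sketch of a survival-scenario loop. Everything in it — the function names, the energy/pool mechanics, the heuristic policy standing in for an LLM call — is an illustrative assumption, not the actual DECIDE-SIM implementation.

```python
import random

# Toy sketch of a survival-scenario loop (illustrative only; not the
# actual DECIDE-SIM benchmark). Agents repeatedly choose between
# self-preservation ("take" from a shared pool) and cooperation ("share").

def agent_policy(energy, shared_pool, rng):
    """Stand-in for an LLM call: decide 'take' or 'share' given state.

    A real run would prompt an LLM agent here; this toy heuristic
    prioritizes self-preservation when energy runs low.
    """
    if energy < 3:
        return "take"
    return "share" if rng.random() < 0.5 else "take"

def run_episode(n_agents=4, steps=10, seed=0):
    rng = random.Random(seed)
    energy = [5] * n_agents      # each agent starts with 5 energy
    pool = 10                    # shared, scarce resource pool
    log = []
    for t in range(steps):
        for i in range(n_agents):
            if energy[i] <= 0:
                continue         # agent has "died"; it no longer acts
            action = agent_policy(energy[i], pool, rng)
            if action == "take" and pool > 0:
                pool -= 1
                energy[i] += 1   # self-preservation at the pool's expense
            elif action == "share":
                pool += 1
                energy[i] -= 1   # cooperation at a personal cost
            energy[i] -= 1       # metabolic cost every step
            log.append((t, i, action))
    survivors = sum(e > 0 for e in energy)
    return survivors, pool, log

if __name__ == "__main__":
    survivors, pool, log = run_episode()
    print(f"survivors={survivors}, remaining pool={pool}")
```

Swapping the heuristic for real LLM calls is what turns this from a toy into a behavioral experiment: the conditions are fixed, and the interesting data is which trade-offs the model actually makes.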
Current Adventures 🚀
Research Intern @ ZEISS Lab × Medical University of Vienna (Jan 2025 - Present)
Working remotely with an international team on frequency-based explainability methods. Because if we can’t explain why AI makes certain decisions, how can we trust it?
Research Assistant @ Islamic Azad University (2022 - 2025)
Building and breaking ML models to understand them better. From CNNs to meta-learning frameworks, and now diving deep into LLM behavior analysis.
What I Actually Work On 🔬
- AI Safety & Alignment: Making sure AI systems do what we actually want them to do
- Explainable AI: Cracking open the black box to see what’s really happening inside
- AI for Science: Applying ML to advance photonic computing and next-gen technology
The Fun Part 🎮
The best part of my research? Creating simulations and watching AI agents interact. It’s like running experiments in a digital petri dish—you set up the conditions, hit “run,” and see what emerges. Some results confirm your hypotheses. Others completely surprise you. Both are equally exciting.

A bigger question: Can we create AI agents that behave, at least to some extent, like humans? And if we get there, what would happen? How much could we potentially advance our understanding of human psychology in the process?
Published Work 📚
I’ve published 12 papers on topics ranging from AI safety to AI for science. You can find them all on my Google Scholar.
Recent News 📰
October 2025 - Two new papers posted on arXiv:
- “Frequency-Aware Model Parameter Explorer: A new attribution method for improving explainability” Read it here
- “Survival at Any Cost? LLMs and the Choice Between Self-Preservation and Human Harm” Read it here
Hobbies
In my free time, I enjoy playing Warzone — the thrill of high-stakes survival modes has made it one of my favorite pastimes. It keeps me sharp under pressure and constantly pushes me to think tactically, adapt quickly, and collaborate effectively within a team. I also love going for walks and working out, which helps me stay active, clear my mind, and recharge for new challenges.
“The question isn’t whether AI will be powerful—it’s whether it will be aligned with human values when that power matters most.”