Research Interests

Machine Learning Hardware Acceleration

My research focuses on designing high-performance, energy-efficient accelerators for deep learning workloads. I explore emerging technologies, such as silicon photonics, to build novel architectures for the dense linear-algebra kernels, convolution and matrix multiplication, that dominate models like Convolutional Neural Networks (CNNs). By leveraging optical computing paradigms, my work aims to overcome the bandwidth and energy bottlenecks of traditional electronic hardware.
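To illustrate the kind of kernel this targets: photonic matrix units natively compute matrix-vector products, so a CNN convolution is typically lowered to a matrix multiply first (the standard im2col transformation). The toy single-channel example below is a hedged sketch of that lowering, not a description of any specific accelerator.

```python
# Sketch: lowering a 2D convolution (as used in CNNs) to a matrix
# multiply, the operation a photonic matrix unit would execute.
# Toy single-channel, stride-1, "valid" example for illustration only.

def im2col(img, k):
    """Flatten every k x k patch of img into one row of a matrix."""
    H, W = len(img), len(img[0])
    rows = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            rows.append([img[i + di][j + dj]
                         for di in range(k) for dj in range(k)])
    return rows

def conv2d_as_matmul(img, kernel):
    """Convolution = (patch matrix) x (flattened kernel vector)."""
    k = len(kernel)
    flat = [kernel[di][dj] for di in range(k) for dj in range(k)]
    return [sum(r * w for r, w in zip(row, flat)) for row in im2col(img, k)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]
out = conv2d_as_matmul(img, kernel)  # flattened 2x2 output map
```

Once expressed this way, each output pixel is a single inner product, which is exactly the primitive an optical mesh evaluates at high bandwidth.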

Opto-Electronic Hybrid Circuits

I am interested in the co-design and integration of optical and electronic components to build next-generation computing systems. My work involves developing comprehensive co-simulation frameworks and prototyping platforms that bridge the gap between photonic device physics and system-level architecture. This enables a holistic approach to designing hybrid systems that harness the high bandwidth of photonics and the complex logic of electronics.

Post-Quantum Cryptography Acceleration

As the demand for secure computation grows, my research addresses the significant computational challenges of post-quantum cryptography. I focus on developing specialized hardware accelerators, particularly photonic-based systems, to efficiently execute the polynomial arithmetic, such as the Number Theoretic Transform (NTT), at the core of emerging lattice-based cryptographic standards.
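For readers unfamiliar with the NTT: it is a discrete Fourier transform over a finite field, used to turn polynomial multiplication into cheap pointwise products. The sketch below uses toy parameters (p = 17, n = 8, root = 9), chosen only so the example is small; they are not the parameters of any real cryptographic scheme.

```python
# Minimal sketch of the Number Theoretic Transform (NTT) and its
# inverse over Z_p. Naive O(n^2) form for clarity; real accelerators
# implement the O(n log n) butterfly structure in hardware.
# Toy parameters: p = 17, n = 8, root = 9 (a primitive 8th root of
# unity mod 17). Illustrative only, not a standardized parameter set.

def ntt(a, root, p):
    """Forward NTT: X[k] = sum_j a[j] * root^(j*k) mod p."""
    n = len(a)
    return [sum(a[j] * pow(root, j * k, p) for j in range(n)) % p
            for k in range(n)]

def intt(A, root, p):
    """Inverse NTT: uses the inverse root and scales by n^{-1} mod p."""
    n = len(A)
    inv_root = pow(root, -1, p)   # modular inverse (Python 3.8+)
    inv_n = pow(n, -1, p)
    return [inv_n * sum(A[k] * pow(inv_root, j * k, p) for k in range(n)) % p
            for j in range(n)]

p, root = 17, 9
x = [3, 1, 4, 1, 5, 9, 2, 6]
X = ntt(x, root, p)               # transform domain
```

The round trip `intt(ntt(x)) == x` is what a hardware NTT unit must preserve; the modular multiply-accumulate inside each sum is the operation photonic arithmetic aims to speed up.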

Next-Generation AI Hardware Architecture

My research explores novel hardware architectures to meet the demands of future AI applications. This includes investigating the use of silicon photonic chiplets to create scalable, high-bandwidth interconnects for multi-GPU systems and other large-scale computing environments. The goal is to design architectures that can handle the ever-increasing data and model sizes in AI.

View My Projects