Yale University
abhishek at cs.yale.edu
I work on making computer systems more efficient—both by increasing execution speed and by improving programmer productivity—to better support advanced AI and neurotechnologies. While my core expertise is in computer architecture, I also design operating systems, compilers, and chips to achieve these goals.
My group identified the growing overhead of memory address translation and developed optimizations that are now widely adopted. Companies and open-source projects including AMD, NVIDIA, RISC-V, Meta, and Linux have integrated our ideas on coalesced TLBs and translation contiguity into billions of microprocessors and operating systems. This work is detailed in my book on virtual memory and in the appendix of the classic quantitative computer architecture textbook.
We also build full computer systems for brain-computer interfaces, aiming to transform neurological treatments and advance understanding of brain function. Through our HALO and SCALO systems, we are taping out low-power, flexible chips for brain interfaces. See my ASPLOS ’23 keynote for more, and look out for our upcoming CCC visioning workshop report on brain interfaces.
I was honored with the 2023 ACM SIGARCH Maurice Wilkes Award "for contributions to memory address translation used in widely available commercial microprocessors and operating systems." My research has earned six IEEE Micro Top Picks, two honorable mentions, a Best Paper Award at ISCA '23, a Distinguished Paper Award at ASPLOS '23, and a visiting CV Starr Fellowship at the Princeton Neuroscience Institute. On the teaching front, I received Yale College's 2025 Dylan Hixon '88 Prize for excellence in instruction across the natural sciences, and Yale Engineering's 2022 Ackerman Award for instruction and mentoring.
Appendix L, "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson.