Yale University
abhishek at cs.yale.edu
I am a computer architect. I also study operating systems, compilers, and chips to validate hypotheses on architectural choices for systems ranging from data center servers to those used in neural engineering.
My group has worked on memory address translation for several years. AMD has shipped an estimated one billion Zen CPU cores using coalesced TLBs. NVIDIA has shipped an estimated fifty million GPUs with TLB optimizations for extreme translation contiguity, starting with the Ampere architecture. An estimated two billion Linux installations use our code to migrate 2MB pages, starting with the 4.14 kernel. These results have, in turn, influenced RISC-V's NAPOT translation contiguity feature and Meta's page placement algorithms for tiered memory systems. All this, and more, is summarized in my book and in an appendix to the classic Hennessy & Patterson textbook.
We are also building computer systems that help treat neurological disorders and shed light on brain function. In our HALO project, we are taping out low power and flexible chips for neural interfaces. In our SCALO project, we are building a neural interface that decodes neural signals from multiple brain sites. Check out my ASPLOS '23 keynote to learn more about our work.
I am the recipient of the 2023 ACM SIGARCH Maurice Wilkes Award "for contributions to memory address translation used in widely available commercial microprocessors and operating systems". My research has also been recognized with four Top Picks selections and two honorable mentions, a Best Paper Award at ISCA '23, a Distinguished Paper Award at ASPLOS '23, the Chancellor's Award for Faculty Excellence in Research at Rutgers, a visiting CV Starr Fellowship at Princeton's Neuroscience Institute, and more. My teaching and mentoring have been recognized with the Yale SEAS Ackerman Award.
Appendix L in "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson.