Yale University
abhishek at cs.yale.edu
I build computer architectures, operating systems, compilers, and chips for emerging data center servers and brain-computer interfaces.
My group has led the way in calling attention to the rising overheads of memory address translation and has pioneered optimizations to mitigate them. AMD has shipped over a billion Zen CPU cores using coalesced TLBs. NVIDIA has shipped tens of millions of GPUs with TLB support for translation contiguity. Billions of devices running Linux integrate our large page migration code and support folios, motivated by our translation contiguity work. We have also influenced page table formats for naturally aligned power-of-two contiguity, supported in all RISC-V cores. Finally, our work on memory tiering has influenced the deployment of hundreds of thousands of Meta's servers. This work, and more, is summarized in my book on virtual memory and in my appendix to the classic Hennessy & Patterson textbook.
My group is also leading the charge in making brain-computer interfaces full-fledged computers with the processing horsepower to effectively treat neurological disorders, shed light on brain function, and augment human capability. Through our HALO and SCALO systems, we are taping out low-power, flexible chips for brain interfaces. Check out my ASPLOS '23 keynote to learn more.
I received the 2023 ACM SIGARCH Maurice Wilkes Award "for contributions to memory address translation used in widely available commercial microprocessors and operating systems". My research has been recognized with six Top Picks selections and two honorable mentions, a Best Paper Award at ISCA '23, a Distinguished Paper Award at ASPLOS '23, a visiting C.V. Starr Fellowship at the Princeton Neuroscience Institute, and more. My teaching and mentoring have been recognized with the Yale SEAS Ackerman Award.
Appendix L in "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson