Research Interests

My research focuses on AI chips and systems, along the following directions:

I maintain an AI chip-related paper collection, Neural Networks on Silicon, which helps students learn the history and state of the art of AI chip and system research.

Computing-in-Memory Architecture for Emerging AI Applications

[Apr. 2024] ISSCC’24 Insights for Emerging AI Computing.

  • I was invited by JOS to review the research highlights on machine learning accelerators at ISSCC’24. We observed four research trends, toward efficient generative AI (ML Chips for Generative AI, and CIM Innovation from Circuits to Systems) and toward beyond-AI computing (DSA for Embedded Vision Processors, and DSA for Solver Accelerators). We believe these trends will drive further AI software and hardware innovation from academia and industry in the near future.
  • Towards Efficient Generative AI and Beyond-AI Computing: New Trends on ISSCC 2024 Machine Learning Accelerators (JOS’24, Invited Paper)

[Feb. 2023] Scaling-out CIM for Large-scale AI and Beyond-AI Applications.

[Feb. 2022] Reconfigurable Digital Computing-in-Memory (CIM) AI Chip.

Reconfigurable AI Chip Architecture

[Aug. 2020] Evolver, Evolvable AI Chip.

[Jun. 2018] RANA, Software-Hardware Co-design for AI Chip Memory Optimization.

[Apr. 2017] Thinker and DNA, Reconfigurable AI Chip.

[Oct. 2014] RNA, Reconfigurable Architecture for Neural Approximation.

Agile Development for AI Chips

[Jul. 2023] AutoDCIM, Automated Digital CIM Macro Compiler.

  • Working with ACCESS, I developed AutoDCIM, the first automated digital CIM (DCIM) macro compiler. AutoDCIM takes user specifications as input and generates a DCIM macro architecture with an optimized layout (see the illustrative sketch after this list). With growing interest in DCIM, AutoDCIM can play an important role in agile DCIM implementation and in building an ecosystem for DCIM-based AI computing.
  • AutoDCIM: An Automated Digital CIM Compiler (DAC’23)
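
To make the spec-to-macro idea concrete, here is a minimal Python sketch of what a spec-driven DCIM macro compilation flow could look like. It is purely illustrative: the names (DCIMSpec, compile_macro) and spec fields (rows, cols, weight_bits, input_bits, tech_node_nm) are hypothetical assumptions for this sketch and do not reflect AutoDCIM's actual interface or output.

```python
# Hypothetical sketch of a spec-driven DCIM macro compilation flow.
# None of these names come from AutoDCIM; they only illustrate the
# "user spec in, macro description out" idea described above.
from dataclasses import dataclass


@dataclass
class DCIMSpec:
    """User-facing specification for a digital CIM macro (illustrative fields)."""
    rows: int          # number of weight rows in the CIM array
    cols: int          # number of output columns in the CIM array
    weight_bits: int   # weight precision in bits
    input_bits: int    # activation precision in bits
    tech_node_nm: int  # target process node, e.g. 28


def compile_macro(spec: DCIMSpec) -> dict:
    """Map a spec to a coarse macro summary; a real compiler would instead
    emit a netlist and an optimized layout."""
    adder_tree_depth = max(1, (spec.cols - 1).bit_length())  # ~log2 reduction depth
    return {
        "array": f"{spec.rows}x{spec.cols}",
        "precision": f"W{spec.weight_bits}A{spec.input_bits}",
        "adder_tree_depth": adder_tree_depth,
        "peak_macs_per_cycle": spec.rows * spec.cols,
        "tech_node": f"{spec.tech_node_nm}nm",
    }


if __name__ == "__main__":
    spec = DCIMSpec(rows=64, cols=64, weight_bits=8, input_bits=8, tech_node_nm=28)
    print(compile_macro(spec))
```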