The advancements and overwhelming success of machine learning have profoundly affected the future of computer architecture. Not only is learning on big data the leading application driver for future architectures, but machine learning techniques can also be used to improve hardware efficiency across a wide variety of application domains.
This course will explore, from a computer architecture perspective, the principles of hardware/software codesign for machine learning. One thrust of the course will delve into accelerator, CPU, and GPU enhancements for ML algorithms, including parallelization techniques. The other thrust of the course will focus on how machine learning can be used to optimize conventional architectures by dynamically learning and adapting to program behavior.
Not really: what matters most here is the computation behind machine learning and how it is exploited in hardware. That said, I welcome ML experts, and even projects that focus on the algorithmic side of ML (provided there is some relationship to hardware or hardware support).
It is recommended that you have taken a course on computer architecture, such as CS151b or CS251a/b. Expected background includes basic knowledge of simple hardware pipelines (i.e., how does an in-order processor work?).
That said, we will spend the first two weeks or so going in depth on background and review to build a foundation for more advanced architecture concepts.