
Jingzhao Zhang (张景昭)

Assistant Professor

Optimization, machine learning theory, reinforcement learning, dynamical systems, anomaly detection

CV

Short Bio

Assistant Professor @ Tsinghua, IIIS

Jointly Affiliated as PI @ Shanghai Qizhi Institute

Short bio:

Jingzhao Zhang is an assistant professor at the Institute for Interdisciplinary Information Sciences (IIIS), Tsinghua University. He received his PhD in computer science from MIT, advised by Prof. Suvrit Sra and Prof. Ali Jadbabaie, and his undergraduate degree from UC Berkeley, advised by Prof. Laura Waller. He has received a Berkeley graduate fellowship, the MIT Lim fellowship, the IIIS young scholar fellowship, and MIT's best AI & Decision Making master's thesis and PhD thesis awards. His research covers optimization algorithms, neural network training, algorithmic complexity analysis, machine learning theory, and applications of artificial intelligence.

Jingzhao Zhang is an assistant professor at Tsinghua, IIIS. He graduated in 2022 from the MIT EECS PhD program under the supervision of Prof. Ali Jadbabaie and Prof. Suvrit Sra. His research has focused on providing theoretical analyses of practical large-scale algorithms. He now aims to propose theories that are simple and can predict experimental observations. Jingzhao Zhang is also interested in machine learning applications, especially those involving dynamical system formulations. He received the Ernst A. Guillemin SM Thesis Award and the George M. Sprowls PhD Thesis Award.

Live simply to clarify your aspirations; remain tranquil to reach far. (淡泊明志 宁静致远)


What's new

§Fall 2023 optimization class materials are now available online.

§Please check our ICLR2024 workshop on Bridging the Gap between Theory and Practice for Learning.

§Uploaded the research project on the two-phase scaling-law paper to the research section (Aug 2023).

§If you want to join as an intern, please prepare a 15-minute presentation on a recent DL / ML / AI paper and then send me an email.

§If you are interested in joining as a PhD student, please refer to my post here.

Research interests

I am interested in theoretical explanations of practical optimization algorithms.

I am working on developing faster training algorithms.

I enjoy applying optimization algorithms to real-world problems.

Our group

PhD students:

Jingwei Li

Lesi Chen

Bei Luo

Xinran Gu

Hongyi Zhou

Undergraduate students:

Huaqing Zhang

Jiazheng Li

Hong Lu

Alumni:

Peiyuan Zhang (PhD at Yale)

Yusong Zhu (PhD at UT Austin)

Kaiyue Wen (PhD at Stanford)

Research Projects

For a complete list, please refer to my Google Scholar page.


2024: Statistical learning in LLMs.

A presentation on several recent works.


2023: Two phases of scaling laws for kNN classifiers.

A short presentation on the arXiv manuscript.
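
As a toy illustration of the kind of curve this project studies, here is a minimal Python sketch that traces kNN test error as the training set grows; the two-Gaussian data model, the dimension, and the k = sqrt(n) rule are illustrative assumptions, not the setup analyzed in the manuscript.

    # Minimal sketch: trace kNN test error as the training set grows.
    # Data model, dimension, and k = sqrt(n) are illustrative assumptions.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def sample(n, d=5, shift=1.0):
        """Draw n points from a balanced two-Gaussian mixture in d dimensions."""
        y = rng.integers(0, 2, size=n)
        x = rng.normal(size=(n, d)) + shift * y[:, None]
        return x, y

    x_test, y_test = sample(10_000)
    for n in [100, 1_000, 10_000, 30_000]:
        x_train, y_train = sample(n)
        clf = KNeighborsClassifier(n_neighbors=int(np.sqrt(n))).fit(x_train, y_train)
        print(f"n = {n:>6d}   test error = {1.0 - clf.score(x_test, y_test):.3f}")

Plotting the printed errors against n on a log-log scale is one simple way to see how the error curve bends as the sample size grows.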


2022: On the nonsmoothness of neural network training.

A tale of three recent works: why is neural network training non-smooth from an optimization perspective, and how should we analyze the process?
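
For readers who want the one-line version of what "non-smooth" means here: classical analyses assume a globally Lipschitz gradient, while one relaxation that has appeared in this line of work lets the curvature bound grow with the gradient norm. A sketch of the two conditions in LaTeX; the constants and the exact form of the assumption vary across papers.

    % Classical L-smoothness (Lipschitz gradient):
    \|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|
    % Relaxed (L_0, L_1)-smoothness used in analyses of clipped/normalized methods:
    \|\nabla^2 f(x)\| \le L_0 + L_1\,\|\nabla f(x)\|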


2021: Theoretical understanding of adaptive gradient methods.

My PhD defense presentation.
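
As a reference point for the family of methods in question, below is a minimal textbook-style Adam update in Python; the hyperparameters and the toy quadratic objective are placeholders, not the specific algorithms or settings analyzed in the defense.

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        # Exponential moving averages of the gradient and its square.
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad**2
        # Bias-corrected estimates (t counts steps starting from 1).
        m_hat = m / (1 - b1**t)
        v_hat = v / (1 - b2**t)
        # Coordinate-wise adaptive step.
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v

    # Toy usage: minimize f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
    theta, m, v = np.ones(3), np.zeros(3), np.zeros(3)
    for t in range(1, 2001):
        theta, m, v = adam_step(theta, theta, m, v, t)
    print(theta)  # approaches the minimizer at the origin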


2019: An ODE perspective on Nesterov's accelerated gradient method.

My master's thesis (RQE at MIT) presentation.
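
For context, the continuous-time view usually associated with this topic is the limiting ODE below, in the spirit of the Su-Boyd-Candès analysis; this is the standard statement of that result, not a summary of the presentation itself.

    % Limiting ODE of Nesterov's accelerated gradient as the step size vanishes,
    % together with the classical convergence rate for convex f:
    \ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0,
    \qquad f\big(X(t)\big) - f^\star = O\!\left(1/t^2\right)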

Teaching

Fall 2023 Introduction to Optimization

References:

Bertsimas, Dimitris, and John N. Tsitsiklis. Introduction to linear optimization.

Boyd, Stephen P., and Lieven Vandenberghe. Convex optimization.

Bubeck, Sébastien. Convex optimization: Algorithms and complexity.

Grading: 40% HW + 30% Midterm + 30% Final

Weekly schedule:

1. Linear Programming and Polyhedra. lecture, scribe

2. Simplex and Duality. lecture, scribe

3. Linear Duality and Ellipsoid. lecture, scribe

4. Ellipsoid and Convexity. lecture, scribe

5. Convex Optimization, MaxCut. lecture, scribe

6. SDP Relaxation; Lagrangian Duality. lecture, scribe

7. Lagrangian Duality and KKT. lecture, scribe

8. Midterm

9. Newton's Method. lecture, scribe

10. Self-Concordance and Convergence of Newton. lecture, scribe

11. Interior Point Method. lecture, scribe

12. Gradient Method and Oracle Complexity. lecture, scribe

13. Gradient Methods with Stochasticity, Nonconvexity and Mirror Maps. lecture, scribe

14. Mirror Descent and Online Learning. lecture, scribe

15. Final

Related information

Email

Google Scholar

//scholar.google.com/citations?user=8NudxYsAAAAJ&hl=en