About me

I am a third-year Ph.D. student at the University of Southern California, working in the Geometric Capture Lab under Professor Hao Li. My research focuses on real-time facial expression tracking, particularly using techniques suited to emerging platforms such as virtual and augmented reality. In my spare time I also work at Pinscreen.

I recently spent a summer at Microsoft Research in Redmond working with Shahram Izadi and Pushmeet Kohli. In the summer of 2015 I was an intern with Oculus Research, working to develop techniques for high-fidelity real-time speech animation. In the summer of 2014, I worked with Pelican Imaging on surface reconstruction using their mobile depth sensors. I was awarded the Pelican Imaging Fellowship for the 2014-2015 academic year.

I was formerly a senior software engineer at NVIDIA working on GRID, the company's cloud graphics technology.

I received my Master's degree from Georgia Tech, where I worked under Jarek Rossignac on medical image volume processing and on solid/fluid coupling (as part of the Aquatic Propulsion Lab). I also completed class projects on rendering algorithms, game programming, and parallel/GPU computing, among other topics. In the summer of 2008, I worked as an intern on NVIDIA's OpenGL driver team.

Before that, I attended Boston University, where I received Bachelor's degrees in Computer Science from the College of Arts and Sciences and in Film and Television from the College of Communication. I did this by writing, directing, shooting, and editing short films by day and coding by night. (If there had been a time between day and night, that is probably when I would have slept.)

At BU I was awarded the Trustee Scholarship (full tuition for four years), which spared me from going deep into student loan debt. Upon graduating, I was one of two students to receive the CS Department's Academic Achievement Award, and I was later selected for an Alumni Spotlight profile. You can read more about my years at BU here and here.


Publications

K. Olszewski*, Z. Li*, C. Yang*, Y. Zhou, R. Yu, Z. Huang, S. Xiang, S. Saito, P. Kohli, H. Li. Realistic Dynamic Facial Textures from a Single Image using GANs. To appear in the Proceedings of the IEEE International Conference on Computer Vision 2017 (ICCV 2017).
[coming soon]

K. Olszewski, J. Lim, S. Saito, H. Li. High-Fidelity Facial and Speech Animation for VR HMDs. ACM Transactions on Graphics, Proceedings of the 9th ACM SIGGRAPH Conference and Exhibition in Asia 2016, 12/2016 (SIGGRAPH ASIA 2016).
[paper] [video] [bibtex]

D. Casas, A. Feng, O. Alexander, G. Fyffe, P. Debevec, R. Ichikari, H. Li, K. Olszewski, E. Suma, and A. Shapiro. Rapid Photorealistic Blendshape Modeling from RGB-D Sensors. Computer Animation and Virtual Worlds 2016, Proceedings of the 29th Conference on Computer Animation and Social Agents, 05/2016 (CASA 2016).
[paper] [video] [bibtex]

H. Li*, L. Trutoiu*, K. Olszewski*, L. Wei*, T. Trutna, P.-L. Hsieh, A. Nicholls, and C. Ma. Facial Performance Sensing Head-Mounted Display. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2015), 34(4):47, 2015.
[paper] [video] [bibtex]

* These authors contributed equally.


Demos

S. Saito, L. Wei, J. Fursund, L. Hu, C. Yang, R. Yu, K. Olszewski, S. Chen, I. Benavente, Y. Chen, H. Li. Pinscreen: 3D Avatar from a Single Image. ACM SIGGRAPH Asia 2016 Emerging Technologies, 12/2016.

© 2016 Kyle Olszewski
Template design by Andreas Viklund