This proposal introduces a novel technique for point rendering in a CUDA path tracer. The approach makes it possible to render point-represented geometries with global illumination effects. An octree data structure is incorporated for more efficient intersection tests. The octree also lets users/artists choose the level of detail of the rendered result, producing a sketch of the image before final rendering. The proposed approach additionally visualizes the rendered points with a sense of depth and occlusion, giving users more accurate 3D point positions to design with before rendering.
Our efforts focus on the intersections between a tracing ray and the points found by our adapted octree implementation and its traversal method. When a ray-point intersection occurs, we decide whether or not the point in the octree node should be rendered.
Results are shown with different levels of detail (i.e., different octree resolutions), rendering iteration counts, and adjustable point radii.
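A minimal sketch of the ray-point test (Python, standing in for the project's CUDA kernels): treating each point as a tiny sphere of the adjustable radius, the hit reduces to a quadratic in the ray parameter t.

```python
import math

def intersect_point(ray_o, ray_d, center, radius):
    """Return the smallest positive hit distance t, or None if the ray
    misses the sphere of the given radius around the point.
    ray_d is assumed to be normalized."""
    oc = [o - c for o, c in zip(ray_o, center)]
    # Quadratic |o + t*d - c|^2 = r^2  =>  t^2 + b*t + c = 0 (a = 1)
    b = 2.0 * sum(d * e for d, e in zip(ray_d, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the point entirely
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-6:                     # nearest hit in front of the origin
            return t
    return None
```

Shrinking the radius approaches true point primitives at the cost of more rays missing; the adjustable radius mentioned above trades that off.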
For details of the project, please click here.
FlipYoWall is a personal project of mine, an Android Live Wallpaper.
This self-developed Android OpenGL ES 2.0 Live Wallpaper features interaction between the user and a virtual skateboard on the device. The 3D immersion is enhanced by touch, pinch zooming, and magnetometer and accelerometer feedback. High-level shading techniques, a sky cubemap, and multiple varying textures are also applied to the 3D models and the scene.
Currently supported tricks: Ollies, FS/BS Ollies, Kickflips, Heelflips, Treflips, Varial Heels, Hardflips, and Laserflips. Try it yourself and find out how to pull off those tricks!
This is a project developed in collaboration with Kristopher Wong and Kaleb Williams. We adopt a compact octree data structure to represent the models in our scenes and integrate it into our CUDA path tracer for efficient volume rendering. The results exploit dedicated graphics hardware and are realistic and visually appealing.
Our efforts focus on implementing a compact model representation in an octree together with an efficient parametric algorithm for tree traversal. With this octree integrated into our CUDA path tracer, one-ray-per-thread volume rendering becomes feasible on mainstream Nvidia graphics cards. Our rendering algorithm extends the classic ray tracing process to path tracing with light transport in participating media, gathering light energy in a physically based manner. Finally, we show global illumination examples and comparisons on cloud-like models.
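As a minimal sketch of the participating-media step (in Python rather than the project's CUDA), the transmittance along a ray through the medium follows the Beer-Lambert law and can be estimated by ray marching an extinction function `sigma_t`; the function and step count here are illustrative assumptions.

```python
import math

def transmittance(sigma_t, distance, n_steps=100):
    """Ray-march the Beer-Lambert transmittance T = exp(-integral of
    sigma_t ds) along a ray segment; sigma_t is extinction as a function
    of distance along the ray."""
    dt = distance / n_steps
    optical_depth = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * dt               # midpoint of the i-th segment
        optical_depth += sigma_t(s) * dt
    return math.exp(-optical_depth)
```

Denser media (larger `sigma_t`) attenuate the ray faster, which is what makes the cloud-like models absorb and scatter light believably.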
For details of the project, please click here.
Based on the ray tracing algorithm, path tracing gives us more accurate, physically based rendering results, such as caustics, color bleeding, soft shadows, depth of field, motion blur, and full global illumination. This path tracer, using the Monte Carlo method with a Poisson-disk sampling technique, renders high-quality images by iteratively shooting rays that bounce through the scene.
With the help of modern GPUs, the path tracing process is hugely accelerated. This path tracer is built on CUDA and is sped up further by applying stream compaction to the ray parallelization. It also supports supersampled anti-aliasing, motion blur, Fresnel refraction, depth of field, and an interactive camera.
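The stream compaction idea can be sketched as follows (Python standing in for the CUDA scan-and-scatter): after each bounce, terminated rays are removed so the next kernel launch covers only live rays in a dense array.

```python
def compact_rays(rays):
    """Stream compaction: keep only rays still alive. Rays here are
    dicts with an 'alive' flag; on the GPU the same pattern is a
    parallel exclusive scan over the flags followed by a scatter."""
    flags = [1 if r["alive"] else 0 for r in rays]
    # Exclusive prefix sum gives each survivor its output slot.
    offsets, total = [], 0
    for f in flags:
        offsets.append(total)
        total += f
    out = [None] * total
    for r, f, o in zip(rays, flags, offsets):
        if f:
            out[o] = r                   # scatter survivors densely
    return out
```

Because dead rays no longer occupy threads, later bounces launch over far fewer elements, which is where the speedup comes from.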
Left default image: global illumination of glossy, mirror, and glossy spheres rendered over 15,000 iterations with the Monte Carlo method. (Click the image to enlarge)
The image above shows a fantastic view of a national park in my home country: Taroko National Park, Taiwan.
Image processing mostly runs on the CPU, necessarily looping through every pixel one at a time. Combined with OpenGL and the GLSL shading language, the processing can instead be handled on the GPU and completed almost instantly.
This small program aims to implement various image processing filters in GLSL fragment shaders and render the resulting image back to a fullscreen quad.
Images above, in left-to-right, top-to-bottom order: Original, Normal Blur, Gaussian Blur, Grayscale, Edge Detection, Gamma Correction (1/2.2), Brightness Adjustment, Contrast Adjustment, Negative, Night Vision, Toon Shading, Pixelization. (Click the images to show)
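Two of the filters above are pure per-pixel functions; here is a Python sketch mirroring what the fragment shaders compute (Rec. 601 luma weights are assumed for grayscale).

```python
def grayscale(rgb):
    """Luminance via Rec. 601 weights, as a fragment shader would
    compute it per pixel."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return (y, y, y)

def gamma_correct(rgb, gamma=2.2):
    """Power-law gamma correction on a color with components in [0, 1]."""
    return tuple(c ** (1.0 / gamma) for c in rgb)
```

In GLSL these become one-liners over the sampled texel, which is why the whole filter runs in a single fullscreen pass.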
These are actually my results from assignments in the Computer Animation & Simulation course taught by Prof. Jernej Barbic at USC. It covers three simulations: Simulating a Jello Cube, Motion Capture Interpolation, and a Constrained Particle System. Everything is implemented with OpenGL and its extensions in C/C++.
The left snapshot briefly shows a simple mass-spring system with an explicit penalty method used to compute the expected deformation state of the jello cube. A bounding box and an inclined plane bounce the cube back when collisions happen. Users can also use the cursor to drag, pull, or push on the surface of the cube and watch the chained deformation evolve over time. As for shading, I provide three schemes for users to choose from: flat, Gouraud, and Phong.
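The mass-spring core can be sketched in a few lines (Python with hypothetical parameter values; the actual assignment integrates the full 3D cube in C/C++, typically with RK4 rather than plain Euler).

```python
def spring_force(p_a, p_b, rest_len, k):
    """Hooke force on particle a from the spring connecting a and b."""
    d = [b - a for a, b in zip(p_a, p_b)]
    length = sum(c * c for c in d) ** 0.5
    scale = k * (length - rest_len) / length   # positive when stretched
    return [scale * c for c in d]

def step(pos, vel, force, mass, dt):
    """One explicit (symplectic) Euler step for a single particle;
    penalty forces for collisions would simply be added into `force`."""
    acc = [f / mass for f in force]
    vel = [v + a * dt for v, a in zip(vel, acc)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel
```

The penalty method mentioned above fits this loop directly: on penetration, add a spring-like restoring force proportional to the penetration depth before calling `step`.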
To see full simulation video, please click here.
On the right, three rotation interpolation methods are compared with the input motion capture data (ground truth). The interpolated motions are shown in green, with positions and rotations interpolated using the Bezier Euler, SLERP quaternion, and Bezier SLERP quaternion methods. There are 40 frames between keyframes.
This simulation models a constrained particle system consisting of multiple particles in external force fields. The first node is attached and fixed at the very top of the red ring, and the last node can optionally be constrained to lie on the ring. Besides these constraints, all neighboring nodes are connected by an edge of length 1 / (number of nodes); therefore, the total length of the chain equals one.
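The SLERP building block used by the quaternion methods above can be sketched as:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                        # flip to take the shorter arc
        q1, dot = [-c for c in q1], -dot
    if dot > 0.9995:                     # nearly parallel: lerp + renormalize
        q = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in q))
        return [c / n for c in q]
    theta = math.acos(dot)               # angle between the quaternions
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(q0, q1)]
```

Bezier SLERP then nests this: De Casteljau evaluation where each linear step is replaced by a `slerp`, which is what keeps the green motion smooth through keyframes.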
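The chain's edge-length constraints can be sketched as follows (Python; this is a position-based projection stand-in, whereas the actual assignment solves the constrained dynamics properly, e.g. with Lagrange multipliers).

```python
def enforce_edges(nodes, edge_len, iterations=50):
    """Iteratively project neighboring nodes back to the fixed edge
    length, keeping node 0 pinned (as at the top of the ring)."""
    for _ in range(iterations):
        for i in range(len(nodes) - 1):
            a, b = nodes[i], nodes[i + 1]
            d = [y - x for x, y in zip(a, b)]
            dist = sum(c * c for c in d) ** 0.5
            if dist == 0.0:
                continue
            corr = (dist - edge_len) / dist
            if i == 0:                   # first node is pinned: move b only
                nodes[i + 1] = [y - corr * c for y, c in zip(b, d)]
            else:                        # split the correction between a and b
                nodes[i] = [x + 0.5 * corr * c for x, c in zip(a, d)]
                nodes[i + 1] = [y - 0.5 * corr * c for y, c in zip(b, d)]
    return nodes
```

With n nodes and edge length 1/n, the projected chain keeps total length one, matching the setup described above.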
To see full simulation videos, please click from below:
This term project, from the Multimedia System Design course taught by Prof. Parag Havaldar at USC, aims to remove ads from a given video and its audio. My great teammate, Jarvis Mcgee, and I developed our own algorithm to reach this goal. Everything is implemented with OpenAL/OpenGL and their extensions in C/C++.
The snapshots above show the input video/audio, containing both normal content and ads, while the shot sequence below is the result after running our algorithm, keeping only the desirable, expected multimedia. (Click the image to enlarge)
Advertising is a source of revenue for content owners and content distributors but often proves to be an unwelcome viewing hindrance when the intended audience is consuming the content. In such cases, you could watch your recorded content and fast forward through the advertisements, but it would be even better to AUTOMATICALLY preprocess the video to remove advertisements. -- Parag Havaldar
Here are the main tasks/steps in the program Jarvis and I designed:
- Using color-space information on the video side to divide frames into shots
- Analyzing audio shots in both the temporal and frequency domains to find possible ads
- Analyzing video shots by considering camera motion to find possible ads
- Combining the potential ads from the audio and video sides to decide the exact ad boundaries
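The first step above, dividing frames into shots by color information, can be sketched with a histogram difference (Python; the bin count and threshold here are illustrative assumptions, not our tuned values).

```python
def detect_shots(frames, bins=8, threshold=0.5):
    """Split a frame sequence into shots by thresholding the L1 distance
    between normalized intensity histograms of consecutive frames.
    Each frame is a flat list of intensities in [0, 1)."""
    def hist(frame):
        h = [0] * bins
        for v in frame:
            h[min(int(v * bins), bins - 1)] += 1
        n = float(len(frame))
        return [c / n for c in h]

    cuts = [0]                           # a shot always starts at frame 0
    prev = hist(frames[0])
    for i in range(1, len(frames)):
        cur = hist(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)               # histogram jump => new shot starts
        prev = cur
    return cuts
```

Each detected shot then goes on to the audio and motion analyses to be classified as content or ad.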
This is a multiplayer game/project for CS523, the Networked Games course in USC's GamePipe program. We named the game "Toy Car Wars" because it mainly features toy cars that fight each other in an ordinary, everyday room.
All players are divided into two teams, red and blue. At the beginning of the game, every team member spawns in its team's room, and each team must grab the bomb at the center of the map and then plant it in the opposing team's room. During the grabbing process, cars from different teams fight each other to stop and disrupt their opponents. Several weapons spawn randomly around the map: machine gun, rocket launcher, fork, spring board, dish-wash, and corn. Players can pick them up and use them as tools or weapons to win the game. There are also two basic tools, wings and an umbrella, that let cars fly and avoid severe damage from falling to the ground.
To see full game demo, please click here.
This is the final, overall exercise/project in CS571, the Web Technologies course at USC. The Android mobile application provides movie search via the IMDB movie database. After the search results are returned, the user can click to see movie details and post the movie information, along with reviews or opinions, to his/her Facebook wall.
The application sends a query to my Java Servlet to fetch the data from IMDB. In more detail, the Java Servlet acts as a front-end server, extracting the query string and calling another back-end PHP script of mine to retrieve data from imdb.com matching the media title and media type in the query. The PHP script returns the IMDB results in XML format for the Java Servlet to handle, and the servlet produces a JSON string that is returned asynchronously to the Android application. Finally, if the user wants to post movie information to his/her Facebook wall, the application authorizes the user and requests permission to access his/her data in order to post the information.
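The XML-to-JSON relay step in the servlet can be sketched as follows (Python standing in for the Java Servlet; the tag and field names `movie`, `title`, and `year` are illustrative assumptions, not the actual payload schema).

```python
import json
import xml.etree.ElementTree as ET

def xml_to_json(xml_text):
    """Parse an XML result list and re-serialize it as the JSON string
    the mobile client consumes asynchronously."""
    root = ET.fromstring(xml_text)
    movies = [
        {child.tag: child.text for child in movie}   # flatten each record
        for movie in root.findall("movie")
    ]
    return json.dumps({"results": movies})
```

Keeping the client on JSON while the scraper side speaks XML is what lets the PHP back end change without touching the Android code.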
To see full application demo, please click here.
This was my first undergraduate research experience, in the Communications and Multimedia Laboratory of the Computer Science department at National Taiwan University. After getting to grips with epipolar geometry, I surveyed several papers about match move and finally chose "High-quality video view interpolation using a layered representation" as the paper through which to fathom and dig deeper into match move.
From this experience, I learned the spirit of doing research and the value of cooperation, through both solo problem-solving and group discussion. Sometimes it is hard to solve a problem alone; even a simple problem can contain blind spots for someone, which is why we need others' knowledge and experience to create win-win situations and pave a shortcut to our goals.
Here I want to thank my advisor, Prof. Ming Ouhyoung, for teaching and equipping me with the ability to truly solve problems. I also have to thank my two seniors in CS dept. at NTU, Hung-Hsiang Lin and Shih-Hao Hsiung, and my classmate, Yung-Hsiang Yang, for accompanying me and discussing my weekly progress. Thank you, I appreciate your help in all aspects. :D
To see full report, please click here.
The image above shows the program's layered brushing process.
Beyond still images, we also applied this effect to video. However, since video requires greater frame-to-frame consistency, we deliberately chose another type of video, namely stop motion, to lower the importance of consistency across frames. In the end, we spent five hours rendering the video, and were surprised and satisfied with the result, which has a character of its own.
For this project, I worked with Hsin-Cheng Chao on an application of "Painterly Rendering with Curved Brush Strokes of Multiple Sizes." The goal was to transform an input image into an output image with the characteristics of an oil painting. For this, we used hand-painting methods, including spline brush strokes to strengthen the authenticity of the lines. In addition, a major factor in the look of such paintings is the layered nature of the canvas: painting coarse to fine ensures a combination of large and small strokes and thus enhances the hand-painted feel.
The images above show oil paintings with different styles.
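The coarse-to-fine layering can be sketched as follows. This is a toy 1-D grayscale version in Python; the actual project works on 2-D color images with curved spline strokes, per the paper above.

```python
def paint_layers(image, radii):
    """Layered brushing sketch: each layer repaints only where the
    canvas still differs from a reference blurred at the current brush
    scale. `image` is a 1-D list of grayscale values in [0, 1]."""
    def blur(img, r):
        out = []
        for i in range(len(img)):        # box blur of radius r
            lo, hi = max(0, i - r), min(len(img), i + r + 1)
            out.append(sum(img[lo:hi]) / (hi - lo))
        return out

    canvas = [0.0] * len(image)
    for r in sorted(radii, reverse=True):        # biggest brush first
        ref = blur(image, r)
        for i in range(len(image)):
            if abs(canvas[i] - ref[i]) > 0.05:   # stroke where error is large
                canvas[i] = ref[i]               # point "dab" stands in for
                                                 # a traced spline stroke
    return canvas
```

Large brushes block in the composition; small brushes then refine only high-error regions, which is exactly the large-and-small stroke mix described above.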
The Digital Visual Effects course taught by Prof. Yung-Yu Chuang gave me many useful visual effects skills. Whether in computer graphics, computer vision, or image processing, I learned a great deal in both theory and practice. For this term project, our team (Hsuan-Yueh Peng, Ting-Ju Chen, and Larry Yang) put to use everything we had learned in assignments and in class, creating a video that portrays a dark life and complex spirit.
Here is the link to the video itself. Enjoy it! :)
Such a difference is discernible to the naked eye, not to mention how important a proper sampler is to animated movies.
To see full project, please click here.
For this term project in Digital Image Synthesis, taught by Prof. Yung-Yu Chuang, I implemented a Poisson-disk distribution in PBRT (Physically Based Rendering) and compared the results of a Poisson sampler against a random sampler. The images on the left show some general difference in their noise levels. The images below them are magnified shadow areas of the corresponding left images. Under magnification, you can also see that the head area of the green dinosaur is clearer and the result is brighter in the right-hand case, particularly around the outlines.
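A minimal dart-throwing sketch of Poisson-disk sampling (Python; PBRT's sampler interface is more involved, and efficient implementations usually use a grid-accelerated method such as Bridson's algorithm).

```python
import random

def poisson_disk(n_tries, min_dist, rng=random.random):
    """Dart throwing on the unit square: accept a candidate only if it
    keeps at least min_dist from every previously accepted sample."""
    samples = []
    for _ in range(n_tries):
        cand = (rng(), rng())
        ok = all(
            (cand[0] - s[0]) ** 2 + (cand[1] - s[1]) ** 2 >= min_dist ** 2
            for s in samples
        )
        if ok:
            samples.append(cand)
    return samples
```

The guaranteed minimum spacing is what turns the low-frequency clumping noise of random sampling into the more pleasant, evenly distributed noise visible in the comparison images.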
The reasons behind rating a particular picture in a given way may be many. Spatial reasons aside, color scheme is one of the most essential issues in rating a colored image. If the combination of colors is appropriate, the image may seem more harmonious than one with inappropriate combinations, even when the two images are identical in spatial structure. As such, color scheme is the primary subject of this project. Cheng-Hung Wu, Curtis Yang, and I investigated how color schemes work with the human visual sense.
This project is based on the implementation of
To see slide report, please click here (w/ Sample Adjustments).
To see full project report, please click here.
How one person rates an image depends on the events that person has experienced. Since the experiences of any two arbitrary users can be completely different, why might they both come to the same conclusion about an image without any discussion? Why are some paintings or photographs adored by people from a range of backgrounds while others are not? There may be some inherent commonality in image rating that leads to this.
(Specific Adjustments with Samples)
This term project was actually my first self-chosen term project in college. Though the simulation results were not satisfying enough to reach my goal, the process of figuring it out and implementing it kindled my interest in Computer Graphics. :)
With computer scientists making constant progress, 3D motion pictures are able to simulate many things as they are in the real world. Much work has been done to simulate phenomena that are not so easy to deal with, one of which is object deformation. Because deformation varies with the environment, object characteristics, and object situation, simulating it requires many formulae for analytical purposes. The goal of this project, for the Introduction to Computer Graphics course taught by Prof. Ming Ouhyoung, is to simulate deformation through collision in real time, following "Real-Time Simulation of Deformation and Fracture of Stiff Materials."
Presented by Hsuan-Yueh Peng and Chen-Hung Wu.
To see full project report, please click here.
This is actually an assignment from the Virtual Reality course (Fall 2011) taught by Prof. Ming Ouhyoung at National Taiwan University. Though it was homework, there was really no limitation or specification on its content: we were simply asked to create our own animation using OpenGL. So I chose my favorite topic, skateboarding, as the main theme of the animation.
Everything was built from scratch! I wrote my own key sequences/scripts for all the motions, so you could say I was the director/writer/producer... of this animation. Furthermore, the character in the animation performs many skateboarding tricks that are all based on real ones. So, if you have any questions about those tricks, please do not hesitate to ask me.
Here is the link to the skateboarding animation. Hope you like it! :D
Who says simple assembly language can't create something interesting?
Here is one:
Golfing Club, a classic GBA golf game developed in assembly language. Holding a GBA in your hands and playing golf on it is probably the most enjoyable pastime on rainy days, isn't it?
To see full project report, please click here.