Project Report - Ankita Ghosh & Sayan Deb Sarkar

Introduction and Motivation

'Man on Mars' has been a subject of discussion in the scientific community for ages and a topic of wonder for humankind. As we carelessly burn through the resources available on Earth, we devise new ways to reach Mars and make it habitable. The debate on what life on Mars will look like goes on endlessly. Through our graphics project, we present our version of this topic as we take a man out of his place on Earth and put him on the surface of Mars.

Man on Mars

Man on Mars


We are using the images above as our source of inspiration. They conform to the theme 'out of place': the picture on the left depicts an astronaut walking on Mars. We plan to showcase a human figure standing on a terraformed version of Mars with greenery around. Taking some artistic inspiration from the image on the right, we plan to adorn the sky with various celestial bodies. Our final scene will thus be an amalgamation of the two, giving an undertone of an out-of-this-world view.

Sayan Deb Sarkar

I have rendered all validation-related scenes with Mitsuba 3, except for polynomial radial distortion, which, to the best of my knowledge, is not available in Mitsuba 3 [7]; for that I used the Mitsuba 0.6 desktop GUI. For comparison purposes, the Nori-versus-Mitsuba images are rendered with 256 samples per pixel on my personal laptop (because of difficulty setting up Mitsuba on the Euler cluster), while the standalone Nori renders use 1024 samples per pixel on the Euler cluster, unless stated otherwise. The integrator is always path_mis unless stated otherwise.

Feature Implementation

Advanced Camera Models

Files Added/Modified:

Depth Of Field

To increase the realism of a scene and to focus the viewer on specific parts of the image, I implemented depth of field, simulated with a simple thin-lens camera. I modified src/perspective.cpp to accept two camera parameters: the lens aperture (radius) and the focal length. Depth of field is usually simulated in graphics by computing the ray's intersection with the focal plane, then moving the ray origin to a point sampled on the lens and setting the direction so that the ray passes through the point on the focal plane. For validation, I vary these parameters to focus on two different subjects in the scene while comparing with corresponding Mitsuba renders. Starting with no depth-of-field effect, I show step by step how experimenting with the lens radius and focal length lets me focus on one subject at a time.
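Below is a minimal, self-contained sketch of the thin-lens step described above, assuming camera space with the camera looking down the +z axis; the names (applyThinLens, Vec3) and the simple polar disk mapping are illustrative stand-ins, not the actual src/perspective.cpp code.

    #include <cmath>

    static const float kPi = 3.14159265358979f;

    struct Vec3 { float x, y, z; };

    // Map a uniform [0,1)^2 sample to a disk of radius apertureRadius
    // (simple polar mapping; a concentric disk mapping would work equally well).
    static Vec3 sampleLens(float u1, float u2, float apertureRadius) {
        float r = apertureRadius * std::sqrt(u1);
        float phi = 2.0f * kPi * u2;
        return { r * std::cos(phi), r * std::sin(phi), 0.0f };
    }

    // Thin-lens modification of a camera-space pinhole ray. 'dir' points toward the
    // image-plane point, with the camera looking down +z; 'focusDistance' is the
    // depth of the plane that stays in focus.
    void applyThinLens(const Vec3 &dir, float apertureRadius, float focusDistance,
                       float u1, float u2, Vec3 &origin, Vec3 &outDir) {
        // Point where the original pinhole ray (origin at 0) crosses the focal plane.
        float ft = focusDistance / dir.z;
        Vec3 pFocus = { dir.x * ft, dir.y * ft, focusDistance };

        // New origin: a point sampled on the lens aperture.
        origin = sampleLens(u1, u2, apertureRadius);

        // New direction: from the lens sample through the focal-plane point.
        Vec3 d = { pFocus.x - origin.x, pFocus.y - origin.y, pFocus.z - origin.z };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        outDir = { d.x / len, d.y / len, d.z / len };
    }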

Focal Length : 0.0, Lens Radius : 0.0
nori mitsuba


Focal Length : 4.41159, Lens Radius : 0.5
nori mitsuba


Focal Length : 4.41159, Lens Radius : 1.5
nori mitsuba


Focal Length : 5.91159, Lens Radius : 0.5
nori mitsuba


Focal Length : 5.91159, Lens Radius : 1.5
nori mitsuba


Lens Distortion

Initially, I wanted to extend the perspective camera with a naive implementation of first-order radial distortion. However, in the interest of validating against Mitsuba, I also implemented polynomial radial distortion; both are explained below.
Radial Distortion
Here, I followed the TensorFlow implementation to simulate quadratic radial distortion: given a vector in homogeneous coordinates \( (x/z, y/z, 1) \), \(r\) is defined by \(r^2 = (x/z)^2 + (y/z)^2\). Following this definition, I used the simplest form of distortion function, \(f(r) = 1 + k r^2\), with the distorted vector given as \( (f(r) \cdot x/z, f(r) \cdot y/z, 1)\). The reference is the TensorFlow implementation of the same function [8]. Renders for different values of the distortion coefficient are shown below.
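A minimal sketch of this distortion applied to normalized camera coordinates; the function name and signature are illustrative, not the actual code in src/perspective.cpp.

    #include <cmath>

    // Quadratic radial distortion of a point given in normalized (homogeneous)
    // camera coordinates (x/z, y/z). 'k' is the distortion coefficient from the
    // scene XML. This mirrors the TensorFlow Graphics formulation f(r) = 1 + k*r^2 [8].
    void distortQuadratic(float k, float &xn, float &yn) {
        float r2 = xn * xn + yn * yn;   // r^2 = (x/z)^2 + (y/z)^2
        float f  = 1.0f + k * r2;       // distortion factor f(r)
        xn *= f;
        yn *= f;
    }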
coeff = 5 coeff = 10


Polynomial Radial Distortion
In the interest of proper validation against an already available renderer like Mitsuba, I implemented polynomial radial distortion following their code [9]. I specified the second- and fourth-order terms of a polynomial model that accounts for pincushion and barrel distortion as k1 and k2. This is useful when trying to match rendered images to photographs taken by a camera whose distortion is known. The corresponding comparison with Mitsuba renders is shown below.
nori : k1 = 5, k2 = 5 mitsuba : k1 = 5, k2 = 5 nori : k1 = 10, k2 = 10 mitsuba : k1 = 10, k2 = 10


Chromatic Aberration

Chromatic aberration, also known as color fringing, is a color distortion that creates an outline of unwanted color along the edges of objects in a photograph. It often appears where there is high contrast between light and dark objects. When the effect is active, sampleRay() in the camera class is called three times, once per color channel, and the results are summed into the final radiance. The amount of aberration for each color channel is given as a parameter in the scene XML file; the position is zero-centered and the focus point is shifted in each direction by the corresponding offset. In the following images, I compare different parameters on two scenes to show how chromatic aberration typically affects a captured photograph.
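A sketch of the per-channel compositing described above; 'traceChannel' is a hypothetical callback standing in for generating a camera ray with the given focus offset and estimating its radiance, not the actual Nori API.

    #include <functional>

    struct Color3 { float r, g, b; };

    // 'traceChannel(offset)' is assumed to build a camera ray whose focus point is
    // shifted by 'offset' for that channel and to return its radiance estimate.
    Color3 chromaticAberration(const std::function<Color3(float)> &traceChannel,
                               float offR, float offG, float offB) {
        Color3 Lr = traceChannel(offR);   // ray generated with the red-channel offset
        Color3 Lg = traceChannel(offG);   // green-channel offset
        Color3 Lb = traceChannel(offB);   // blue-channel offset
        // Keep only the matching channel from each traced ray and combine.
        return { Lr.r, Lg.g, Lb.b };
    }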

cbox table


Homogeneous Participating Media

Files Added/Modified:

I have implemented both kinds of media - a scene-wide homogeneous medium and a homogeneous medium attached to a shape - with the integrator updated to handle both scenarios. The medium is parameterized by the absorption coefficient sigma_a, the scattering coefficient sigma_s and the phase function (Henyey-Greenstein or isotropic in my implementation). Before moving on to the integrator in the next section, I added functionality for sampling volumetric scattering following the PBRT implementation [2]: a free-flight distance is sampled proportionally to the transmittance, the sampled distance decides whether the ray scatters inside the medium or reaches the surface, and the corresponding transmittance-weighted throughput is returned.
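A minimal sketch of that free-flight sampling for a single-channel (gray) homogeneous medium, following the structure of PBRT's HomogeneousMedium::Sample [2]; the scalar simplification and the names are illustrative.

    #include <cmath>

    // Returns the path throughput weight and reports whether the ray scattered
    // inside the medium (at distance t) or reached the surface at tMax.
    float sampleHomogeneous(float sigma_a, float sigma_s, float tMax,
                            float u, bool &scatteredInMedium, float &t) {
        float sigma_t = sigma_a + sigma_s;              // extinction coefficient
        t = -std::log(1.0f - u) / sigma_t;              // exponentially distributed distance
        scatteredInMedium = (t < tMax);
        if (scatteredInMedium) {
            float pdf = sigma_t * std::exp(-sigma_t * t);   // pdf of sampling distance t
            float Tr  = std::exp(-sigma_t * t);             // transmittance up to t
            return sigma_s * Tr / pdf;                      // = sigma_s / sigma_t (albedo)
        } else {
            float pdf = std::exp(-sigma_t * tMax);          // prob. of flying past tMax
            float Tr  = std::exp(-sigma_t * tMax);
            return Tr / pdf;                                // = 1
        }
    }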

In the following images, I show comparisons of the sampled homogeneous medium, scene-wide and attached to a sphere, with corresponding Mitsuba renders, using an isotropic phase function. As can be seen, they look nearly identical. The media were rendered with the volumetric path tracer described in the next subsection.

nori scenewide absorption = 0.005 scattering = 0.25 mitsuba scenewide absorption = 0.005 scattering = 0.25


nori sphere absorption = 1.0 scattering = 0.0 mitsuba sphere absorption = 1.0 scattering = 0.0 nori sphere absorption = 0.0 scattering = 1.0 mitsuba sphere absorption = 0.0 scattering = 1.0


Volumetric Path Tracer with Multiple Importance Sampling

I implemented a more complex integrator, volpath_mis, which is based on path_mis and extends it with distance sampling in media and phase-function sampling. My implementation uses multiple importance sampling to combine direct-light sampling with phase-function sampling, which gives an efficient unidirectional volumetric path tracer. Unlike the PBRT implementation [2], I do not attach two different media to a shape in order to keep track of how the current medium changes. Instead, I use the dot product of the normal at the intersection point and the ray direction to detect whether the ray is entering or exiting the medium. I implemented a function rayIntersectTr() to compute medium transmittance and thus the attenuation, based on which media the ray passes through.
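A sketch of the entering/exiting bookkeeping described above, with simplified placeholder types; the helper name is illustrative.

    struct Vec3 { float x, y, z; };
    struct Medium { float sigma_a, sigma_s; };   // placeholder for the homogeneous medium

    static float dot(const Vec3 &a, const Vec3 &b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // When the ray crosses a shape with an attached medium, n.d < 0 means the ray
    // is entering the shape's interior medium; otherwise it is leaving it and the
    // integrator falls back to the exterior medium (none, or the scene-wide one).
    const Medium *updateCurrentMedium(const Vec3 &n, const Vec3 &d,
                                      const Medium *shapeInteriorMedium,
                                      const Medium *exteriorMedium) {
        return dot(n, d) < 0.0f ? shapeInteriorMedium : exteriorMedium;
    }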

To validate my implementation, I compare the newly written volumetric path tracer with the path tracer from Programming Assignment 4. Using the same scene without a medium, I show comparisons of both below, which look identical. In addition, to demonstrate the effectiveness of importance sampling, I also show comparisons with multiple importance sampling turned off, which makes the integrator behave like a volumetric version of path_mats, alongside the full MIS version.

vol path mis path mis


vol path mats vol path mis


To further validate the volpath_mis implementation, I followed the paradigm used in the assignments. I modified test-direct.xml from Programming Assignment 4 to use the volumetric path tracer as the integrator, and test-furnace.xml to attach a homogeneous medium with scattering 1.0 and absorption 0.0 to a sphere in every scene. My implementation still passes all the tests, as shown below.





Henyey-Greenstein Phase Function

Files Added/Modified:

The Henyey-Greenstein phase function was specifically designed to be easy to fit to measured scattering data and is parameterized by a single asymmetry parameter \( g \) that controls the light distribution. Being able to draw samples from the distribution described by a phase function is useful both for applying multiple importance sampling to direct lighting in participating media and for sampling scattered directions for indirect lighting. The PDF of the Henyey-Greenstein phase function is separable into \( \theta \) and \( \phi \) components, with \(p(\phi) = \frac{1}{2\pi} \). I followed the PBRT book [2] for the implementation, computing \(\cos\theta\) and the direction \(\omega_i\) for a Henyey-Greenstein sample and then evaluating the PDF accordingly. A minimal sampling sketch follows the list below. In the following images, I show the following results in order:

  1. Henyey Greenstein phase function output with g = 0 and Isotropic Phase Function alongside comparison with mitsuba
  2. Backward and forward scattering using variation of \( g \)
  3. Integration of Henyey-Greenstein into warptest and successfully passing all tests
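Before the comparison figures, here is the sampling sketch mentioned above. It uses the convention that \( \theta \) is measured from the forward (propagation) direction, so the inversion below samples \( p(\cos\theta) = \frac{1}{2}\,(1-g^2)\,(1+g^2-2g\cos\theta)^{-3/2} \); PBRT's code differs only in the sign convention of the angle.

    #include <cmath>

    static const float kPi = 3.14159265358979f;

    // Sample cos(theta) and phi for the Henyey-Greenstein phase function.
    // u1, u2 are uniform samples in [0,1).
    void sampleHG(float g, float u1, float u2, float &cosTheta, float &phi) {
        if (std::abs(g) < 1e-3f) {
            cosTheta = 1.0f - 2.0f * u1;                        // isotropic limit
        } else {
            float s = (1.0f - g * g) / (1.0f - g + 2.0f * g * u1);
            cosTheta = (1.0f + g * g - s * s) / (2.0f * g);
        }
        phi = 2.0f * kPi * u2;                                  // uniform azimuth
    }

    // Corresponding solid-angle pdf (includes the 1/(2*pi) azimuth factor).
    float pdfHG(float g, float cosTheta) {
        float denom = 1.0f + g * g - 2.0f * g * cosTheta;
        return (1.0f - g * g) / (4.0f * kPi * denom * std::sqrt(denom));
    }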

nori Henyey Greenstein with g=0 mitsuba Henyey Greenstein with g=0 nori Isotropic mitsuba Isotropic


strong backward scattering (g = -0.7) strong forward scattering (g = 0.7)


warptest visualisation chi2 test


Spotlight

Files Added/Modified:

My spotlight implementation is based on the PBRT book. Spotlights are defined by two angles, falloffStart and totalWidth. Objects inside the inner cone, up to falloffStart, are fully illuminated by the light. Directions between falloffStart and totalWidth form a transition zone that ramps down from full illumination to none, so points outside the totalWidth cone are not illuminated at all. For validation, however, I changed the calculation of the cutoff value to match the Mitsuba implementation [7]. In the images below, I place two objects in the original cbox scene to compare the Nori spotlight implementation with Mitsuba.
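For reference, a sketch of the PBRT-style falloff term [2] (as noted above, the actual cutoff computation in my code was adapted to match Mitsuba):

    // cosTheta is the cosine of the angle between the spot direction and the
    // direction toward the shaded point.
    float spotFalloff(float cosTheta, float cosFalloffStart, float cosTotalWidth) {
        if (cosTheta < cosTotalWidth)   return 0.0f;  // outside the outer cone
        if (cosTheta > cosFalloffStart) return 1.0f;  // inside the inner cone
        // Smooth ramp between the two cones (PBRT uses the fourth power of delta).
        float delta = (cosTheta - cosTotalWidth) / (cosFalloffStart - cosTotalWidth);
        return (delta * delta) * (delta * delta);
    }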

nori cow mitsuba cow nori head mitsuba head


Perlin Noise

Files Added/Modified:

Perlin noise is essentially a seeded random number generator: it takes an integer as a parameter and returns a random number based on that parameter. I followed the referenced implementation for my understanding and used persistence and octaves to control the noise texture [10]. Each octave is a noise function added at double the previous frequency, much like octaves in music, and the persistence controls how much each successive octave contributes. I also used cosine interpolation for smoothing, which gives a better texture at a slight cost in speed. Below, I show Perlin textures on various shapes, namely a plane and the camel head, alongside a render of just the Perlin noise to validate my approach.
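A minimal 1D sketch of the octave summation with cosine interpolation, in the spirit of the referenced article [10]; the hash constants are the ones commonly used there and the function names are illustrative.

    #include <cmath>

    static const float kPi = 3.14159265358979f;

    // Hash-based "random" value per integer lattice point, in roughly [-1, 1].
    static float latticeNoise(int xi) {
        unsigned int x = (unsigned int)xi;
        x = (x << 13) ^ x;
        unsigned int h = (x * (x * x * 15731u + 789221u) + 1376312589u) & 0x7fffffffu;
        return 1.0f - (float)h / 1073741824.0f;
    }

    // Cosine interpolation between two lattice values (smoother than linear).
    static float cosineInterp(float a, float b, float t) {
        float f = (1.0f - std::cos(t * kPi)) * 0.5f;
        return a * (1.0f - f) + b * f;
    }

    static float interpolatedNoise(float x) {
        int xi = (int)std::floor(x);
        return cosineInterp(latticeNoise(xi), latticeNoise(xi + 1), x - (float)xi);
    }

    // Sum of octaves: each octave doubles the frequency and scales the amplitude
    // by 'persistence' (amplitude = persistence^i for octave i).
    float octaveNoise(float x, int octaves, float persistence) {
        float total = 0.0f, frequency = 1.0f, amplitude = 1.0f;
        for (int i = 0; i < octaves; ++i) {
            total += interpolatedNoise(x * frequency) * amplitude;
            frequency *= 2.0f;
            amplitude *= persistence;
        }
        return total;
    }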

perlin noise on plane perlin noise camel head perlin + checkerboard texture


Textured Area Emitters

Files Added/Modified:

Textured area emitters were quite interesting to implement: initially I could not see how to make an emitter's radiance depend on a texture, but in the end it turned out to be quite simple. I added uv coordinates to EmitterQueryRecord so that when evaluating the emitter I can look up the texture value at the given point and return it as the radiance. I adapted the integrators accordingly: I store the uv coordinates of the ray's intersection point with the scene so they are available during emitter evaluation. Below, I show a Perlin-textured emitter as well as a checkerboard-textured emitter rendered with Nori and Mitsuba; they look almost the same, and the small offset is due to how the two frameworks handle the texture.
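A minimal sketch of the textured emitter evaluation described above, with simplified placeholder types instead of Nori's actual EmitterQueryRecord and Texture classes:

    struct Color3 { float r, g, b; };

    struct Texture {
        virtual Color3 eval(float u, float v) const = 0;
        virtual ~Texture() = default;
    };

    struct EmitterQueryRecord {
        float u = 0.0f, v = 0.0f;   // uv of the sampled/intersected point on the emitter
        // position, normal and directions omitted for brevity
    };

    struct TexturedAreaEmitter {
        const Texture *radianceTex;   // e.g. Perlin or checkerboard texture
        Color3 eval(const EmitterQueryRecord &rec) const {
            // Radiance now varies over the emitter's surface instead of being constant.
            return radianceTex->eval(rec.u, rec.v);
        }
    };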

nori checkerboard mitsuba checkerboard nori perlin noise


Final Gather For Photon Mapping

Files Added/Modified:

In final gather, the hemisphere above the intersection point is sampled by shooting many rays and computing the radiance at the intersection points of those rays with diffuse surfaces. I sample the gather rays using cosine hemispherical sampling at the hit point of the incident ray. My implementation is based on Christensen's work [11]. I show a direct comparison with the basic photon mapper to demonstrate how the indirect illumination accumulated through final gather helps reduce blotches.
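A sketch of the gather loop, with the photon-map lookup abstracted behind a hypothetical callback (not the actual Nori code):

    #include <cmath>
    #include <functional>

    static const float kPi = 3.14159265358979f;

    struct Vec3 { float x, y, z; };
    struct Color3 { float r = 0.0f, g = 0.0f, b = 0.0f; };

    // Cosine-weighted hemisphere sample around the local z axis.
    static Vec3 squareToCosineHemisphere(float u1, float u2) {
        float r = std::sqrt(u1), phi = 2.0f * kPi * u2;
        return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1) };
    }

    // Average the photon-map radiance seen through 'numGatherRays' cosine-distributed
    // rays. 'radianceFromPhotonMap(dirLocal)' stands in for tracing the gather ray and
    // evaluating the photon map at the diffuse surface it hits; 'rng' supplies uniform
    // samples in [0,1).
    Color3 finalGather(int numGatherRays,
                       const std::function<Color3(const Vec3 &)> &radianceFromPhotonMap,
                       const std::function<float()> &rng) {
        Color3 sum;
        for (int i = 0; i < numGatherRays; ++i) {
            Vec3 d = squareToCosineHemisphere(rng(), rng());
            Color3 Li = radianceFromPhotonMap(d);
            // With pdf = cos(theta)/pi, the cosine and the 1/pi of a Lambertian BRDF
            // cancel, so each sample contributes Li (up to the surface albedo).
            sum.r += Li.r; sum.g += Li.g; sum.b += Li.b;
        }
        sum.r /= numGatherRays; sum.g /= numGatherRays; sum.b /= numGatherRays;
        return sum;
    }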

final gather basic version for photon mapping


Object Instancing

Instancing is a method of showing the same object multiple times in a scene without copying the object in memory. Object instancing was a very tricky feature which I was unable to generate results for; however, I will describe my approach. The initial idea was to separate the transformations of a shape from the underlying mesh and then create multiple versions, i.e., multiple transformations of the same shape, in the BVH. After deliberation and consulting the PBRT textbook, I settled on the following approach: create an Instance class that holds a pointer to another primitive, the one to be instanced. The Instance class conceptually transforms the primitive but cannot actually modify it; instead, the total transformation T is stored and applied during ray intersection. First the inverse transformation is applied to the ray, and the bounding box of the BVH node is transformed accordingly. This was my plan, but I was unable to carry it through because I did not understand how to access the transform matrix of the scene within the BVH class. A minimal sketch of the idea is given below.
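The following sketch only illustrates the plan outlined above (it is not working project code) and uses simplified placeholder types, with a translation-only transform for brevity:

    struct Vec3 { float x, y, z; };
    struct Ray  { Vec3 o, d; };

    struct Transform {
        Vec3 t;   // translation-only stand-in for a full 4x4 matrix
        Ray  applyInverse(const Ray &r) const {
            return { { r.o.x - t.x, r.o.y - t.y, r.o.z - t.z }, r.d };
        }
        Vec3 apply(const Vec3 &p) const { return { p.x + t.x, p.y + t.y, p.z + t.z }; }
    };

    struct Primitive {
        virtual bool rayIntersect(const Ray &r, Vec3 &hit) const = 0;
        virtual ~Primitive() = default;
    };

    // An Instance shares the prototype primitive and never modifies it; only the
    // per-instance transform T is stored and applied during intersection.
    struct Instance : Primitive {
        const Primitive *prototype;
        Transform T;

        bool rayIntersect(const Ray &r, Vec3 &hit) const override {
            Ray local = T.applyInverse(r);              // move the ray into object space
            if (!prototype->rayIntersect(local, hit)) return false;
            hit = T.apply(hit);                         // report the hit in world space
            return true;
        }
    };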

Rendering on Euler Cluster

Files Added/Modified:

To render on Euler, I essentially changed main.cpp not to use the GUI and renamed the file to main_euler.cpp for this submission. For validation, I show a comparison of images rendered on Euler and on my personal laptop. I also include the text output of the rendering, which shows that the same implementation took 12.0 seconds on Euler compared to 2.3 minutes on my system.

personal system cbox path mats euler cluster cbox path mats






Ankita Ghosh

Feature Implementation

Images as Textures

Relevant files:
lodepng.cpp
lodepng.h
imagetexture.cpp

I implemented the images-as-textures feature using the lodepng library, adding lodepng.cpp and lodepng.h, which makes loading PNG images easy and lightweight. I referred to the lecture on texture mapping [1], which helped me understand the concept. In imagetexture.cpp, for a given pair of \((u,v)\) coordinates, I return the \((r,g,b)\) values at the corresponding \((x,y)\) location of the texture map as the albedo. To return the right albedo value, I scale the \((u,v)\) coordinates by the width and height of the texture map. Additionally, I apply inverse gamma correction so that the original colour of the texture is retained. Below, I show a comparison between the results obtained from my texture mapping and from Mitsuba.
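A sketch of that lookup, assuming the 8-bit RGBA layout produced by lodepng's default decode; the function and variable names are illustrative, not the exact imagetexture.cpp code.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Color3 { float r, g, b; };

    // uv -> texel lookup with inverse gamma correction (sRGB-like 2.2 power curve).
    Color3 evalImageTexture(const std::vector<std::uint8_t> &pixels,
                            unsigned width, unsigned height, float u, float v) {
        // Wrap uv into [0,1) and scale to pixel coordinates.
        u -= std::floor(u); v -= std::floor(v);
        unsigned x = std::min((unsigned)(u * width),  width  - 1);
        unsigned y = std::min((unsigned)(v * height), height - 1);
        std::size_t idx = 4 * (std::size_t(y) * width + x);   // RGBA layout

        auto toLinear = [](std::uint8_t c) {
            // Inverse gamma: undo the display encoding so the albedo is linear.
            return std::pow(c / 255.0f, 2.2f);
        };
        return { toLinear(pixels[idx]), toLinear(pixels[idx + 1]), toLinear(pixels[idx + 2]) };
    }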

Mine Mitsuba

These are the results obtained on using images as textures on different objects.

Texture on other objects

Normal Mapping

Relevant files:
normaltexture.cpp
bsdf.h
mesh.cpp
sphere.cpp
diffuse.cpp

The implementation of normal mapping builds on the images-as-textures feature. It adds wrinkles and folds to an object through a texture, even when they are not present in the object geometry. I gathered knowledge about this topic from the PBR textbook [2]. Normal mapping requires normal maps, which encode the surface normals of the texture as RGB values. My normaltexture.cpp implements the texture lookup. I added functions in bsdf.h and diffuse.cpp that check whether the texture has a normal map. If present, I linearly transform the colour channels of the normal map from the \([0,1]\) range to the \([-1,1]\) range to obtain a normal vector, by computing \(2 \cdot \text{value} - 1\). This computation is done in the setHitInformation function of mesh.cpp and sphere.cpp. Below are two comparisons of the effect of normal mapping: I show the texture, the normals mapped on a planar surface, and the texture and normals combined.
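A minimal sketch of the decoding step described above; a full implementation would additionally rotate the decoded vector from tangent space into the shading frame at the hit point.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Decode an RGB normal-map texel in [0,1]^3 into a tangent-space normal in
    // [-1,1]^3 via 2 * value - 1, then normalize it.
    Vec3 decodeNormal(float r, float g, float b) {
        Vec3 n = { 2.0f * r - 1.0f, 2.0f * g - 1.0f, 2.0f * b - 1.0f };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        return { n.x / len, n.y / len, n.z / len };
    }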

Texture Normal Bump Mapping

Texture Normal Bump Mapping

Probabilistic Progressive Photon Mapping

Relevant files:
progressivephotonmap.cpp
integrator.h
render.cpp

The lecture slides and the research paper [3] on this topic helped me understand how to implement probabilistic progressive photon mapping. I first added two new functions to integrator.h: getNumIters, so that I can access the number of iterations provided in the XML file, and iteration, which clears the old photon map, builds a new one for each iteration, and updates the radius according to \( r_{i+1} = \sqrt{\frac{i+\alpha}{i+1}} \, r_i \). In render.cpp, I added an outer loop around the sampling loop; it gets the number of iterations from getNumIters and calls the iteration function before the sampling loop starts. Since Nori already computes a running average, no extra averaging step is needed and the final output can be viewed directly.
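A sketch of the radius schedule and the outer iteration described above; the driver loop is shown as comments because it lives inside render.cpp's existing sampling code.

    #include <cmath>

    // Radius schedule from probabilistic progressive photon mapping [3]:
    // r_{i+1} = sqrt((i + alpha) / (i + 1)) * r_i, where alpha in (0,1) controls
    // the trade-off between variance and bias reduction.
    float nextRadius(float radius, int i, float alpha) {
        return std::sqrt((i + alpha) / (i + 1.0f)) * radius;
    }

    // Hypothetical driver (the real loop wraps the existing per-pixel sampling):
    // for (int i = 1; i <= numIters; ++i) {
    //     integrator->iteration();        // clear + rebuild photon map, shrink radius
    //     renderOneIterationOfSamples();  // the existing sampling loop
    // }                                   // Nori's running average accumulates results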

In the renderer, I clear the stored image block after every iteration purely for visualization, so that the result of each iteration can be seen. I use only 10% of the number of photons used for cbox photon mapping in Programming Assignment 4, and 32 spp. The results are shown below along with the radius at those iterations. (This clearing line is commented out when computing the running average.)

Iteration 1 (Radius = 0.10) Iteration 10 (Radius = 0.078) Iteration 100 (Radius = 0.054) Iteration 500 (Radius = 0.041)

Here I compare the results obtained by probabilistic progressive photon mapping (PP PMap) against normal photon mapping (PMap) and path tracing with multiple importance sampling (Path MIS). PP PMap performs significantly better than Path MIS, and if we look closely at the edges and the dielectric spheres, it performs better than PMap too.

Path MIS PMap PP PMap

Environment Map Emitter

Relevant files:
envmap.cpp

For the environment map emitter, I follow the directions and pseudocode provided in the research paper [6]. First, I build a scalar function from the image map by taking the luminance of each pixel. The marginal and conditional densities are obtained using the precompute functions given in the paper. The sample functions return \((u,v)\) coordinates along with their probability density, which is then converted to a density expressed in terms of solid angle on the sphere by introducing a Jacobian term. I map the \((u, v)\) sample to \((\theta, \phi)\) on the unit sphere by scaling, and then compute the direction \((x, y, z)\) from it. I also perform bilinear interpolation to improve the output of the environment map. Since my environment map is attached to a sphere emitter, I do not need to handle anything extra in path_mis, as my implementation already accounts for mesh emitters.
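A sketch of the \((u,v)\)-to-direction mapping and the solid-angle conversion described above, assuming the usual lat-long parameterization; the sample and its image-space pdf are assumed to come from the precomputed marginal/conditional distributions.

    #include <cmath>

    static const float kPi = 3.14159265358979f;

    struct Vec3 { float x, y, z; };

    // Map (u, v) in [0,1]^2 to a direction on the unit sphere and convert the
    // image-space pdf to a solid-angle pdf via the Jacobian 1 / (2*pi^2 * sin(theta)).
    Vec3 uvToDirection(float u, float v, float pdfUV, float &pdfSolidAngle) {
        float phi   = 2.0f * kPi * u;   // azimuth
        float theta = kPi * v;          // polar angle
        float sinTheta = std::sin(theta);
        pdfSolidAngle = (sinTheta > 0.0f)
                      ? pdfUV / (2.0f * kPi * kPi * sinTheta)
                      : 0.0f;
        return { sinTheta * std::cos(phi), sinTheta * std::sin(phi), std::cos(theta) };
    }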

I validate my implementation using the same scene to compare my render against Mitsuba. In the scene, I have placed a dielectric sphere, a diffuse sphere and a mirror sphere.

Mine Mitsuba

Disney BSDF

Relevant files:
disney_BSDF.cpp
warp.h
warp.cpp
warptest.cpp


The Disney BRDF offers a great range of flexibility, which is why I took interest in implementing this feature. I referred to the paper [4] and the code provided with it for understanding the topic and its implementation. I implemented the subsurface, metallic, specular, specularTint, roughness, clearcoat and clearcoatGloss parameters, omitting the sheen and anisotropic effects. There are several small variants of the diffuse and specular terms, reflecting artist preference as stated in the paper. To describe the specular lobes, we need two variants of the Generalized Trowbridge-Reitz (GTR) distribution: one for the specular term (GTR2) and one for the clearcoat term (GTR1). The functions SquareToGTR1 and SquareToGTR2 and their corresponding PDFs are added to warp.h and warp.cpp. I validate these (shown at the end of this section) and make the required changes to warptest.cpp. The Disney BSDF itself is implemented in disney_BSDF.cpp.
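For reference, the two GTR densities as given in the Disney course notes [4]; these are the D terms that SquareToGTR1/SquareToGTR2 sample, restated here as illustrative functions rather than the project's warp.cpp code.

    #include <cmath>

    static const float kPi = 3.14159265358979f;

    // GTR1 (clearcoat lobe): normalized over the hemisphere; theta_h is measured
    // from the surface normal. The alpha = 1 case degenerates to the uniform 1/pi.
    float GTR1(float cosThetaH, float alpha) {
        if (alpha >= 1.0f) return 1.0f / kPi;
        float a2 = alpha * alpha;
        float t  = 1.0f + (a2 - 1.0f) * cosThetaH * cosThetaH;
        return (a2 - 1.0f) / (kPi * std::log(a2) * t);
    }

    // GTR2 (primary specular lobe): identical to the GGX / Trowbridge-Reitz NDF.
    float GTR2(float cosThetaH, float alpha) {
        float a2 = alpha * alpha;
        float t  = 1.0f + (a2 - 1.0f) * cosThetaH * cosThetaH;
        return a2 / (kPi * t * t);
    }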

Below, each row varies one parameter while keeping the others constant.
The parameters varied row-wise are as follows:
  1. Subsurface
  2. Metallic
  3. Specular
  4. Specular Tint
  5. Roughness
  6. Clearcoat
  7. ClearcoatGloss

Parameter values per column: 0, 0.2, 0.4, 0.6, 0.8, 1.0

Since the effects of the subsurface and clearcoatGloss parameters are not evident in the comparison above, I provide additional comparisons below to validate these parameters sufficiently.

subsurface=0.0 subsurface=1.0

clearcoat=0.0, clearcoatGloss=0.0 clearcoat=1.0, clearcoatGloss=0.0 clearcoat=1.0, clearcoatGloss=1.0

Comparing the Disney BSDF with an existing implementation is difficult, since principled BSDF implementations differ across most renderers. Since I have been comparing my results with Mitsuba throughout the project, I did the same for this feature. Looking closely at Mitsuba's Principled BSDF, I realised that its implementation produces a certain difference in the albedo of the object. The scenes therefore do not match exactly, but they serve as a good reference in showing that the parameters are applied in a similar fashion. I provide two comparisons applying my parameters with different values in Nori and Mitsuba.

Scene 1 (Mine) Scene 1 (Mitsuba) Scene 2 (Mine) Scene 2 (Mitsuba)


Further, I have also validated my GTR1 and GTR2 functions through warptest and provide the warp visualization and chi2 test results of both for \( \alpha=0.3\) and \( \alpha=0.8\).

GTR1 (alpha=0.3) GTR1 (alpha=0.3) GTR1 (alpha=0.8) GTR1 (alpha=0.8)

GTR2 (alpha=0.3) GTR2 (alpha=0.3) GTR2 (alpha=0.8) GTR2 (alpha=0.8)

Moderate Denoising: NL-means Denoising

Relevant files:
render.cpp
denoise.py


The non-local means (NL-means) filter averages over similar pixels in a neighbourhood instead of computing just a local mean, which yields denoised outputs in which edges are preserved. To implement NL-means denoising, I used our lecture slides [5] as reference. First, I compute and store the variance map of the scene in render.cpp using the sample mean variance. Then I implement the NL-means pipeline in the Python file denoise.py, which sits alongside the other C++ source files. In the Python code, I mainly use numpy for the mathematical operations, opencv to read the EXR files and scipy to perform the convolutions. I implement the pseudocode from the lecture slides and denoise two different renders of the same scene: path_mis at 128 spp and path_mats at 512 spp. For the parameters, I use the values stated in the slides: \(r=10\), \(f=3\) and \(k=0.45\). The scene, the variance map, and the denoised scene for both renders are shown below.
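For reference, the generic NL-means estimator has the form below; this is the textbook formulation with patch distance \(d^2\) and bandwidth \(k\), whereas the lecture's variance-based distance additionally accounts for the per-pixel variance estimates.
\[
\hat{u}(p) = \frac{1}{C(p)} \sum_{q \in \mathcal{N}_r(p)} w(p,q)\, u(q),
\qquad
w(p,q) = \exp\!\left(-\frac{d^2\big(P_f(p), P_f(q)\big)}{k^2}\right),
\qquad
C(p) = \sum_{q \in \mathcal{N}_r(p)} w(p,q),
\]
where \(\mathcal{N}_r(p)\) is the search window of radius \(r\) around pixel \(p\) and \(P_f(\cdot)\) denotes the patch of radius \(f\).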

Before Denoising (PATH MIS 128spp) Variance Post Denoising

Before Denoising (Path MATS 512spp) Variance Post Denoising

Final Scene

The final image was modeled in Blender and rendered on the Euler cluster at 1920x1080 resolution with 4096 samples per pixel; rendering took 40 minutes.

rendered image


References

  1. Polygonal Meshes and Texture Mapping Lecture Slides, Link
  2. Pharr, Matt, Jakob, Wenzel and Humphreys, Greg, Physically Based Rendering: From Theory To Implementation, Morgan Kaufmann Publishers Inc. Online version of the third edition
  3. Claude Knaus and Matthias Zwicker. 2011. Progressive photon mapping: A probabilistic approach. ACM Trans. Graph. 30, 3, Article 25 (May 2011), 13 pages. Paper
  4. Physically Based Shading at Disney : Link
  5. Image Based Denoising Lecture slides, Link
  6. Humphreys, Greg and Matt Pharr. "Monte Carlo Rendering with Natural Illumination." (2012)
  7. Mitsuba renderer, Github Repo
  8. Tensorflow Graphics : Github Repo
  9. Mitsuba Documentation : Release Docs
  10. Perlin Noise Generator Article By Hugo Elias : Article on Perlin Noise
  11. Per H. Christensen, Faster Photon Map Global Illumination, Journal of Graphics Tools, 1999 : Paper