'Man on Mars' has been a subject of discussion in the scientific community for ages and a topic of wonder for humankind. As we carelessly burn through the
life resources available on Earth, we devise new ways to invade Mars and make it habitable. The debate on what life on Mars will look like goes on endlessly.
Through our graphics project, we attempt to present our version of this topic as we take a man out of his place on Earth and onto the surface of Mars.
I have rendered all validation-related scenes with mitsuba3, except for polynomial radial distortion, which,
to the best of my knowledge, is not available in mitsuba3 [7]. For that feature I specifically used the mitsuba 0.6 desktop GUI.
For clarity of comparison, the nori-versus-mitsuba comparison images are rendered with 256 samples per pixel
on my personal laptop (because of the difficulty of setting up mitsuba on the euler cluster), while the
standalone nori renders use 1024 samples per pixel on the euler cluster, unless stated otherwise. The
corresponding integrator is always set to path_mis
unless stated otherwise.
Files Added/Modified:
src/perspective.cpp
src/render.cpp
To increase the realism of a scene, and to focus the viewer on specific parts of the image, I used
depth of field, which is simulated using a simple thin-lens camera. I modified src/perspective.cpp
to accept two camera-related parameters: the aperture of the lens and the focal length. Depth of field is usually
simulated in graphics by calculating the ray's intersection with the focal plane, then moving the ray origin
to a point sampled on the lens and setting the direction such that the ray passes through the point on the focal plane.
For my validation, I show variations of the aforementioned parameters and try to focus on two different
subjects in the scene while simultaneously comparing with corresponding mitsuba renders. Starting with no
depth of field effect, I show step by step how experimenting with the lens radius and focal length helps me focus
on one desired subject at a time.
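For reference, a minimal sketch of the thin-lens step inside sampleRay() is shown below; the names Vec3, lensRadius and focalDistance are illustrative placeholders under my assumptions, not the exact Nori interface.

```cpp
#include <cmath>

static const float Pi = 3.14159265358979f;

struct Vec3 { float x, y, z; };

// Uniform disk sample (polar mapping) scaled by the lens radius.
static void sampleLens(float u1, float u2, float lensRadius, float &lx, float &ly) {
    float r = lensRadius * std::sqrt(u1);
    float theta = 2.0f * Pi * u2;
    lx = r * std::cos(theta);
    ly = r * std::sin(theta);
}

// Given a camera-space pinhole ray (origin at 0, direction d with d.z > 0),
// bend it through a thin lens so that the focal plane z = focalDistance stays sharp.
static void applyThinLens(const Vec3 &d, float lensRadius, float focalDistance,
                          float u1, float u2, Vec3 &newOrigin, Vec3 &newDir) {
    // Point where the original ray pierces the plane of focus.
    float ft = focalDistance / d.z;
    Vec3 pFocus = { d.x * ft, d.y * ft, focalDistance };

    // New origin: a point on the lens; new direction: towards the focus point.
    float lx, ly;
    sampleLens(u1, u2, lensRadius, lx, ly);
    newOrigin = { lx, ly, 0.0f };
    Vec3 dir = { pFocus.x - lx, pFocus.y - ly, pFocus.z };
    float len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
    newDir = { dir.x/len, dir.y/len, dir.z/len };
}
```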
I also implemented polynomial radial distortion, controlled by two coefficients \(k_1\) and \(k_2\). This is useful when trying to match
rendered images to photographs created by a camera whose distortion is known. A sketch of the distortion mapping is given below, followed by the corresponding comparison with the mitsuba render.
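The core of the feature is the standard polynomial radial model; the sketch below is a simplified standalone version (not my exact camera code), where x and y are normalized image-plane coordinates centered at the principal point and \(k_1\), \(k_2\) come from the scene XML.

```cpp
#include <utility>

// Apply polynomial radial distortion to a normalized image-plane coordinate
// before generating the camera ray.
static std::pair<float, float> distort(float x, float y, float k1, float k2) {
    float r2 = x * x + y * y;
    float f  = 1.0f + k1 * r2 + k2 * r2 * r2;  // radial scaling factor
    return { x * f, y * f };
}
```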
Chromatic aberration, also known as color fringing, is a color distortion
that creates an outline of unwanted color along the edges of objects in a photograph. It often appears when there's
a high contrast between light and dark objects. When the effect is activated, sampleRay()
in the camera class is called three times, once for
each color channel, and the per-channel results are summed into the final radiance. The amount of aberration for each color channel is taken from parameters
specified in the scene XML file; the sample position is zero-centered and the focus point is shifted in each direction by the corresponding offset.
In the following images, I show comparisons of different parameters on two different scenes to understand how chromatic aberration typically
affects a captured photograph.
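A toy illustration of the per-channel accumulation is given below; traceChannel and aberrationOffset are hypothetical stand-ins for "call sampleRay() with the channel's offset and evaluate the integrator", not the actual interface.

```cpp
#include <array>
#include <functional>

using Color3 = std::array<float, 3>;

Color3 renderPixelWithAberration(
        const std::function<Color3(int /*channel*/, float /*offset*/)> &traceChannel,
        const std::array<float, 3> &aberrationOffset /* per-channel offsets from the scene XML */) {
    Color3 value = {0.0f, 0.0f, 0.0f};
    for (int c = 0; c < 3; ++c) {
        // One ray per color channel, with the focus point shifted by the channel's offset.
        Color3 L = traceChannel(c, aberrationOffset[c]);
        value[c] += L[c];   // keep only the channel this ray was generated for
    }
    return value;
}
```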
Files Added/Modified:
include/nori/medium.h
include/nori/scene.h
src/homogenous.cpp
src/volpath_mis.cpp
I have implemented both kinds of media - a scene-wide homogeneous medium and a homogeneous medium attached to a shape - with the integrator updated
to handle both scenarios. The medium is parameterized by the absorption coefficient \( \sigma_a \), the scattering
coefficient \( \sigma_s \) and the phase function (Henyey-Greenstein/isotropic in my implementation). Before moving on to the
integrator in the next section, I added functionality for sampling volumetric scattering following the PBRT implementation [2]. The major steps
are illustrated by the sketch below.
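The sketch uses scalar (single-channel) coefficients and illustrative names (MediumSample, sampleHomogeneous); my actual code follows PBRT's per-channel routine rather than this simplified form.

```cpp
#include <algorithm>
#include <cmath>

struct MediumSample {
    bool  sampledMedium;  // true if a scattering event was sampled before tMax
    float t;              // sampled distance along the ray
    float weight;         // throughput weight (Tr * sigma_s / pdf, or Tr / pdf)
};

MediumSample sampleHomogeneous(float sigmaA, float sigmaS, float tMax, float xi) {
    float sigmaT = sigmaA + sigmaS;
    // 1. Sample a free-flight distance with pdf sigma_t * exp(-sigma_t * t).
    float t = -std::log(1.0f - xi) / sigmaT;
    bool sampledMedium = t < tMax;
    float tClamped = std::min(t, tMax);
    // 2. Transmittance up to the sampled point (or up to the surface).
    float Tr = std::exp(-sigmaT * tClamped);
    // 3. Weight: Tr * sigma_s / pdf for a medium event, Tr / pdf for a surface event.
    float pdf = sampledMedium ? sigmaT * Tr : Tr;  // surviving past tMax has probability exp(-sigma_t * tMax)
    float weight = sampledMedium ? Tr * sigmaS / pdf : Tr / pdf;
    return { sampledMedium, tClamped, weight };
}
```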
In the following images, I show comparisons of the sampled homogeneous medium, both scene-wide and attached to a sphere, with corresponding mitsuba renders, using an isotropic phase function. As can be seen, these look practically identical. The media were rendered with a volumetric path tracer, described in the next subsection.
I implemented a more complex integrator, volpath_mis, which is based on path_mis and extends it with
sampling distances in media and sampling the phase function. My implementation uses multiple importance sampling and combines sampling
direct lighting with sampling the phase function, which gives an efficient unidirectional volumetric path tracer. Unlike the PBRT
implementation [2], I do not attach two different media (inside and outside) to a shape in order to keep track of how the current medium changes. Instead,
I modified the integrator to use the dot product of the surface normal at the intersection point and the ray direction to decide
whether the ray is entering or exiting a medium. I implemented a function rayIntersectTr()
to calculate medium transmittances and thus the
attenuation based on which media the ray is passing through.
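A simplified, self-contained sketch of the two ingredients this relies on - the enter/exit test and the per-segment transmittance - is shown below; the names are illustrative and the real logic lives in the integrator and rayIntersectTr(). For a shadow connection, the transmittances of all medium segments crossed between the shading point and the light are multiplied together.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Decide whether a ray crossing a medium boundary is entering or leaving it:
// if the ray direction points against the surface normal, we are entering.
static bool entersMedium(const Vec3 &rayDir, const Vec3 &shadingNormal) {
    return dot(rayDir, shadingNormal) < 0.0f;
}

// Transmittance of a homogeneous segment of length t inside a medium with
// extinction coefficient sigma_t.
static float segmentTr(float sigmaT, float t) {
    return std::exp(-sigmaT * t);
}
```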
To demonstrate the validation of my implementation, I show a comparison of the newly written volumetric path tracer with the path tracer
we wrote for Programming Assignment 4. Using the same scene without a medium, I show comparisons of both below,
which look identical. In addition, to understand the effectiveness of importance sampling, I also compare
the version with multiple importance sampling turned off, which makes it behave like a volumetric version of
path_mats, against the original implementation.
To further validate the volpath_mis
implementation, I followed the paradigm used in assignments. I modified the
test-direct.xml
file from Programming Assignment 4 to use the volumetric path tracer as the integrator and
the test-furnace.xml
to add a homogeneous medium with 1.0 scattering and 0.0 absorption coefficients attached to a sphere
for every scene. My implementation still passes all the tests as described below.
Files Added/Modified:
include/nori/phase.h
src/henyeygreenstein.cpp
src/isotropic.cpp
src/warptest.cpp
The Henyey-Greenstein phase function was specifically designed to be easy to fit to measured scattering data and is parameterized by a single asymmetry parameter \( g \) that controls the light distribution. Being able to draw samples from the distribution described by a phase function is useful both when applying multiple importance sampling to direct lighting in participating media and when sampling scattered directions for indirect lighting in participating media. The PDF of the Henyey-Greenstein phase function is separable into \( \theta \) and \( \phi \) components, with \( p(\phi) = \frac{1}{2\pi} \). I followed the PBRT book [2] for the implementation, computing \( \cos\theta \) and the sampled direction \( w_i \) and then calculating the PDF accordingly. In the following images, I show the corresponding validation results.
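For reference, a minimal standalone sketch of the sampling step is given below (a pedagogical form with illustrative names, not the Nori/warptest interface); the sampled \( (\cos\theta, \phi) \) pair is then turned into \( w_i \) in the frame of the incident direction.

```cpp
#include <cmath>

static const float Pi = 3.14159265358979f;

struct HGSample { float cosTheta, phi, pdf; };

// theta is measured from the forward scattering direction, phi is uniform in [0, 2pi).
HGSample sampleHG(float g, float u1, float u2) {
    float cosTheta;
    if (std::abs(g) < 1e-3f) {
        cosTheta = 1.0f - 2.0f * u1;                           // isotropic limit
    } else {
        float s = (1.0f - g * g) / (1.0f - g + 2.0f * g * u1); // invert the HG CDF
        cosTheta = (1.0f + g * g - s * s) / (2.0f * g);
    }
    float phi = 2.0f * Pi * u2;
    // HG pdf over solid angle: (1 - g^2) / (4 pi (1 + g^2 - 2 g cosTheta)^{3/2})
    float denom = 1.0f + g * g - 2.0f * g * cosTheta;
    float pdf = (1.0f - g * g) / (4.0f * Pi * denom * std::sqrt(denom));
    return { cosTheta, phi, pdf };
}
```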
Files Added/Modified:
src/spotlight.cpp
My spotlight implementation is based on the implementation in the PBRT book. Spotlights are defined by two angles,
falloffStart and totalWidth. Objects inside the inner cone of angles, up to falloffStart, are
fully illuminated by the light. The directions between falloffStart and totalWidth form a transition zone that
ramps down from full illumination to no illumination, so that points outside the totalWidth
cone aren't illuminated at all. However, for the purpose of validation, I changed the calculation of the cutOff
value to match the mitsuba implementation [7]. In the images below, I place two objects in the
original cbox scene to compare my nori spotlight implementation against mitsuba.
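The falloff between the two cones follows the PBRT-style computation sketched below; cosTheta, cosFalloffStart and cosTotalWidth are illustrative names for the cosines of the corresponding angles.

```cpp
// cosTheta: cosine of the angle between the spot axis and the direction to the
// shading point. Note cosFalloffStart > cosTotalWidth since falloffStart < totalWidth.
float spotFalloff(float cosTheta, float cosTotalWidth, float cosFalloffStart) {
    if (cosTheta < cosTotalWidth)   return 0.0f;  // outside the outer cone
    if (cosTheta > cosFalloffStart) return 1.0f;  // inside the inner cone
    // Smooth ramp between the two cones (delta^4, as in PBRT).
    float delta = (cosTheta - cosTotalWidth) / (cosFalloffStart - cosTotalWidth);
    return (delta * delta) * (delta * delta);
}
```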
Files Added/Modified:
src/perlin.cpp
Perlin noise is essentially a seeded random number generator: it takes an integer as a
parameter and returns a random number based on that parameter. I followed the referenced
implementation for my understanding. I used persistence and octaves to control
the noise texture [10]. Noise with a lot of high-frequency content corresponds to a low persistence, and each octave is
a successive noise function added on top, just like octaves in classical music. I also used cosine interpolation for smoothing, which
gives a better texture at a slight loss of speed. Below I show Perlin textures on various shapes, namely a plane and
the camel head, alongside a render of just the Perlin noise, to demonstrate the validation of my approach.
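A condensed, one-dimensional sketch of the octave/persistence idea with cosine interpolation is shown below, following the referenced tutorial-style approach [10]; noise1d() is a toy integer-seeded pseudo-random function, not my exact implementation.

```cpp
#include <cmath>

static float noise1d(int xi) {                    // seeded "random" value in roughly [-1, 1]
    unsigned x = (unsigned)xi;
    x = (x << 13) ^ x;
    unsigned n = x * (x * x * 15731u + 789221u) + 1376312589u;
    return 1.0f - (n & 0x7fffffffu) / 1073741824.0f;
}

static float cosineInterp(float a, float b, float t) {
    float f = (1.0f - std::cos(t * 3.14159265f)) * 0.5f;
    return a * (1.0f - f) + b * f;
}

float perlin1d(float x, int octaves, float persistence) {
    float total = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int o = 0; o < octaves; ++o) {
        float xf = x * frequency;
        int   xi = (int)std::floor(xf);
        float v  = cosineInterp(noise1d(xi), noise1d(xi + 1), xf - xi);
        total     += v * amplitude;
        amplitude *= persistence;   // low persistence => higher octaves contribute less
        frequency *= 2.0f;          // each octave doubles the frequency
    }
    return total;
}
```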
Files Added/Modified:
src/arealight.cpp
include/nori/emitter.h
Textured area emitters were quite interesting to implement: initially I could not see how to make an emitter's radiance depend
on a texture, but in the end it turned out to be quite simple. I added uv coordinate values to EmitterQueryRecord such that, when
evaluating the emitter, I can query the texture value corresponding to the given point and return it as the radiance. I adapted the integrators to
follow this approach, meaning that I store the uv coordinate of the intersection point of a ray with the scene so that it can be looked up
during emitter evaluation. Below, I show a Perlin-textured emitter as well as a checkerboard-textured emitter rendered using nori
and mitsuba; they look almost the same, and the slight offset is due to how the texture is handled in the two frameworks.
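A minimal sketch of the idea is shown below: the emitter's eval() looks up the texture at the uv stored in the query record instead of returning a constant radiance. The types and names here (Color3, TexturedEmitterQuery, evalTexture) are illustrative stand-ins, not the Nori classes.

```cpp
#include <array>
#include <functional>

using Color3 = std::array<float, 3>;

struct TexturedEmitterQuery {
    float u, v;        // uv of the hit point, filled in by the integrator
    float cosTheta;    // cosine between the emitter normal and the direction to the receiver
};

Color3 evalTexturedEmitter(const TexturedEmitterQuery &rec,
                           const std::function<Color3(float, float)> &evalTexture) {
    if (rec.cosTheta <= 0.0f)           // back side of the emitter: no emission
        return {0.0f, 0.0f, 0.0f};
    return evalTexture(rec.u, rec.v);   // texture value used directly as radiance
}
```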
Files Added/Modified:
src/photonmapper_final_gather.cpp
In final gather, the hemisphere above the intersection point is sampled by shooting many rays and computing the radiance at the intersection points of the sampled rays with a diffuse surface. I sampled rays using cosine hemispherical sampling based on the incident ray direction. My implementation is based on Christensen's work [11]. I show a direct comparison with the basic photon mapper to demonstrate how the indirect illumination accumulated using final gather helps reduce blotches.
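A high-level sketch of the final-gather estimate at a diffuse hit point is given below, with the actual ray tracing and photon-map lookup abstracted behind a callback; the names are illustrative, not my exact code.

```cpp
#include <functional>

// traceGatherRay: traces one cosine-weighted hemisphere sample around the normal
// and returns the photon-map radiance estimate at the first diffuse surface it hits.
float finalGather(int numGatherRays,
                  const std::function<float(int)> &traceGatherRay) {
    float indirect = 0.0f;
    for (int i = 0; i < numGatherRays; ++i)
        indirect += traceGatherRay(i);
    // With cosine-weighted sampling, the cos/pi factor of the diffuse BRDF cancels
    // against the pdf, so the estimate is the average gathered radiance scaled by
    // the diffuse albedo (applied by the caller).
    return indirect / float(numGatherRays);
}
```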
Instancing is a method of showing the same object multiple times in the scene without copying the object in memory.
Object instancing was a very tricky feature to implement, and I was unable to generate results for it; however, I'll describe my approach. The
initial idea was to separate the transformations of a shape from the underlying mesh and then create multiple versions, i.e., multiple transformations
of the same shape, in the BVH. However, after deliberation and reading the PBRT textbook, I settled on this approach: create an
Instance class that holds a pointer to another primitive, the one to be instanced. The Instance class conceptually
transforms the primitive but in reality cannot modify it. Instead, the total transformation T is stored and applied during
ray intersection: first, the inverse transformation would be applied to the ray, and then the bounding box of the BVH node would be modified accordingly.
This was my idea for the implementation; however, I was unable to implement it fully because I did not understand how to get access
to the transform matrix of the scene within the bvh class.
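As a toy, self-contained illustration of the idea described above (and explicitly not code from my submission), the sketch below uses a translation-only stand-in for the full transform T: the instance never copies or modifies the prototype, it only maps the ray into the prototype's local frame and maps the hit point back.

```cpp
#include <functional>

struct Vec3 { float x, y, z; };

struct ToyInstance {
    // prototypeIntersect(localOrigin, dir, localHit) -> did the shared geometry get hit?
    std::function<bool(const Vec3&, const Vec3&, Vec3&)> prototypeIntersect;
    Vec3 translation;  // stand-in for the full instance transform T

    bool rayIntersect(const Vec3 &origin, const Vec3 &dir, Vec3 &worldHit) const {
        // Apply the inverse transform to the ray (here: subtract the translation).
        Vec3 localOrigin = { origin.x - translation.x,
                             origin.y - translation.y,
                             origin.z - translation.z };
        Vec3 localHit;
        if (!prototypeIntersect(localOrigin, dir, localHit))
            return false;
        // Map the hit back to world space with the forward transform.
        worldHit = { localHit.x + translation.x,
                     localHit.y + translation.y,
                     localHit.z + translation.z };
        return true;
    }
};
```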
Files Added/Modified:
src/main_euler.cpp
To be able to render on Euler, I essentially changed main.cpp to not use the GUI. I renamed the file to main_euler.cpp
for this submission. For validation, I show a comparison of the images rendered on Euler and on my personal laptop.
I have also used the text output of the renderer to show that the same implementation
took 12.0 seconds on Euler compared to 2.3 minutes on my system.
Files Added/Modified:
lodepng.cpp
lodepng.h
imagetexture.cpp
I included the lodepng library by adding lodepng.cpp; in imagetexture.cpp, for a given pair of \((u,v)\) coordinates, I return the \((r,g,b)\) values present at the corresponding \((x,y)\) location of my texture map as the albedo. To ensure the right albedo value is returned,
I scale the \((u,v)\) coordinates according to the height and width of the texture map. Additionally, I apply inverse gamma
correction so that the original colour of the texture is retained. Below, I show a comparison between the results obtained from
my texture mapping and from Mitsuba.
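A condensed sketch of the uv-to-texel lookup with inverse gamma correction is shown below; it assumes an 8-bit RGBA buffer as decoded by lodepng, and the field and variable names are illustrative.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct TextureMap {
    std::vector<unsigned char> pixels;  // RGBA, as returned by lodepng::decode
    unsigned width = 0, height = 0;

    std::array<float, 3> albedo(float u, float v) const {
        // Clamp uv, scale by the texture resolution and clamp to valid texel indices.
        float uc = std::min(std::max(u, 0.0f), 1.0f);
        float vc = std::min(std::max(v, 0.0f), 1.0f);
        unsigned x = std::min((unsigned)(uc * width),  width  - 1);
        unsigned y = std::min((unsigned)(vc * height), height - 1);
        size_t idx = 4 * (size_t(y) * width + x);
        std::array<float, 3> rgb;
        for (int c = 0; c < 3; ++c) {
            float srgb = pixels[idx + c] / 255.0f;
            rgb[c] = std::pow(srgb, 2.2f);   // inverse gamma: gamma-encoded -> linear
        }
        return rgb;
    }
};
```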
Files Added/Modified:
normaltexture.cpp
bsdf.h
mesh.cpp
sphere.cpp
diffuse.cpp
The normaltexture.cpp code implements the normal-map texture. I add functions in bsdf.h and diffuse.cpp which check if the
texture has a normal map. If present, I linearly transform the color channels of the normal map from the \([0,1]\) range to the
\([-1,1]\) range to obtain a normal vector, by computing \(2 \cdot \text{value} - 1\). This computation is done in the
setHitInformation function of mesh.cpp and sphere.cpp.
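A minimal sketch of decoding a tangent-space normal from a normal-map texel (the \([0,1]\) to \([-1,1]\) remapping described above) is given below; building the shading frame from this vector happens in the setHitInformation code and is omitted here.

```cpp
#include <array>
#include <cmath>

std::array<float, 3> decodeNormal(const std::array<float, 3> &texel /* rgb in [0,1] */) {
    std::array<float, 3> n;
    for (int c = 0; c < 3; ++c)
        n[c] = 2.0f * texel[c] - 1.0f;        // remap each channel to [-1, 1]
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    for (int c = 0; c < 3; ++c)
        n[c] /= len;                          // renormalize after interpolation/quantization
    return n;
}
```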
Below are two comparisons of the effect of normal mapping. I show the texture, the normals mapped onto a plane surface, and
the texture and normal map combined.
Files Added/Modified:
progressivephotonmap.cpp
integrator.h
render.cpp
I add two functions in integrator.h: first, getNumIters, so that I can access the number of iterations provided by the XML file,
and second, iteration, which clears the old photon map, builds a new photon map for every iteration, and also updates
the value of the radius for the new iteration according to the equation \( r_{i+1} = \sqrt{\frac{i+\alpha}{i+1}} r_i \). In the
render.cpp file, I add an outer loop around the sampling loop. This loop gets the number of iterations it needs
to run from getNumIters. Inside this loop, before the sampling loop starts, I call the iteration
function. Since Nori keeps a running average, we do not need to implement that step and can view the final output directly.
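A minimal sketch of the progressive radius update and the outer iteration loop is shown below; the callback name is illustrative, and the real logic lives in iteration() and render.cpp.

```cpp
#include <cmath>
#include <functional>

void progressiveRender(int numIters /* from getNumIters() */, float alpha,
                       float initialRadius,
                       const std::function<void(float /*radius*/)> &rebuildPhotonMapAndSample) {
    float radius = initialRadius;
    for (int i = 0; i < numIters; ++i) {
        // iteration(): discard the old photon map, trace a fresh one, then run
        // the usual sampling loop with the current gather radius.
        rebuildPhotonMapAndSample(radius);
        // Shrink the radius for the next pass: r_{i+1} = sqrt((i + alpha) / (i + 1)) * r_i
        radius *= std::sqrt((float(i) + alpha) / (float(i) + 1.0f));
    }
    // Nori's block renderer already keeps a running average over passes,
    // so no extra accumulation step is needed here.
}
```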
Files Added/Modified:
envmap.cpp
I implemented the precompute functions given in the paper. The sample functions not only return
\((u,v)\) coordinate values but also calculate probability density values. The probability density value is then converted to
one expressed in terms of solid angle on the sphere by introducing a Jacobian term. I map the \((u, v)\) sample to \((\theta, \phi)\) on the unit sphere by
scaling it and then calculate the direction \((x, y, z)\) from it. I also perform bilinear interpolation to improve the output of the
environment map. Since my environment map is attached to a sphere emitter, I do not need to handle anything additionally for path_mis,
as my implementation already accounts for mesh emitters.
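An illustrative sketch of mapping a \((u, v)\) sample to a direction and converting the image-space density to a solid-angle density via the Jacobian \( \frac{1}{2\pi^2 \sin\theta} \) is shown below; the names and axis convention are assumptions, not my exact envmap.cpp code.

```cpp
#include <cmath>

static const float Pi = 3.14159265358979f;

struct EnvSample { float x, y, z; float pdfSolidAngle; };

EnvSample uvToDirection(float u, float v, float pdfUV) {
    float theta = v * Pi;          // v in [0,1) -> theta in [0, pi)
    float phi   = u * 2.0f * Pi;   // u in [0,1) -> phi in [0, 2 pi)
    float sinTheta = std::sin(theta);
    EnvSample s;
    s.x = sinTheta * std::cos(phi);
    s.y = sinTheta * std::sin(phi);
    s.z = std::cos(theta);
    // d(omega) = sin(theta) d(theta) d(phi) and (theta, phi) = (pi v, 2 pi u),
    // so pdf_omega = pdf_uv / (2 pi^2 sin(theta)).
    s.pdfSolidAngle = sinTheta > 0.0f ? pdfUV / (2.0f * Pi * Pi * sinTheta) : 0.0f;
    return s;
}
```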
Files Added/Modified:
disney_BSDF.cpp
warp.h
warp.cpp
warptest.cpp
SquareToGTR1 and SquareToGTR2 and their corresponding PDFs are added to the warp.h and
warp.cpp files. I validate these warps (the results are shown at the end of this section) and make the required changes to
warptest.cpp. The implementation of the Disney BSDF is done in the disney_BSDF.cpp file.
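For reference, standalone sketches of the half-vector sampling warps for the two Disney microfacet lobes (GTR1 for the clearcoat, GTR2/GGX for the specular lobe) are given below, following the Disney BRDF notes; only \( \cos\theta_h \) and the pdf are shown, \( \phi \) is uniform in \([0, 2\pi)\), and these are not the exact warp.cpp signatures.

```cpp
#include <cmath>

static const float Pi = 3.14159265358979f;

// GTR2 (GGX): returns cos(theta_h); the pdf over solid angle is D(theta_h) * cos(theta_h).
float sampleGTR2CosTheta(float alpha, float xi) {
    float a2 = alpha * alpha;
    return std::sqrt((1.0f - xi) / (1.0f + (a2 - 1.0f) * xi));
}
float pdfGTR2(float alpha, float cosTheta) {
    float a2 = alpha * alpha;
    float t  = 1.0f + (a2 - 1.0f) * cosTheta * cosTheta;
    return a2 * cosTheta / (Pi * t * t);
}

// GTR1: used for the clearcoat lobe; note the log-normalized distribution (assumes alpha < 1).
float sampleGTR1CosTheta(float alpha, float xi) {
    float a2 = alpha * alpha;
    return std::sqrt((1.0f - std::pow(a2, 1.0f - xi)) / (1.0f - a2));
}
float pdfGTR1(float alpha, float cosTheta) {
    float a2 = alpha * alpha;
    float D  = (a2 - 1.0f) / (Pi * std::log(a2) * (1.0f + (a2 - 1.0f) * cosTheta * cosTheta));
    return D * cosTheta;
}
```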
(Parameter sweep figures: values 0, 0.2, 0.4, 0.6, 0.8, 1.)
Since the effects of the subsurface and clearcoatGloss parameters are not evident in the comparison above, I have provided
more comparisons below to ensure that these parameters are validated sufficiently.
Files Added/Modified:
render.cpp
denoise.py
I first compute per-pixel variance estimates in render.cpp by using the sample mean variance (a minimal sketch of this estimator is given at the end of this subsection).
Then, I implemented the NL-means pipeline in the Python file denoise.py, which is present along with all the other C++ source code.
In my Python code, I mainly make use of the numpy library for the mathematical calculations. I also use the
opencv library to read the EXR file and the scipy library to perform the convolution operations. I implement the
pseudocode presented in the lecture slides and denoise two different renders of the same scene: one using path_mis at 128 spp and
one using path_mats at 512 spp. For the parameters, I use the values stated in the slides: \(r=10\), \(f=3\) and \(k=0.45\). The scene, the
variance map, and the denoised scene for both renders are shown below.
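As referenced above, here is a minimal sketch of the per-pixel sample-mean-variance estimate exported for denoise.py; it uses Welford-style running statistics, is scalar/per-channel for brevity, and the names are illustrative rather than my exact render.cpp code.

```cpp
struct PixelStats {
    long   n = 0;
    double mean = 0.0, M2 = 0.0;

    void addSample(double x) {
        ++n;
        double delta = x - mean;
        mean += delta / n;
        M2   += delta * (x - mean);
    }
    // Variance of the per-pixel mean (sample variance divided by n), which is
    // the quantity the NL-means weights in denoise.py consume.
    double varianceOfMean() const {
        return n > 1 ? M2 / (double(n) - 1.0) / double(n) : 0.0;
    }
};
```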
The final image was modeled in Blender and rendered on the Euler cluster at 1920x1080 resolution with 4096 samples per pixel; the render took 40 minutes.