Transcript of CSE167_16

    Ray Tracing

    CSE167: Computer Graphics

    Instructor: Steve Rotenberg

    UCSD, Fall 2005

Ray Tracing

Ray tracing is a powerful rendering technique that is the foundation of many modern photoreal rendering algorithms

The original ray tracing technique was proposed in 1980 by Turner Whitted, although there were suggestions about the possibility in scientific papers dating back to 1968

Classic ray tracing shoots virtual view rays into the scene from the camera and traces their paths as they bounce around

With ray tracing, one can achieve a wide variety of complex lighting effects, such as accurate shadows and reflections/refractions from curved surfaces

Achieving these effects with the same precision is difficult if not impossible with a more traditional rendering pipeline

Ray tracing offers a big advance in visual quality, but it comes at the expensive price of notoriously slow rendering times

Ray Intersections

Tracing a single ray requires determining if that ray intersects any one of potentially millions of primitives

This is the basic problem of ray intersection

Many algorithms exist to make this not only feasible, but remarkably efficient

Tracing one ray is a complex problem and requires serious work to make it run at an acceptable speed

Of course, the big problem is the fact that one needs to trace lots of rays to generate a high quality image

Rays

Recall that a ray is a geometric entity with an origin and a direction

A ray in a 3D scene would probably use a 3D vector for the origin and a normalized 3D vector for the direction

class Ray {
    Vector3 Origin;
    Vector3 Direction;
};

Camera Rays

We start by shooting rays from the camera out into the scene

We can render the pixels in any order we choose (even in random order!), but we will keep it simple and go from top to bottom, and left to right

We loop over all of the pixels and generate an initial primary ray (also called a camera ray or eye ray)

The ray origin is simply the camera's position in world space

The direction is computed by first finding the 4 corners of a virtual image in world space, then interpolating to the correct spot, and finally computing a normalized direction from the camera position to the virtual pixel

(Figure: a primary ray from the camera position through a pixel on the virtual image)
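A minimal sketch of this primary ray generation in the style of the Ray class above. The Camera fields (Position, Right, Up, Forward, VerticalFOV, Width, Height), the Normalize helper, and the Vector3 arithmetic operators are assumptions for the example, not part of the slides.

// Build the primary ray for pixel (x, y) by finding the corners of a virtual
// image placed one unit in front of the camera and interpolating between them.
// Assumes <cmath> and a hypothetical Camera struct.
Ray GeneratePrimaryRay(const Camera &cam, int x, int y) {
    float halfH = tanf(0.5f * cam.VerticalFOV);                 // half-height of the virtual image
    float halfW = halfH * float(cam.Width) / float(cam.Height); // half-width from the aspect ratio

    Vector3 center     = cam.Position + cam.Forward;            // center of the virtual image
    Vector3 topLeft    = center - halfW * cam.Right + halfH * cam.Up;
    Vector3 topRight   = center + halfW * cam.Right + halfH * cam.Up;
    Vector3 bottomLeft = center - halfW * cam.Right - halfH * cam.Up;

    // Interpolate from the corners to the center of pixel (x, y)
    float sx = (x + 0.5f) / cam.Width;
    float sy = (y + 0.5f) / cam.Height;
    Vector3 pixel = topLeft + sx * (topRight - topLeft) + sy * (bottomLeft - topLeft);

    Ray r;
    r.Origin    = cam.Position;                     // ray starts at the camera position
    r.Direction = Normalize(pixel - cam.Position);  // toward the virtual pixel
    return r;
}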

Ray Intersection

The initial camera ray is then tested for intersection with the 3D scene, which contains a bunch of triangles and/or other primitives

If the ray doesn't hit anything, then we can color the pixel to some specified background color

Otherwise, we want to know the first thing that the ray hits (it is possible that the ray will hit several surfaces, but we only care about the closest one to the camera)

For the intersection, we need to know the position, normal, color, texture coordinate, material, and any other relevant information we can get about that exact location

If we hit somewhere in the center of a triangle, for example, then this information would get computed by interpolating the vertex data

Ray Intersection

We will assume that the results of a ray intersection test are put into some data structure which conveniently packages it together

class Intersection {
    Vector3 Position;
    Vector3 Normal;
    Vector2 TexCoord;
    Material *Mtl;
    float Distance;    // Distance from ray origin to intersection
};

Lighting

Once we have the key intersection information (position, normal, color, texture coords, etc.) we can apply any lighting model we want

This can include procedural shaders, lighting computations, texture lookups, texture combining, bump mapping, and more

Many of the most interesting forms of lighting involve spawning off additional rays and tracing them recursively

The result of the lighting equation is a color, which is used to color the pixel

Shadow Rays

Shadows are an important lighting effect that can easily be computed with ray tracing

If we wish to compute the illumination with shadows for a point, we shoot an additional ray from the point to every light source

A light is only allowed to contribute to the final color if the ray doesn't hit anything in between the point and the light source

The lighting equation we looked at earlier in the quarter can easily be adapted to handle this, as c_lgt,i will be 0 if the light is blocked

Obviously, we don't need to shoot a shadow ray to a light source if the dot product of the normal with the light direction is negative

Also, we can put a limit on the range of a point light, so they don't have an infinite influence (bending the laws of physics)

$$\mathbf{c} = \mathbf{m}_{amb} \otimes \mathbf{c}_{amb} + \sum_i \mathbf{c}_{lgt,i} \otimes \left( \mathbf{m}_{dif}\,(\mathbf{n} \cdot \mathbf{l}_i) + \mathbf{m}_{spec}\,(\mathbf{n} \cdot \mathbf{h}_i)^s \right)$$

(Figure: shadow rays example)

Shadow Rays

Shadow rays behave slightly differently from primary (and secondary) rays

Normal rays (primary and secondary) need to know the first surface hit and then compute the color reflected off of the surface

Shadow rays, however, simply need to know if something is hit or not

In other words, we don't need to compute any additional shading for the ray and we don't need to find the closest surface hit

This makes them a little faster than normal rays
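A minimal sketch of such an any-hit shadow test, assuming the Ray, Intersection, and Object classes from these slides plus hypothetical Normalize and Length helpers; the small offset along the normal is explained on the next slide.

// Returns true if anything blocks the path from the surface point to the light.
// Unlike a normal ray, we need neither the closest hit nor any shading data.
bool IsShadowed(const Vector3 &point, const Vector3 &normal,
                const Vector3 &lightPos, Object &scene) {
    Ray shadow;
    shadow.Origin    = point + 0.00001f * normal;            // push slightly off the surface
    shadow.Direction = Normalize(lightPos - shadow.Origin);

    Intersection isect;
    if (!scene.IntersectRay(shadow, isect))
        return false;                                        // nothing hit: light is visible
    float distToLight = Length(lightPos - shadow.Origin);
    return isect.Distance < distToLight;                     // blocked only if the hit is before the light
}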

Offsetting Spawned Rays

We say that the shadow rays are spawned off of the surface, or we might say that the primary ray spawned off additional shadow rays

When we spawn new rays from a surface, it is usually a good idea to apply a slight adjustment to the origin of the ray to push it out slightly (0.00001) along the normal of the surface

This fixes problems due to mathematical roundoff that might cause the ray to spawn from a point slightly below the surface, thus causing the spawned ray to appear to hit the same surface

Reflection Rays

Another powerful feature often associated with ray tracing is accurate reflections off of complex surfaces

If we wanted to render a surface as a perfect mirror, instead of computing the lighting through the normal equation, we just create a new reflection ray and trace it into the scene

Remember that primary rays are the initial rays shot from the camera. Any reflected rays (and others, like refracted rays, etc.) are called secondary rays

Reflected rays, like shadow rays, should be moved slightly along the surface normal to prevent the ray from re-intersecting the same surface

Computing Reflection Direction

(Figure: incoming direction d, surface normal n, reflected direction r)

$$\mathbf{r} = \mathbf{d} - 2(\mathbf{d} \cdot \mathbf{n})\mathbf{n}$$
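In code, the reflection direction is a one-liner; this sketch assumes a Dot helper and overloaded Vector3 operators.

// r = d - 2 (d . n) n, where d is the (unit) incoming direction and n the unit normal
Vector3 Reflect(const Vector3 &d, const Vector3 &n) {
    return d - 2.0f * Dot(d, n) * n;
}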

Reflections

If the reflection ray hits a normal material, we just compute the illumination and use that for the final color

If the reflection ray hits another mirror, we just recursively generate a new reflection ray and trace that

In this way, we can render complex mirrored surfaces that include reflections, reflections of reflections, reflections of reflections of reflections...

To prevent the system from getting caught in an infinite loop, it is common to put an upper limit on the depth of the recursion. 10 or lower works for most scenes, except possibly for ones with lots of mirrored surfaces

In any case, most pixels will only require a few bounces, as they are likely to hit a non-mirrored surface sooner or later

(Figure: ray traced reflections example)

Reflections

Surfaces in the real world don't act as perfect mirrors

Real mirrors will absorb a small amount of light and only reflect maybe 95%-98% of the light

Some reflecting surfaces are tinted and will reflect different wavelengths with different strengths

This can be handled by multiplying the reflected color by the mirror color at each bounce

We can also simulate partially reflective materials like polished plastic, which have a diffuse component as well as a shiny specular component

For a material like this, we would apply the normal lighting equation, including shooting shadow rays, to compute the diffuse component, then add a contribution from a reflection ray to get the final color (the diffuse and specular components should be weighted so as not to violate conservation of energy)

Transmission Rays

Ray tracing can also be used to accurately render the light bending in transparent surfaces due to refraction

Often, this is called transmission instead of refraction. Transmission is a more general term that also includes translucency, but I think the real reason this word is preferred is because reflection and refraction look too similar

When a ray hits a transparent surface (like glass, or water), we generate a new refracted ray and trace that, in a similar way as we did for reflection

We will assume that the transmitted ray will obey Snell's law (n₁ sin θ₁ = n₂ sin θ₂), where n₁ and n₂ are the indices of refraction for the two materials

Computing Transmission (Refraction) Direction

(Figure: incoming direction d, normal n, reflected direction r, transmitted direction t, angles θ1 and θ2, indices of refraction n1 and n2)

$$\mathbf{z} = \frac{n_1}{n_2}\left(\mathbf{d} - (\mathbf{d} \cdot \mathbf{n})\mathbf{n}\right)$$

$$\mathbf{t} = \mathbf{z} - \sqrt{1 - |\mathbf{z}|^2}\,\mathbf{n}$$

$$\mathbf{r} = \mathbf{d} - 2(\mathbf{d} \cdot \mathbf{n})\mathbf{n}$$

Total Internal Reflection

$$\mathbf{z} = \frac{n_1}{n_2}\left(\mathbf{d} - (\mathbf{d} \cdot \mathbf{n})\mathbf{n}\right) \qquad \mathbf{t} = \mathbf{z} - \sqrt{1 - |\mathbf{z}|^2}\,\mathbf{n}$$

(Figure: a ray traveling in the denser medium (n1 > n2) hitting the interface at a steep angle)

When light traveling in a material with a high index of refraction hits a material with a low index of refraction at a steep angle, we get a total internal reflection

When this happens, no refraction ray is generated

This effect can be visible when one is scuba diving and looks up at the water surface. One can only see rays refracting to the outside world in a circular area on the water surface above

Total internal reflection can be detected when the magnitude of the z vector is greater than 1, causing the square root operation to become undefined
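A minimal sketch of the transmission computation together with the total internal reflection test described above, assuming a Dot helper, Vector3 operators, and <cmath>; the function name and signature are illustrative.

// Computes the transmitted direction t for unit incoming direction d and unit
// normal n (pointing toward the side the ray arrives from). Returns false on
// total internal reflection, in which case no refraction ray should be spawned.
bool Refract(const Vector3 &d, const Vector3 &n, float n1, float n2, Vector3 &t) {
    Vector3 z = (n1 / n2) * (d - Dot(d, n) * n);   // scaled tangential component
    float zz = Dot(z, z);
    if (zz > 1.0f)
        return false;                              // |z| > 1: total internal reflection
    t = z - sqrtf(1.0f - zz) * n;                  // t = z - sqrt(1 - |z|^2) n
    return true;
}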

Spawning Multiple Rays

When light hits a transparent surface, we not only see refraction, but we get a reflection off of the surface as well

Therefore, we will actually generate two new rays, trace both of them into the scene, and combine the results

The result of an individual traced ray is a color, which is the color of the light that the ray sees

This color is used as the pixel color for primary rays, but for secondary rays, the color is combined somehow into the final pixel color

In a refraction situation, for example, we spawn off two new rays and combine them according to the Fresnel equations, provided in the last lecture

The Fresnel equations describe how the transmitted (refracted) ray will dominate when the incoming ray is normal to the surface, but the reflection will dominate when the incoming ray is edge-on

Refraction

(Figure: a primary ray from the camera hits a transparent surface; at the hit point, a reflection ray and a transmission ray are spawned about the surface normal)

Fresnel Equations

The Fresnel equations can be used to determine the proportion of the light reflected (f_r) and transmitted (f_t) when a ray hits an interface between two dielectrics (like air and water)

They describe separate formulas for the parallel and perpendicularly polarized light, but these are usually averaged into a single pair of values, f_r and f_t

$$r_{par} = \frac{n_2(\mathbf{n} \cdot \mathbf{d}) - n_1(\mathbf{n} \cdot \mathbf{t})}{n_2(\mathbf{n} \cdot \mathbf{d}) + n_1(\mathbf{n} \cdot \mathbf{t})} \qquad r_{perp} = \frac{n_1(\mathbf{n} \cdot \mathbf{d}) - n_2(\mathbf{n} \cdot \mathbf{t})}{n_1(\mathbf{n} \cdot \mathbf{d}) + n_2(\mathbf{n} \cdot \mathbf{t})}$$

$$f_r = \frac{1}{2}\left(r_{par}^2 + r_{perp}^2\right) \qquad f_t = 1.0 - f_r$$
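A sketch of the averaged Fresnel terms above, taking the (positive) cosine factors n·d and n·t as inputs; the function name and signature are illustrative, not from the slides.

// fr is the fraction of light reflected, ft the fraction transmitted (fr + ft = 1)
void Fresnel(float nDotD, float nDotT, float n1, float n2, float &fr, float &ft) {
    float rPar  = (n2 * nDotD - n1 * nDotT) / (n2 * nDotD + n1 * nDotT);
    float rPerp = (n1 * nDotD - n2 * nDotT) / (n1 * nDotD + n2 * nDotT);
    fr = 0.5f * (rPar * rPar + rPerp * rPerp);   // average of the two polarizations
    ft = 1.0f - fr;
}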

Recursive Ray Tracing

The classic ray tracing algorithm includes features like shadows, reflection, refraction, and custom materials

A single primary ray may end up spawning many secondary and shadow rays, depending on the number of lights and the arrangement and type of materials

These rays can be thought of as forming a tree-like structure
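The overall recursive structure might look like the following sketch. The Color type, the Shade and Reflect helpers, MaxDepth, BackgroundColor, and the material queries are hypothetical stand-ins for this illustration, not the course's actual code.

// Trace one ray into the scene and return the color it "sees"
Color TraceRay(Ray &ray, Object &scene, int depth) {
    Intersection isect;
    if (!scene.IntersectRay(ray, isect))
        return BackgroundColor;                     // ray escaped the scene

    // Direct lighting: loop over lights, shoot shadow rays, apply the lighting model
    Color color = Shade(isect, scene);

    // Spawn a secondary ray for mirrored materials, up to a fixed recursion depth
    if (depth < MaxDepth && isect.Mtl->IsMirror()) {
        Ray refl;
        refl.Origin    = isect.Position + 0.00001f * isect.Normal;  // offset the spawned ray
        refl.Direction = Reflect(ray.Direction, isect.Normal);
        color = color + isect.Mtl->MirrorColor() * TraceRay(refl, scene, depth + 1);
    }
    return color;
}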

(Figure: recursive ray tracing example)


    Ray Intersection

Ray-Scene Intersection

One of the key components of a ray tracer is the system that determines what surface the ray hits

A typical 3D scene may have well over 1,000,000 primitives

As usual, triangles tend to be the primitive of choice, but one advantage of a ray tracer is that one can intersect rays with more complex surfaces such as spheres, Bezier patches, displacement mapped surfaces, fractals, and more

Sometimes, complex primitives are simply tessellated into triangles in a pre-rendering phase, and then just ray traced as triangles

Alternately, it is possible to ray trace complex surfaces directly, or to use demand-based schemes that don't tessellate an object until a ray comes nearby

Ray-Object Intersection

We will say that our scene is made up of several individual objects

For our purposes, we will allow the concept of an object to include primitives such as triangles and spheres, or even collections of primitives or other objects

In order to be render-able, an object must provide some sort of ray intersection routine

We will define a C++ base class object as:

class Object {
public:
    virtual bool IntersectRay(Ray &r, Intersection &isect);
};

The idea is that we can derive specific objects, like triangles, spheres, etc., and then write custom ray intersection routines for them

The ray intersect routine takes a ray as input, and returns true if the object is hit and false if it is missed

If the object is hit, the intersection data is filled into the isect class

Ray-Sphere Intersection

Let's see how to test if a ray intersects a sphere

The ray has an origin at point p and a unit length direction u, and the sphere has a center c and a radius r

(Figure: ray origin p, direction u, sphere center c, radius r)

Ray-Sphere Intersection

The ray itself is the set of points p + αu, where α ≥ 0

We start by finding the point q, which is the point on the ray-line closest to the center of the sphere

The line from q to c must be perpendicular to vector u, in other words, (q - c)·u = 0, or (p + αu - c)·u = 0

We can solve for the value of α that satisfies that relationship: α = -(p - c)·u, so q = p - ((p - c)·u)u

(Figure: point q on the ray line closest to the sphere center c)

Ray-Sphere Intersection

Once we have q, we test if it is inside the actual sphere or not, by checking if |q - c| ≤ r

If q is outside the sphere, then the ray must miss the sphere

If q is inside the sphere, then we find the actual point on the sphere surface that the ray intersects

We say that the ray will hit the sphere at two points q1 and q2:

q1 = p + (α - a)u, q2 = p + (α + a)u, where a = sqrt(r² - |q - c|²)

If α - a ≥ 0, then the ray hits the sphere at q1, but if it is less than 0, then the actual intersection point lies behind the origin of the ray

In that case, we check if α + a ≥ 0 to test if q2 is a legitimate intersection

(Figure: the ray enters the sphere at q1 and exits at q2)
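Putting the last two slides together, a ray-sphere routine might look like this sketch, assuming a Sphere object with Center and Radius members and hypothetical Dot, LengthSquared, and Normalize helpers.

bool Sphere::IntersectRay(Ray &ray, Intersection &isect) {
    Vector3 pc = ray.Origin - Center;
    float alpha = -Dot(pc, ray.Direction);           // parameter of q, the closest point on the ray line
    Vector3 q = ray.Origin + alpha * ray.Direction;
    float distSq = LengthSquared(q - Center);
    if (distSq > Radius * Radius)
        return false;                                // q outside the sphere: the ray misses

    float a = sqrtf(Radius * Radius - distSq);
    float t = alpha - a;                             // first candidate intersection (q1)
    if (t < 0.0f) t = alpha + a;                     // q1 behind the origin: try q2
    if (t < 0.0f) return false;                      // both intersections behind the ray

    isect.Position = ray.Origin + t * ray.Direction;
    isect.Normal   = Normalize(isect.Position - Center);
    isect.Distance = t;
    return true;
}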

Ray-Sphere Intersection

There are several ways to formulate the ray-sphere intersection test

This particular method is the one provided in the book

As a rule, one tries to postpone expensive operations, such as division and square roots, until late in the algorithm when it is likely that there will be an intersection

Ideally, quick tests can be performed at the beginning that reject a lot of cases where the ray is far away from the object being tested

Ray-Plane Intersection

A plane is defined by a normal vector n and a distance d, which is the distance of the plane to the origin

We test our ray with the plane by finding the point q which is where the ray line intersects the plane

For q to lie on the plane it must satisfy d = q·n = p·n + αu·n

We solve for α:

α = (d - p·n) / (u·n)

However, we must first check that the denominator is not 0, which would indicate that the ray is parallel to the plane

If α ≥ 0 then the ray intersects the plane; otherwise, the plane lies behind the ray, in the wrong direction
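A corresponding sketch for the plane test, assuming a Plane object storing its unit Normal and its distance D to the origin; the names are illustrative.

bool Plane::IntersectRay(Ray &ray, Intersection &isect) {
    float denom = Dot(ray.Direction, Normal);
    if (denom == 0.0f)
        return false;                               // ray is parallel to the plane
    float alpha = (D - Dot(ray.Origin, Normal)) / denom;
    if (alpha < 0.0f)
        return false;                               // plane lies behind the ray
    isect.Position = ray.Origin + alpha * ray.Direction;
    isect.Normal   = Normal;
    isect.Distance = alpha;
    return true;
}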

Ray-Triangle Intersection

To intersect a ray with a triangle, we must first check if the ray intersects the plane of the triangle

If we are treating our triangle as one-sided, then we can also verify that the origin of the ray is on the outside of the triangle

Once we know that the ray hits the plane at point q, we must verify that q lies inside the 3 edges of the triangle

Ray-Triangle

Does segment ab intersect triangle v0 v1 v2?

(Figure: a ray from p with direction u hitting the plane of triangle v0 v1 v2 at point q)

Barycentric Coordinates

Reduce to 2D: remove the smallest dimension

Compute barycentric coordinates (where × is the 2D scalar cross product):

q' = q - v0
e1 = v1 - v0
e2 = v2 - v0

α = (q' × e2) / (e1 × e2)
β = (e1 × q') / (e1 × e2)

Reject if α < 0, β < 0, or α + β > 1
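A sketch of this 2D inside-test, assuming the points have already been projected by dropping the smallest dimension, and that Cross2 is a hypothetical scalar 2D cross product (a.x*b.y - a.y*b.x).

bool InsideTriangle2D(const Vector2 &q, const Vector2 &v0,
                      const Vector2 &v1, const Vector2 &v2) {
    Vector2 qp = q - v0;                         // q' = q - v0
    Vector2 e1 = v1 - v0;
    Vector2 e2 = v2 - v0;
    float denom = Cross2(e1, e2);
    float alpha = Cross2(qp, e2) / denom;        // coefficient of e1
    float beta  = Cross2(e1, qp) / denom;        // coefficient of e2
    return alpha >= 0.0f && beta >= 0.0f && alpha + beta <= 1.0f;
}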

Acceleration Structures

Complex scenes can contain millions of primitives, and ray tracers need to trace millions of rays

This means zillions of potential ray-object intersections

If every ray simply looped through every object and tested if it intersected, we would spend forever just doing loops, not even counting all of the time doing the intersection testing

Therefore, it is absolutely essential to employ some sort of acceleration structure to speed up the ray intersection testing

An acceleration structure is some sort of data structure that groups objects together into some arrangement that enables the ray intersection to be sped up by limiting which objects are tested

There are a variety of different acceleration structures in use, but most of the successful ones tend to be based on some variation of hierarchical subdivision of the space around the group of objects

Bounding Volume Hierarchies

The basic concept of a bounding volume hierarchy is to enclose a complex object in a hierarchy of simpler ones

This works much like the hierarchical culling we looked at in the scene graph lecture

For example, if one were using spheres as the bounding volume, we could enclose the entire scene in one big sphere

Within that sphere are several other spheres, each containing more spheres, until we finally get to the bottom level where spheres contain actual geometry like triangles

To test a ray against the scene, we traverse the hierarchy from the top level

When a sphere is hit, we test the spheres it contains, and ultimately the triangles/primitives within

In general, a bounding volume hierarchy can reduce the ray intersection time from O(n) to O(log n), where n is the number of primitives in the scene

This reduction from linear to logarithmic performance makes a huge difference and makes it possible to construct scenes with millions of primitives
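Traversal of such a hierarchy might look like the following sketch, where BVHNode, Children, and HitsBound are illustrative names for a node type, its child objects/volumes, and a cheap ray-versus-bounding-volume test.

bool BVHNode::IntersectRay(Ray &ray, Intersection &isect) {
    if (!HitsBound(ray))
        return false;                            // cheap reject: ray misses this volume
    bool hit = false;
    Intersection best, tmp;
    best.Distance = FLT_MAX;                     // assumes <cfloat>
    for (Object *child : Children) {             // children are volumes or primitives
        if (child->IntersectRay(ray, tmp) && tmp.Distance < best.Distance) {
            best = tmp;                          // keep the closest hit found so far
            hit = true;
        }
    }
    if (hit) isect = best;
    return hit;
}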

Sphere Hierarchies

The sphere hierarchy makes for a good example of the concept, but in practice, sphere hierarchies are not often used for ray tracing

One reason is that it is not clear how to automatically group an arbitrary set of triangles into some number of spheres, so various heuristic options exist

Also, as the spheres are likely to overlap a lot, they end up triggering a lot of redundant intersection tests

Octrees

The octree starts by placing a cube around the entire scene

If the cube contains more than some specified number of primitives (say, 10), then it is split equally into 8 cubes, which are then recursively tested and possibly re-split

The octree is a more regular structure than the sphere tree and provides a clear rule for subdivision and no overlap between cells

This makes it a better choice usually, but still not ideal

(Figure: octree subdivision example)

KD Trees

The KD tree starts by placing a box (not necessarily a cube) around the entire scene

If the box contains too many primitives, it is split, as with the octree

However, the KD tree only splits the box into two boxes, which need not be equal

The split can take place on the x, y, or z plane at some arbitrary point within the box

This makes the KD tree a little bit more adaptable to irregular geometry and able to achieve a tighter fit

In general, KD trees tend to be pretty good for ray tracing

Their main drawback is that the tree depth can get rather deep, causing the ray intersection to spend a lot of time traversing the tree itself, rather than testing intersections with primitives

(Figure: KD tree subdivision example)

BSP Trees

The BSP tree (binary space partitioning) is much like the KD tree in that it continually splits space into two (not necessarily equal) halves

Unlike the KD tree, which is limited to xyz axis splitting, the BSP tree allows the splitting plane to be placed anywhere in the volume and aligned in any direction

This makes it a much more difficult problem to choose the location of the splitting plane, and so many heuristics exist

In practice, BSP trees tend to perform well for ray tracing, much like KD trees

(Figure: BSP tree subdivision example)

Uniform Grids

One can also subdivide space into a uniform grid, instead of hierarchically

This is fast for certain situations, but gets too expensive in terms of memory for large complex scenes

It also tends to lose its performance advantages in situations where primitives have a large variance in size and location (which is common)

As a result, they are not really a practical general purpose acceleration structure for ray tracing

(Figure: uniform grid subdivision example)

Hierarchical Grids

One can also make a hierarchical grid

Start with a uniform grid, but subdivide any cell that contains too many primitives into a smaller grid

An octree is an example of a hierarchical grid limited to 2x2x2 subdivision

A more general hierarchical grid could support subdivision into any number of cells

Hierarchical grids tend to perform very well in ray tracing, especially for highly detailed geometry of relatively uniform size (such as the triangles in a tessellated surface)

Acceleration Structures

All of the acceleration structures we looked at store some geometry and provide a function for intersecting a ray

In other words, they are really just a more complex type of primitive themselves

We can derive acceleration structures off of our base Object class, just like we did for Spheres and Triangles

Also, acceleration structures can be designed so that they store a bunch of generic Objects themselves, and so one could build an acceleration structure that contains a bunch of triangles, and then place that acceleration structure within a larger acceleration structure, etc.

This provides a nice, consistent way to represent scenes, similar to the scene graph concept we covered in the lecture on realtime scene management

class KDTree : public Object {
public:
    bool IntersectRay(Ray &r, Intersection &isect);
};


    Distribution Ray Tracing

Distribution Ray Tracing

In 1984, an important modification to the basic ray tracing algorithm was proposed, known as distributed ray tracing

The concept basically involved shooting several distributed rays to achieve what had previously been done with a single ray

The goal is not to simply make the rendering slower, but to achieve a variety of soft lighting effects such as antialiasing, camera focus, soft-edged shadows, blurry reflections, color separation, motion blur, and more

As the term distributed tends to refer to parallel processing these days, the distributed ray tracing technique is now called distribution ray tracing, and the term distributed is reserved for parallel ray tracing, which is also an important subject

Soft Shadows

One nice visual effect we can achieve with distribution ray tracing is soft shadows

Instead of treating a light source as a point and shooting a single ray to test for shadows, we can treat the light source as a sphere and shoot several rays to test for partial blocking of the light source

If 15% of the shadow rays are blocked, then we get 85% of the incident light from the light source

In lighting terminology, the completely shadowed region is called the umbra and the partially shadowed region is called the penumbra

Area Lights

The soft shadow technique enables us to define lights in a much more complex way than we have previously

We can now use any geometry to define a light, including triangles, patches, spheres, etc.

To determine the incident light, we shoot several rays towards the light source, distributed across the surface and weighted according to the surface area of the sample and the direction of the average normal

Larger light sources create softer, diffuse shadows, while smaller light sources cause sharp, harsh shadows

Larger light sources also require more rays to adequately sample the shadows, making area lights a lot more expensive than point lights. Inadequate sampling of the light source can cause noise patterns to appear in the penumbra region, known as shadow aliasing
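A sketch of this sampling, where AreaLight, SamplePointOnLight, NumShadowSamples, Normalize, and Length are hypothetical names; the returned visibility fraction scales the light's contribution (e.g. 0.85 when 15% of the shadow rays are blocked).

float LightVisibility(const Intersection &isect, AreaLight &light, Object &scene) {
    int unblocked = 0;
    for (int i = 0; i < NumShadowSamples; i++) {
        Vector3 target = light.SamplePointOnLight();        // random point on the light's surface
        Ray shadow;
        shadow.Origin    = isect.Position + 0.00001f * isect.Normal;
        shadow.Direction = Normalize(target - shadow.Origin);
        Intersection block;
        float distToLight = Length(target - shadow.Origin);
        if (!scene.IntersectRay(shadow, block) || block.Distance > distToLight)
            unblocked++;                                    // this sample can see the light
    }
    return float(unblocked) / float(NumShadowSamples);
}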

Blurry Reflections

We can render blurry or glossy reflections by creating several reflection rays instead of just one

The rays can be distributed around the ideal reflection direction

Blurrier surfaces will cause a wider distribution (and require more rays), while more polished surfaces will have a narrower distribution

The same concept can apply to refraction in order to achieve rendering of unpolished glass


Depth of Field

The blurring caused by a camera lens being out of focus is due to the lens's limited depth of field

In computer graphics, the term depth of field usually refers to the general process of rendering images that include a camera blurring effect

A lens will typically be set to focus on objects at some distance away, known as the focal distance

Objects closer or farther than the focal distance will be blurry, and the blurriness increases with the distance to the focal plane

Depth of field can be rendered with distribution ray tracing by distributing the primary rays shot from the camera

Rays are distributed across a virtual aperture, which represents the (usually circular) opening of the lens

The larger the aperture, the more pronounced the blurring effect will be. A pinhole camera has an aperture size of 0, and therefore will not have any blurring due to depth of field
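A sketch of aperture sampling for depth of field, built on top of the pinhole primary ray from earlier; Aperture, FocalDistance, Right, Up, and SampleDisk are illustrative names, not from the slides.

Ray GenerateDOFRay(const Camera &cam, const Ray &pinholeRay) {
    // The point that stays sharp: where the pinhole ray crosses the focal plane
    Vector3 focusPoint = pinholeRay.Origin + cam.FocalDistance * pinholeRay.Direction;

    // Jitter the origin across a disk of radius Aperture in the lens plane
    Vector2 s = SampleDisk(cam.Aperture);                   // random 2D point on the disk
    Vector3 origin = cam.Position + s.x * cam.Right + s.y * cam.Up;

    Ray r;
    r.Origin    = origin;
    r.Direction = Normalize(focusPoint - origin);           // larger aperture -> more blur off-focus
    return r;
}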


Distribution Ray Tracing

Ray tracing had a big impact on computer graphics in 1980 with the first images of accurate reflections and refractions from curved surfaces

Distribution ray tracing had an even bigger impact in 1984, as it re-affirmed the power of the basic ray tracing technique and added a whole bunch of sophisticated effects, all within a consistent framework

Previously, techniques such as depth of field, motion blur, soft shadows, etc., had only been achieved individually and by using a variety of complex, hacky algorithms

Distribution Ray Tracing

If ray tracing is slow, then distribution ray tracing must be considerably slower

Now, instead of one or two splits per level in our recursion, we have to shoot dozens or even hundreds of rays to achieve some of these effects

This can cause an exponential expansion in the number of rays

The good news is that we can combine these features so that we still only need to shoot a small number of primary rays per pixel

For example, we can shoot 16 rays in a 4x4 antialiasing pattern, where each ray has a random distribution in time and in the camera aperture

Each of these rays only needs to spawn a few reflection or shadow rays, as the results will be blended with 15 other samples

Still, we end up with lots and lots of rays and the potential for exponential problems in scenes with a lot of soft or blurry features

This problem is at least partially addressed with path tracing, which is one of the techniques for global illumination that we will see in the next lecture