author     KT <tran0563@umn.edu>  2021-09-06 19:07:33 -0500
committer  KT <tran0563@umn.edu>  2021-09-06 19:07:33 -0500
commit     cccd3186305915d92b1751dc616979d64116a4aa (patch)
tree       5dd4834daef547cd45fc0b643f44a10b581de0ad /worksheets
parent     Added missing images for the A6 worksheet (diff)
Upload a1
Diffstat (limited to 'worksheets')
-rw-r--r--  worksheets/a1_textrain.md     16
-rw-r--r--  worksheets/a2_carsoccer.md   156
-rw-r--r--  worksheets/a3_earthquake.md   93
-rw-r--r--  worksheets/a4_dance.md       106
-rw-r--r--  worksheets/a5_artrender.md    65
-rw-r--r--  worksheets/a6_harold.md       94
6 files changed, 14 insertions, 516 deletions
diff --git a/worksheets/a1_textrain.md b/worksheets/a1_textrain.md
index b588d1c..6ce1142 100644
--- a/worksheets/a1_textrain.md
+++ b/worksheets/a1_textrain.md
@@ -24,6 +24,10 @@ documentation](https://processing.org/reference/color_.html) and/or the
[tutorial explaining color in
Processing](https://processing.org/tutorials/color/).
+Here are a couple of questions to get you thinking about how to work with
+pixel arrays and colors in this format. Note: These are very brief questions
+in this first worksheet, so this may not take you long at all. That's ok!
+
## Q1: Indexing
@@ -41,6 +45,12 @@ information from `inputImg` to help you.
```
PImage inputImg = loadImage("test.jpg");
+// Your code should work for any valid values of row and column; we've
+// randomly picked the values (2, 2) here as an example.
+int row = 2;
+int column = 2;
+
+// write your answer in terms of the row and column defined above
int index1D = /* --- Fill this in --- */;
```
@@ -50,8 +60,10 @@ int index1D = /* --- Fill this in --- */;
The image processing technique known as *thresholding* will be useful while
creating your Text Rain. During the thresholding operation, if a pixel's
grayscale value is less than `threshold`, then it becomes black. If the
-value is greater than `threshold`, it becomes white. You can use the green
-channel of the color as the grayscale value.
+value is greater than or equal to `threshold`, it becomes white. In the example below,
+assume the image has already been converted to grayscale. This means the
+red, green, and blue channels are all equal. So, you can get the grayscale
+value by accessing any one of the red, green, or blue channels.
In the code block below, write a Java code snippet for thresholding one pixel
(`inputPixel`) to black or white.
diff --git a/worksheets/a2_carsoccer.md b/worksheets/a2_carsoccer.md
deleted file mode 100644
index 95170c4..0000000
--- a/worksheets/a2_carsoccer.md
+++ /dev/null
@@ -1,156 +0,0 @@
-# Assignment 2 (Car Soccer) Worksheet
-
-## Definitions
-
-Use the following C++ style pseudocode definitions for Q1 and Q2:
-
-```
-/* Use this Point3 class to store x,y,z values that define a mathematical
- * point (i.e., a position) in 3-space.
- */
-class Point3 {
- float x;
- float y;
- float z;
-};
-
-/* Use this Vector3 class to store x,y,z values that define a vector in
- * 3-space. Remember, mathematically, a vector is quite different than
- * a point. It has a direction and a magnitude but no position!
- * For vectors it is often useful to be able to compute the length,
- * also known as the magnitude, of the vector.
- */
-class Vector3 {
- float x;
- float y;
- float z;
-
- // returns the length (i.e., magnitude) of the vector
- float Length() {
- return sqrt(x*x + y*y + z*z);
- }
-};
-
-
-/* In C++ and other languages we can define operators so we can use
- * the +, -, =, *, / operations on custom classes. Like many graphics
- * libraries, this is what MinGfx does to make it easy to work with
- * points and vectors in code. For example, recall from class that
- * if we have a point A (Coffman Union) and we add a vector (direction
- * and magnitude) to this, we arrive at a new point B (e.g., Murphy Hall).
- * Conceptually, a point + a vector = a new point. Mathematically, it
- * does not make sense to add two points, but it does make sense to
- * subtract two points. The "difference" between the Murphy and Coffman
- * points is a vector that tells us the direction and magnitude we would
- * need to walk from Coffman to get to Murphy. Here's how we can write
- * that in code using Point3, Vector3, and operators like + and -.
- *
- * Point3 murphy = Point3(5, 8, 0);
- * Point3 coffman = Point3(4, 6, 0);
- * Vector3 toMurphy = murphy - coffman;
- *
- * // or, if we were given coffman and toMurphy we could find
- * // the point "murphy" by starting at point "coffman" and adding
- * // the vector "toMurphy".
- * Point3 murphy2 = coffman + toMurphy;
- *
- * The code that defines these operators looks something like this:
-*/
-
-// a point + a vector = a new point
-Point3 operator+(Point3 p, Vector3 v) {
- return Point3(p.x + v.x, p.y + v.y, p.z + v.z);
-}
-
-// a point - a point = a vector
-// the direction and magnitude needed to go from point B to point A
-Vector3 operator-(Point3 A, Point3 B) {
- return Vector3(A.x - B.x, A.y - B.y, A.z - B.z);
-}
-
-// a vector * a scalar = a new vector with scaled magnitude
-Vector3 operator*(Vector3 v, float s) {
- return Vector3(v.x * s, v.y * s, v.z * s);
-}
-
-
-
-/* Given all these tools, we can define additional classes for geometries
- * that are useful in graphics. For example, we can represent a sphere
- * using a Point3 for the position of the center point of the sphere and
- * a float for the sphere's radius.
- */
-class Sphere {
- Point3 position;
- float radius;
-};
-```
-
-## Q1: Eulerian Integration
-
-In computer graphics and animation, there are many forms of integration that
-are used. For simple physics models like we have in Car Soccer, Eulerian
-Integration is good enough. Eulerian Integration uses velocity and position
-information from the current frame, and the elapsed time to produce a position
-for the next frame. Write pseudocode for determining the position of the sphere in the
-next frame:
-
-*Hint: think back to the motion equations from introductory physics. Or, look
-around in the assignment handout.*
-
-```
-Vector3 velocity = Vector3(1.0, 1.0, 1.0);
-float dt = 20; // milliseconds
-
-Sphere s = Sphere {
- position: Point3(0.0, 0.0, 0.0),
- radius: 5.0,
-};
-
-s.position = /* --- Fill in the next frame position computation here --- */
-```
-
-
-
-## Q2: Sphere Intersection
-
-In this assignment, you will need to test intersections between spheres and
-other objects. Using the information contained within each sphere class,
-write pseudocode to determine whether or not two spheres are intersecting
-(which you can use for car/ball intersections):
-
-```
-bool sphereIntersection(Sphere s1, Sphere s2) {
- /* --- Fill in your sphere intersection code here --- */
-
-
-}
-```
-
-To check that your intersections work, try working through the math by hand for the
-following two cases. You can write out the math on a scrap piece of paper. You do
-not need to include that detail in this worksheet. But, do change the lines below where
-it says "Fill in expected output" to indicate whether True or False would be returned:
-
-```
-Sphere s1 = Sphere {
- position: Point3(0.0, 1.0, 0.0),
- radius: 1.0,
-};
-
-Sphere s2 = Sphere {
- position: Point3(3.0, 0.0, 0.0),
- radius: 1.0,
-};
-
-Sphere s3 = Sphere {
- position: Point3(1.0, 1.0, 0.0),
- radius: 2.0,
-};
-
-print(sphereIntersection(s1, s2));
-/* --- Fill in expected output (True or False) --- */
-
-print(sphereIntersection(s1, s3));
-/* --- Fill in expected output (True or False) --- */
-```
diff --git a/worksheets/a3_earthquake.md b/worksheets/a3_earthquake.md
deleted file mode 100644
index e38902f..0000000
--- a/worksheets/a3_earthquake.md
+++ /dev/null
@@ -1,93 +0,0 @@
-# Assignment 3 (Earthquake) Worksheet
-
-## Q1: Useful math
-
-In this assignment, you will be dealing with earthquake data from the United
-States Geological Survey. As with any real-world dataset, the data have
-real-world units, which may not always match up with units we want to use for
-the data visualization. As such, there are a few handy practices commonly used
-when constructing visualizations from real-world data; one of the most common
-is normalization.
-
-Normalization is the process of converting a number inside of an
-arbitrary range to a floating point number between 0.0
-and 1.0, inclusive. For example, if we have a value `v = 1.5` in the list of
-numbers `{0.0, 1.5, 2.0, 1.3}`, the normalized value would be `v_normalized =
-0.75`.
-
-These two functions, part of the C++ standard library, will be useful for your
-first question:
-
-```
-/*
- * - `std::min_element()` - returns an iterator to the minimum element of a range
- * - `std::max_element()` - returns an iterator to the maximum element of a range
- *
- * Example usage (note the dereference `*` to get the value itself):
- * std::vector<float> quakes = {0.0, 1.5, 2.0, 1.3};
- * float min_magnitude = *std::min_element(quakes.begin(), quakes.end());
- */
-```
-
-Using the min_element() and max_element() functions, write a routine to normalize
-the values in an arbitrary vector (list) and return a new vector:
-
-```
-std::vector<float> normalize_list(std::vector<float> quakeList) {
- /* --- Fill in your algorithm here --- */
-}
-```
-
-Now, to check that your algorithm works, let's just work out a quick example
-by hand. What would the following code print out if you were to run it?
-Note, if your math is correct, all of the values printed should be between 0.0
-and 1.0 inclusive.
-
-```
-std::vector<float> quakes = {0.0, 2.3, 5.1, 1.1, 7.6, 1.7};
-std::vector<float> normalized_quakes = normalize_list(quakes);
-
-for (int i = 0; i < normalized_quakes.size(); i++) {
- std::cout << normalized_quakes[i] << " ";
-}
-std::cout << std::endl;
-```
-Output:
-```
-/* --- Fill in the expected output here (e.g. "0.0, 0.5, 0.5, 1.0, 0.5, 0.12, 0.6") --- */
-```
-
-## Q2: Constructing a mesh
-
-For the first two assignments, we were able to use the QuickShapes class to draw pretty much everything we had to draw because everything could be constructed from 3D graphics primitives like cubes, spheres, cones, etc. This assignment will be the first one where we create a custom 3D shape made of triangles and apply a custom texture to it.
-
-To create a triangle mesh from scratch, you will need to understand how a **Vertex Array** and an **Index Array** are used together to define each triangle that belongs to the mesh. We discuss this in detail in lecture, but here is a brief recap:
-
-The vertex array will hold the actual 3D coordinates for each vertex of the mesh. Each vertex is a point, so these should be represented as Point3s, and there should be one element in the array for each vertex in the mesh. Then, the index array tells us how to connect these vertices together to form triangles.
-
-The index array refers to vertices stored in the vertex array by their index in that array. So, if the vertex array is of length n, valid indices would be 0, 1, 2, ... n-1. Since these are all positive integers, the index array is usually stored as an array of unsigned ints. Even though each entry is a single unsigned int, we want to think of the entries as being grouped into sets of 3. (Since it takes 3 vertices to define a single triangle, we should always add indices in groups of 3, and the length of the index array should always be 3 times the number of triangles in the mesh). One final tip to remember is to use counter-clockwise vertex ordering to tell the graphics engine which side of the triangle is the "front" and which side is the "back". Remember, only the front face will show up; the back will be invisible!
-
-Let's practice all these concepts with a simple example of a square. Create your own copy of the image below (using a piece of paper that you photograph or a drawing program) and label each vertex with an index number
-(starting at 0).
-
-**Replace this image with your drawing:**
-
-![](./img/square.png)
-
-Now, write out the square's vertex array, using the familiar `Point3` class
-(since it's in the *xy*-plane, assume z = 0 for all points):
-
-```
-std::vector<Point3> squareVertexArray = {
- /* --- Fill in your `Point3`s here */
-};
-```
-
-Finally, write out the square's index array based on the indices you defined in the picture above. Make sure your indices are defined in counter-clockwise order so that a 3D camera looking down on your square from some +Z height above will see the front faces of your triangles.
-
-```
-std::vector<int> squareIndexArray = {
- /* --- Fill in your first triangle indices --- */
- /* --- Fill in your second triangle indices --- */
-};
-```
diff --git a/worksheets/a4_dance.md b/worksheets/a4_dance.md
deleted file mode 100644
index e4a1b36..0000000
--- a/worksheets/a4_dance.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# Assignment 4 (Dance) Worksheet
-
-## Q1: Transformations with Matrices
-
-As you know from class, transformations are the foundation for so much of computer graphics. This assignment deals with the important topic of composing transformations, so let's do some practice with that. We'll use 3D transformation matrices and graphics code, but apply these to a simple house shape that lies in the XY-plane to make our drawings easier. So, treat each vertex of the house like it is a 3D point, but where z=0.
-
-Here's what the house looks like when viewed from some +Z height above and looking down at the origin.
-![2D house diagram at the origin](./img/house.png)
-
-
-### Q1.1 Basic translation
-Here's a simple translation matrix. Draw a picture of the house to show what it would look like if transformed by this matrix.
-```
-Matrix4 trans = Matrix4::Translation(Vector3(0.0, 0.5, 0.0));
-```
-![House transformed by trans](file path to your image)
-
-### Q1.2 Basic scaling
-Here's a simple scaling matrix. Draw a picture to show what the original house would look like if transformed by this matrix.
-```
-Matrix4 scale = Matrix4::Scale(Vector3(2.0, 1.0, 1.0));
-```
-![House transformed by scale](file path to your image)
-
-### Q1.3 Basic rotation
-Here's a simple rotation matrix. Draw a picture to show what the original house would look like if transformed by this matrix.
-```
-Matrix4 rot = Matrix4::RotateZ(GfxMath::toRadians(45.0));
-```
-![House transformed by rot](file path to your image)
-
-
-### Q1.4 Composition 1
-Now, let's take a look at different compositions of the basic matrices above. Draw a picture to show what the original house would look like if transformed by the following matrix.
-```
-Matrix4 combo1 = trans * scale * rot;
-```
-![House transformed by combo1](file path to your image)
-
-
-### Q1.5 Composition 2
-Let's try another. Draw a picture to show what the original house would look like if transformed by the following matrix.
-```
-Matrix4 combo2 = rot * scale * trans;
-```
-![House transformed by combo2](file path to your image)
-
-
---------------------------------------------------------------------------------
-
-
-## Q2: Hierarchical Transformations
-
-Now, similar to the animated characters in your assignment and the waving robot we programmed (or will be programming soon) in class, imagine that we wanted to represent the house as a hierarchy or *scene graph*, where each part is a separate geometric object that can be transformed relative to its parent. This makes it easy to animate pieces of the house, perhaps making the door open and close, or to create many instances of the house and position them all at different locations within the scene. There are many advantages to organizing graphics using a hierarchy of transforms.
-
-All 3D graphics programs have a base coordinate system with an origin (0,0,0) and x,y,z axes. By convention, we call this base coordinate system "World Space". The (0,0,0) we have been referring to as "the origin" is more precisely called the "World Space Origin". When we work with hierarchies of transformations it is useful to make this explicit because we will want to refer to other coordinate systems as well. For example, I would like to position my house relative to the world space origin, but I would like to position the roof and siding of the house relative to the "house space origin", and I would like to position the door relative to the "siding space origin". It will be the same with animated characters: I want to be able to define where the eyes go relative to the "head space origin", not relative to the pelvis or world space -- that would be much harder!
-
-The diagram below illustrates this concept with the simple case of a house. It's almost easier to describe when starting at the leaf nodes of the scenegraph, like the door in this case. We want to position the door relative to the siding, so the door is defined in "siding space", i.e., the translation matrix applied to get the door in the right place is written as if (0,0,0) is at the center of the siding. The siding, meanwhile, is positioned relative to the origin of the house, which is at the bottom center of the house. The roof is also positioned relative to this "house space origin". So, the house has 2 children (roof, siding), the roof has no children, and the siding has one child (the door). The entire house is positioned relative to world space; here, just with a simple translation that can be thought of as moving the origin of the house coordinate system to some new position within the world.
-
-![Hierarchical representation of house](./img/house_hierarchical.png)
-
-It can be useful in these situations to define transformation matrices for moving from one coordinate space to another. For example, given some point defined in the door's coordinate system, such as one of the vertices of the door, we could transform it into the siding's coordinate system like this:
-
-```
-// Transforms points in the door's coordinate system to the siding's coordinate system.
-Matrix4 doorToSiding = Matrix4::Translation(Vector3(0.5, -0.2, 0.0));
-
-// Imagine this is a vertex on the door
-Point3 ptInDoorSpace = (....);
-Point3 theSamePtExpressedInSidingSpace = doorToSiding * ptInDoorSpace;
-```
-
-Similarly, these matrices can convert between the other spaces we've talked about.
-```
-// Transforms points in the siding's coordinate system to the house's coordinate system.
-Matrix4 sidingToHouse = Matrix4::Translation(Vector3(0.0, 0.5, 0.0));
-
-// Transforms points in the house's coordinate system to the world's coordinate system.
-Matrix4 houseToWorld = Matrix4::Translation(Vector3(-1.0, 0.0, 0.0));
-```
-
-### Q2.1 How would you compose the matrices above to create a single matrix that will transform a point in "door space" all the way into "world space"?
-
-
-Given the matrices above and the scene graph defined in the image, first
-show the combined transformation from Door-Space into World-Space as a matrix
-multiplication, then show how to transform the point `pInDoorSpace` into
-World-Space. Lastly, show the numeric representation of `pInWorldSpace`.
-
-```
-// The magenta point `p` from the diagram, in Door-Space
-Point3 pInDoorSpace = Point3(0.2, 0.4, 0.0);
-
-// Combined transformation from Door-Space -> World-Space
-Matrix4 doorSpaceToWorldSpace = /* --- Fill this in --- */
-
-// The point `p` in world space
-Point3 pInWorldSpace = /* --- Fill this in --- */
-```
-
-### Q2.2 Let's double-check your work now by calculating the actual "world space" coordinates for p. Show what the following code would output:
-
-```
-std::cout << "p in World-Space: " << pInWorldSpace << std::endl;
-/* --- Fill in output for std::cout here --- */
-```
diff --git a/worksheets/a5_artrender.md b/worksheets/a5_artrender.md
deleted file mode 100644
index 518ba4c..0000000
--- a/worksheets/a5_artrender.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Assignment 5 (Art Render) Worksheet
-
-## Q1: Phong Shading
-
-Given the following lighting equation:
-
-*diffuse* = *n* &middot; *l*
-
-*specular* = (*n* &middot; *h*) ^ *s*
-
-*color* = ka\*Ia + kd\*Id\**diffuse* + ks\*Is\**specular*
-
-Draw a picture that includes a point on a surface, a light, and labeled arrows
-for each vector that shows up in the equation. Hint: make sure that your
-vectors point in the right direction to make sense for the equation as written
-(e.g., make sure you draw *l* pointing in the correct direction for *n*
-&middot; *l* to be calculated correctly)!
-
-Replace this image with your diagram:
-
-![](./img/vectors.png)
-
-
-## Q2: Silhouette Outline
-
-This week in class we'll be talking in more detail about the key matrices used
-in vertex and fragment shaders. For example, we'll learn that the
-`normal_matrix` must be used rather than the `model_view_matrix` to transform
-normals to eye (a.k.a. camera) space. You'll use this in all of the shaders
-you write. The outline shader includes the most interesting use of normals
-though because not only does each vertex have a normal, the shader also has
-access to the "left normal" for the normal of the triangle to the left and the
-"right normal" for the triangle to the right. As you see in the assignment
-handout these are used to determine whether the vertex lies on a silhouette
-edge. Here are a few questions about the logic you'll need to use in that
-shader:
-
-### Q2.1
-Your outline vertex shader will need to include an if statement that is true
-if the vertex lies on a silhouette edge by testing the left normal and right
-normal in some way. Assuming `vec3 e` is a vector calculated in eye space
-that points from the vertex to the eye and `vec3 nl` is defined for the left
-normal and `vec3 nr` for the right normal, fill in the condition on the if
-statement:
-
-```
-if (/* --- Fill this in --- */)
-```
-
-### Q2.2
-For the `nl` and `nr` that appear in your if statement above, should these two
-vectors be transformed to eye space using the `normal_matrix`?
-
-```
-/* --- Write your answer here (yes / no) --- */
-```
-
-### Q2.3
-Inside the "if statement" from Q2.1, you will need to offset the vertex in the
-direction of the normal to move it outwards in order to create the "fin" that
-forms the silhouette outline. This process of changing the location of the vertex is like making a change to the actual 3D geometry of the model, as if you quickly loaded the model into a 3D modeling program, edited the vertex by hand, and resaved the file. So, we want to make this change while the vertex is still in model space, before transforming it to world space, camera space, and so on. With this in mind, which version of the vertex normal should you use at this step? Should you transform the normal as usual by multiplying by the `normal_matrix` in this case?
-
-```
-/* --- Write your answer here (yes / no) --- */
-```
diff --git a/worksheets/a6_harold.md b/worksheets/a6_harold.md
deleted file mode 100644
index d25a755..0000000
--- a/worksheets/a6_harold.md
+++ /dev/null
@@ -1,94 +0,0 @@
-# Assignment 6 (Harold) Worksheet
-
-For this assignment, one of the key parts is to check which part of the virtual
-environment your mouse is currently touching. This is useful for determining what
-type of stroke should be drawn when the mouse is clicked and dragged.
-
-
-## Q1: Mouse-Sky Intersections (Part 1)
-
-From the handout, we know that the sky here is really just a giant sphere
-with a radius of 1500.0 units. In order to calculate where in the sky our
-mouse is pointing in the scene, we need to perform a *ray-sphere
-intersection* test. The ray starts at the eye location (camera position), and goes
-through the current mouse location on the near clipping plane. This ray can be
-traced to figure out where it intersects the sky sphere.
-
-Create a top-down diagram of the scene including the sky sphere, the camera,
-the mouse position, and the aforementioned ray from the eye through the mouse
-position.
-
-You can use the following images as inspiration for the shapes that you draw
-in your diagram (replace this image with your final diagram):
-
-![](./img/sky_camera_example.png)
-
-
-## Q2: Mouse-Sky Intersections (Part 2)
-
-Now, let's create the building blocks for the method `Sky::ScreenPtHitsSky()`,
-which tests to see where the ray from the eye through the mouse intersects the
-sky sphere! We're given the following information in this method:
-
-- Camera view matrix (`Matrix4 view_matrix`)
-- Camera projection matrix (`Matrix4 proj_matrix`)
-- `Point2` normalized device coordinates of mouse (`Point2 normalized_screen_pt`)
- - Inclusive range [-1, 1]
- - `Point2(-1, 1)` is the upper left corner, and `Point2(1, -1)` is the
- lower right
-
-1. The info above actually gives us all we need to calculate the camera's position (also known as the eye position) in world space, but it may not be obvious at first how to do this. See if you can figure it out with a few hints below.
-```
-/* Hint 1: The view matrix transforms from one space to another, what are those spaces?
- Hint 2: It is possible to calculate the inverse of a transformation matrix, and Matrix4 has a handy routine for this. As you would expect, the inverse of a transformation matrix will apply the opposite transformation.
- */
-Point3 eye = /* --- Fill in your answer here --- */
-```
-
-2. Construct the mouse pointer location in world space. We consider the mouse
- to be on the near clipping plane of the camera (this should sound familiar
- from your drawing in Q1!). In order to grab this point, MinGfx has a handy
- helper function called
- [`GfxMath::ScreenToNearPlane`](https://ivlab.github.io/MinGfx/classmingfx_1_1_gfx_math.html#a2086a2f885f887fb53da8a5adb5860f0).
- Use the MinGfx documentation at the link and the variables given above to
- construct the world-space representation of the mouse location:
-
-```
-Point3 mouseIn3d = /* --- Fill in your answer here --- */
-```
-
-3. Create the ray from the eye through the world-space mouse location on the
- near plane. Use MinGfx's builtin `Ray` class for this.
-
-```
-Ray eyeThroughMouse = /* --- Fill in your answer here --- */
-```
-
-4. Use the
- [`Ray::IntersectSphere()`](https://ivlab.github.io/MinGfx/classmingfx_1_1_ray.html#affe83ef9859560bcb24343017cb86d88)
- method to find the intersection point of the `eyeThroughMouse` ray and the
- sky sphere. This method contains one bit of C++ syntax that you may not
- have seen before - output parameters. The `Ray::IntersectSphere()` method
- sets both `iTime` and `iPoint` this way. Usually, best practice here is to
- declare a variable of the correct type before you call the method, then
- pass in a *reference* to this variable. For example:
-
-```
-// Declare output parameter `x`
-float x;
-
-// Call someFunction with output parameter
-someFunction(&x);
-
-// x now has the value set by someFunction
-```
-
- Using the variables declared from the previous steps, write a code snippet
- that captures the return value of the sphere intersection test, as well as
- the `t` value and the `point` where the ray intersects the sphere.
-
-```
-// Declare output parameters
-
-bool intersects = eyeThroughMouse.IntersectSphere(/* --- Fill parameters in --- */)
-```