Game Engine Devlog Part 4 - IO and Entities

These past few weeks I've been completely reworking the IO and entity systems of my engine, along with making a start on directional lighting and 3D modelling.


Previously, keypress interrupts were handled by adding the key ID to an array, then once per frame iterating over that array and carrying out the required action for each entry. This didn't readily allow for actions on key release or for easy rebinding of keys. Now I've created arrays of function pointers for key presses, key releases, mouse button presses, and mouse button releases; this is all integrated into the events system and works very well.
I have implemented accurate mouse picking to allow interaction with the environment. About six months ago I put together a 3D game of life with mouse picking, but clicks far from the centre of the screen would register in the wrong place: I was just taking the x,y coordinates of the click on the screen, normalizing them, and multiplying by half the FOV, which gave an angle I used to march a ray until it intersected a cube. This didn't work well because the angle of projection between adjacent pixels is not constant, and the terrain is now a plane of triangles rather than cubes. Now I find the ray direction from the mouse by taking the normalized device coordinates of the click, creating the same projection matrix used to render the scene, inverting it, and unprojecting the normalized coordinates; then creating the same view matrix used in rendering the scene, inverting it, and using it to carry the ray from the camera's location and orientation into world space before normalizing the result. This ray is then cast using line-plane intersection math instead of being marched step by step, which gives far greater performance and allows some calculations to be done ahead of time. It took me a while to get this right, as misinterpreted mouseclicks would be very frustrating in a city builder. Code for this is provided in [2] and [3].


The entity system has gone through a number of iterations:
The first approach I took was loading every model from disk and adding a different origin to each vertex. The second was pre-loading all models into memory and writing them to the vertex buffer each time a new entity was required; it used push constants to reposition entities instead of baking the modified position into the vertices, which is how I accomplished animation in previous demos. The current approach loads all of the models into the vertex buffer once and issues a separate draw call for each entity, using the same vertex data but different push constants. The push constants and other entity information are stored in RAM along with the terrain data, which allows for easy frustum culling. Handling it this way is performant enough for now, but it's close to using too much CPU time, so soon I'll need to move to instanced rendering.
(1) Recent demo

/*  GetRayFromMouse
	 * xclick - location of horizontal click in pixels
	 * yclick - location of vertical click in pixels
	 * screenHeight - screen height in pixels
	 * screenWidth - screen width in pixels
	 * player.camera - player's camera location (field name assumed)
	 * player.cameraFront - coordinates 1 unit away from the camera in the direction it is facing
	 * player.up - 0,0,1
	 */
void getRayFromMouse(vec3 *step)
{
	//normalized device coordinates for the on-screen click
	vec4 ray_NDC = {xclick/(float)(screenWidth/2.0)-1, yclick/(float)(screenHeight/2.0)-1, -1.0f, 0.0f};
	//create projection matrix (same as in UBO) and invert it
	mat4 proj;
	glm_perspective(glm_rad(fov), screenWidth/(float)screenHeight, nearPlane, farPlane, proj); //parameter names assumed
	proj[1][1] *= -1; //flip Y for Vulkan clip space
	mat4 invProjMat;
	glm_mat4_inv(proj, invProjMat);
	//project ray_NDC into eye space
	vec4 eyeCoords;
	glm_mat4_mulv(invProjMat, ray_NDC, eyeCoords);
	eyeCoords[2] = -1.0f; //point the ray forwards
	eyeCoords[3] = 0.0f; //a direction, not a position
	//create view matrix (same as in UBO) and invert it
	mat4 view;
	glm_lookat(player.camera, player.cameraFront, player.up, view);
	mat4 invViewMat;
	glm_mat4_inv(view, invViewMat);
	//carry the ray into world space using the camera's orientation and location
	vec4 rayWorld;
	glm_mat4_mulv(invViewMat, eyeCoords, rayWorld);
	//normalize and return the vec3 ray
	glm_vec3_normalize_to(rayWorld, *step);
}
(2) GetRayFromMouse function

/*  Raytracing code
	*  thmem is an array of structs for terrain hitboxes
	*  struct MapTerrainHbox{
	*	vec3 p1; //position of triangle point 1
	*	vec3 p2; //position of triangle point 2
	*	vec3 p3; //position of triangle point 3
	*	vec3 n; //plane normal
	*	float d; //distance from origin to plane
	*  };
	* step is the normalized ray direction from getRayFromMouse
	* origin is the ray's starting point, the camera location (name assumed)
	* lwm is the low watermark to find the closest triangle in the path of the ray
	* closest is the array index of the result
	*/
for(i = 0; i < QUAD_CELL_COUNT*4; i++){
		NdotRayDirection = -glm_vec3_dot(step, thmem[i].n);
		//skip triangles the ray runs parallel to
		if (fabsf(NdotRayDirection) < 1e-6f)
			continue;
		//distance along the ray to the triangle's plane: t = -(n.O + d)/(n.D)
		t = (glm_vec3_dot(thmem[i].n, origin) + thmem[i].d)/NdotRayDirection;
		//the plane is behind the camera
		if (t < 0)
			continue;
		//intersection point P = O + t*D
		P[0] = origin[0] + step[0]*t;
		P[1] = origin[1] + step[1]*t;
		P[2] = origin[2] + step[2]*t;
		//keep the nearest hit so far (the inside-triangle test on p1,p2,p3 follows here)
		if (t < lwm){
			lwm = t;
			closest = i;
		}
}
(3) Raytracing code