Virtual Cleanroom

The Virtual Interactive Cleanroom is a training tool used to reinforce the knowledge pharmacy students gain in the classroom. It also provides a platform for instructors to test that knowledge. Within the virtual environment, users complete sterile garbing procedures, gather materials, and prepare medications.

At the time of this posting, the media on the Penguin Innovations website is a little dated. Here is a timeline of my work on the project, starting with a before-and-after video highlighting the bulk of my progress.

Before and after as of April 2018

Fall 2018 Update

May 2019 Update

History

The project started as an alternative to hands-on experience in a physical cleanroom. Unlike many pharmacy programs, Purdue’s is not coupled to a hospital, and constructing, maintaining, and stocking its own cleanroom was too costly for the department. So they reached out to the Envision Center, where I now work, to build a VR simulation.

Early iterations used a four-projector CAVE system with hand and head tracking. To make the simulation more portable and accessible, it was moved to a desktop version. Around this time, I started working on the project, mostly doing testing, bug fixes, and adding more medications to the content list.

The simulation used an FPS-style control scheme paired with a clunky array of UI buttons, and we started getting usability complaints. Many of the students using the software had never played video games, so the controls were foreign and required learning a whole new set of motor skills. There wasn’t a good tutorial either, so learning the program was an endeavor in itself, which distracted from the real learning goals.

From a developer’s perspective, the cobbled-together prototype code of the past several years was a pain to navigate. Nothing was organized, variable names were meaningless, and the project was riddled with “Easter eggs.” It was time to tear the whole thing down, actually design it, and build it back up on a stronger foundation. That is when my real work started.

Challenges

Meeting the user where they are

The biggest problem was that many students found the simulation unusable, or at least difficult to pick up. Rather than a system based on fine mouse-and-keyboard motor skills, we came up with a design centered on buttons and numerical entry.

Syringes

Pushing or drawing the syringe plunger is the best example of the type of usability issue we faced. Before, the user would press and hold a button while the volume gradually changed, releasing it once they reached their target. This system sorely tested my patience. The new system uses a numerical entry field: after the user figures out how much to draw or push, they type that number into a field and press a button. Easy.

However, there are three parts to drawing a volume into a syringe: knowing the volume you need, knowing where the plunger needs to go to get that volume, and technique. The numerical entry alone reinforced only the “knowing the volume you need” part; it makes sure the user’s math is right. But a common point of failure is misreading the graduation marks on the syringe, so the “knowing where the plunger needs to go” part needed more attention.

In the Fall of 2018, I revised the way syringes are used in the virtual environment. Pushing and drawing became a two-step process: first the numerical entry is used to establish intent, then the camera zooms in and the user clicks and drags the plunger into place, testing their execution of that intent.
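
A minimal sketch of how that two-step flow could be wired up in Unity (the class and field names here are hypothetical, not the project’s actual code):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class SyringePlungerSketch : MonoBehaviour
{
    public InputField volumeEntry;    // step 1: numerical intent
    public Transform plunger;         // step 2: dragged into place
    public float mlPerUnit = 10f;     // plunger travel (local units) -> mL
    public float toleranceMl = 0.1f;  // allowed reading error

    float intendedMl;

    // Wired to the entry field's confirm button.
    public void ConfirmIntent()
    {
        if (float.TryParse(volumeEntry.text, out intendedMl))
        {
            // ...zoom the camera in and enable plunger dragging here...
        }
    }

    // Called when the user releases the plunger drag.
    public void OnPlungerReleased()
    {
        float drawnMl = plunger.localPosition.y * mlPerUnit;
        bool correct = Mathf.Abs(drawnMl - intendedMl) <= toleranceMl;
        Debug.Log(correct ? "Volume drawn correctly"
                          : $"Expected {intendedMl} mL, got {drawnMl:F2} mL");
    }
}
```

Splitting intent from execution this way also lets the two failure modes be flagged separately: wrong math versus a misread graduation mark.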

The old models and syringe UI had been bothering me for quite some time. The models were ugly and inaccurate, and the UI felt redundant since there was already a 3D model on screen to represent the syringe. Since direct interaction with the syringes required revising this section of the program anyway, I opted to address these pain points along the way.

I remodeled all of the syringes, adhering to real world dimensions from both data sheets and my own measurements. Several core systems also had to be revisited to make them robust enough to move and position models appropriately relative to each other. And, of course, the behavior of the syringes themselves had to be updated.

Graphically, the syringes posed a couple of challenges: transparent draw order and the smooshed rubber-ring effect. When triangles are part of the same mesh, Unity draws them in index order, not back to front. So I had to determine the order in which the triangles should render and build the models so that the triangle indices followed that order.
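
The same reordering could be done programmatically; here is a hypothetical editor-time helper that sorts a mesh’s triangles back-to-front along a chosen axis (in practice I baked the order into the models themselves):

```csharp
using System.Linq;
using UnityEngine;

public static class TriangleSortSketch
{
    // `viewAxis` should point from the mesh toward the typical viewer;
    // triangles whose centroids have smaller dot products are farther
    // away and get drawn first.
    public static void SortTriangles(Mesh mesh, Vector3 viewAxis)
    {
        Vector3[] v = mesh.vertices;
        int[] t = mesh.triangles;

        var order = Enumerable.Range(0, t.Length / 3)
            .OrderBy(i => Vector3.Dot(
                (v[t[3 * i]] + v[t[3 * i + 1]] + v[t[3 * i + 2]]) / 3f,
                viewAxis));

        int[] sorted = new int[t.Length];
        int k = 0;
        foreach (int i in order)
        {
            sorted[k++] = t[3 * i];
            sorted[k++] = t[3 * i + 1];
            sorted[k++] = t[3 * i + 2];
        }
        mesh.triangles = sorted;
    }
}
```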

The rubber ring effect is important for reading the syringe volume, so it could not be compromised. The effect was created by duplicating the polygons on the perimeter of the plunger head and assigning them a material that would draw on top of the outer syringe geometry.
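
In Unity terms, “draws on top” comes down to render queue order. A minimal sketch of the idea, assuming two transparent materials:

```csharp
using UnityEngine;

public class RingOverlaySketch : MonoBehaviour
{
    public Material barrelMaterial; // transparent outer syringe body
    public Material ringMaterial;   // duplicated plunger-ring polygons

    void Start()
    {
        // Unity's transparent queue starts at 3000; a later queue value
        // draws on top of earlier transparent geometry regardless of
        // triangle order within the meshes.
        barrelMaterial.renderQueue = 3000;
        ringMaterial.renderQueue = 3001;
    }
}
```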

Navigation

Up until the Fall of 2018, navigation still resembled an FPS, simplified down to just the four arrow keys or WASD; the mouse was removed from the equation entirely. The up and down arrows moved you forward or back, and the left and right arrows rotated you left or right.

Mouse-look controls were removed because our initial testing showed that two-axis rotation with the mouse was disorienting to some users. Additionally, using the mouse for view rotation meant there had to be a key that released the cursor so that buttons could be pressed. This added complexity frustrated some users.

However, the simplification was not enough. The free-movement system has since been replaced by a node-based navigation menu: users select where they want to go from a list of location names, limited to those readily accessible from their current location. The motivations for this change were numerous. Usability came first. Learning the layout of the virtual cleanroom doesn’t matter, because it will differ from wherever the user ends up practicing. Navigation is not closely monitored or tested, so finesse is not important. Using button events for navigation makes it easier to monitor navigation in general. And, as an afterthought, I realized free roam had actually complicated the code.
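
A sketch of the node structure, with hypothetical names; each node only exposes its own neighbors, which is what keeps the menu short and the monitoring simple:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class NavigationNodeSketch : MonoBehaviour
{
    public string locationName;                  // shown in the menu
    public List<NavigationNodeSketch> neighbors; // reachable from here
}

public class NavigationMenuSketch : MonoBehaviour
{
    public NavigationNodeSketch current;

    // The entries the menu should offer right now.
    public IEnumerable<string> MenuEntries()
    {
        foreach (var node in current.neighbors)
            yield return node.locationName;
    }

    // Wired to a menu button; also a natural hook for logging.
    public void GoTo(NavigationNodeSketch destination)
    {
        if (current.neighbors.Contains(destination))
            current = destination; // camera move/animation happens here
    }
}
```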

UI

The UI was a big undertaking. There are a lot of moving parts, and making them clear to the user, through both layout and word choice, took real effort.

Meeting multiple uses

There are two main uses for the simulation: training and testing. In training, the user receives immediate feedback, which helps them identify what they are doing right and what they need to work on. If they do something VERY wrong, the simulation not only provides feedback but stops them from doing it. In test mode, all of that goes away. The program lets the user mess up and doesn’t tell them; it just silently records what they do wrong and reports metrics to a web server for later review.

The system I came up with to satisfy this design requirement is based on the command programming pattern. Before each command is created and queued, info about the command (validity, a feedback string, a human-readable description, a timestamp) is gathered into a struct, which is then fed into the command constructor. When a command is executed, it has all the data necessary to know how to behave. Further, a list of all commands is kept until a procedure is completed, then the info is dumped into a report that is sent to the performance records server. This pattern both satisfies the design requirement and enables very granular analytics.
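
Here is the rough shape of that flow, with hypothetical names; the struct is assembled before construction, and both execution and reporting read from it:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

public struct CommandInfo
{
    public bool IsValid;       // was the action allowed/correct?
    public string Feedback;    // shown immediately in training mode
    public string Description; // human-readable entry for the report
    public DateTime Timestamp;
}

public abstract class SimCommand
{
    public readonly CommandInfo Info;
    protected SimCommand(CommandInfo info) { Info = info; }
    public abstract void Execute();
}

public class CommandLog
{
    readonly List<SimCommand> history = new List<SimCommand>();

    public void Run(SimCommand cmd, bool trainingMode)
    {
        // In training, failures are surfaced (and hard failures could be
        // blocked before reaching Execute); in test mode they pass silently.
        if (trainingMode && !cmd.Info.IsValid)
            Debug.Log(cmd.Info.Feedback);

        cmd.Execute();
        history.Add(cmd); // kept until the procedure completes
    }

    // Dumped into the report sent to the performance records server.
    public IReadOnlyList<SimCommand> Report() => history;
}
```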

While there are two main uses, there is actually a third that needs to be accounted for: the tutorial. In the tutorial, nothing can be interacted with except the thing the user needs to interact with, and it flashes. A list of tutorial steps is kept, each containing the text presented to the user and a reference to the object that must be interacted with to progress to the next step. Simple, really, but keeping everything encapsulated so that tutorial code does not touch normal game code was the challenge. For this, Unity’s component system was leveraged extensively.
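
A stripped-down sketch of that step list (hypothetical names); the controller only ever answers “is this object interactable?” and “was the right thing clicked?”, so normal game code never needs to know the tutorial exists:

```csharp
using UnityEngine;

[System.Serializable]
public class TutorialStepSketch
{
    public string instructionText; // shown to the user
    public GameObject target;      // the one object that can be used
}

public class TutorialControllerSketch : MonoBehaviour
{
    public TutorialStepSketch[] steps;
    int index;

    // Called by a small component added to `target` at runtime, so the
    // tutorial never reaches into normal game code directly.
    public void OnTargetClicked(GameObject clicked)
    {
        if (index < steps.Length && clicked == steps[index].target)
            index++; // advance; highlighting/flashing handled elsewhere
    }

    public bool IsInteractable(GameObject obj) =>
        index < steps.Length && obj == steps[index].target;
}
```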

Meeting development needs

Prior versions used hard-coded values for checking products and procedures. One of my goals was to enable novice Unity users to add new content or change existing content without touching any code. The current implementation uses a set of Unity prefabs to check the user’s work. A procedure prefab holds the name, category, and expected deliverables of that procedure, and it is added to the procedure picker list so the user can select it from the dropdown. When a final product is submitted, the grader script checks it against the procedure’s expected deliverables. This design lets developers add content simply by creating additional procedure and deliverable prefabs.
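
A sketch of what such a procedure check might look like (hypothetical names; the real prefabs carry more data than this):

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class ProcedureSketch : MonoBehaviour
{
    public string procedureName;
    public string category;
    public List<DeliverableSketch> expectedDeliverables;
}

public class DeliverableSketch : MonoBehaviour
{
    public string drugName;
    public float volumeMl;

    public bool Matches(DeliverableSketch other) =>
        drugName == other.drugName &&
        Mathf.Approximately(volumeMl, other.volumeMl);
}

public static class GraderSketch
{
    // Every expected deliverable must be matched by something submitted.
    public static bool Grade(ProcedureSketch procedure,
                             List<DeliverableSketch> submitted) =>
        procedure.expectedDeliverables.All(
            expected => submitted.Any(expected.Matches));
}
```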

Deployment has been a pain point. With my lab working as one of several groups contracted for this project, we don’t control the platform Penguin Innovations uses for distribution, so even a small fix has to pass through many hands. To open up this production choke point, I leveraged Unity’s asset bundle system. With it, I can add or edit content and upload it to an asset server, and every instance of the Cleanroom application automatically downloads the updates.
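
A minimal sketch of the client side of that update path, using Unity’s UnityWebRequestAssetBundle with a version number so cached bundles are only re-downloaded when they change (the URL and version here are placeholders):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class ContentUpdaterSketch : MonoBehaviour
{
    // Placeholder URL and version; bump the version when content changes.
    const string BundleUrl = "https://example.com/bundles/medications";
    const uint Version = 42;

    IEnumerator Start()
    {
        // The version acts as a cache key: an unchanged bundle loads
        // from disk instead of re-downloading.
        using (var req = UnityWebRequestAssetBundle.GetAssetBundle(BundleUrl, Version, 0))
        {
            yield return req.SendWebRequest();
            if (!req.isNetworkError && !req.isHttpError)
            {
                AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(req);
                // Load updated procedure/medication prefabs from `bundle` here.
            }
        }
    }
}
```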

Polish and Graphical Fidelity

Animations

I made it my personal mission to make the Virtual Cleanroom look good. This is most evident on the main menu, where multiple camera angles of the freshly light-mapped environment animate in the background and the sign-in menus expand and zoom off the edge of the screen as the next one comes in.

The node-based navigation system has been updated to animate between nodes, accomplished using a combination of Cinemachine and navmeshes, which I documented here. There are some important environment details, such as the line of demarcation, that are never seen when the camera simply snaps from one position to the next.
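
For the curious, here is roughly how navmeshes and Cinemachine can be combined for this, sketched with hypothetical names: the navmesh supplies a walkable path between nodes, and its corners become waypoints on a Cinemachine dolly path.

```csharp
using Cinemachine;
using UnityEngine;
using UnityEngine.AI;

public class NodeTravelSketch : MonoBehaviour
{
    public CinemachineSmoothPath travelPath;   // waypoints written here
    public CinemachineVirtualCamera travelCam; // has a Tracked Dolly body
    public float eyeHeight = 1.6f;

    public void BuildPath(Vector3 from, Vector3 to)
    {
        var navPath = new NavMeshPath();
        if (!NavMesh.CalculatePath(from, to, NavMesh.AllAreas, navPath))
            return;

        // Lift the walkable path corners to eye height and hand them
        // to Cinemachine as dolly waypoints.
        var waypoints = new CinemachineSmoothPath.Waypoint[navPath.corners.Length];
        for (int i = 0; i < navPath.corners.Length; i++)
            waypoints[i].position = navPath.corners[i] + Vector3.up * eyeHeight;

        travelPath.m_Waypoints = waypoints;
        travelPath.InvalidateDistanceCache();

        // Reset the dolly; tweening this toward the path end drives the move.
        var dolly = travelCam.GetCinemachineComponent<CinemachineTrackedDolly>();
        dolly.m_PathPosition = 0f;
    }
}
```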

Vial revamp

Once the graphical fidelity of the syringes was updated, opportunities to bring the other objects up to date revealed themselves.

The prefab variant system introduced in December 2018 allowed me to create multiple, properly dimensioned vial sizes, with each of the 100+ items becoming a variant of 6 base prefabs. Now when a transform node needs tweaking or a model feature needs adjusting, the change propagates everywhere, instead of my having to hit paste 100 times.

IV bag revamp

With the expansion of the procedure list came the need for stock-solution bags. Previously the labels were baked right into the bag texture, but that is not very future-proof given that labels change and the volume of supported content keeps expanding. I was already working on a label-mapping system for the vials, so I decided to put in a little extra effort to make it robust enough to apply to both bags and vials. The label mapper adjusts the tiling and offset parameters of the label material to position the label image within the UV bounds of the label mesh (with padding and alignment options) without distorting the image.
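
The core of that fitting math might look like this (a simplified sketch; the real mapper also handles the padding and alignment options):

```csharp
using UnityEngine;

public static class LabelMapperSketch
{
    // meshAspect: width/height of the label area in world units. Assumes
    // the label texture uses clamp wrap and has a clean border, since UVs
    // outside the fitted rectangle sample the texture's edge.
    public static void Fit(Material labelMat, Texture2D label, float meshAspect)
    {
        float texAspect = (float)label.width / label.height;

        // Shrink one axis of the displayed image so its aspect survives.
        float w = 1f, h = 1f;
        if (texAspect > meshAspect) h = meshAspect / texAspect; // wide label
        else                        w = texAspect / meshAspect; // tall label

        // Center the image: uv in [x0, x0+w] must map to [0, 1].
        float x0 = (1f - w) / 2f, y0 = (1f - h) / 2f;
        labelMat.mainTexture = label;
        labelMat.mainTextureScale  = new Vector2(1f / w, 1f / h);
        labelMat.mainTextureOffset = new Vector2(-x0 / w, -y0 / h);
    }
}
```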

While I was adding label geometry to the IV bags, I decided to go ahead and remodel them to be dimensionally accurate, like the vials and syringes. And as long as I was doing that, I figured I might as well set up blend shapes controlled by their current volume.
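
Driving a blend shape from volume is a one-liner once the shape exists; a tiny sketch, assuming a single “deflate” shape at index 0:

```csharp
using UnityEngine;

public class BagVolumeSketch : MonoBehaviour
{
    public SkinnedMeshRenderer bagRenderer;
    public float capacityMl = 1000f;

    public void SetVolume(float volumeMl)
    {
        // Blend shape weights run 0-100; 100 = fully deflated.
        float weight = 100f * (1f - Mathf.Clamp01(volumeMl / capacityMl));
        bagRenderer.SetBlendShapeWeight(0, weight);
    }
}
```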

The labels for the IV bags are unlike the vial labels in that they are printed straight onto the bag, so I needed to use transparent materials for them. Using the white sections of the labels as a mask created artifacts where the letters were surrounded by a white outline. To fix this, I adjusted levels in Photoshop to get a black-and-white layer mask, then blocked in color based on the colors used on the label. This corrected the color channels in the areas around the text, where both the color and the transparency had previously been approaching white. The result ended up more legible than the source material.

Optimization

Previously, the entire environment used pre-baked, un-atlased textures, which made the texture footprint massive. Today the project contains roughly three times as many medications as when I started, yet the project file size has dropped from 2.09 GB to 539 MB. A large effort went into converting all of the assets to Unity’s light mapper and PBR materials, and I converted many textures that were needlessly eating up space as Targa files. Overall I think it looks much better, and it actually renders more smoothly.

WebGL Deployment

Optimization became huge when pivoting toward web deployment. While strides had been made to keep asset sizes down, I soon found that memory usage was still too high for some devices. The culprit was the label textures. Historically, I did not have much say in their dimensions, so they were neither power-of-two resolution nor square. They still are not, but I atlased them into square 2k textures, which allowed for better compression and mipping. The label texture footprint was reduced to 3% of its former size. A little extra C# and shader work was required to get the label mapper to play nicely with atlases, but the savings were well worth it.
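
Conceptually, the atlas support composes the existing fit transform with the label’s sub-rectangle in the atlas. A sketch with hypothetical names (the shader side then clamps samples to the sub-rect so neighboring labels never bleed in):

```csharp
using UnityEngine;

public static class AtlasLabelSketch
{
    public static void Apply(Material mat, Rect atlasRect,
                             Vector2 fitScale, Vector2 fitOffset)
    {
        // Compose: uv -> label UV (fit) -> atlas UV (sub-rect).
        var scale  = Vector2.Scale(fitScale, atlasRect.size);
        var offset = Vector2.Scale(fitOffset, atlasRect.size) + atlasRect.min;

        mat.mainTextureScale  = scale;
        mat.mainTextureOffset = offset;

        // Passed to the shader so it can clamp samples to the sub-rect.
        mat.SetVector("_LabelRect", new Vector4(
            atlasRect.xMin, atlasRect.yMin, atlasRect.xMax, atlasRect.yMax));
    }
}
```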

Some scene changes were also made to reduce extraneous geometry and textures. Smaller rooms allowed for smaller lightmaps of the same quality. These savings were modest compared to the label textures, however.

Lightweight Render Pipeline

To support a wider range of machines and ensure good performance in the browser, the Cleanroom was moved to the LWRP in the Fall of 2019. Transitioning from the old pipeline was mostly pain-free, but it did have a few pitfalls. Some shaders had to be rewritten, which led to C# changes as well.

One of the most impactful pitfalls was the lack of camera stacking. In the old pipeline, UI canvases could be set up in camera space, which meant you could use different cameras for different UI elements and overlay them based on draw order. LWRP takes away that ability. This caused issues with the 3D icons in the inventory screen: they required a camera to render them, but that camera could not overlay the main camera. A world-space UI was considered, but other project goals took priority, so for now the inventory uses basic icons rather than 3D previews.

The transition to LWRP created a huge frame rate boost on the Chromebook used for low-end testing.

Future Goals

I would like to add more animations to the program. Displaying the proper motions for working with needles would help remedy the simulation’s current lack of demonstrated technique and finesse. Adding links to videos that explain techniques has also proven to be a desirable feature.