Lesson 4: Putting it All Together

Pre-Reading

MATLAB GUI development:

https://www.mathworks.com/videos/creating-a-gui-with-guide-68979.html

Overview of GUI’s:

https://www.computerhope.com/jargon/g/gui.htm

Review

Let’s review the Computer Vision and Image Processing workflows we learned about last class. Recall that the purpose of Computer Vision is to use machines to precisely detect, classify, and track objects in images or videos in order to understand a real-world scene. Computer Vision applications usually follow this general process:

  1. Obtain the image or video to be processed
  2. Process and clean the image so that it is easier to perform Computer Vision techniques on
  3. Perform your Computer Vision techniques, where you track, detect, or segment the objects in the image or video
  4. Interpret the output of the Computer Vision process, for example, by classifying the output as a pedestrian, car, etc.

Last lesson we applied this general Computer Vision process to our project (a short code sketch of these steps follows the list) by:

  1. Obtaining the image of our face from our laptop webcam
  2. Processing this image to be compatible with the trained neural network in three ways: a. converting the image to grayscale, b. cropping the image to just the facial region, c. reducing the image size to 48x48 pixels
  3. Passing the image into the trained CNN to perform the Computer Vision techniques of the neural network on the image
  4. Interpreting the output of the trained CNN and classifying the facial emotion
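To make the review concrete, here is a minimal sketch of those four steps in MATLAB. The names here are assumptions standing in for the code from the previous lessons: trainedNet stands for the CNN trained in Lesson 2, and the cascade detector is one possible way to find the face. Treat this as illustration rather than the exact project code.

% Minimal sketch of last lesson's pipeline (trainedNet is assumed to exist
% from the earlier training lesson)
RGBframe = imread('example_face.png');        % 1. stand-in for a webcam frame
grayFrame = rgb2gray(RGBframe);               % 2a. convert to grayscale
faceDetector = vision.CascadeObjectDetector;  % Viola-Jones face detector
bbox = step(faceDetector, grayFrame);         % locate the facial region
faceOnly = imcrop(grayFrame, bbox(1,:));      % 2b. crop to the first face found
faceSmall = imresize(faceOnly, [48 48]);      % 2c. shrink to the CNN input size
label = classify(trainedNet, faceSmall);      % 3-4. classify the facial emotion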

Those were the basics of what we learned and implemented last class. Also, in preparation for this class, we learned about the “appdesigner” tool in MATLAB, which we will use to design our app in this lesson, as well as the basics of what makes up a GUI.

In-Depth GUI Structure

What exactly is a GUI and why should we use one in the first place?

The purpose of making a GUI is to enable a person to communicate with the computer through a series of symbols, figures, and visual aids. It is meant to allow users with no knowledge of the code in the background or the software behind the GUI to make use of it and see the results in a simple manner. The two most important aspects of a GUI are that:

  1. It is easy to use and intuitive to understand
  2. It displays the results in a simple and logical fashion

You don’t want to overwhelm the user with too much information or too many buttons, and the user needs to see the end results of the code displayed clearly on the GUI.

A less important, but still valuable, aspect of every GUI is that it indicates that some sort of progress is happening when the user presses a button. Users like to know that work is happening in the background, so they can tell their waiting time is being put to use while the function finishes. We see this in other GUIs in the form of loading bars and progress bars.

There are many different elements that can be placed on a GUI to allow the user to interact with the software. Here is a list of the elements relevant to this project and a description of what they do (a rough programmatic sketch of these components follows the list):

  1. Button – A graphical representation of a button that performs an action, known as a callback function, when pressed
  2. Axes – Used to display graphical information, but can also be configured to display images, which is what we will do in this project
  3. Text box – An interactive object that can be used to take inputs into functions before a button is pressed, or to display relevant information as an output
  4. Panel – Allows the designer to separate the GUI into subsections that are meaningful to the user, such as inputs and outputs
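For reference, the same components can also be created programmatically with uifigure-based functions. This is only a rough sketch with made-up names and positions; in App Designer you will drag and drop these components instead, and the tool generates equivalent code for you.

% Rough programmatic sketch of the components listed above (names and
% positions are illustrative only)
fig    = uifigure('Name', 'Emotion Recognition');                        % the app window
panel  = uipanel(fig, 'Title', 'Controls', 'Position', [10 10 200 380]); % a panel subsection
button = uibutton(panel, 'push', 'Text', 'Start', ...
    'Position', [20 320 160 30], ...
    'ButtonPushedFcn', @(src, event) disp('Start pressed'));             % button with a callback
field  = uieditfield(panel, 'text', 'Value', 'Idle', ...
    'Position', [20 270 160 30]);                                        % text box for progress/output
ax     = uiaxes(fig, 'Position', [220 10 370 380]);                      % axes that can display an image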

To show all of these elements in a single interface, I have included a snapshot of the GUI I designed for this project below.

[Screenshot: the example two-panel GUI, with the controls panel on the left and the output panel on the right]

To start up App Designer, type “appdesigner” into the MATLAB command window. This runs the App Designer tool, and another window should pop up a couple of seconds later. This new window lets you choose different layouts for your new app and also displays recent apps you have made.

For my design, I went with the two-panel layout to separate the control buttons, which call the functions, from the panel that displays the relevant outputs. At the top of my controls panel, I have a progress indicator, which is a text box object; in each function I assign this text box different values to indicate the progress of the code running in the background. Below this I have the buttons that call the different functions we designed in the previous lessons, plus an accuracy output below my Train CNN button to show that function’s relevant output. The Start button begins the Computer Vision process we built last class and displays the image captured from the webcam, along with the output label, on the output panel on the right. The Stop button then changes the stop property of the app to halt the Computer Vision process.

Go ahead now and open app designer and construct the layout of your GUI…

Now that you have laid out your GUI interface, it is time to implement the functions from the last couple of lessons into the GUI. Right-click on each button, and the menu that pops up will contain an option to add a callback function. Select it, and a callback will be added to your button in the code view. The code inside this method is the code that executes when that particular button is pressed.
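The generated method will look roughly like the skeleton below; the exact name depends on what you called your component (this example assumes a button named StartButton).

% Button pushed function: StartButton
function StartButtonPushed(app, event)
    % Code placed here runs whenever the Start button is pressed.
    % 'app' gives access to every component and property of the GUI.
end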

It is important to note that if a second button is pressed while one callback is still running, MATLAB does not literally run both at the same time; the second callback is queued and executes whenever the running callback yields control (for example, during a pause or drawnow call). This is the behavior we will rely on for the Stop button feature.

Now that we have the whole layout of the GUI and a callback function added to each button, move over to the code view and copy and paste the code into a new script. In my experience this was necessary because, when running the app from within the App Designer tool, I could not call other functions in the current working folder of my MATLAB console.

If you do not know how to declare and use functions within MATLAB, it is worth looking this up and understanding how it works before moving on.
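As a quick refresher, here is a minimal, generic example of a MATLAB function; the name and contents are purely illustrative. The function is saved in its own file whose name matches the function name, and it can then be called from any other code in the same folder or on the path.

% example_stats.m -- a minimal example of declaring a MATLAB function
function [total, average] = example_stats(values)
    %EXAMPLE_STATS Return the sum and mean of a numeric vector.
    total   = sum(values);
    average = mean(values);
end

It would then be called from other code as [t, a] = example_stats([1 2 3]);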

We have the core code to implement into the GUI, which we have built over the last several lessons, but to implement it we need to reorganize this core code into functions that can be called by the callback methods of the different buttons. We will implement four different callback methods using the core code for the CNN that we have built. The methods will perform each of these tasks separately:

  1. Build the Database
  2. Train the CNN
  3. Start the Computer Vision Process
  4. Stop the Computer Vision Process

It is also good to note that, within these callback functions, if you want to update an aspect of the GUI, you can reference the property of the app you want to change and set it equal to a different value. For example, this is the code I use in my functions to change my progress indicator value:

%Adjust the Progress Meter
app.ProgressEditField.Value = "Building Data Base";

app.ProgressEditField references the Progress text box in the GUI, and setting its Value property to “Building Data Base” updates the text it displays. This is how I update the progress indicator in my GUI. Now it is time to go over how to implement each of the different callback methods and display the results on the outputs panel of the GUI.

The functions are sequential, so you should implement if statements that check whether the preceding functions have been run and the prerequisites are in place before a callback method executes its main code.

Building Database

For this callback method you can simply copy and paste the database-building code from the earlier lesson into the callback function and, if you are using a progress indicator, add the progress updates as shown above.
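A minimal sketch of what that callback might look like is below. It assumes the database-building core code from the earlier lesson has been moved into a hypothetical BuildDatabase helper function on the MATLAB path; adapt it to however you have organized your own code.

function BuildDatabaseButtonPushed(app, event)
    app.ProgressEditField.Value = "Building Data Base";    %update the progress indicator
    pause(0.1)                                             %pause so the GUI redraws
    BuildDatabase();                                       %core code from the earlier lesson (hypothetical helper)
    app.ProgressEditField.Value = "Database Constructed";  %report completion
end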

Train CNN

For this callback method, you should wrap the core code we wrote in Lesson 2 in a MATLAB function that is called from within the method. This keeps things a bit simpler within the app code itself; when constructing the GUI I ran into some issues loading the trained CNN in the GUI code directly, so I moved it into a function. Example code for this method is shown below. In this case I use a flag to indicate whether or not to train the CNN, in case the database has not been constructed yet. I also created a property within the app itself to hold the trained CNN so it can be used by other functions within the app.

            %Construct file list to check whether the database has been constructed
            filelist = dir(fullfile('/Users/user/Desktop/Emotional Face Recognition Program/database/Angry','*.*'));
            ErrorFlag = 0; %Initialize error indicator
            
            %Check whether the database is constructed by counting the number of
            %images in the Angry folder
            if length(filelist) <= 4
                app.ProgressEditField.Value = "Database not Constructed";
                ErrorFlag = 1;
            end
            
            if ErrorFlag == 0
                %Update the progress metric on the GUI
                app.ProgressEditField.Value  = "Training Neural Network";
                
                %Pause to allow the progress metric to register
                pause(0.1)
                
                %Train the CNN and get back the trained neural network with
                %its accuracy
                [Progress, Accuracy, EmotionNet] = TrainCNN();
                
                %Update the Progress and Accuracy elements in the GUI
                app.ProgressEditField.Value  = Progress;
                app.AccuracyEditField.Value = Accuracy;
                
                %Store the trained network in an app property for cross-function
                %use
                app.net = EmotionNet;
            end
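For reference, the custom properties mentioned above (one holding the trained network, one acting as the stop flag) are declared in the app's code view. A sketch of that properties block, with names matching the snippets in this lesson, is shown below; the exact names are up to you.

properties (Access = private)
    net          % trained CNN returned by TrainCNN(), used by the Start callback
    stop = 0     % flag checked by the Start loop and set to 1 by the Stop button
end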

Start

In this case, check whether the neural network has been trained and the database has been constructed. If both are true, you can execute the Computer Vision process we implemented last lesson. In this part, loop over capturing a frame, processing it, and classifying it, controlled by a property of the app that indicates whether to stop, as shown below:

while app.stop == 0
    RGBframe = snapshot(mycam); %take frame image from webcam
    [label,IMwithFace] = ClassifyFrame(RGBframe,app.net); %classify the emotion in the frame
    imshow(IMwithFace,'Parent', app.UIAxes); %show facial image with label on GUI
    app.EmotionEditField.Value = label; %update emotion field
    pause(0.1) %pause to allow GUI to update
end
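Putting the pieces together, a fuller sketch of the Start callback might look like the following. It assumes the MATLAB webcam support package is installed, that ClassifyFrame and app.net are defined as described above, and it uses a simple emptiness check as the "is the network trained" test; treat the specific checks and names as one possible arrangement rather than the required one.

function StartButtonPushed(app, event)
    if isempty(app.net)
        app.ProgressEditField.Value = "CNN not Trained";   %prerequisite check
        return
    end
    app.stop = 0;                                          %reset the stop flag so the loop runs
    app.ProgressEditField.Value = "Running";
    mycam = webcam;                                        %connect to the default webcam
    while app.stop == 0
        RGBframe = snapshot(mycam);                        %take frame image from webcam
        [label,IMwithFace] = ClassifyFrame(RGBframe,app.net);
        imshow(IMwithFace,'Parent', app.UIAxes);           %show facial image with label on GUI
        app.EmotionEditField.Value = label;                %update emotion field
        pause(0.1)                                         %pause so the GUI updates and Stop can run
    end
    clear mycam                                            %release the webcam when stopped
    app.ProgressEditField.Value = "Stopped";
end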

Stop

This method should simply change a stopping property to end the loop of the Computer Vision process in the Start method. In this case it sets app.stop to 1, which stops the loop above.
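In code, the Stop callback can be as small as the sketch below (the property name matches the stop flag used earlier):

function StopButtonPushed(app, event)
    app.stop = 1;                                %the Start loop exits on its next check
    app.ProgressEditField.Value = "Stopping";    %optional: give the user immediate feedback
end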

Note that in order to show an image on the GUI axes object you should use the following function:

imshow(IMwithFace,'Parent', app.UIAxes); %show facial image with label on GUI

IMwithFace is the image to be shown, 'Parent' specifies which graphics object to draw into, and app.UIAxes is the axes object in the app on which to show the image.

Also to insert a bounding box onto an image, use the following function:

IMwithFace = insertObjectAnnotation(RGBframe,'rectangle',bboxes,label); %insert bounding box onto image
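For context, bboxes in this call is the set of face bounding boxes and label is the emotion text produced by the classification step. A sketch of how they might be produced inside ClassifyFrame, assuming the cascade face detector from the earlier lesson, is shown below; the hard-coded label is only a placeholder for the CNN's real output.

faceDetector = vision.CascadeObjectDetector;                            %Viola-Jones face detector
bboxes = step(faceDetector, RGBframe);                                  %one row [x y width height] per detected face
label = 'Happy';                                                        %placeholder for the CNN's predicted emotion
IMwithFace = insertObjectAnnotation(RGBframe,'rectangle',bboxes,label); %draw the labeled box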

If you are having any issues with this process, feel free to email me any time for assistance: alynch@ucsbieee.org