In my previous article, I wrote about the different voice assistants and how we chose one to implement in the smart mirror.
Now it's time to work on the actual implementation ...
Prototype
As the decision had been made to use the Google voice assistant for our project, we went on to build a prototype of it on a Raspberry Pi.
Google and the MagPi had teamed up and released a DIY kit for the Google voice assistant, which I already wrote about in my article Google AIY Voice Kit.
Google AIY Voice Kit |
Into the smart mirror
One issue we came across right away was that the smart mirror was written in Python 2, while the SDK of the Google voice assistant is in Python 3. The easiest solution was to port the smart mirror code from Python 2 to 3. In fact, not much needed to be changed.
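To give an idea of what the port involved, here is a minimal sketch of typical Python 2 to 3 changes in a Tkinter-based script; the lines are illustrative assumptions, not taken from the actual mirror code.

```python
# Typical Python 2 -> 3 changes (illustrative only, not the real mirror code)

# Python 2:
#   import Tkinter as tk
#   print "Starting smart mirror"
#   columns = width / tiles           # integer division by default

# Python 3:
import tkinter as tk                  # the Tkinter module is now lowercase

print("Starting smart mirror")        # print is a function now
width, tiles = 800, 3
columns = width // tiles              # integer division needs //
```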
Visual feedback
The original frame layout of the smart mirror had no space for the voice assistant. There was a frame for the news at the bottom and a frame for the other information at the top, divided into a left frame (weather) and a right frame (time/date). Again, please excuse the German labels in the drawings.
Frames before the integration of the voice assistant |
The frames had to be rearranged a bit to make some space for the visual feedback of the voice assistant, without the other elements moving or being affected when a conversation is triggered or ends.
Frames with the area for the voice assistant |
Our solution was to split the upper frame into two frames again: an inner upper and an inner lower frame. The inner upper frame holds the weather and time information as before, and the inner lower frame holds the voice assistant feedback.
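A minimal sketch of how such a split can look in Tkinter follows; the frame names are assumptions for illustration and not the real layout code of the mirror.

```python
import tkinter as tk

root = tk.Tk()
root.configure(background="black")

# outer layout: information at the top, news at the bottom
top_frame = tk.Frame(root, background="black")
top_frame.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
news_frame = tk.Frame(root, background="black")
news_frame.pack(side=tk.BOTTOM, fill=tk.BOTH, expand=True)

# the upper frame is split again so the assistant feedback gets its own area
inner_upper = tk.Frame(top_frame, background="black")  # weather (left) and time/date (right)
inner_upper.pack(side=tk.TOP, fill=tk.BOTH, expand=True)
inner_lower = tk.Frame(top_frame, background="black")  # visual feedback of the voice assistant
inner_lower.pack(side=tk.TOP, fill=tk.BOTH, expand=True)

root.mainloop()
```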
Now we had the space to show the visual feedback, but we ran into another issue we had not encountered before ... threads. The Google voice assistant runs in a separate thread, as it would otherwise block all other tasks because it constantly listens for the keyword "OK Google". That is not an issue in itself, but we wanted to show the status information, including the recognized question, on the mirror surface. That meant we had to transport this information from one thread to another. We managed to do that with a class variable and a new function that periodically checks for new status entries and reacts by showing the visual feedback.
Function to check status changes |
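The idea looks roughly like the sketch below; the class and attribute names are assumptions, not the exact code from the screenshot. The assistant thread only writes the class variables, and all widget updates happen in the GUI thread via Tkinter's after() polling.

```python
import tkinter as tk

class AssistantStatus:
    """Shared between the assistant thread and the GUI thread."""
    text = ""        # latest status text / recognized question, written by the assistant thread
    changed = False  # set to True by the assistant thread whenever a new entry arrives

class FeedbackFrame(tk.Frame):
    """Inner lower frame that shows the voice assistant feedback."""
    def __init__(self, parent):
        super().__init__(parent, background="black")
        self.label = tk.Label(self, fg="white", bg="black", font=("Helvetica", 24))
        self.label.pack()
        self._hide_job = None
        self.check_status()

    def check_status(self):
        # runs in the GUI thread and polls the class variable for new entries
        if AssistantStatus.changed:
            AssistantStatus.changed = False
            self.label.config(text=AssistantStatus.text)
            self.pack(side=tk.TOP)
            # restart the timer on every update, so the feedback disappears
            # five seconds after the last status change of the conversation
            if self._hide_job is not None:
                self.after_cancel(self._hide_job)
            self._hide_job = self.after(5000, self.pack_forget)
        self.after(250, self.check_status)  # poll again in 250 ms
```

In the assistant thread, reporting a new status is then just a matter of setting AssistantStatus.text and flipping AssistantStatus.changed to True.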
In the end, the mirror shows a face and the status text while a conversation takes place. Five seconds after the conversation has finished, the face and status text disappear, so the middle part of the smart mirror stays clear to show the mirror image of whoever is standing in front of it.
Screenshot of the smart mirror while a conversation takes place |
Putting it all together
After all these steps had been done, the new hardware was placed in the mirror frame along with the existing components.
Hardware for the voice assistant added to the mirror |
The speaker was attached with some double-sided tape directly to the back of the display, the audio HAT sits on top of the Raspberry Pi, and the microphone was mounted at the bottom of the mirror on the outside. The microphone was later moved to the inside and even placed against the wood of the frame, as it was so sensitive that it could pick up commands from the next room through closed doors.
Finally, the smart mirror is back in place with the voice assistant implemented.
The smart mirror with active conversation |
Checking the result
After everything was assembled and working, we wanted to check how well we had done. For this, we repeated the questions that we had used to select a voice assistant in the first place and compared the results against the previous results of the Google voice assistant, as that is the most appropriate reference for our implementation.
We expected the same results as before but were surprised by some deviations.
Google voice assistant on the smartphone vs. the smart mirror |
As you can see, the results in the "questions recognized" category were identical. But in the "questions answered" and "answers helpful" categories there were differences.
After analyzing the questions that made the difference, we came to the conclusion that it is because the assistant on the smartphone "knows" it has a display, while the assistant in the smart mirror is more comparable to a Google Home without a display. That means some questions got a visual answer on the smartphone, while the same questions were answered verbally on the smart mirror.
For the "answers helpful" category, there are some questions the smart mirror assistant simply can't help with at the moment, as the features are not yet supported by the SDK provided by Google. One example is the Google Play Music integration, which is simply not available in the SDK at this point.
Conclusion and outlook
All in all, the project was a full success and a lot of fun. The smart mirror can now answer questions and shows visual feedback. The feedback is not completely done yet, though, as the animation of the GIF is not working the way I want it to. Also, sometimes there is a little glitch that requires restarting the Google assistant.
But that just means there is a bit more to do on this fun project :)
The smart mirror has been a very nice project so far and is a very good base to continue with more DIY projects, which will surely follow.
Google just released another one of these kits, this time a vision kit. I already have some ideas of what to build with it, but that's clearly a topic for another article at some point.