I guess this is as good a time as any to begin writing on Woogie's blog: last Wednesday, I held a presentation for Google Developer Group Bucharest. It was an IoT meetup, so I tailored my presentation to talk about Woogie the device, or as we call it, Woogie the Alien.
Woogie is not just a thin client
People might think that it's just a thin client, that the intelligence behind Woogie's eyes lives only in the cloud, but that's not true. Woogie the Alien is a rather complex IoT device. It has no interface besides a voice interface and a few buttons, yet it has to handle a lot of things: Wi-Fi connection, audio streaming, audio processing, text-to-speech conversion, face animations, sensors, proactive content... and it has to do all this while being as secure as it can be!
Our design is highly modular and built on top of Linux. We stand on the shoulders of giants, as the Linux environment offers both incredible building blocks and great security. All this means that our tiny modules inside Woogie can be turned on and off independently or easily modified to do their task better.
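To make the idea of independently toggleable modules concrete, here is a minimal in-process sketch. The names (`Module`, `Supervisor`) are hypothetical illustrations; in the real device the modules are separate Linux processes managed by the system, not Python objects.

```python
# Hypothetical sketch: modules that can be started and stopped
# independently of one another, as described above.

class Module:
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


class Supervisor:
    """Toggles modules on and off without affecting the others."""

    def __init__(self, names):
        self.modules = {name: Module(name) for name in names}

    def set_enabled(self, name, enabled):
        module = self.modules[name]
        if enabled:
            module.start()
        else:
            module.stop()
```

For example, stopping the hypothetical "face_animations" module leaves "audio_streaming" untouched, which is the whole point of the modular split.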
Our Wi-Fi connection management is one such module; it recently underwent a major overhaul because we considered it less user-friendly than it should be. Configuring Woogie for the first time is now like connecting to a hotel Wi-Fi: when Woogie doesn't find a known Wi-Fi network, it generates a new Wi-Fi network called woogiewifi. Once you connect to it, instead of a login page as you would get in a hotel, you enter the password for one of the other networks that Woogie has detected, and you're good to go!
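The fallback decision at the heart of that flow can be sketched in a few lines. This is a hypothetical illustration of the logic only (function and variable names are mine, not Woogie's), leaving out the actual network plumbing:

```python
# Sketch of the Wi-Fi fallback logic: connect to a known network if one
# is in range, otherwise become the "woogiewifi" access point so the
# user can hand over a password, hotel-captive-portal style.

def choose_wifi_action(scanned_ssids, known_networks):
    """Decide what the connection manager should do next.

    scanned_ssids: list of SSIDs visible right now
    known_networks: dict mapping a saved SSID to its password
    """
    for ssid in scanned_ssids:
        if ssid in known_networks:
            # A known network is in range: connect with the saved password.
            return ("connect", ssid, known_networks[ssid])
    # No known network found: start our own access point and wait for
    # the user to connect and submit a password for a detected network.
    return ("start_ap", "woogiewifi", None)
```

With a saved "HomeNet" in range this returns a connect action; with only unknown networks around, it falls back to starting the woogiewifi access point.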
We have a module that downloads proactive content and waits for the perfect moment to challenge the kid, a module that processes and streams audio to our servers for speech recognition, a module that transforms text to speech with Woogie's voice... An added benefit of separating functionality among modules is extra security, but I'll talk about that in another post.
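The proactive-content module's "download now, challenge later" behavior can be sketched as a small queue plus a readiness check. Everything here (class name, the idle flag standing in for "the perfect moment") is a hypothetical illustration, not Woogie's actual API:

```python
# Sketch of a proactive-content module: pre-fetch items, then release
# one only when conditions look like a good moment for a challenge.

import collections


class ProactiveContent:
    def __init__(self):
        self.queue = collections.deque()

    def download(self, item):
        # In the real device this would fetch from a backend; here we
        # just keep the pre-fetched item locally.
        self.queue.append(item)

    def maybe_challenge(self, good_moment):
        # Only surface a challenge at a good moment (stood in for here
        # by a boolean); otherwise keep the content queued for later.
        if good_moment and self.queue:
            return self.queue.popleft()
        return None
```

The key design point is the separation: downloading and deciding-when are independent, so content is ready the instant the moment arrives.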
Presenting to a developer audience is inherently different; it is more akin to a lecture. Fittingly, I had finished my own lecture on "Design with Microprocessors" an hour earlier, in the same lecture room! The norm is to take questions at any time, which makes things very interesting and turns the talk into more of a discussion. The spectrum of reactions is no longer between "wow" and "meh" but shifts toward the inquisitive: "why did you make X like this?", "could you do Y with this product?", "have you tried Z framework?". It turns into a brainstorming session very quickly, and you leave with the satisfied feeling of having talked about something you built with love, and with new ideas.