Many years ago, I worked with my daughter on her science fair project, built around a CMUcam originally developed at Carnegie Mellon University. She modified the code to deliver basic obstacle recognition after converting the RGB image to one using hue, saturation, and intensity (HSI). The 8-bit micro was pushed to the limit and the frame rate was very low, but it worked.
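For readers curious about that color-space step, the standard textbook RGB-to-HSI conversion looks roughly like this. This is a generic sketch of the formulas, not the actual code from that project:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB components (each 0..1) to HSI.

    Returns (hue in degrees 0-360, saturation 0-1, intensity 0-1),
    using the common geometric HSI formulation.
    """
    # Intensity is simply the average of the three channels.
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0  # pure black: hue and saturation undefined

    # Saturation measures how far the color is from gray.
    s = 1.0 - min(r, g, b) / i

    # Hue comes from the angle between the color vector and the red axis.
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:
        h = 0.0  # achromatic (r == g == b): hue undefined, report 0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i
```

With these formulas, pure red maps to a hue of 0°, green to 120°, and blue to 240°, which is what makes hue-based thresholding more robust to lighting changes than working directly in RGB.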
The CMUcam was updated multiple times and I even picked up the PixyCam (CMUcam5) from Charmed Labs. This had a dual-core micro and did more sophisticated image analysis and object tracking. I worked with the Pixy when it first came out. The family has continued to grow—now there’s a Pixy2 that even works with the Lego Mindstorms robotic platform.
The latest platform is named Vizy (Fig. 1) and it’s available as a Kickstarter project. To get a better idea of what Vizy can do, I talked with David Kapsner of Charmed Labs.
Can you tell me more about Charmed Labs?
We’re a small company in Austin, Texas. We like to design products for educators, hobbyists, and roboticists. “Educational robotics” is a niche that we’ve been in for quite a while. We launched the Pixy camera as a Kickstarter campaign several years ago. We’ve also been involved with designing the Gigapan line of camera mounts. We’re big nerds, obviously.
What is the story behind Vizy?
I’ve been working with the Education School here at UT Austin, helping some of the students with “maker skills” that they can use later in their teaching. The program is called UTeach Maker, and they’re just awesome. Anyway, one of the students plans to become a physics teacher, and we got to talking about some of the cool gadgets physics teachers use to teach concepts such as projectile motion, conservation of momentum, etc.
It got me thinking: physics teachers could probably benefit from a vision system that tracks the motion of objects. That was the original idea behind Vizy. We’ve since added other things Vizy can do, like monitoring your birdfeeder, as a way to show off its versatility. It’s a good, solid platform for vision applications, and a fun learning platform for subjects such as physics, astronomy, AI, image processing, and IoT.
Now that you have a successful Kickstarter campaign, what are your plans for Vizy once the campaign is finished?
Thanks! We’ll be busy getting everything manufactured, tested, documented, and shipped out. One nice thing about crowdfunding is that at the end, you get answers to important questions. Like how many should we make? What aspects of Vizy are people most excited about? Where should we focus our efforts? Vizy can do lots of different things, and the campaign should help us focus on what’s important to our customers. So that’s good.
We also want to create an easy and efficient way for Vizy users to share their code with others. We suspect our own ideas about how to use Vizy won’t necessarily be the best ones, so we want to stay open and develop the right kind of back-and-forth with the community.
We also have some cool ideas on the software side that we want to explore. It’ll be fun to see where all this goes!
Why do we need an open-source AI camera?
What’s interesting about the AI space is that the open-source software is cutting-edge (or near cutting-edge). Google has released and is maintaining TensorFlow, and is making sure it runs well on the Raspberry Pi hardware. It’s an interesting time in AI and machine vision. The fact that it’s all open source means more people can get involved and contribute their effort and ideas.
Why did you use a Raspberry Pi as the base?
The Raspberry Pi 4 came out last year, and for the first time the Raspberry Pi is able to run deep-learning neural networks at reasonable speeds, say 4 or 5 Hz. It’s not super-fast, but lots of applications can use this level of performance. Our birdfeeder application is a good example.
The Raspberry Pi Foundation will keep improving the performance, and future versions of Vizy will improve as well. The Raspberry Pi is also ubiquitous: everyone is using it, and the amount of information and support for it is huge. Contrast that with other platforms that aren’t as well known and have spottier support. As a result, we expect the Vizy platform (with Raspberry Pi) to be a compelling platform for AI and machine vision going forward.
What types of machine-learning software can be used with the system?
I can’t think of any machine-learning software that can’t be used with the Raspberry Pi: TensorFlow, PyTorch, Keras, OpenCV, and the many smaller Python packages all run on it. It’s a big space, and it’s not constrained by lack of support, which is another reason the Raspberry Pi is compelling. With Vizy, we wanted to take the good parts of the Raspberry Pi and add better power hardware, better I/O, a robust enclosure, and easier-to-use, more integrated software (Fig. 2). We’re excited to see what people do with it!