AngelBrain, a cyber salad for the Raspberry PI

Carlo Alberto
Feb 10, 2021

Today, I want to write about my latest side project, AngelBrain, a sort of guardian for my house built on very cheap and constrained hardware: a Raspberry PI 3 B+.

Preface

I’ve always been looking for something new to create or build, ever since I was very young. One of the earliest creations I remember was a red lamp, mounted on the rear of my bicycle, that turned on when I braked. I was 10 years old and it was so raw and fragile. Nevertheless, it caught the interest of the kids in the neighborhood, so I spent the next two weeks “enhancing” their bikes too. Not so bad for my pockets.

In the following years, I found myself working on many variants of that red braking light. Some of them were just “side projects”, others became real businesses. In all these cases, my desire to build something was (and still is) the real driving force; and I’ve always been learning a lot.

And even this time, I had a lot of fun and learned so much :-)

The idea

In recent years, there has been a huge increase in the supply of surveillance cameras you can buy and install on your own. For a few bucks, you can get great HD devices that transmit video straight to your mobile phone, wherever you are. Indeed, their presence has grown enormously in our cities and homes: I installed 4 of them to cover the perimeter when I moved to my new house!

Security cameras you can buy with “pocket money”

Usually, these cameras can be configured with a dedicated app. Most of them are nowadays integrated with Google or Alexa, and they have their own premium services, like the ability to save the video stream in the cloud in addition to the local SD card.

One of the most interesting features, at least for my own needs, is the “triggers” based on motion detection. Basically, if “something moves”, your camera can start recording or send an alert. When you leave your house, e.g. for your holidays, you will be alerted in the unfortunate event that someone breaks in.

But this great functionality fails miserably if you live with a pet or if you put these devices outside, as I did. In fact, a falling leaf, a quick change in the light, or a bee flying in front of the lens would suddenly trigger an alert.

Believe me, I was flooded with false alarms. So I deactivated these triggers and started checking my cams periodically. Several times a day at the very beginning, but quickly a lot less frequently…!

So, the idea popped into my mind: let’s build something that watches my cameras and alerts me only in very specific situations, like a person showing up… if I had this, I could considerably reduce or even eliminate false alarms!

I was so thrilled that I quickly started to think about it, and I realized I’d need to put together something able to read the camera video stream, analyze the frames, and communicate, maybe by sending messages?

The idea for “AngelBrain”, my new project, was set.

The software

In the spare time that followed, I started implementing the “core” of AngelBrain, a C++ program able to read a video stream and classify the frames using a neural network.

Since I didn’t want to make this code too complex, I also decided to send these raw events, the classifications, to a Java server; the latter would then be responsible for filtering and aggregating the events, and ultimately for communicating through a simple Telegram chat.

The “core” design is quite simple and its key components are:

  • An RTSP video stream grabber based on the FFmpeg library (I started with GStreamer but I found it too complex for my purposes)
  • A classifier built with OpenCV and the YOLO v3 neural network (see the sketch right after this list)
  • A messenger that communicates with the Java server by leveraging the MQTT protocol via the Paho MQTT library
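
To give an idea of what the classifier part looks like, here is a minimal sketch of a YOLO v3 detection pass on a single frame using the OpenCV DNN module. It is a simplification, not my actual code: the file names, the 0.5 threshold, and the free function are just placeholders.

    #include <opencv2/dnn.hpp>
    #include <string>
    #include <vector>

    // Minimal sketch: run YOLO v3 on one frame and return the detected class names.
    // The network is loaded once elsewhere, e.g.:
    //   cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
    std::vector<std::string> classifyFrame(cv::dnn::Net& net, const cv::Mat& frame,
                                           const std::vector<std::string>& classNames)
    {
        // YOLO v3 expects a square, normalized RGB blob (416x416 is a common choice).
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0, cv::Size(416, 416),
                                              cv::Scalar(), true, false);
        net.setInput(blob);

        // One forward pass through all the YOLO output layers.
        std::vector<cv::Mat> outs;
        net.forward(outs, net.getUnconnectedOutLayersNames());

        std::vector<std::string> detected;
        for (const cv::Mat& out : outs) {
            // Each row is [cx, cy, w, h, objectness, class scores...].
            for (int i = 0; i < out.rows; ++i) {
                cv::Mat scores = out.row(i).colRange(5, out.cols);
                cv::Point classId;
                double confidence;
                cv::minMaxLoc(scores, nullptr, &confidence, nullptr, &classId);
                if (confidence > 0.5)  // arbitrary confidence threshold
                    detected.push_back(classNames[classId.x]);
            }
        }
        return detected;
    }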

I was able to have a prototype of the software running on my PC in a matter of a few weekends.

AngelBrain core at work

The Java server, hosted on Heroku, receives these events, applies some basic logic, and, in case of a “significant event”, sends the alert as a Telegram chat message.

The main blocks of the Java server are:

  • the free MQTT broker add-on from Heroku
  • an interpreter for the messages coming from the “core”, with some basic, configurable filtering logic
  • a Java client for the Telegram Bot API

Some of the events received by the MQTT broker on Heroku
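
On the “core” side, publishing one of these events is just a handful of calls. Here is a simplified sketch, assuming the C++ flavour of the Paho client; the broker address, topic name, and JSON payload are placeholders, and a real messenger would also need to keep the connection alive and handle reconnections.

    #include <mqtt/async_client.h>
    #include <string>

    // Simplified sketch of the messenger: publish one classification event.
    // Broker address, topic, and payload format are placeholders.
    void publishEvent(const std::string& label, double confidence)
    {
        mqtt::async_client client("tcp://my-broker.example.com:1883", "angelbrain-core");

        mqtt::connect_options opts;
        opts.set_clean_session(true);
        client.connect(opts)->wait();            // blocking connect, for brevity

        std::string payload = "{\"label\":\"" + label + "\",\"confidence\":" +
                              std::to_string(confidence) + "}";
        client.publish("angelbrain/events", payload.c_str(), payload.size(),
                       1 /* QoS */, false /* not retained */)->wait();

        client.disconnect()->wait();
    }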

The prototype version of my AngelBrain was ready and running smoothly on my computer… but I wanted something more… I wanted this code (I am talking about the “core”, so the grabber and the neural network classifier) to run somewhere other than my computer…

… and this is where I met the real challenge!

Porting AngelBrain to the Raspberry PI 3 B+

Just after I ran the prototype of AngelBrain, I was playing with a Raspberry PI, nicely boxed, and I started thinking it was the “perfect platform” to run my software: a low-energy-consumption device, super compact, and very cheap! Something that can be turned on and simply forgotten!

My Raspberry PI 3 B+, nicely boxed: a Broadcom BCM2837B0 SoC with a 64-bit quad-core Cortex-A53 (ARMv8) running at 1.4GHz, a VideoCore IV GPU, and 1GB of LPDDR2 SDRAM.

But how to make it really work? I needed to take my C++ code and its libraries, compile everything for that platform, and add the neural network weights. I felt I had to squeeze and save every single clock cycle and bit for my AngelBrain… and so I did, but it was not an easy trip!

My first move was to kick out the Raspbian OS and replace it with a custom, highly optimized Linux image. A friend of mine suggested Yocto:

The project provides a flexible set of tools and a space where embedded developers worldwide can share technologies, software stacks, configurations, and best practices that can be used to create tailored Linux images for embedded and IoT devices, or anywhere a customized Linux OS is needed.

Yocto is a beast. It’s very powerful and potentially limitless, but it comes at a price: the learning curve is very steep, it requires a lot of time, and building your configuration feels like “trial and error.” At least in my own experience.

Then you have to deal with the bundled versions of the libraries: it happened that Yocto was shipping older ones. Sometimes that’s not a big problem. Sometimes it is huge.
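
Just to give the flavor, a minimal local.conf for this kind of image could look something like the sketch below. Recipe names, versions, and the extra layers you need in bblayers.conf (e.g. meta-raspberrypi and meta-openembedded) depend on your setup and on the Yocto release, so take it as indicative only.

    # Target board
    MACHINE ?= "raspberrypi3"

    # FFmpeg sits behind a "commercial" license flag in Yocto
    LICENSE_FLAGS_WHITELIST = "commercial"

    # Libraries the AngelBrain core needs in the image
    IMAGE_INSTALL_append = " ffmpeg opencv paho-mqtt-c"

    # Pin a recipe when the bundled version is too old (version is just an example)
    PREFERRED_VERSION_pn-ffmpeg = "4.3%"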

After some time spent tuning versions, rebuilding on my computer, and testing, I was finally able to build my first OS image, flash the SD card, and turn on my RPI.

Ta-daaaa…. my trip to Hell was just beginning…

Yes, because I experienced so many problems, core dumps, and unexpected behaviors that I was on the verge of giving up several times.

The first big problem was with GStreamer. The pipeline I was smoothly running on my PC just silently died on the RPI. I tried so many different things and approaches, but they never worked, and I was not really able to reproduce the problem outside of my RPI. I spent weeks reading documentation, browsing forums, formulating hypotheses, and writing test code. Nothing.

In the end, I was so desperate that I decided to re-implement the video grabber from scratch, moving to FFmpeg. But this chapter was so exhausting that I put the project on hold for several months. I had to recharge my batteries.

Then I got new energy and started again.

After re-implementing part of my AngelBrain to introduce FFmpeg, I had to review the set of Yocto configurations and recipes to purge GStreamer (the easy part) and to include FFmpeg. In the meantime, a new version of Yocto was released: Gatesgarth. I decided to update to it too. Luckily, it wasn’t so hard.
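
For reference, the skeleton of an FFmpeg-based RTSP grabber is roughly the one sketched below. It is heavily simplified: no error handling or cleanup, no pixel format conversion, and the stream URL is a placeholder.

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }

    // Rough skeleton of an FFmpeg RTSP grabber: open the stream, pick the video
    // stream, then keep pulling packets and decoding them into frames.
    void grabLoop(const char* rtspUrl)
    {
        avformat_network_init();

        AVFormatContext* fmt = nullptr;
        avformat_open_input(&fmt, rtspUrl, nullptr, nullptr);
        avformat_find_stream_info(fmt, nullptr);

        // Locate the video stream and set up its decoder.
        int videoIdx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        const AVCodec* codec = avcodec_find_decoder(fmt->streams[videoIdx]->codecpar->codec_id);
        AVCodecContext* dec = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(dec, fmt->streams[videoIdx]->codecpar);
        avcodec_open2(dec, codec, nullptr);

        AVPacket* pkt = av_packet_alloc();
        AVFrame* frame = av_frame_alloc();

        // The read/decode loop: it must be serviced fast enough, or frames pile up.
        while (av_read_frame(fmt, pkt) >= 0) {
            if (pkt->stream_index == videoIdx) {
                avcodec_send_packet(dec, pkt);
                while (avcodec_receive_frame(dec, frame) == 0) {
                    // hand the decoded frame over to the classifier
                }
            }
            av_packet_unref(pkt);
        }
    }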

Something started working, the software didn’t die, but it was not running as expected. At least not for the effort it had cost: it was not detecting anything.

I started debugging again… and found the source of the new problem.

The RPI is very slow. When I say slow, I really mean it. And decoding an RTSP video stream requires some “juice”: you can’t afford to be late in calling the FFmpeg methods that read the stream. Otherwise, everything literally turns into marmalade. And your neural network won’t work. As simple as that. Period.

So I had to re-think my “core” again, splitting the video-capturing code from the neural network classification and introducing a multi-threaded producer/consumer pattern.
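
The idea is simple: the grabber thread does nothing but drain the stream and keep the most recent decoded frame, while the classifier thread picks frames up at its own pace, skipping the ones it cannot keep up with. Below is a stripped-down sketch of that shared “mailbox”; a real implementation also needs to deal with shutdown and errors.

    #include <condition_variable>
    #include <mutex>
    #include <opencv2/core.hpp>

    // Shared "mailbox" holding only the most recent frame: the grabber never waits,
    // and the slow classifier simply skips the frames it cannot keep up with.
    class LatestFrame {
    public:
        void put(const cv::Mat& frame) {
            std::lock_guard<std::mutex> lock(mutex_);
            frame_ = frame.clone();      // overwrite whatever was there before
            hasFrame_ = true;
            cv_.notify_one();
        }

        cv::Mat take() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return hasFrame_; });
            hasFrame_ = false;
            return frame_;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        cv::Mat frame_;
        bool hasFrame_ = false;
    };

    // Producer: the FFmpeg grab loop calls latest.put(frame) for every decoded frame.
    // Consumer: the classifier thread loops on latest.take() and runs YOLO on the result.

This way the RTSP stream is always drained in time, and the neural network only pays for the frames it can actually process.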

New rounds of debugging, testing, compiling, and image building. Luckily, this time it went better… much better: my RPI started recognizing objects and sending events to the Java server :-)

Inside my RPI 3B+ showing the dump of classified events

Ok, it’s very slow, just a few frame classifications per minute, but it’s more than enough if you consider that the RPI 3 B+ costs about $40 :-)
