Computer Vision and Robotics

A practical introduction to autonomous robots for beginners

DIY Smart Tracking Camera

I had the opportunity to speak with Steven Snyder, Chief Technologist and Co-Founder at Stout AgTech. He told me in an interview that Stout AgTech’s Smart Cultivator is helping to alleviate the field worker shortage that many farms are facing. “Farmers are lucky if they can find even 15 workers to weed the fields. There is a serious labor shortage in agriculture, as there are not enough people who have a desire to work outside in the fields.” Stout AgTech’s Smart Cultivator is an automated weeding system that uses cameras, computer vision models, and robotic blades to remove unwanted plants from the fields while leaving the desired plants unharmed. Amazingly, just one of their machines is able to accomplish the work of dozens of field workers in the same amount of time.

In this blog post, you will see my own attempt at creating a robot with autonomous capabilities. I will introduce you to some basic robotics and computer vision concepts, and detail how you can build your own AI-enabled robot at home. Specifically, I will show you how I created a DIY smart robotic surveillance camera that autonomously tracks moving people.

If you enjoy this blog, please clap 50 times, share, and make sure to subscribe so you can stay up to date with upcoming posts.

The control system is the “brain” of the robot. It is the computer that handles all input sensor data and provides instructions to the mechanical components of the robot. In the case of this project, the control system is a Raspberry Pi computer.

Raspberry Pi Computer

The Raspberry Pi runs software that I wrote for this project. The software is a Flask web application written in the Python programming language. The Raspberry Pi simultaneously serves two functions here. It works as:

- the web server that hosts the Flask application, serving the live video feed and the control interface to the browser, and
- the robot’s control system, processing the camera frames and sending movement instructions to the servo motors.
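To make that structure concrete, here is a minimal sketch of what a Flask application like this looks like. This is not the actual Pi-Tracker code; the route name and the frame generator are illustrative assumptions:

```
# Minimal Flask sketch (illustrative, not the actual Pi-Tracker source).
from flask import Flask, Response

app = Flask(__name__)

def generate_frames():
    # Placeholder generator: in the real project this would yield
    # JPEG-encoded frames captured from the Raspberry Pi camera.
    while True:
        frame = b""  # hypothetical: the bytes of one JPEG frame
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + frame + b"\r\n")

@app.route("/video_feed")
def video_feed():
    # Stream frames to the browser as an MJPEG response.
    return Response(generate_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```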

A robot’s sensors detect information about its environment and send that information to the control system in real time so the robot can react to its surroundings. Sensors can be used to detect things like temperature, light, radiation, moisture levels, gases, and much more. In this project, the sensor being used is a Raspberry Pi camera module.

Raspberry Pi Camera Module

The camera in this project sends a real-time video feed to the Raspberry Pi, where the frames are processed and used as input to decide how the mechanical components should move.
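As a rough illustration of this input side, here is one common way to grab camera frames in Python with OpenCV. Pi-Tracker may read the camera differently (for example, through a Raspberry Pi-specific library), so treat this as a sketch:

```
# Sketch: reading camera frames with OpenCV (assumes the Pi camera
# is exposed as a standard video device, e.g. /dev/video0).
import cv2

cap = cv2.VideoCapture(0)  # device index 0
try:
    while True:
        ok, frame = cap.read()  # frame is a BGR numpy array
        if not ok:
            break
        # ...run object detection on `frame` and decide how to move...
finally:
    cap.release()
```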

An actuator is the component of a robot that is responsible for moving and controlling a mechanism. Here, the actuators are two small SG90 servo motors. These are special electric motors that have the ability to turn to precise angular positions.

SG90 Servo Motor
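To give a feel for how these servos are driven, here is a small sketch using the RPi.GPIO library. An SG90 expects a 50 Hz PWM signal, and the pulse width (expressed below as a duty cycle) sets the angle. The pin number and duty-cycle mapping are typical values, not necessarily what Pi-Tracker uses:

```
# Sketch: positioning an SG90 servo with RPi.GPIO (typical values shown).
import time
import RPi.GPIO as GPIO

SERVO_PIN = 12  # physical (BOARD) pin number

GPIO.setmode(GPIO.BOARD)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)  # SG90s expect a 50 Hz signal
pwm.start(0)

def set_angle(angle):
    # Roughly 2.5% duty at 0 degrees up to 12.5% at 180 degrees.
    duty = 2.5 + (angle / 180.0) * 10.0
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.3)            # give the servo time to reach the position
    pwm.ChangeDutyCycle(0)     # stop sending pulses to reduce jitter

set_angle(90)  # center the servo
pwm.stop()
GPIO.cleanup()
```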

Now that we understand the fundamental components of this robotic system, let’s take a look at it in more detail and put it all together.

There are a few things you will need to gather in order to start this project:

- A Raspberry Pi (with power supply and microSD card)
- A Raspberry Pi camera module
- Two SG90 servo motors and a pan-tilt camera mount
- Jumper wires to connect the servos to the Raspberry Pi
- A Coral USB Accelerator (optional, but it makes the tracking much smoother)

Wiring Diagram for Servo Motors

IMPORTANT: Servo 1 is the servo mounted closest to the base, and Servo 2 is the servo mounted directly behind the camera. Be sure to wire each servo to the correct pins shown in the illustration above; the web application and controls will not work properly if the servos are not wired correctly.

TIP: Accidentally wired the servos backwards? Edit lines 43 and 49 of app.py and swap pin numbers 12 and 18.
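For illustration, the relevant assignments in app.py look something like the snippet below. This is a hypothetical excerpt; only the pin numbers come from the tip above:

```
# Hypothetical excerpt of app.py: the two servo pin assignments.
SERVO_1_PIN = 12  # base (pan) servo, physical pin 12
SERVO_2_PIN = 18  # camera (tilt) servo, physical pin 18
# If the servos are wired backwards, swap 12 and 18 above.
```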

After everything is set up, it should look like this (except not moving yet):

Port forwarding allows remote computers to connect to the Raspberry Pi sitting behind your private network. We need to set up port forwarding so that both SSH and the Flask web application on the Raspberry Pi are accessible from outside your network.

WARNING: Before enabling SSH on your Raspberry Pi, make sure to change the default password on the Raspberry Pi to a new password.

Before you configure port forwarding on your network, you must know the following:

- The local IP address of your Raspberry Pi on your home network
- The public IP address of your network
- How to log in to your router’s admin page, where port forwarding rules are configured

Now that port forwarding has been set up on your Raspberry Pi, you can SSH into the Raspberry Pi from your laptop or computer’s terminal by running:
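```
ssh pi@<your-public-ip> -p <forwarded-port>
```

Replace <your-public-ip> with your network’s public IP address and <forwarded-port> with the external port you forwarded to the Pi’s SSH port (22). The pi username is the Raspberry Pi default; use your own username if you changed it.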

Enter the password to your Raspberry Pi when prompted, and then you should be remotely logged in.

Pi-Tracker is a Flask web application that I wrote for this project. It provides a live video feed from the Raspberry Pi camera module and also controls the mechanical hardware. It can be accessed from any browser and includes a menu for configuring some video settings. Pi-Tracker also has a control panel in the user interface where the user can manually change the viewing angle of the camera.

Pi-Tracker uses a computer vision model to detect the presence of humans in the video frame. If a person is detected, it automatically adjusts the camera angle until the detected person (or persons) is centered in the frame.
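The core idea behind that centering behavior is simple: compare the center of the detected person’s bounding box to the center of the frame, and nudge the pan and tilt servos to shrink the gap. Here is a sketch of that logic. It is not the actual Pi-Tracker code; the frame size, deadband, step size, and nudge directions are illustrative assumptions:

```
# Sketch of the centering logic (illustrative, not Pi-Tracker's source).
FRAME_W, FRAME_H = 640, 480     # assumed frame size
DEADBAND = 30                   # pixels of tolerance before moving
STEP = 2                        # degrees to nudge per frame

pan_angle, tilt_angle = 90, 90  # current servo angles

def track(box):
    """Nudge the servo angles so the detected box drifts toward
    the frame center. `box` is (x_min, y_min, x_max, y_max) in pixels."""
    global pan_angle, tilt_angle
    box_cx = (box[0] + box[2]) / 2
    box_cy = (box[1] + box[3]) / 2
    dx = box_cx - FRAME_W / 2
    dy = box_cy - FRAME_H / 2
    if abs(dx) > DEADBAND:
        pan_angle += STEP if dx > 0 else -STEP
    if abs(dy) > DEADBAND:
        tilt_angle += STEP if dy > 0 else -STEP
    pan_angle = max(0, min(180, pan_angle))
    tilt_angle = max(0, min(180, tilt_angle))
    # ...then command the servos to move to pan_angle / tilt_angle...
```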

At this point you should be remotely SSH’d into your Raspberry Pi. To install Pi-Tracker, enter the following command in the Raspberry Pi terminal:
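The install command clones the project repository with git. The URL below is a placeholder, not the project’s actual address:

```
git clone https://github.com/<author>/pi-tracker.git
```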

Now the code for the web application should be downloaded to a directory called pi-tracker on your Raspberry Pi.

Next, cd into the pi-tracker root project directory by running:
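```
cd pi-tracker
```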

Enable execution of the setup.sh script:
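```
chmod +x setup.sh
```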

Now we need to install all the project dependencies on your Raspberry Pi. To install them, run:
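```
./setup.sh
```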

It might take a couple minutes to finish installing all the dependencies for the web application and you may be prompted for several download confirmations. Once everything is finished downloading, run the application:
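```
python3 app.py
```

This assumes the application’s entry point is the app.py file referenced in the wiring tip above.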

Finally, open the link to the web application by following the hints given in the terminal.

Log in to the Pi-Tracker application with the default username (robot) and password (HelloRobot950).

You should see a live video feed in the Pi-Tracker application. Make sure object detection is enabled in the settings, and then have fun playing with it. Walk around in front of it and watch it follow you around the room, almost like magic!

The model that I am using has been optimized to run on the Coral USB Accelerator from Google, an Edge TPU (tensor processing unit) coprocessor that enables high-speed machine learning inferencing. This USB accelerator is about the size of a lighter and can perform 4 trillion mathematical operations in a single second.

Coral USB Accelerator

Without the Coral USB Accelerator, the Raspberry Pi is only able to run object detection at about 2 frames per second. With the accelerator, the Raspberry Pi can run inferences at an average of 35 frames per second, a 1,650% increase in speed. Without it, the video feed appears much laggier when object detection is enabled.
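For reference, this is the usual way an Edge TPU-compiled TensorFlow Lite model is loaded in Python with the tflite_runtime package. The model filename is a placeholder, and Pi-Tracker’s actual loading code may differ:

```
# Sketch: loading a TFLite model on the Coral Edge TPU (typical pattern).
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder filename
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # Edge TPU
)
interpreter.allocate_tensors()

# The input/output details describe the tensor shapes the model expects.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
```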

Comment down below if you gave this project a try, and let me know how it worked out for you. If you liked this article, please clap 50 times and subscribe to be notified about future content and projects.
