Approaching Programming for a Humanoid Robot Project: Python, C++, and Java
Published on: September 30, 2024
Building a humanoid robot on wheels can be an exciting and challenging experience. This project aimed to create a robot with features like face detection, recognition, speech processing, and obstacle avoidance. The robot had a car-like driving mechanism with front-wheel steering and rear-wheel drive. The driving mechanism was automated using ultrasonic sensors, and the robot's head would tilt to detect and greet humans. Along the way, we encountered numerous challenges—both in programming and electronics. This document outlines the steps, decisions, lessons learned, and how future generations can approach similar projects with better planning and execution.
The robot was designed to be 4'5" in height, with the following features: face detection and recognition using a camera; obstacle avoidance using ultrasonic sensors; automatic driving using a front-wheel steering and rear-wheel drive system controlled by DC motors; and voice recognition with greetings, recognising people by name from a simple database. The project used a Raspberry Pi 4B, which acted as the brain of the robot, processing input from sensors and controlling motors.
We initially planned to power the robot using a 20000mAh power bank (18W output). However, this power bank was insufficient for the Raspberry Pi 4B, causing glitches and instability; in practice it could not deliver enough current to run the Pi reliably.
Lesson Learned: Always ensure that the power source meets the minimum requirements of the board, especially the power-hungry Raspberry Pi 4B, which expects roughly 3A at 5V.
Using the Scooter Battery: We decided to use a 12V 2.5Ah lead-acid scooter battery to power the entire robot, including the motors, sensors, and the Raspberry Pi. A buck converter was used to step the voltage down to 5V for the Raspberry Pi.
Lesson Learned: Lead-acid batteries are typically heavy and not ideal for lightweight projects. Lithium-ion batteries are better for projects where weight and portability are important.
Charger Compatibility: The charger used to charge the scooter battery had a rating of 12V 2A, which was insufficient for charging the battery. This led to undercharging and battery performance issues; a charger with the correct, higher current rating should have been chosen.
Lesson Learned: Always match the charging current with the battery specifications. Using a charger with lower current than required can result in slow charging and potential damage to the battery.
Wireless Charger Adapter: At one point, we tried to use a 12V 3.5A wireless charger adapter to charge the battery. While it could technically supply more current, it was not compatible with the lead-acid battery and did not provide the charging profile that chemistry requires.
Lesson Learned: When dealing with lead-acid batteries, always use chargers specifically designed for them, rather than trying to adapt other power sources.
The software was written in Python, a popular language for Raspberry Pi projects. The project involved several modules: face recognition (using the Pi Camera and OpenCV); motor control for driving the robot; speech recognition and synthesis (using libraries like pyttsx3 or Google Text-to-Speech); and ultrasonic sensor logic for obstacle avoidance.
We faced challenges due to the Raspberry Pi's high power consumption, which caused it to become unstable during heavy processing.
Lesson Learned: For performance-critical applications, consider using C++ for motor control and other time-sensitive tasks, while leaving high-level tasks (like face recognition) to Python, which has an easier-to-use library ecosystem.
Multithreading and Software Design: The software was split into multiple files, each handling a specific task. For example, control.py handled the motors, face_recog.py handled face detection, and main.py coordinated everything.
Lesson Learned: A well-organized modular approach is essential for complex projects. However, using multiple files requires careful management of multi-threading to ensure synchronization between tasks, especially when handling sensors, motors, and the camera simultaneously.
Communication Between Modules: The Raspberry Pi needed to coordinate multiple tasks at once, which required handling multiple sensor inputs and controlling motors without delay. We tried using different approaches, like threading, to execute multiple functions simultaneously. However, the overall performance suffered due to limited CPU power and lack of efficient resource management.
Lesson Learned: Concurrency and multithreading can be difficult to manage in Python, especially on low-powered devices like the Raspberry Pi. If possible, use C++ for performance-heavy tasks, or use a framework like ROS for better resource management. Alternatively, an Arduino Uno or Mega can be used to distribute the workload: the microcontroller handles sensing and detection while the Raspberry Pi (or another single-board computer) handles processing and heavier computation. This frees up memory on the Pi and generates less heat, but it also requires a power supply with enough capacity and runtime for both boards.
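To make the coordination concrete, here is a minimal threading sketch of how a main.py could tie the modules together. The queue, the event strings, and the function bodies are illustrative assumptions, not the project's actual interfaces.

```python
# main.py -- coordination sketch; module/function names are illustrative,
# not the project's actual interfaces.
import threading
import queue
import time

events = queue.Queue()  # shared channel from sensor/camera threads to the drive logic

def drive_loop():
    """Stand-in for control.py: consume events and issue motor commands."""
    while True:
        event = events.get()            # blocks until something is reported
        if event == "obstacle_front":
            print("stop, reverse, then steer using the rear sensors")
        elif event.startswith("person:"):
            print("tilt head and greet", event.split(":", 1)[1])

def face_loop():
    """Stand-in for face_recog.py: report recognised people."""
    while True:
        # frame capture and recognition would go here; we just simulate a detection
        time.sleep(5)
        events.put("person:Alice")

if __name__ == "__main__":
    threading.Thread(target=drive_loop, daemon=True).start()
    threading.Thread(target=face_loop, daemon=True).start()
    while True:                         # the main thread could poll the ultrasonic sensors
        time.sleep(1)
```

Keeping the threads decoupled through a queue like this avoids most shared-state headaches, even though it does not solve the underlying CPU limits of the Pi.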
We used four ultrasonic sensors (front-left, front-right, back-left, back-right) to handle obstacle avoidance. The robot would stop and reverse if both front sensors detected an obstacle, and the back sensors would then guide the robot to turn. An additional ultrasonic sensor on top of the head, facing forward, triggered the camera to check for a human whenever it detected an obstacle, as a safety measure.
Lesson Learned: Sensor placement is critical for accurate obstacle detection. The robot’s movement patterns should be thoroughly tested in different environments to ensure the sensors are positioned correctly for optimal detection.
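For reference, a rough sketch of the front-sensor check is below, assuming HC-SR04-style sensors read through RPi.GPIO; the GPIO pin numbers and the distance threshold are placeholders, not our actual wiring.

```python
# obstacle_check.py -- sketch of the front-sensor logic; pin numbers and the
# 30 cm threshold are placeholders, not the project's actual values.
import time
import RPi.GPIO as GPIO

FRONT_LEFT = {"trig": 23, "echo": 24}
FRONT_RIGHT = {"trig": 17, "echo": 27}

GPIO.setmode(GPIO.BCM)
for s in (FRONT_LEFT, FRONT_RIGHT):
    GPIO.setup(s["trig"], GPIO.OUT)
    GPIO.setup(s["echo"], GPIO.IN)
    GPIO.output(s["trig"], False)       # keep the trigger line low until needed

def distance_cm(sensor):
    """Fire one HC-SR04 reading and convert the echo time to centimetres."""
    GPIO.output(sensor["trig"], True)
    time.sleep(0.00001)                 # 10 us trigger pulse
    GPIO.output(sensor["trig"], False)
    start = stop = time.time()
    while GPIO.input(sensor["echo"]) == 0:
        start = time.time()
    while GPIO.input(sensor["echo"]) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2   # speed of sound, halved for the round trip

def front_blocked(threshold=30):
    """The robot stops and reverses only when BOTH front sensors see an obstacle."""
    return distance_cm(FRONT_LEFT) < threshold and distance_cm(FRONT_RIGHT) < threshold
```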
Motor Control: The robot initially used 12 V motors controlled via an L298N motor driver: one channel for forward and reverse drive, and another for the steering mechanism (left and right). Steering used simple timed pulses, for example two seconds clockwise and two seconds anticlockwise, and the mechanical linkage was built so that the steering would re-centre itself.
Lesson Learned: The L298N motor driver proved efficient and handled the heavy load of this project, whereas smaller drivers like the DRV8833 smoked out during tests even at room temperature. For the steering, a servo motor can be worth it instead; opt for a heavy-duty one (rated above roughly 30 kg).
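The sketch below shows roughly how the two L298N channels can be driven from the Pi with RPi.GPIO, following the timed-pulse steering idea described above; the pin assignments, PWM frequency, and duty cycles are assumptions, not our exact values.

```python
# motors.py -- sketch of the L298N control logic; pin numbers, PWM frequency,
# and the 2-second steering pulse mirror the description but are placeholders.
import time
import RPi.GPIO as GPIO

IN1, IN2, ENA = 5, 6, 13        # drive motor channel (forward/reverse)
IN3, IN4, ENB = 19, 26, 12      # steering motor channel (left/right)

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, ENA, IN3, IN4, ENB], GPIO.OUT)
drive_pwm = GPIO.PWM(ENA, 1000)     # 1 kHz PWM for speed control
drive_pwm.start(0)

def forward(speed=60):
    GPIO.output(IN1, True)
    GPIO.output(IN2, False)
    drive_pwm.ChangeDutyCycle(speed)

def reverse(speed=40):
    GPIO.output(IN1, False)
    GPIO.output(IN2, True)
    drive_pwm.ChangeDutyCycle(speed)

def stop():
    drive_pwm.ChangeDutyCycle(0)

def steer_left(seconds=2):
    """Pulse the steering motor one way for a fixed time; the linkage re-centres itself."""
    GPIO.output(IN3, True)
    GPIO.output(IN4, False)
    GPIO.output(ENB, True)
    time.sleep(seconds)
    GPIO.output(ENB, False)
```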
We used OpenCV and dlib to implement basic face recognition. The robot could detect a face, recognize it from a database, and greet the person by name. The database stored 15 samples of each person, compiled into a pickle file for face recognition.
Lesson Learned: While OpenCV and dlib provide powerful tools for face detection and recognition, they can be computationally intensive on the Raspberry Pi. Consider using optimized versions or offloading processing to a more powerful system if real-time performance is critical.
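As an illustration, the snippet below shows one way to match camera frames against encodings stored in a pickle file. We used OpenCV and dlib in the project; this sketch uses the face_recognition wrapper (built on dlib) for brevity, and the file name and dictionary layout are assumptions.

```python
# face_recog.py -- sketch of matching a frame against pickled face encodings.
# "encodings.pickle" and its {"names": [...], "encodings": [...]} layout are assumed.
import pickle
import cv2
import face_recognition

with open("encodings.pickle", "rb") as f:      # 15 samples per person, pre-encoded offline
    known = pickle.load(f)

def recognise(frame_bgr):
    """Return a list of names (or 'unknown') for the faces found in one frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    boxes = face_recognition.face_locations(rgb)             # find face bounding boxes
    encodings = face_recognition.face_encodings(rgb, boxes)  # 128-d encoding per face
    names = []
    for enc in encodings:
        matches = face_recognition.compare_faces(known["encodings"], enc)
        names.append(known["names"][matches.index(True)] if True in matches else "unknown")
    return names
```

Pre-computing the encodings offline and only comparing at runtime is what keeps this workable on a Raspberry Pi; running the full encoding step on every frame is where the real-time performance falls apart.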
Speech Processing: The robot greeted the user using text-to-speech (TTS). We initially used Google Text-to-Speech or pyttsx3 to generate speech from prewritten text; when the recognition logic identified a person, it triggered the speech routine with the appropriate greeting.
Lesson Learned: Speech processing requires significant CPU and memory. If possible, use pre-generated speech samples (as we did with our pre-recorded greetings) to avoid real-time TTS delays.
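Here is a small sketch of that approach: greetings are rendered to audio files once, offline, and the robot only plays them back at runtime. The gTTS call is standard; the names, file paths, and the mpg123 playback command are assumptions.

```python
# greetings.py -- pre-generate greetings offline, play them back at runtime
# so the Pi never runs TTS in the hot path. Names and paths are illustrative.
import os
from gtts import gTTS

NAMES = ["Alice", "Bob"]            # would come from the face database in practice

def pregenerate():
    """Run once (ideally on a desktop machine) to build the greeting files."""
    for name in NAMES:
        gTTS(f"Hello {name}, nice to see you!").save(f"greet_{name}.mp3")

def greet(name):
    """At runtime, just play the pre-rendered file -- far cheaper than live TTS."""
    path = f"greet_{name}.mp3"
    if os.path.exists(path):
        os.system(f"mpg123 -q {path}")   # any command-line audio player would do
```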
Ensure that your power source can handle the total power consumption, especially when running a power-hungry device like the Raspberry Pi 4B. Lithium-ion batteries are preferable for robotics projects due to their lighter weight and better energy density compared to lead-acid batteries. Always match the charger's current rating with the battery's requirements.
Python is great for high-level tasks (e.g., face recognition, speech synthesis), but consider using C++ for time-sensitive tasks like motor control and sensor processing to optimize performance. Use ROS if you're working on complex robotics projects with multiple sensors and actuators. It will help you handle concurrency, communication, and hardware abstraction efficiently.
Break down your code into manageable modules and ensure synchronization between threads or processes. This will help keep the project organized and scalable for future enhancements. Consider using event-driven programming or state machines for handling complex behaviors in robotics systems.
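For example, a tiny state machine like the sketch below keeps the top-level behaviour explicit and easy to test without any hardware attached; the states and transition rules are illustrative, not our actual logic.

```python
# behaviour.py -- minimal state-machine sketch for the robot's top-level behaviour.
from enum import Enum, auto

class State(Enum):
    DRIVING = auto()
    AVOIDING = auto()
    GREETING = auto()

def next_state(state, front_blocked, person_seen):
    """Pure transition function: unit-testable without motors or sensors."""
    if person_seen:
        return State.GREETING
    if front_blocked:
        return State.AVOIDING
    return State.DRIVING

# Example: the main loop reads sensors, computes the next state, then acts on it.
state = State.DRIVING
state = next_state(state, front_blocked=True, person_seen=False)
assert state is State.AVOIDING
```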
Test each module independently (e.g., motor control, face recognition) before integrating them into the final system. Debugging robotics projects can be difficult because you're working with both software and hardware. Make sure to have a methodical approach to track issues, from sensor calibration to motor control.
Building a humanoid robot on wheels is an ambitious yet rewarding project. By integrating software, hardware, and sensors, we can create intelligent robots that interact with their environment. This document provides valuable insights into the challenges and solutions encountered during the project. By learning from the mistakes made, future generations can approach robotics projects with better planning, informed decisions, and efficient execution.