New ESP32-based platform testing: Updates!

Works:

  • GY-91 over I2C: gyroscope + accelerometer
  • Serial interface for external control
  • I2C slave mode for external control
  • Motor control
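For reference, the GY-91 board combines an MPU-9250 IMU with a BMP280 barometer, and the IMU returns its readings over I2C as big-endian signed 16-bit words. Here is a minimal, hardware-free sketch of decoding an accelerometer burst read (register layout and ±2 g sensitivity per the MPU-9250 datasheet; the sample bytes below are made up):

```python
import struct

# MPU-9250 accelerometer data starts at register 0x3B (ACCEL_XOUT_H):
# six bytes, big-endian int16, in the order AX, AY, AZ.
ACCEL_SENSITIVITY = 16384.0  # LSB per g at the default +/-2 g range

def decode_accel(raw: bytes) -> tuple:
    """Convert a 6-byte burst read from ACCEL_XOUT_H into g units."""
    ax, ay, az = struct.unpack(">hhh", raw)
    return (ax / ACCEL_SENSITIVITY,
            ay / ACCEL_SENSITIVITY,
            az / ACCEL_SENSITIVITY)

# Fabricated sample: 0x4000 = 16384 LSB = 1.0 g on the Z axis.
sample = bytes([0x00, 0x00, 0x00, 0x00, 0x40, 0x00])
print(decode_accel(sample))  # (0.0, 0.0, 1.0)
```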

Zakhar disassembled

While the robot is changing all the time, after the latest update I decided to take a snapshot of its current state and show the public what’s going on. Let’s start:

Zakhar consists of three (sometimes four) platforms and a service OLED display. I’m using the latter for debugging, testing, etc.

Prototype of a new moving platform. Work in progress

Planned: ESP32, gyroscope, accelerometer, onboard data processing, variable speed, wireless mode and more! https://github.com/an-dr/zakhar_platform/tree/feature/esp32
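“Onboard data processing” with a gyroscope and accelerometer usually implies some kind of sensor fusion. As an illustration only (not the project’s actual code), a complementary filter that trusts the integrated gyro rate short-term and the accelerometer tilt long-term could look like this:

```python
import math

def accel_tilt(ax, az):
    """Tilt angle in radians estimated from two accelerometer axes."""
    return math.atan2(ax, az)

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter step: integrate the gyro, gently correct toward the accel angle."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Fabricated data: platform held still, tilted by 0.1 rad.
angle = 0.0
for _ in range(200):  # 2 s of updates at 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=0.1, dt=0.01)
print(round(angle, 3))  # converges toward 0.1
```

The choice of alpha trades gyro drift against accelerometer noise; 0.98 at 100 Hz is just a common starting point, not a tuned value.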

Some zakharos_core documentation

Before moving on to reimplementing the moving platform on the ESP32, I’ve added some documentation to the zakharos_core repository. The code has a lot of rough edges that will be polished later (probably during the next milestone, the Emotions Demo). The documentation is not complete yet but will be supplemented over time. The illustration, the requirements, and the build instructions should be enough to check the implementation out. So PTAL if you are interested!

Reimplemented the Reptile demo with ROS!

Half-milestone. What’s new for now:

  • Brand new moving platform
  • United sensor/face platform
  • Raspberry Pi 4
  • New ROS-based application architecture
  • Updated face module with stabilized voltage
  • Updated cable system with 8-pin connectors
  • Stable (finally) I2C connection, thanks to the lower frequency (10 kHz) and the new connectors
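Assuming the Raspberry Pi is acting as the I2C master (as it does in my setup), lowering the bus clock on Raspberry Pi OS is a one-line config change; the 10 kHz value is the one mentioned above:

```ini
# /boot/config.txt — lower the ARM I2C bus clock to 10 kHz
dtparam=i2c_arm_baudrate=10000
```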

Out of memory: moving to RPi4

OK, ROS needs RAM. I’ve just received a Raspberry Pi 4 B with 4 GB of RAM, so I can finally continue implementing this structure. Stay tuned!

Improved ROS-based architecture for the program core

I’m about to finish the migration to ROS. Aside from small things, the architecture seems to have crystallized. It looks like the following: every block represents a ROS node (except the hardware block). The interaction between the blocks tries to mimic a real brain (as far as I understand what is happening there). I’m too lazy to describe every node right now; I’ll write an article when this work ends up in the second demo. But if you are interested, you can check the code out in the repository here:

Just a photo of the current state and the repo I’m working on.

https://github.com/an-dr/zakharos_core (just pushed one more step according to the plan)
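The “every block is a ROS node” idea boils down to publish/subscribe over topics. Here is a deliberately tiny, ROS-free Python sketch of that pattern; the topic names and messages are invented for illustration and are not the actual zakharos_core interfaces:

```python
class Topic:
    """Tiny stand-in for a ROS topic: publishers push, subscribers get callbacks."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

# Hypothetical node graph: a sensor node feeds a "brain" node that drives motors.
log = []
sensors = Topic("/zakhar/sensors")    # made-up topic names, for illustration only
commands = Topic("/zakhar/commands")

sensors.subscribe(lambda m: commands.publish("stop" if m == "obstacle" else "go"))
commands.subscribe(lambda m: log.append(m))

sensors.publish("obstacle")
sensors.publish("clear")
print(log)  # ['stop', 'go']
```

In real ROS the topics live in the master’s namespace and callbacks run in each node’s own process, but the data flow between blocks is the same shape.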

Moving platform update 1

When three wheels are better than four