Category: robotics

Zakhar Brain Service

Yesterday I merged a big software update to Zakhar's Raspberry Pi Unit - the brain_service.

The update adds a service that reports the robot's status (network, OS status) and provides access to the CAN bus for many simultaneously connected clients. The service also tracks the presence of other robot Units on the CAN bus.

Brain Software Architecture

Before updating AliveOS to support qCAN (my CAN-bus-based protocol) I have one last thing to do. To simplify my life in the future, I need a CAN publisher that can publish messages to many subscribers. My main subscriber is, of course, AliveOS, but to display information about connected devices I also need a second subscriber - a service listening only for qCAN Present messages.

To do this I will use ZeroMQ - an extremely well-supported and well-documented messaging standard with bindings for many programming languages. I'm going to update my brain_service to support the protocol, and it will be responsible for all interaction with the Raspberry Pi.
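The one-publisher/many-subscribers setup maps directly onto ZeroMQ's PUB/SUB pattern. A minimal sketch with pyzmq; the topic names and port here are illustrative assumptions, not the actual brain_service configuration:

```python
import time
import zmq

# One PUB socket in brain_service can fan CAN traffic out to any number
# of subscribers (topic names and port are hypothetical).
ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5555")

# AliveOS would subscribe to everything (prefix b""); a device-list
# service filters on a single topic prefix instead.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")
sub.setsockopt(zmq.SUBSCRIBE, b"qcan.present")
sub.setsockopt(zmq.RCVTIMEO, 2000)  # don't block forever on a lost message

time.sleep(0.2)  # let the subscription propagate (PUB/SUB "slow joiner")

# Publish a frame as a multipart message: topic first, payload second.
pub.send_multipart([b"qcan.present", b"\x01\x02"])

topic, payload = sub.recv_multipart()
print(topic, payload)
```

Because subscribers filter by topic prefix on their own side, adding a new consumer never requires touching the publisher.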

Sensor Unit with qCAN Support is Merged!

Updated code for the Sensor Unit with qCAN support is merged and documented!

The Wheeled Platform with CAN support is merged!

A new step in Zakhar's global transition to the CAN bus is done!

New Zakhar Face Module

The new Unit is based on the ESP-Wrover-Kit and uses a TJA1050 module for CAN bus communication.

It is compatible with the qCAN protocol (described on the Zakhar main page) and can show 5 facial expressions: Anger (0x32), Sadness (0x34), Pleasure (0x33), a Blink (0x31), and Calm (0x30).
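Each expression maps to a one-byte command code, which fits naturally into a CAN payload. A small sketch of building such a command; the arbitration ID of the face Unit is a placeholder for illustration (the real ID and framing are defined by qCAN):

```python
FACE_CAN_ID = 0x0A  # hypothetical arbitration ID of the face Unit

# Expression command codes from the post.
EXPRESSIONS = {
    "calm": 0x30,
    "blink": 0x31,
    "anger": 0x32,
    "pleasure": 0x33,
    "sadness": 0x34,
}

def face_command(expression: str) -> tuple[int, bytes]:
    """Build an (arbitration_id, payload) pair for a one-byte
    expression command; the on-wire framing is assumed here."""
    return FACE_CAN_ID, bytes([EXPRESSIONS[expression]])

cid, payload = face_command("anger")
print(hex(cid), payload.hex())
```

The same pair could then be handed to any CAN library (e.g. python-can on the Raspberry Pi side) for transmission.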

CAN Bus and a New Simple Protocol

Not Only Two Wires

It’s been more than two years since I started working on my robot Zakhar. Zakhar 1 was built out of Lego and relied on I2C communication between modules. It was a nightmare because, as it turned out, each MCU vendor has its own understanding of how a developer should interact with the I2C peripheral. What I wanted from the interface:

Zakhar II: Separately controllable DC motor speed

"On the way to AliveOS" or "making of a robot with emotions is harder than I thought"

So, indeed, the more I develop the software for Zakhar, the more complicated it gets. First: contributions and collaborations are welcome! If you want to participate, write to me and we will find something you can do in the project. Second: feature branches got way too huge, so I’ll switch to a workflow with a develop branch (Gitflow) to accumulate less polished features and keep a steadier development rhythm. Currently, I’m actively (sometimes reluctantly) working on integrating my EmotionCore into Zakhar’s ROS system.

EmotionCore - 1.0.0

First release! The aim of this library is to implement an emotion model that can be used by other applications to implement behavior modifications (emotions) based on changing input data. The Core has sets of parameters, states, and descriptors:

- Parameters define the state of the core.
- Each state has a unique name (happy, sad, etc.).
- Descriptors specify the effect of input data on the parameters.

The user can use either the state or the set of parameters as output data.
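The parameters/states/descriptors split can be sketched in a few lines. Everything below - parameter names, descriptor values, state thresholds - is invented for illustration and is not the library's actual API:

```python
# Descriptors: how a named input event nudges the core's parameters.
DESCRIPTORS = {
    "petted":  {"joy": +10, "fear": -5},
    "dropped": {"joy": -10, "fear": +20},
}

def state_of(params: dict) -> str:
    """Pick a uniquely named state from the current parameter values."""
    if params["fear"] > 50:
        return "scared"
    if params["joy"] > 50:
        return "happy"
    if params["joy"] < -20:
        return "sad"
    return "calm"

class EmotionCore:
    def __init__(self):
        # Parameters define the state of the core.
        self.params = {"joy": 0, "fear": 0}

    def feed(self, event: str) -> None:
        # Apply the descriptor for this input to the parameters.
        for name, delta in DESCRIPTORS.get(event, {}).items():
            self.params[name] += delta

    @property
    def state(self) -> str:
        return state_of(self.params)

core = EmotionCore()
for _ in range(6):
    core.feed("petted")
print(core.state, core.params)
```

A caller can read either `core.state` (the named state) or `core.params` (the raw parameter set), matching the two output options described above.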

Updated draft of the ROS-node network with the Emotional Core

While working on the Emotion Core update, it became clear to me that placing responsibility for emotional analysis on Ego-like nodes (Consciousness, Reflexes, and Instincts) is the wrong approach. It leads to a situation where the application developer has to specify several types of behavior themselves, which makes development too complicated. I want to implement a different approach, in which concepts themselves carry information about how they should be modified based on a set of emotion parameters.
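The proposed approach can be sketched as a concept object that carries its own modification rule, so Ego-like nodes never hard-code per-emotion behaviors. The concept, parameter names, and coefficients below are illustrative assumptions, not the actual node design:

```python
from dataclasses import dataclass

@dataclass
class MoveForward:
    """A concept that knows how emotion parameters reshape it."""
    base_speed: float = 0.5  # m/s, hypothetical default

    def modified(self, emotion_params: dict) -> float:
        # The concept itself decides how fear and joy change its speed;
        # the consuming node just passes the current parameter set in.
        speed = self.base_speed
        speed *= 1.0 - 0.5 * emotion_params.get("fear", 0.0)  # fear slows it
        speed *= 1.0 + 0.2 * emotion_params.get("joy", 0.0)   # joy speeds it up
        return speed

concept = MoveForward()
print(concept.modified({"fear": 0.8}))  # slower when scared
print(concept.modified({}))             # unmodified baseline
```

With this inversion, a Consciousness or Reflex node only needs to forward the Emotion Core's parameters; it no longer has to know what "scared movement" means for each concept.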