ciNeuroBot is the umbrella name for a project that attempts to create mobile robots that use the Neurosolver's ability to learn temporal patterns in order to acquire the skills needed to perform useful tasks. Robots can be taught by being told (e.g., by remote navigation along the routine to be learned) or by exploration (e.g., by rewarding good behavior and penalizing bad behavior).
This work follows a number of experimental testbeds for the Neurosolver, including a virtual rat navigating a maze in search of food (as described in one of my papers). The work has now progressed toward creating a physical version of the rat and the maze.
The rat-robot, named ciNeuroBot, utilizes the Neurosolver. In several past projects we experimented with a large robot platform (look for “Aider” in the CI theses repository), but we have since moved to a modern, smaller, more flexible, and affordable platform. ciNeuroBot is based on the DiddyBorg platform, which runs on Raspberry Pi SBCs (single-board computers). We have added a single 360° planar LIDAR scanner to give the robot the ability to determine its location by identifying environmental cues (contours).
In the course of its operation, the robot associates its locations with states of the Neurosolver and uses the Neurosolver's capabilities to learn behavioral patterns (sequences of states) that allow the robot to relocate according to its current objectives, which may be set externally or may result from some higher-level logic. The objectives constitute the robot's internal drive (“desire”) to maximize the reward for fulfilling them. For example, a low level of satiation may lead the robot to seek places with virtual food, which in turn increases the level of satiation. The robot may switch to another objective dynamically; for example, it can act as a delivery robot and take on new delivery requests as they arrive and are prioritized.
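The interplay between internal drives and externally queued requests can be sketched as a simple selection policy. All names here (`Drive`, `select_objective`, the drive names, and the thresholds) are illustrative assumptions, not part of the ciNeuroBot codebase:

```python
import heapq

class Drive:
    """A hypothetical internal drive; urgency grows as its level drops."""
    def __init__(self, name, level, threshold):
        self.name = name            # e.g., "satiation"
        self.level = level          # current level in [0, 1]
        self.threshold = threshold  # below this, the drive demands attention

    def urgency(self):
        # The further below the threshold, the more urgent the drive.
        return max(0.0, self.threshold - self.level)

def select_objective(drives, pending_requests):
    """Pick the most pressing objective: an urgent internal drive wins over
    queued external requests (e.g., deliveries), which are served by priority."""
    urgent = max(drives, key=lambda d: d.urgency())
    if urgent.urgency() > 0:
        return ("satisfy", urgent.name)
    if pending_requests:
        _, request = heapq.heappop(pending_requests)
        return ("deliver", request)
    return ("idle", None)
```

With this policy, a robot whose satiation has dropped below its threshold first seeks virtual food; once satiated, it turns to the highest-priority delivery request.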
The following are the specific sub-projects in this line of research:
Raspberry Pi-based robot server.
This server controls the robot's hardware, including the motors and the LIDAR scanner. It also includes a communication server that receives commands from clients and sends data back to them; the data include LIDAR scans and a video stream.
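The actual wire protocol between the server and its clients is not described here, so the following is only a plausible sketch of the server-side command parsing under the assumption of a simple line-based text protocol; the command names and argument layouts are invented for illustration:

```python
def parse_command(line):
    """Parse one text command line into (verb, args).
    The command set below is illustrative, not the actual protocol."""
    parts = line.strip().split()
    if not parts:
        raise ValueError("empty command")
    verb, args = parts[0].upper(), parts[1:]
    if verb == "MOVE":                        # MOVE <linear> <angular>
        if len(args) != 2:
            raise ValueError("MOVE expects 2 arguments")
        return verb, (float(args[0]), float(args[1]))
    if verb in ("SCAN", "SNAPSHOT", "STOP"):  # no-argument commands
        return verb, ()
    raise ValueError(f"unknown command: {verb}")
```

A dispatcher on the server would map each parsed verb to a motor or sensor action and stream the resulting data back over the same connection.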
Android-based remote robot controller and video streaming client.
This Java-based client works with the robot-resident server, allowing for robot control and for obtaining data from the robot's LIDAR and video camera. It can play the video stream, record it, and take snapshots, and it can display contours detected by the LIDAR scanner.
iOS-based remote robot controller and video streaming client.
This Swift-based client works exactly like the Android version.
ciNeurobot maze simulator.
This Python-based simulator allows for exploration of LIDAR-based localization in an offline environment. It simulates LIDAR scans by projecting LIDAR-like rays onto virtual walls and building the contour perceived by the robot from any given location. It also outputs the virtual LIDAR scanner readings (the intersections of the virtual rays with the virtual walls), so the data can be used for classification purposes.
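The core of such a simulated scan is ray/segment intersection. The sketch below (function and parameter names are my own, not the simulator's) casts rays from the robot's position against wall segments using the standard parametric intersection test:

```python
import math

def cross(ax, ay, bx, by):
    """2D cross product (the z-component of the 3D cross product)."""
    return ax * by - ay * bx

def cast_ray(origin, angle, walls, max_range=10.0):
    """Return the distance from `origin` along a ray at `angle` to the
    nearest wall segment, or `max_range` if nothing is hit."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    best = max_range
    for (x1, y1), (x2, y2) in walls:
        sx, sy = x2 - x1, y2 - y1
        denom = cross(dx, dy, sx, sy)
        if abs(denom) < 1e-12:           # ray parallel to this wall
            continue
        t = cross(x1 - ox, y1 - oy, sx, sy) / denom  # distance along the ray
        u = cross(x1 - ox, y1 - oy, dx, dy) / denom  # position along the wall
        if t >= 0.0 and 0.0 <= u <= 1.0:
            best = min(best, t)
    return best

def scan(origin, walls, beams=360, max_range=10.0):
    """Simulate one full 360-degree planar scan: one range reading per beam."""
    step = 2.0 * math.pi / beams
    return [cast_ray(origin, i * step, walls, max_range) for i in range(beams)]
```

The list of readings produced by `scan` is the perceived contour of the environment from that location, and it is exactly this kind of vector that can be handed to a classifier.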
The simulator reads a configuration file describing the maze, builds it virtually, and then takes variable input in the form of a velocity vector that determines the robot's movements. The simulated robot also uses the LIDAR data to stay away from obstacles (e.g., the walls) by optimizing its distance from anything perceived by the scanner.
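One simple way to realize such distance-keeping is to add a repulsive component to the commanded velocity, pointing away from the closest scanned obstacle. The simulator's actual optimization may differ; this minimal scheme, with invented names and constants, only illustrates the idea:

```python
import math

def avoid_obstacles(velocity, readings, safe_dist=0.5, gain=1.0):
    """Adjust a commanded velocity (vx, vy) by pushing away from the
    closest obstacle. `readings` is a list of (angle, distance) pairs
    in the robot's frame. Illustrative only; not the simulator's code."""
    vx, vy = velocity
    angle, dist = min(readings, key=lambda r: r[1])
    if dist < safe_dist:
        # Repel harder the closer the obstacle is.
        push = gain * (safe_dist - dist) / safe_dist
        vx -= push * math.cos(angle)
        vy -= push * math.sin(angle)
    return vx, vy
```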
The simulator can accommodate several robots running in the maze at the same time. Each rat can have its own pluggable behavioral module.
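A pluggable behavioral module amounts to an interface the simulator calls on each step for each rat. The interface below is a guess at its shape (class and method names are assumptions), with a toy implementation:

```python
import math
from abc import ABC, abstractmethod

class BehavioralModule(ABC):
    """Hypothetical interface each simulated rat plugs into the simulator."""
    @abstractmethod
    def step(self, pose, readings):
        """Given the rat's pose (x, y, heading) and the latest LIDAR
        readings, return the next velocity vector (vx, vy)."""

class ForwardCrawler(BehavioralModule):
    """Toy module: creep forward along the current heading."""
    def step(self, pose, readings):
        _, _, heading = pose
        return 0.1 * math.cos(heading), 0.1 * math.sin(heading)
```

The simulator would then iterate over its rats, calling each one's `step` with that rat's own pose and scan, so Neurosolver-driven and hand-coded rats can share a maze.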
ciNeurobot behavioral module.
This Neurosolver-based module uses the localization data obtained from the classifier to build a model for goal-oriented behavior. A robot with a fully trained behavioral model is able to move between locations when given a specific goal to attain. For example, it can use the camera and image-processing unit to visit specified locations. The specification can be as simple as moving to a yellow place, or as complex as fetching the parts needed to assemble some construction.
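The Neurosolver's internal network is not reproduced here; as a stand-in, the goal-oriented idea can be illustrated with a plain transition graph learned from observed state sequences and searched for a path to the goal state (all names are mine, and graph search is only a crude proxy for what the Neurosolver does):

```python
from collections import deque

def learn_transitions(trajectories):
    """Accumulate observed state-to-state transitions from training runs.
    (A plain adjacency map stands in for the Neurosolver's learned
    temporal patterns.)"""
    graph = {}
    for run in trajectories:
        for a, b in zip(run, run[1:]):
            graph.setdefault(a, set()).add(b)
    return graph

def plan_to_goal(graph, start, goal):
    """Return a shortest sequence of states from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Given a goal (such as "the yellow place"), the robot would execute the returned state sequence, moving between the corresponding locations.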
ciNeurobot localization module.
This module uses the data obtained from a virtual or physical LIDAR to build an SVM-based classification model that categorizes perceived contours into landmarks. Furthermore, it uses meta-level heuristics to differentiate between similar landmarks present at different locations in the maze.
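A key step before any such classification is turning a raw scan into features. The sketch below extracts a rotation-invariant range histogram and assigns it to the nearest landmark prototype; the project uses an SVM for this, and nearest-centroid matching is substituted here only to keep the example dependency-free (all function names are illustrative):

```python
import math

def contour_features(readings, bins=8):
    """Reduce a raw 360-degree scan (a list of range readings) to a small,
    rotation-invariant feature vector: a normalized histogram of ranges.
    Rotating the robot permutes the readings but leaves the histogram
    unchanged, which helps when the heading is unknown."""
    lo, hi = min(readings), max(readings)
    span = (hi - lo) or 1.0
    hist = [0] * bins
    for r in readings:
        idx = min(bins - 1, int((r - lo) / span * bins))
        hist[idx] += 1
    total = float(len(readings))
    return [h / total for h in hist]

def classify(features, centroids):
    """Assign the feature vector to the nearest landmark prototype.
    (A stand-in for the SVM decision; `centroids` maps landmark names
    to feature vectors.)"""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda name: dist(features, centroids[name]))
```

When two landmarks yield near-identical contours, a meta-level heuristic such as conditioning the decision on the previously recognized location can break the tie.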