
Line Robot - Distance and color sensors

Wall following

This is one of the easiest tasks. Only 1 sensor is needed. The logic is simple: if You are too close to the wall, move away; if You are too far, turn towards the wall. Let's follow the right wall. The wall-following lidar should be directed at 45º to the right, if straight ahead is 0º. According to the picture that shows the lidars, this is lidar number 2 (C++ numbering; in the picture it is sensor number 3).

We will follow the right wall at a distance of 100 mm (more precisely, 100 mm in the 45º direction to the wall). So, if the distance is more than 100 mm, turn towards the wall; otherwise, turn away from it. This program will do the task:

void loop(){
	if (rightFront() > 100)
		go(80, 20);	// too far - left side faster, turn towards the wall
	else
		go(20, 80);	// too close - right side faster, turn away from the wall
}
The distance to the wall is checked. If it is more than 100 (mm), the robot turns right (toward the wall). 80 is the left motors' speed, 20 the right motors'. As the left side rotates faster, the robot will be turning toward the wall.

Start the program by entering the command "loo". If 80 and 20 are not the right speeds, change them. Increasing the difference will result in sharper turning but may start unwanted oscillations. Use the trial-and-error method.

This simple program will not yield smooth movement and may not be able to follow all the walls. We will consider these problems later.

Wall following one more time

Following a wall has the same problem we noticed in line following: turning is not proportional to the error. Let's correct this method, too.

void loop(){
	int16_t error = (rightFront() - 100) * 0.5;	// positive when too far from the wall
	error = constrain(error, -50, 50);	// limit the correction
	go(50 + error, 50 - error);	// too far - left side speeds up, turning towards the wall
}
When the robot is too far from the wall, the error will be positive and proportional to the actual error. If we add the positive value to the left motor's speed and deduct it from the right one's, the robot will turn towards the wall, decreasing the error proportionally. The other way round for a negative error. This was easy, as we had only one value for the error. We didn't have to cope with the 9 values, and 9 errors, which we got from the reflectance sensors.

Avoiding obstacles

If there is an obstacle ahead while we follow a wall or a line, the robot will have to avoid it.

  • In case we are following a wall, we shall turn away from the wall we are following till the obstacle ahead disappears, when we will be able to restart wall following.
  • In case of line following, we will also have to turn till the obstacle disappears, but we will have more freedom in choosing the side (any of the 2). The next phase will be the same as for wall following: we will be following the wall (i.e. the obstacle). The only difference is that we will also be looking for the black line. As soon as we detect it, we will restart line following.

First, how will we code the distance measuring? Here is the code fragment:

...
	if (frontLeft() < 90)	// an obstacle closer than 90 mm straight ahead
		go(-50, 50);	// rotate leftwards in place
...
}
frontLeft() is a function that calls "mrm_lid_can_b->reading(1)". This is object-oriented notation: the lidar object is called "mrm_lid_can_b" and its function "reading()" measures the distance in mm. It is not necessary to remember this construction if You prefer to use frontLeft() instead. Lidar "1" looks straight ahead. If the measured distance is less than 90 mm, the robot will be turning leftwards in place. In the successive passes, it will continue turning to the left till the obstacle disappears.

You can continue developing the code following that logic: first the front obstacle must disappear, then You should follow the right wall (in fact, the obstacle), looking for the line.
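Here is a possible sketch of that logic, put together from the fragments above. The helper function lineDetected() is an assumption: it should return true when a reflectance sensor sees the black line, and You will have to write it Yourself.

void avoidObstacle(){
	// Phase 1: rotate leftwards in place till the obstacle ahead disappears.
	while (frontLeft() < 90)
		go(-50, 50);
	// Phase 2: follow the obstacle as a right wall till the line is found again.
	while (!lineDetected()){
		if (rightFront() > 100)
			go(80, 20);	// too far from the obstacle - turn towards it
		else
			go(20, 80);	// too close - turn away from it
	}
	// The line is detected - the caller can restart line following.
}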

Many beginners use a different approach. After detecting an obstacle, they turn left (or right) by a fixed angle, then go ahead following a fixed trajectory till the robot encounters the line again. This method is much easier to program, but the result will not be as good as with the previous one.
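A sketch of this beginner approach, just for comparison. The fixed delay of 700 ms is an arbitrary assumption, and lineDetected() is the same hypothetical helper as above:

void avoidObstacleSimple(){
	go(-50, 50);	// turn leftwards in place...
	delay(700);	// ...for a fixed time, i.e. by a roughly fixed angle
	go(60, 30);	// arc rightwards around the obstacle, on a fixed trajectory
	while (!lineDetected())
		;	// keep the trajectory till the robot encounters the line again
	go(0, 0);	// stop; line following can be restarted
}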

Task

The robot must recognize green markers, so it uses 2 ML-R 6-channel color sensors CAN Bus (mrm-col-can).

Menu actions

Run the program and choose "col" in the main menu. This will show the submenu shown in the picture on the left.
  • lof and lon turn the illumination off and on. It is almost always a good idea to turn it on. Different intensities can be chosen in code; here, only the weakest can be selected.
  • per erases all the recorded patterns. More about patterns later. It is possible to erase only selected ones programmatically, but here all will be deleted.
  • ppr prints all the recorded patterns. This is an easy way to check what has been recorded.
  • pre is not implemented yet (but is in the code). Use "hsv" to get similar results.
  • par records a pattern.
  • 6co tests 6 colors of the current surface under the sensors: red, green, blue, orange, violet and yellow, with the 6 built-in sensors.
  • hsv tests HSV (Hue Saturation Value) of the surface.
  • x returns to the main menu.

6 colors and HSV

The sensor's native mode is to measure 6 different colors. You can use them to recognize the green markers. The other way is to use HSV (Hue Saturation Value); the sensor calculates the HSV values. The 6 colors or the HSV values can be fetched from the sensor, or You can use recorded patterns. You define a pattern by presenting a chosen color to the sensor and instructing it to store the color's characteristics: the 6 colors. If You present an unknown color later, You can ask the sensor which stored color (pattern) is the closest.

The main purpose of HSV values is to exclude the external light intensity. Hue and saturation should be constant for a given color under various illumination intensities. So, in theory, HSV is preferable. The reality is a little tricky, however. White and black can have any H and S values, green's included. So, taking V into account could be a good idea. What is worse, when the robot crosses from black to white, or vice versa, it can pick up green's V value on the way, too. Fortunately, at crossings, the color sensors should not experience the last problem.

In the end, which method is the best? It is difficult to say. You should try automatic pattern recognition by both HSV and 6 colors, but some manual tweaking will probably be needed.
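For example, here is a minimal sketch of such manual tweaking, using HSV values fetched from the sensor. The function decides whether the surface is green; all the thresholds are pure assumptions (with hue scaled to 0-255) and must be tuned using the "hsv" test on Your own field:

bool isGreen(uint8_t hue, uint8_t saturation, uint8_t value){
	bool hueOk = 70 < hue && hue < 130;	// roughly the green band - tune it
	bool saturatedEnough = saturation > 50;	// rules out white, gray and black
	bool valueOk = 30 < value && value < 220;	// rules out black and strong glare
	return hueOk && saturatedEnough && valueOk;
}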

Record a pattern

Put a chosen color below a sensor and enter the "lon" and "par" commands. An interactive screen will ask You for the sensor's number and the pattern's number. If You wait too long, a timeout will occur. Otherwise, the pattern will be saved.

If You made a mistake, repeat the procedure to record the pattern again. It is not necessary to repeat "lon". The new value will override the previous one. Patterns are not lost when You switch the power off.

It is a good idea to use "lof" to switch the illumination off after the session. The LED is quite strong. Also be careful when choosing even higher intensities (not possible in the menu, but possible in code). Check the temperature.

Print patterns

Here is an example in which each sensor holds 5 recorded patterns. Choose the "ppr" shortcut.
  • 0 red.
  • 1 blue.
  • 2 green.
  • 3 white.
  • 4 black.
The sensor doesn't come with predefined patterns. You must record them.

Test HSV

Choose "lon", followed by "hsv" shortcuts and put different surfaces below the sensor. For each sensor in system 3 HSV values will be displayed, followed by "By HSV" and "By col" columns. The last 2 are recognized patterns' numbers, by HSV and by 6 colors.

How does the sensor decide which pattern is the closest? It uses a 6-dimensional space for the 6 colors and a 3-dimensional one for HSV, and calculates a metric between the test point and all the existing pattern points. The metric is a close approximation of the distance, but avoids square roots.
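Whether the sensor's firmware uses exactly this formula You can check in its library, but here is a sketch of the usual trick: comparing squared distances selects the same closest pattern as comparing true distances, so the square root can be skipped. The arrays here are illustrative, not the sensor's actual storage.

uint8_t closestPattern(uint16_t test[6], uint16_t patterns[][6], uint8_t patternCount){
	uint32_t bestMetric = 0xFFFFFFFF;
	uint8_t bestPattern = 0;
	for (uint8_t pattern = 0; pattern < patternCount; pattern++){
		uint32_t metric = 0;
		for (uint8_t channel = 0; channel < 6; channel++){
			int32_t difference = (int32_t)test[channel] - patterns[pattern][channel];
			metric += difference * difference;	// squared distance - no square root needed
		}
		if (metric < bestMetric){
			bestMetric = metric;
			bestPattern = pattern;
		}
	}
	return bestPattern;	// the number of the nearest recorded pattern
}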

Test 6 colors

Testing the 6 colors will display the sensors' raw values. It is probably needless to say that "Bl" is blue, "Gr" green, "Or" orange, "Re" red, "Vi" violet, and "Ye" yellow. Use "lon" followed by "6co" to get these results. Do not forget "lon"; otherwise You will get very small numbers. It is possible to increase them even in low-light conditions, but You have to turn the on-chip amplification on. You can choose different amplification values.

Usage in code

Take a look at

void RobotLine::lineFollow(){
...
	if (mrm_col_can->patternRecognizedBy6Colors(0) == 2) ...	// sensor 0 recognized pattern number 2
...
}
code in mrm-robot-line.cpp. Also study the mrm-col-can.h header in the sensor's library.
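A hedged usage sketch, assuming patterns recorded as in the example above (pattern number 2 is green) and sensor number 0; the reaction to the marker is only illustrative:

void loop(){
	if (mrm_col_can->patternRecognizedBy6Colors(0) == 2){	// sensor 0 sees the green pattern
		go(0, 0);	// stop on the green marker
		delay(1000);	// an illustrative pause; a real robot would turn here
	}
	// ... otherwise continue following the line, as in the previous chapters
}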