
Line Robot - RCJ Line improved

Task

Our task now is to notice and correct imperfections in line and wall following.

Line revisited

First, let's improve line following. We want the robot to follow the line smoothly and keep the line in the middle of the sensor array. That way, when a challenging situation emerges, the robot will be in the best position to overcome it.

Obviously, it is not good to turn vigorously when the robot's center is close to the line. Doing so, the robot will overshoot the center and the result will be an oscillatory motion. A crude solution is here:

void loop(){
	const uint16_t LIMIT[] = { 300, 490, 450, 480, 470, 425, 410, 480, 340 };  // Below its threshold, a sensor sees the line.
	for (uint8_t i = 0; i < 9; i++)
		if (line(i) < LIMIT[i]) {
			go(60 + (i - 4) * 15, 60 - (i - 4) * 15);                  // Turn harder the farther the line is from the center (i = 4).
			break;
		}
}
To see how this logic works, consider the cases when i = 0 (line under the leftmost sensor), i = 4 (line in the middle), and i = 8 (line under the rightmost sensor). The closer the line is to the robot's center, the gentler the turning.
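Plugging these three cases into the formula shows the proportional steering directly. Below is a small check of the arithmetic; the helper function is only illustrative and not part of the robot's library, but it computes the same speeds the loop above would pass to go(left, right):

```cpp
#include <cstdint>
#include <utility>

// Illustrative helper: the (left, right) motor speeds the loop above
// would command for a line detected under sensor i (0 = leftmost, 8 = rightmost).
std::pair<int, int> steering(uint8_t i) {
	int left  = 60 + (i - 4) * 15;
	int right = 60 - (i - 4) * 15;
	return { left, right };
}
// i = 0 -> (0, 120): sharp turn to the left
// i = 4 -> (60, 60): straight ahead
// i = 8 -> (120, 0): sharp turn to the right
```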

This is better, but there are still problems. If the line is wider, 2 or more sensors may be activated, but only the first one from the left is taken into account. Also, there are discrete jumps between 2 adjacent sensors: either the first triggers the action or the second, although there are many line positions in between. But first, let's address another problem: variations in measurements.

Calibration

Due to different illumination and different phototransistor sensitivities, each of the 9 sensors produces a different measurement, as we already noticed. There is no problem while everything is fine, but bad things happen: irregular external light, bumps (which change the distance to the surface), a poorly reflective tape. It is best to calibrate the sensors at each venue where a competition takes place.

Calibration could be performed in a separate function and the results saved to flash memory, so that switching the power off does not erase them. Here, we will show a simpler case - auto-calibration while the robot is moving:

void loop(){
	static uint16_t bright[9], dark[9];                                  // Static values are preserved between passes. Brightest and darkest readings for each sensor.
	static bool firstPass = true;
	if (firstPass) {                                                     // Store approximate middle values, only once.
		firstPass = false;
		for (uint8_t i = 0; i < 9; i++)
			bright[i] = 570, dark[i] = 570;
	}

	bool found = false;                                                  // If line found, start the motors and stop searching.
	for (uint8_t i = 0; i < 9; i++) {
		uint16_t reading = line(i);                                  // Take a reading once, use it many times later.

		if (reading < (bright[i] + dark[i]) * 0.5 && !found)         // Use mid-value (*0.5) between the extremes.
			go(60 + (i - 4) * 15, 60 - (i - 4) * 15), found = true;

		if (reading > bright[i])                                     // If the current value is out of the stored range, extend the extremes.
			bright[i] = reading;
		if (reading < dark[i] && reading != 0)
			dark[i] = reading;
	}
}
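With the stored extremes, each sensor's raw reading can also be mapped onto a common scale, which removes the per-sensor differences entirely. A minimal sketch of such normalization, assuming the same calibration arrays as above (the function name and the 0-1000 scale are only illustrative):

```cpp
#include <cstdint>

// Map a raw reading onto a 0..1000 scale using the calibrated extremes:
// 0 = the darkest value this sensor has seen so far, 1000 = the brightest.
uint16_t normalized(uint16_t reading, uint16_t dark, uint16_t bright) {
	if (bright <= dark)                       // Not calibrated yet - no usable range.
		return 500;
	if (reading < dark) reading = dark;       // Clamp readings outside the stored range.
	if (reading > bright) reading = bright;
	return (uint32_t)(reading - dark) * 1000 / (bright - dark);
}
```

After normalization, a single threshold (for example 500, the middle of the scale) works for all 9 sensors, instead of the per-sensor LIMIT array used earlier.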

Wall revisited

Following a wall has the same problem: turning is not proportional to error. Let's correct that method, too.

void loop(){
	int16_t error = (frontRight() - 100) * 0.5;
	error = constrain(error, -50, 50);
	go(50 + error, 50 - error);
}
When the robot is too far from the wall, the error will be positive and proportional to the actual distance error. If we add this positive value to the left motor and deduct it from the right one, the robot will turn towards the wall, decreasing the error (proportionally). The other way round for a negative error. This was easy, as we had only one error value. However, if we want to do the same with the reflectance sensors, it will not be so straightforward, as we have 9 errors from 9 sensors.
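A few sample distances make the proportional behavior concrete. The helper below is only a stand-in for the loop above: it takes the measured distance in mm and returns the (left, right) speeds, clamping the error just like Arduino's constrain(error, -50, 50):

```cpp
#include <cstdint>
#include <utility>

// Illustrative stand-in for the wall-following loop: target distance 100 mm, gain 0.5.
std::pair<int16_t, int16_t> wallSpeeds(int16_t distanceMm) {
	int16_t error = (int16_t)((distanceMm - 100) * 0.5);
	if (error > 50) error = 50;               // Same effect as constrain(error, -50, 50):
	if (error < -50) error = -50;             // never reverse a motor, only slow it to 0.
	return { (int16_t)(50 + error), (int16_t)(50 - error) };
}
// 100 mm (on target)  -> (50, 50): straight
// 110 mm (too far)    -> (55, 45): gentle turn towards the wall
// 300 mm (way too far)-> (100, 0): clamped, sharpest allowed turn
```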

Line, unified error

It is true that we measure 9 values and therefore have 9 errors, but each sensor's contribution to the line's position can be weighted, so that the line's position yields a single error. We will neglect the sensors that do not detect any trace of black and consider the rest.

void loop(){
	float numerator = 0;
	float denominator = 0;
	for (int8_t i = 0; i < 9; i++) {
		uint16_t reading = line(i);
		if (reading < 700) {                          // Only the sensors that detect at least a trace of black.
			numerator += reading * (4 - i);       // Weight each sensor by its distance from the center.
			denominator += reading;
		}
	}

	float error = 0;
	if (denominator != 0)                                 // Avoid division by zero when no line is detected.
		error = numerator / denominator * 12.5;

	go(50 - error, 50 + error);
}
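It helps to run the weighted average by hand for a couple of sensor patterns. The helper below only repackages the same computation so it can be checked against fixed readings (the function itself is illustrative, not part of the robot's library):

```cpp
#include <cstdint>

// Illustrative stand-in: compute the unified error from a full set of 9 readings
// (readings[0] is the leftmost sensor), with the same weighting as the loop above.
float unifiedError(const uint16_t readings[9]) {
	float numerator = 0, denominator = 0;
	for (int8_t i = 0; i < 9; i++)
		if (readings[i] < 700) {                  // Skip sensors that see no black.
			numerator += readings[i] * (4 - i);
			denominator += readings[i];
		}
	return denominator != 0 ? numerator / denominator * 12.5f : 0;
}
// Line only under sensor 2 (reading 300): error = 300*2/300 * 12.5 = 25 -> go(25, 75).
// Line under sensors 3 and 4 (both 300): error = 300/600 * 12.5 = 6.25 -> go(43.75, 56.25).
// No line at all: error = 0 -> go(50, 50), straight ahead.
```

Note that a line between two sensors now produces an intermediate error, which removes the discrete jumps between adjacent sensors mentioned earlier.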

PID controller

We established a proportional dependence between the positional error and the robot's motion. However, there are some more problems. For a longer explanation, check this article covering the PID controller. In short, let's say that the robot should follow a wall at 100 mm and it is currently 110 mm away. There is a positional error of 10 mm, and our algorithm will instruct the robot to turn slightly towards the wall. Let's imagine further that the robot's current direction is already sharply towards the wall. Turning it even more towards the wall will do no good; it will start an oscillatory movement. In this example, correcting the error in position alone is not enough. Instead, we also have to consider the rate of change of the error - its first derivative.

You can develop your own algorithm for proportional, integral, and derivative dependencies. We will use a simple ML-R implementation here. Note that this example is not peculiar: this type of error correction is a must for most models. As real-world values do not stop changing abruptly, they will overshoot the target value, be it temperature, body motion, or something else. More about this problem later.
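To give a taste of what the derivative term does, here is a minimal PD correction sketch. This is not the ML-R implementation; the gains and names are illustrative only:

```cpp
// Minimal PD correction: the output reacts both to the error itself (P term)
// and to how fast the error is changing between loop passes (D term).
float pdCorrection(float error, float previousError, float kp, float kd) {
	float derivative = error - previousError;   // Rate of change per loop pass.
	return kp * error + kd * derivative;
}
```

For example, with kp = 0.5 and kd = 2, a robot 110 mm from the wall (error 10) that is already closing in fast (previous error 20, so the error dropped by 10 this pass) gets 0.5 * 10 + 2 * (-10) = -15: the D term overrules the P term and steers away from the wall, damping the overshoot before it happens.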