
Digital image processing on a self-constructed model vehicle


Master's thesis in the Computer Science course
presented by Georg Jenschmischek
on December 9, 2016
at the Leipzig University of Technology, Economics and Culture (HTWK Leipzig)

First examiner: Prof. Dr. Sibylle Schwarz
Second examiner: Prof. Dr. Klaus Bastian

Table of Contents

Table of Contents
List of Figures
List of Tables
List of Source Code
List of Abbreviations
Abstract
1 Introduction
  1.1 Objective
2 Basics
  2.1 Autonomous Driving
    Definition of Terms
    Current Developments
  2.2 Digital Image Processing
    Definition of Terms
    Methods Used
  2.3 Digital Image Processing in Autonomous Driving
3 Model Vehicle Construction
  3.1 Motivation
  3.2 Requirements
  3.3 Implementation of the Requirements
    Chassis, Engine and Steering
    Obstacle Detection
    Vision
    Speed and Attitude Determination
    Driving Behavior
    Computing Technology
    Power Supply
  Suggestions for Improvement
4 Digital Image Processing
  Initial Situation
    Lane Recognition Algorithm Used So Far

    Weaknesses of the Previously Used Lane Recognition Algorithm
    Requirements for the New Lane Recognition
  Planning
    Functionality of Spatiotemporal Lane Recognition
    Reasons for Using Spatiotemporal Lane Recognition
  Implementation
    Basic Architecture of the Lane Recognition According to Theory
    Adjustments
  Practical Test of the New Lane Recognition
    Comparison of the Camera Systems
    Meeting the Requirements of the New Lane Recognition
    Limits of the New Lane Recognition
Conclusion
  Outlook
Bibliography
Annexes
Affidavit

List of Figures

Figure 1  Hough algorithm example
Figure 2  Binarization example
Figure 3  Model vehicle of the AADC 2015/16
Figure 4  Hardware layout of the model vehicle of the AADC 2015/16
Figure 5  1:10 EP Touring-Car 4WD RtR 2.4GHz
…
Figure 22 Camera image from Figure 18 with detected lane marking
Figure 23 Basic architecture of the lane detection

List of Tables

Table 1  Test results for determining the accuracy of the ultrasonic sensors HC-SR04 and SRF08
Table 2  Fulfillment of the requirements through the new lane recognition

List of Source Code

Source code 1  Interface definition of the lane recognition
Source code 2  Implementation of the spatiotemporal lane recognition

List of Abbreviations

AADC    Audi Autonomous Driving Cup
ADTF    Automotive Data and Time Triggered Framework
BASt    Federal Highway Research Institute (Bundesanstalt für Straßenwesen)
CISC    Complex Instruction Set Computer
HTWK    University of Technology, Economics and Culture (Hochschule für Technik, Wirtschaft und Kultur)
IEEE    Institute of Electrical and Electronics Engineers
IMN     Faculty of Computer Science, Mathematics and Natural Sciences
IMU     Inertial Measurement Unit
IPM     Inverse Perspective Mapping
KI      Artificial Intelligence (German: Künstliche Intelligenz)
LiDAR   Light Detection and Ranging
NHTSA   National Highway Traffic Safety Administration
OpenCL  Open Computing Language
OpenCV  Open Source Computer Vision Library
RISC    Reduced Instruction Set Computer
ROS     Robot Operating System
USB     Universal Serial Bus

Abstract

This thesis deals with the construction of a model car that is able to move autonomously over a miniature street landscape. After a requirements analysis, the hardware used is presented together with the reasons why it is suitable for the model vehicle, followed by a list of suggestions for improvement that arose while the car was being designed and tested. A lane recognition algorithm is then developed for the constructed model vehicle. For this purpose, a spatiotemporal approach, which a requirements analysis showed to be suitable, is implemented and adapted to the conditions of the miniature road landscape. Finally, the fulfillment of the requirements and the limits of the developed algorithm are demonstrated in a practical test.

Keywords: autonomous driving, digital image processing, lane recognition, lane tracking, model vehicle, fish-eye camera, wide-angle camera, spatiotemporal

1 Introduction

The film I, Robot from 2004 ([VG04]) shows how people in the near future will no longer have to drive their cars themselves. Instead, the vehicles move independently to a specified destination while the occupants pursue other activities. What was presented as utopia in a science-fiction film twelve years ago is now almost a reality. Large vehicle manufacturers such as Audi, BMW and Tesla are intensively researching driver assistance systems that can work together to enable autonomous driving functions. There is also a student research group for autonomous driving at the Leipzig University of Technology, Economics and Culture. The present work was created within the framework of this research group and deals with the construction of an autonomous model vehicle and the development of a lane recognition algorithm. The exact goals of this work are explained below.

1.1 Objective

The objective of this thesis consists of two main tasks. First, an autonomous model vehicle is to be designed and constructed for the student research group for autonomous driving at the Leipzig University of Technology, Economics and Culture. It is to be equipped with sensors, actuators and computing technology so that it can move independently over an unknown miniature street landscape. To do this, it must first be clarified which requirements are placed on such a vehicle. When implementing these requirements, it is then considered which hardware can meet them. In the second part of this thesis, a lane detection algorithm for the previously constructed model car is to be developed. Here, too, requirements must first be specified that narrow down the selection of lane recognition approaches. A selected lane recognition approach is then to be implemented and adapted for the new model vehicle.

2 Basics

At the beginning of this work, the theoretical basics are explained. The following chapter therefore defines the term autonomous driving. In addition to the historical development of autonomous vehicles, current developments will be discussed and the research work at the University of Technology, Economics and Culture (HTWK) will be examined in more detail. The second part of this chapter deals with digital image processing. Here, too, terminology and methods are clarified first. Subsequently, possible uses of image processing methods in autonomous driving are explained and the connection between the two subject areas of this thesis is shown.

2.1 Autonomous driving

Definition of the term

Defining the term autonomous driving turns out to be difficult because there is no internationally recognized, uniform definition. Accordingly, two definitions are presented below that together create an overall picture of autonomous driving. A first approach is to consider the two terms autonomous and driving separately and explain their meanings. The word autonomous is derived from the two Greek words αὐτός (autos) and νόμος (nomos), which translate as self and law; it corresponds to the words independent or self-governing (cf. [Du14]). In this work, the term driving is equated with the driving of passenger cars or trucks in everyday traffic. This includes driving through urban areas, on country roads and on motorways, by day or by night. Special vehicles, such as forklifts, and special driving regions, such as forest paths, are excluded for the sake of simplicity.

If the meanings of the words autonomous and driving in the sense just explained are combined, the meaning of the compound term autonomous driving as used in this work follows: the independent driving of a passenger car or truck. More precisely, this means that a vehicle can move in traffic without any action by a driver. [JM15] try to explain the term autonomous driving differently. They see autonomous driving as a logical continuation of previous vehicle development. In recent years, more and more systems have been installed in cars that are intended to support the driver in driving the vehicle. These include, for example, lane-keeping systems, cruise control, the anti-lock braking system and parking aids. If these systems are combined or expanded, for example with cameras, the authors suggest that a car will soon no longer need a driver. Both definitions of autonomous driving are relevant for this work. The first describes the activity of autonomous driving in general, while the second looks at the technical development of hardware and software. Following these definitions, the autonomous model vehicle that is to be created in the course of this work will be equipped with hardware and software in such a way that it can perform the activity of autonomous driving. The following chapter deals with the current progress in the field of autonomous driving and the work of the HTWK in this area.

Current developments

In the previous section, the term autonomous driving was defined for this work. It will now be explained which current developments are taking place in this research area. Information about the HTWK Smart-Driving research group is also presented. [We14] and [Vi15] agree that the first research breakthrough in the field of self-driving automobiles was the work of Ernst Dickmanns between 1986 and 1994.
During this time he equipped a Mercedes van with several camera systems at the Bundeswehr University in Munich. With it he drove hundreds of kilometers autonomously on the autobahn at an average of 110 kilometers per hour. The computer technology that evaluated the camera images still filled the entire rear of the van.

Despite this research breakthrough, the automobile companies initially did not dare to develop Ernst Dickmanns' work further. According to [Vi15], they did not want to patronize drivers. Therefore, no research was done on autonomous driving for a long time. Only in the past ten years have developments in the field of autonomous vehicles begun to move again. From 2008, Google in particular emerged as the driving force (cf. [We14]). In the meantime, automobile manufacturers have also changed their minds on the subject of autonomous driving, and Audi, for example, is doing research in this area. A research group at the HTWK Leipzig is also working on the topic of autonomous driving. The so-called HTWK Smart-Driving team has been taking part in the Audi Autonomous Driving Cup (AADC) since 2014. In this competition, organized by the eponymous automobile manufacturer, student groups are given a model car on a scale of 1:8. On it are sensors such as cameras and ultrasonic sensors, as well as a mini computer that can process the sensor data. Like the other competition participants, the HTWK Smart-Driving team works on developing software for the model vehicle so that it can move independently over a previously unknown miniature road landscape. After a period limited to about six months, all participating student groups meet in Ingolstadt and test which model car drives most safely autonomously. Camera systems are an important sensor in all autonomous vehicles. The evaluation and interpretation of the image information can be carried out with the aid of digital image processing. What is hidden behind this term and what digital image processing can be used for in self-driving cars is explained in the following section.

2.2 Digital image processing

Definition of terms

A picture is worth a thousand words, as a well-known saying goes. In fact, this sentence describes the methodology of digital image processing briefly and concisely. The aim of digital image processing is to obtain information of a higher level of abstraction from image data of any kind. It has thus developed into a standard tool for practically all natural sciences and technical disciplines (cf. [Jä05]). The methods used in digital image processing all share the goal of making the information required by the user more recognizable. Therefore, it is not uncommon for different users to apply different chains of processes to the same image in order to highlight exactly the features that interest them. The actual information acquisition, starting from the processed image, mostly takes place with methods from the field of pattern recognition, which is closely related to digital image processing (cf. [GW08]). The following section briefly explains the digital image processing methods used in this work.

Methods used

In this section, the digital image processing methods used in this work are explained. These include the Hough algorithm, inverse perspective mapping and the binarization of images. First, however, the term image is defined formally. An image B can be viewed as a function that maps the set of image positions pos into the set of colors col:

B: pos → col

If the image has the height h_B and the width b_B, then the position set is:

pos = {0, …, b_B − 1} × {0, …, h_B − 1}

In this work, an element of the position set is specified using its x- and y-coordinates (x, y) ∈ pos.
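The formal definition of an image as a function can be illustrated with a short sketch (a hypothetical numpy example, not code from the thesis; the 8-bit gray values anticipate the color spaces defined next):

```python
import numpy as np

# A small sketch of the formal definition: an image B as a function from
# positions (x, y) to color values, stored as a numpy array. The values
# assume an 8-bit gray color space, i.e. col = {0, ..., 255}.
h_B, b_B = 4, 6                    # height and width of the image
img = np.zeros((h_B, b_B), dtype=np.uint8)

def B(x, y):
    """B: pos -> col, with pos = {0,...,b_B-1} x {0,...,h_B-1}."""
    return img[y, x]               # note: row index is y, column index is x

img[1, 2] = 255                    # set position (x=2, y=1) to white
print(B(2, 1))                     # 255
```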

The set of colors can be defined differently depending on the color space of the image. A gray-scale image with a color depth of n_col bits has the color set:

col_gray = {0, …, 2^(n_col) − 1}

The value 0 corresponds to the color black, the value 2^(n_col) − 1 to the color white. Color images can be represented in digital image processing by using three color spaces instead of one. For this purpose, three gray color spaces are linked, with each of the three corresponding to a red, green or blue channel of the color image. Any color can be represented by combining red, green and blue tones of different strengths. The result is:

col_RGB = {0, …, 2^(n_col) − 1}³

Since the algorithms presented in this work do not work with color information but only with brightness information, gray-scale images are assumed as standard from now on.

Hough algorithm

The Hough algorithm is a process that can identify straight lines in images. It is one of the segmentation processes, which search for areas in the image that have certain, known properties (cf. [Jä05] p. 481f.). In this case the known property is straightness. The Hough algorithm is based on the fact that straight lines can be represented in the Hesse normal form:

g: d = x cos φ + y sin φ

The straight line g is represented by the shortest distance d ∈ ℝ≥0 to the coordinate origin and by the angle φ ∈ [0, π] of the straight line to the x-axis. In the position set pos, x and y are variables, while φ and d are parameters. The Hough space H reverses this relationship and is defined as:

H = [0, π] × ℝ≥0

It therefore has a φ- and a d-axis instead of an x- and a y-axis. Accordingly, a point in the Hough space corresponds exactly to a straight line in the pos space.

The Hough algorithm now generates a so-called voting matrix V, in which every point (i.e., every straight line in pos) can receive a number of votes. Let P ⊆ pos be the set of all image points that meet the properties of the straight lines sought, for example, that are particularly light or dark. V is then defined as:

V: H → ℕ

V(φ_V, d_V) = Σ_{(x_P, y_P) ∈ P} v(x_P, y_P), with v(x_P, y_P) = 1 for d_V = x_P cos φ_V + y_P sin φ_V, and 0 otherwise

All points from P thus generate one vote in V for every straight line that passes through them. The strongest straight lines in the image are then the global maxima of V and form the results of the Hough algorithm. Figure 1 shows the result of the Hough algorithm. The initial image can be seen on the left, the voting matrix V on the right. The two bright points in V, which received the most votes, are clearly visible. These correspond to the two straight lines from the picture on the left.

Figure 1 Hough algorithm example. Output image on the left, voting matrix V on the right (adapted from [Da06])

The results of the Hough algorithm can be influenced by restricting the set of relevant pixels P in different ways. In addition, the value ranges and the resolution of H in V can be reduced in order to exclude or summarize certain straight lines.
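The voting scheme described above can be sketched in a few lines (a simplified illustration with made-up points; practical implementations such as OpenCV's HoughLines work on the same principle but are heavily optimized):

```python
import numpy as np

def hough_votes(P, n_phi=180, n_d=100, d_max=100.0):
    """Build the voting matrix V: every point in P votes for all straight
    lines d = x*cos(phi) + y*sin(phi) that pass through it."""
    V = np.zeros((n_phi, n_d), dtype=np.int32)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    for (x, y) in P:
        d = x * np.cos(phis) + y * np.sin(phis)   # distance for each angle
        valid = (d >= 0) & (d < d_max)
        d_idx = (d[valid] / d_max * n_d).astype(int)
        V[np.nonzero(valid)[0], d_idx] += 1       # one vote per (phi, d) cell
    return V, phis

# Ten points on the horizontal line y = 20 (i.e. phi = pi/2, d = 20):
P = [(x, 20) for x in range(0, 50, 5)]
V, phis = hough_votes(P)
phi_i, d_i = np.unravel_index(V.argmax(), V.shape)
print(round(float(phis[phi_i]), 2), int(d_i))     # 1.57 20
```

The global maximum of V recovers the line's Hesse parameters, exactly as the bright spots in Figure 1 mark the strongest lines.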

The Hough algorithm is used in this work both in the previously used lane recognition (see Chapter ) and in the newly designed lane recognition (see Chapter ).

Inverse Perspective Mapping (IPM)

Inverse Perspective Mapping (IPM) is a perspective transformation. Perspective transformations are functions that map the perspectively distorted image positions pos to desired, rectified image positions pos′. Inverse perspective mapping is the projection of the image scene from the image plane onto a plane of the real world, with the projection center retained (cf. [Fa90] p. 614). In the case of autonomous driving, IPM can be used, for example, to rectify the perspective image from a front camera so that it looks as if one were looking at the street scene from above (bird's-eye view). One way of determining a unique IPM mapping function is explained in Chapter ; there, the projection of the vehicle camera image onto the street plane is described with the help of the pinhole camera model.

Binarization

In digital image processing, the term binarization refers to a function f_bin: B → B_bin that transforms an image with the color space col = {0, …, 2^(n_col) − 1} into an image with the binary color space col_bin = {0, 1}. An image that has been subjected to binarization then consists only of the colors black (color value: 0) and white (color value: 1). Binarizations use a limit value g ∈ col. All color values above the limit value are colored white, all below black:

B_bin(x, y) = 0, for B(x, y) < g; 1, for B(x, y) ≥ g

Figure 2 Binarization example. Top: gray value image. Bottom: binarized image with limit value g = 205

The limit value can be statically defined and thus remain unchanged for the entire binarization. However, it can also be calculated dynamically on the basis of various parameters, such as the surroundings of the current pixel, brightness ratios or the temporal progression of these parameters. In contrast to static limit values, dynamic limit values can react to changes within an image or between different images. Static limit values, on the other hand, require no additional calculation time. One method for dynamic binarization that is used in this work is the Yen algorithm from [YCC95]. It is used both in the previously used lane recognition algorithm (see Chapter ) and in the newly developed one (see Chapter ).
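The binarization just defined can be sketched as follows (a toy image; the dynamic limit value shown is a deliberately simple stand-in for the Yen algorithm, which derives its limit value from the image histogram):

```python
import numpy as np

def binarize(B, g):
    """Static binarization per the definition above: color values >= g
    become white (1), values below become black (0)."""
    return (B >= g).astype(np.uint8)

# Toy gray image: dark road surface (value 50) with a bright marking (230).
B = np.full((4, 8), 50, dtype=np.uint8)
B[:, 3:5] = 230

print(binarize(B, 205))        # static limit value, as in Figure 2

# A simple dynamic limit value: derived from the image itself as the
# midpoint between the darkest and brightest value, so it adapts to
# brightness changes between images (at extra computation cost).
g_dyn = (int(B.min()) + int(B.max())) // 2
print(g_dyn, int(binarize(B, g_dyn).sum()))   # 140 8
```

The eight white pixels in the result are exactly the lane-marking pixels, illustrating the contrast enhancement described below.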

Like the Hough algorithm, binarizations are counted among the segmentations. They summarize areas that share certain brightness properties. In general, they are used to enhance contrast: one color describes the areas of interest in the image while the other depicts the background (cf. [CSS14] p. 1f.). In autonomous driving, for example, bright lane markings (color value: 1) can be highlighted against the road surface (color value: 0). Following this idea, the connection between digital image processing and autonomous driving is established below.

2.3 Digital image processing in autonomous driving

As explained in Chapter , the aim of digital image processing is to obtain abstract information from image data. In order to achieve this goal, methods such as those from the previous section are applied to the data material and the desired image features are thereby extracted. What digital image processing can be used for in the field of autonomous driving is briefly explained below. Various sensors are built into autonomous vehicles to replace the human senses of a driver. In many self-driving cars, this also includes optical sensor systems. On the one hand, these can be used for pure distance determination, as with infrared sensors or the laser radar systems based on them (LiDAR, light detection and ranging). On the other hand, the optical sensors also include the camera systems. These differ in their number, position and lens systems depending on the autonomous vehicle. What they all have in common, however, is that they generate a sequence of images of the vehicle environment. If certain information is searched for in this image sequence, this can be done using methods of digital image processing. One task that can only be solved with the help of cameras is the detection and tracking of lane markings.
Since these markings are mostly applied flat on the street or even just painted, they can only be detected in camera images. For this reason, algorithms for lane recognition and tracking are implemented using digital image processing methods.
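A common way to obtain an IPM mapping of the kind described in the previous section is from four point correspondences between the camera image and the street plane. The following sketch (with made-up coordinates, analogous to what OpenCV's getPerspectiveTransform computes) solves for the mapping that turns the trapezoid formed by parallel lane markings into a rectangle, i.e. a bird's-eye view:

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve the 8x8 linear system for the homography H (h33 fixed to 1)
    from four point correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical coordinates: the trapezoid that parallel lane markings form
# in the front-camera image (src) is mapped to a rectangle on the street
# plane (dst).
src = [(140, 100), (180, 100), (40, 200), (280, 200)]   # image plane
dst = [(100, 0), (220, 0), (100, 200), (220, 200)]      # street plane
H = perspective_transform(src, dst)

p = H @ np.array([140.0, 100.0, 1.0])   # homogeneous coordinates
print(np.round(p[:2] / p[2]))            # first corner lands at (100, 0)
```

Applying H to every pixel position (as cv2.warpPerspective does) yields the rectified bird's-eye image on which lane geometry can be measured directly.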

Another application of digital image processing on camera images is the recognition of street signs and traffic lights. These two traffic notices can currently only be recorded via the camera image. In the foreseeable future, traffic lights and traffic signs could also communicate with vehicles and thus make their visual recognition superfluous (cf. [Ha15]). In addition to the information that the autonomous vehicle can only perceive via the camera, digital image processing methods can also be used to gain knowledge about obstacles, road conditions or the vehicle's orientation. Other sensor systems on the self-driving car can also generate this information. If the information from different sensors is combined, incorrect measurements from individual sensors can be compensated for; this technique is called sensor fusion. In order to use methods of digital image processing in autonomous driving, it must be possible to test appropriate procedures. This can be done, for example, with the help of prepared videos that simulate the video cameras as sensors. However, as development progresses, it is advisable to test algorithms on real vehicles as well. As an intermediate step from the video to the real car, model vehicles can be used that are equipped with hardware similar to real vehicles but are less expensive due to their reduced size. The development of such an autonomous model vehicle is a goal of this work and is presented in the following chapter.
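The sensor-fusion idea mentioned above can be illustrated with a minimal inverse-variance weighting sketch (the numbers are illustrative, not measured values from any vehicle described here):

```python
# Combine distance estimates from two sensors, weighting each by the
# inverse of its variance, so an error in one sensor is partly
# compensated by the other.
def fuse(z1, var1, z2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2)

# A camera-based estimate says 2.0 m; a more precise ultrasonic sensor
# says 1.5 m. The fused value is pulled toward the more precise sensor:
print(fuse(2.0, 0.20, 1.5, 0.05))   # 1.6
```

More elaborate fusion schemes (e.g. Kalman filtering) extend this weighting over time, but the compensation principle is the same.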

3 Model vehicle construction

After the theoretical fundamentals of autonomous driving and digital image processing were explained in the previous Chapter 2, the realization of the objectives of this thesis can now begin. This includes, first of all, the construction of a model vehicle that is able to drive independently, which is the subject of the following sections. First, it is shown why the construction of an autonomous model car makes sense for the HTWK Leipzig. The requirements for such a vehicle are then defined, and finally it is shown how these requirements have been implemented. The model car presented here was designed by a group of students from the HTWK Leipzig as part of the autonomous driving seminar. Since the author of this work took over the planning and organization of the development, the main focus will be placed on this in the following.

3.1 Motivation

The Smart-Driving team, the student research group on autonomous driving at the HTWK Leipzig described in Chapter , was founded in 2014. At this time, the AADC, a competition for autonomous driving organized by Audi, took place for the first time. After successful applications, ten student groups from Germany, Austria and Switzerland receive a model vehicle from Audi, equipped with sensors and a computer. The teams then have six months to develop software for these model vehicles so that the cars can move independently on unknown miniature roads.

The previous research work of the HTWK Smart-Driving team was limited to participation in the AADC 2014/15 and 2015/16. For the period of the competition from September to March, a five-person team of students came together and worked intensively on the implementation of the competition requirements set by Audi. Since the model vehicles had to be returned to the organizer after the AADC final in March, the HTWK Smart-Driving team lacked a test object for research in the summer months and the work largely stopped. To prevent this, the Faculty of Computer Science, Mathematics and Natural Sciences (IMN) of the HTWK Leipzig came up with the idea of building its own model vehicle. On the one hand, this would enable the HTWK Smart-Driving team to conduct research in the field of autonomous driving all year round and independently of Audi's competition. On the other hand, the research group could also take part in other competitions for self-driving model cars with its own vehicle, the Carolo Cup being one example. In the course of this work, a first, prototype version of an autonomous model vehicle is to be developed and built for the HTWK Leipzig. The next chapter shows the demands placed on such a car.

3.2 Requirements

To find out the requirements for an autonomous model vehicle, it is worth taking a look at the vehicle from the AADC 2015/16. As mentioned in the previous section, this can be regarded as the inspiration for the vehicle constructed in the course of this work. Figure 3 shows the vehicle from the AADC 2015/16 that Audi made available to the participating teams. Figure 4 shows the detailed hardware layout of this model car, which will be discussed in more detail below.

Figure 3 Model vehicle of the AADC 2015/16 (taken from [Au15])

Figure 4 Hardware layout of the model vehicle of the AADC 2015/16 (taken from [Au15])

The model vehicle from Audi was based on a remotely controllable car on a scale of 1:8, which was expanded to include various sensors and computing technology. An embedded mainboard with a soldered-on processor, graphics unit and RAM was installed as the main computer. This was connected via the Universal Serial Bus (USB) interface to four microcontrollers, which implemented the hardware-related communication with the sensors and actuators. In addition to ten ultrasonic sensors, a color and depth camera, a gyroscope and accelerometer as well as a wheel tachometer were used as sensors. The actuators, i.e. motor and steering, were taken from the remotely controllable car and supplemented by a hood that contained controllable vehicle lights such as turn signals. Based on the Audi model vehicle as a model, the following requirements were defined for the autonomous HTWK car, which must be met:

Chassis
In contrast to the Audi model vehicle, the chassis should have a scale of 1:10 in order to meet a competition requirement of the Carolo Cup.

Motor and steering
In order for the HTWK model car to move, an electric motor and steering are required in addition to the chassis. These must be addressable close to the hardware in order to enable control by an artificial intelligence (AI).

Obstacle recognition
The vehicle must be able to recognize surrounding objects. This enables it to avoid obstacles and collisions.

Vision
The model car must be able to perceive its environment visually in order to be able to follow lane markings, for example.

Determination of speed and position
In order to achieve clean driving behavior, the vehicle must be able to determine its current speed. In addition, it should have sensors that can determine information about the position of the car in space.

Driving behavior
The vehicle should be able to drive at a speed specified in meters per second regardless of the battery charge level. In addition, it should be able to follow a specified curve.

Computing technology
A computer is to be installed in the model vehicle as the central control unit, which receives all sensor data, carries out the calculations for the AI and forwards the decisions made to the actuators.

Power supply
Since the autonomous car is supposed to move independently, it must have a power supply based entirely on batteries or accumulators. This must be able to supply the actuators as well as the computing technology and the sensors with power.

Furthermore, the following additional requirements were specified for the autonomous car of the HTWK, which do not necessarily have to be met:

Mains power supply
In addition to the mobile power supply, a connection for a mains power pack should be available on the vehicle. As soon as a power pack is connected, the batteries or accumulators should be disconnected from the circuit.

Use of a remote control
The vehicle should be able to be moved via a remote control in order to facilitate work on a large course. As soon as a remote control is connected to the model car, only the commands from the remote control should be forwarded to the actuators.

Lighting system
In order to be able to depict correct road behavior, the vehicle should have a lighting system. This includes headlights, turn signals, brake lights and reversing lights.

Charging circuit
A charging circuit is to be connected to the power supply connection mentioned above, which charges the batteries or accumulators as long as the mains power supply is used.

Visually appealing appearance
The vehicle should be given a body that gives the model car the appearance of a real car or truck.

3.3 Implementation of the requirements

In the previous chapter, the requirements for the autonomous model vehicle for the HTWK Smart-Driving team were defined. The following section deals with the implementation of the individual requirement points. Technical details and the decisions made are explained.

Chassis, engine and steering

When selecting the chassis, it was required that the scale of the model vehicle should be 1:10. There is a large number of different chassis in this size segment. Basically, remote-controlled model cars of all scales can be purchased in two different ways. On the one hand, there are complete packages on the market that contain a chassis on which a motor and steering control are installed. These are connected to a remote control receiver, the matching transmitter of which is usually included. The advantage of such overall packages is that all of the built-in parts fit and work together. In addition, time is saved as no mechanical assembly is required.

On the other hand, the parts of a model car can also be bought individually. There is a larger selection of individual components than of complete packages. In addition, parts of higher quality, or individual parts better suited to the requirements, can be purchased. In order to assemble a model car completely yourself, however, experience in the field of model vehicle construction is required. Otherwise there is a risk that incompatible individual parts will be purchased and installed. For the model vehicle of the HTWK Leipzig, it was decided to use a complete set of chassis, motor and steering as a basis. There were two reasons for this: On the one hand, the group of students who designed the autonomous car for the HTWK had no experience in the field of model making. On the other hand, no finely tuned individual model parts were required for the first vehicle, as the interaction and effect of the individual components with regard to autonomous driving had to be tested first. The choice of a complete set fell on the 1:10 EP Touring-Car 4WD RtR 2.4GHz from Reely. This package is characterized by its easy-to-control motor and steering. It also has enough space for the superstructures that are to carry the computing technology and sensors. Figure 5 shows the 1:10 EP Touring-Car 4WD RtR 2.4GHz in frontal view.

Figure 5 1:10 EP Touring-Car 4WD RtR 2.4GHz

In order to attach all sensors and computing technology to the vehicle, the chassis was extended with superstructures. Aluminum profiles served as the basic structure; these can easily be cut to size and combined into different shapes. An aluminum plate about the size of the chassis was attached to the aluminum frame, and computing technology and sensors could then be mounted on the top and bottom of this plate. The complete hardware layout of the autonomous model vehicle of the HTWK Leipzig can be seen in Figure 6. The term "structure" there refers to the additional aluminum plate that was attached to the original chassis; both its top and bottom were used to attach components. The computer and the peripherals or sensors connected to it are marked in green, the power supply in red. The microcontrollers and the sensors and actuators connected to them are colored in different colors (orange, purple and yellow). The names SRF-08, MPU-6050 and Minnowboard are explained in the following chapters.

Figure 6 Hardware layout of the autonomous model vehicle of the HTWK Leipzig
Obstacle detection
In order to implement the required obstacle detection for the autonomous model vehicle, sensors must be installed in the car that can determine the distance to physical objects in space. The autonomous model vehicle from Audi, which served as a model for this work, had ultrasonic and infrared sensors as well as a depth camera built in for obstacle detection.

During the work of the HTWK Smart-Driving team in the AADC 2014/15 and AADC 2015/16, it turned out that both the depth camera and the infrared sensors worked unreliably. The depth camera delivered a very noisy image from which no reliable information could initially be obtained. Useful information could be extracted by averaging over several depth images; in return, however, the boundaries of obstacles could no longer be determined, since they were blurred by the averaging. The infrared sensors had already attracted attention in the AADC 2014/15 because of inaccurate or incorrect measurements and were removed by Audi for the AADC 2015/16. Only the ultrasonic sensors delivered consistently correct measurements over the last two years. For this reason, it was decided to use only ultrasonic sensors for obstacle detection on the HTWK model vehicle. Audi uses the HC-SR04 sensor on its model vehicle, while the significantly more expensive SRF-08 sensor was installed on the HTWK model car. The SRF-08 only resolves its measurements to whole centimeters, while the HC-SR04 theoretically resolves its results to an accuracy of 0.3 cm. In a direct comparison, however, the measurements of the SRF-08 were accurate to 1.5 cm, while the HC-SR04 had an inaccuracy of 6.7 cm on average. The schematic test setup for determining these values is shown in Figure 7, the results of this test in Table 1.
Figure 7 Schematic test setup (object, sensor, distance, angle) for determining the accuracy of the ultrasonic sensors HC-SR04 and SRF-08
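The measuring principle shared by both sensor types can be sketched briefly; the constant and function names below are illustrative assumptions, not taken from the thesis:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed value)

def echo_time_to_distance(t_echo_s):
    """Distance to an obstacle from the ultrasonic echo round-trip time.

    Both the HC-SR04 and the SRF-08 emit a pulse and time the echo; the
    pulse travels to the obstacle and back, hence the division by two.
    """
    return SPEED_OF_SOUND * t_echo_s / 2.0

def srf08_reading(distance_m):
    """The SRF-08 reports its result rounded to whole centimeters."""
    return round(distance_m * 100) / 100.0
```

The centimeter quantization of the SRF-08 is visible here: any distance within the same centimeter produces the same reading.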

Table 1 Test results when determining the accuracy of the ultrasonic sensors HC-SR04 and SRF-08 (average measurement result after 10 measurements, in meters)

Object distance | Angle | HC-SR04        | SRF-08
0.1 m           |  0°   | 0.14           | 0.15
0.1 m           | 10°   | –              | 0.10
0.1 m           | 20°   | 0.15           | 0.09
0.1 m           | 30°   | 0.16           | 0.11
0.5 m           |  0°   | 0.57           | 0.55
0.5 m           | 10°   | 0.56           | 0.50
0.5 m           | 30°   | 0.53           | 0.51
1 m             |  0°   | 1.10           | 1.00
1 m             | 10°   | 1.07           | 1.06
1 m             | 20°   | –              | 0.99
1 m             | 30°   | Not recognized | 1.03
2 m             |  0°   | 2.08           | 2.15
2 m             | 10°   | –              | 2.03
2 m             | 20°   | Not recognized | 1.98
2 m             | 30°   | Not recognized | 2.04

The test results also show that the SRF-08 still detects objects precisely even at greater distances and steeper angles, where the HC-SR04 could no longer perceive any obstacles. Another advantage of the SRF-08 compared to the HC-SR04 is that it does not block the controlling microcontroller during the measurement: the measurement is carried out entirely on the chip of the SRF-08, and the microcontroller can devote itself to other tasks during this time. Due to the test results and the aforementioned relief of the microcontroller, a total of eight SRF-08 sensors were installed on the autonomous model vehicle of the HTWK.

Visual ability
After the autonomous model vehicle has been given the ability to recognize obstacles through the ultrasonic sensors, it now needs the ability to react to stimuli that can only be recognized visually. These include, for example, lane markings, signs and traffic lights. In order for the autonomous model vehicle to be able to see, it needs a video camera. An ASUS XTion camera was used in the Audi model vehicle; it combines a video camera and a depth camera. The weaknesses of the depth camera have already been discussed in detail in the section on obstacle detection. The video image of the ASUS XTion camera was particularly troublesome in the AADC 2014/15 and 2015/16 because of its narrow viewing angle: when cornering, the outer lane markings disappeared from the video image, and intersections could no longer be seen from one and a half meters before the beginning of the intersection, which made checking the right-of-way situation considerably more difficult. For this reason, it was decided to test two new camera systems for the HTWK model vehicle: a wide-angle camera and a fish-eye camera. The latter promises an extreme viewing angle of 90° to the left and right, but has the disadvantage that, due to its special lens, lines that are straight in reality appear curved in the image. To compare the viewing angles, Figure 8 to Figure 10 show the same scene recorded with the three camera systems mentioned.
Figure 8 Intersection scene captured with the ASUS XTion camera

Figure 9 Intersection scene recorded with the wide-angle camera
Figure 10 Intersection scene recorded with the fish-eye camera

The images clearly show the improvement in image quality of the new fish-eye and wide-angle cameras compared to the XTion camera. They also show how the viewing angle increases from the XTion to the wide-angle to the fish-eye camera. An increased viewing angle also facilitates the development of the lane recognition algorithms, which are dealt with in Chapter 4. Whether the wide-angle or the fish-eye camera is better suited for lane detection is discussed in more detail in the section comparing the camera systems.
Determining speed and position, driving behavior
In order for the model vehicle to move autonomously, it must be able to maintain a specified speed. It should also be able to drive curves. In the simplest case, a curve is a bend in the road, but there are also more complex steering situations in which the vehicle has to follow an S-curve, for example, or change lanes to overtake. In order to meet these requirements for driving behavior, the vehicle first needs sensors with which it can determine its speed and its position in space. The model vehicle from Audi used the incremental encoder HOA and the inertial measurement unit (IMU) MPU-6050 for this purpose. The incremental encoder is an optical sensor through which a perforated disc rotates, triggering a pulse at every hole. The timing of the pulses indicates the rotational speed of the disc (cf. [HLN12]). This disc was mounted on the rear axle of the Audi model vehicle, so the rotational speed of the wheels, and thus the speed of the vehicle, could be measured by this sensor. The IMU, on the other hand, determines the acceleration and orientation of the vehicle relative to the earth. To determine the rotation rates about the roll, pitch and yaw axes, a gyroscope is used as part of the IMU. In modern IMUs, however, no real gyroscopes are used to determine these quantities.
In these so-called micro-electro-mechanical systems (MEMS), the rotation is instead detected via the deflection of a vibrating silicon mass (cf. [HLN12]).
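The speed calculation from such a rotation sensor can be sketched as follows; the function and parameter names are illustrative assumptions, the 32 slots correspond to the encoder resolution stated for the HOA, and the wheel diameter is an example value:

```python
import math

def speed_from_pulses(pulse_interval_s, slots_per_rev, wheel_diameter_m):
    """Vehicle speed from the time between two encoder pulses.

    One pulse interval corresponds to 1/slots_per_rev of a wheel
    revolution; multiplying the resulting revolutions per second by the
    wheel circumference (pi * diameter) gives the speed over ground.
    """
    revs_per_s = 1.0 / (pulse_interval_s * slots_per_rev)
    return revs_per_s * math.pi * wheel_diameter_m
```

At low speeds the pulse intervals become long, so a single spurious pulse distorts the estimate considerably, which is consistent with the problem observed with the HOA encoder.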

During the work of the HTWK Smart-Driving team, it turned out that the incremental encoder HOA on the Audi vehicle produced incorrect pulses, especially at low speeds, so that an exact speed calculation was impossible. For this reason, it was decided to use the MAB25 Hall sensor to determine the speed of the autonomous car of the HTWK. This sensor can determine the rotation of an axis using the physical Hall effect: magnets rotate past current-carrying conductors, creating a voltage difference in the conductive material that can be measured. This allows the current position of the magnets, and therefore of the axis of rotation, to be determined. In contrast to the incremental encoder, the Hall sensor has no weakness at slow rotations and resolves a 360° rotation into 4096 instead of just 32 steps. The Hall sensor is coupled to the drive gear of the model's motor via a gear wheel and regularly reports its position. This allows the rotational speed of the drive and the wheels to be determined, and from the rotational speed of the wheels the vehicle speed can be calculated. An exact description of the speed and steering control on the HTWK model vehicle can be found in the bachelor thesis [Fr16], which describes in detail the implementation of the driving behavior requirement, i.e. maintaining the speed and driving curves.
Computing technology
In order to meet the computing technology requirements, the autonomous model vehicle of the HTWK needs its own on-board computer. On the one hand, this must be able to carry out all the AI calculations of the HTWK Smart-Driving team. On the other hand, the computer must offer a low-level connection for the sensors and actuators; without this connection, the AI would not be able to evaluate any sensor data or control the steering and the engine.

When choosing an on-board computer for the autonomous vehicle, the size restrictions first had to be taken into account: the limited space on the body of the car restricts the choice of computers. In the area of small computer systems in particular, there is a large selection of mainboards based on the ARM architecture. ARM computers are characterized by a reduced instruction set computer (RISC) architecture. This means that the instruction set consists of a few instructions which can be used for various general purposes (cf. [Se01]). This makes it possible to build space- and power-efficient processors, which are therefore particularly suitable for battery-operated, small devices such as smartphones, tablets or autonomous model vehicles. In order to enable the programming of an AI for the model vehicle, a software framework is required that provides means for hardware abstraction, modular software development and communication. The development of an easily expandable and maintainable robot AI is only possible with the help of such a framework. Audi uses the Automotive Data and Time Triggered Framework (ADTF) for this purpose. However, due to its closed, commercial licensing model, this is not suitable for research and teaching at a university. It was therefore decided to use the Robot Operating System (ROS) for the HTWK model vehicle. ROS is published under the open-source BSD license and provides free tools and libraries for developers of robotics systems. At the time the computing technology was selected, in April 2016, both ROS and ADTF were only partially compatible, or not compatible at all, with computers with ARM architecture. For this reason, it was decided to install a computer with the x86 architecture that is common for desktop and server computers.
The x86 architecture differs from the ARM architecture in that it is a Complex Instruction Set Computer Architecture (CISC). Compared to the RISC architecture, the machine commands of the CISC architecture are more abstract and powerful. In order to simulate a CISC command in the RISC command set, several RISC commands are usually required. In general, x86 processors are said to have a higher power consumption and higher heat dissipation than ARM processors (cf. [UHF13]).

Finally, the Minnowboard MAX was chosen as the on-board computer for the HTWK model vehicle. As required at the time, it uses the x86 architecture. It is also characterized by its compact design with a firmly soldered Intel Atom processor and two gigabytes of RAM. The graphics unit integrated in the processor can also be addressed using the Open Computing Language (OpenCL). OpenCL enables digital image processing to be calculated on the graphics unit with little effort, thereby relieving the processor; more information on the use of OpenCL can be found in a later chapter. The Minnowboard MAX uses a solid-state drive (SSD) for storage, and a WLAN USB stick makes the on-board computer accessible from outside. To enable the low-level querying of the sensors and control of the actuators, three Arduino Micro microcontrollers are used on the HTWK model vehicle, as in the Audi model. These offer options for hardware-related communication and are connected to the on-board computer via USB. At about 5 × 2 cm they are space-saving, but they also have less RAM and program memory than larger microcontrollers. With the help of the ROS library rosserial, an easily expandable communication between the on-board computer and the microcontrollers could be programmed.
Power supply
So that the model vehicle can move autonomously, it must have a mobile power supply. Two batteries are used on the car for this purpose: one supplies the motor and the steering with electricity, the other operates all the electronics that are additionally attached to the chassis. Just like the Audi model vehicle, the HTWK car was equipped with Brainergy batteries with a capacity of 5200 mAh. The AADC had already provided good experiences with these, and there was no reason to switch to unknown batteries.

The motor and steering can be operated directly from the specified batteries. To supply the on-board electronics, however, a specially designed electronic circuit must be used together with the battery. This circuit fulfills two tasks:
Adjusting the operating voltage: The Brainergy batteries supply a voltage of 7.4 V, but the entire on-board electronics only require an operating voltage of 5 V. The developed electronic circuit reduces the voltage of the batteries to the required level.
Use of a power supply unit: As an optional requirement for the autonomous vehicle, it was stated that there should be a possibility of temporarily supplying the car with power via a power supply unit for development purposes. The electronic circuit enables the power source to be switched between the battery and the power supply unit without interruption.
Figure 11 shows the circuit diagram of the electronic circuit that meets the two requirements just mentioned. It was designed by Andrej Lisnitzki and Martin Morcinietz.

Figure 11 Circuit diagram of the electronic circuit for the power supply of the HTWK model vehicle (designed by Andrej Lisnitzki and Martin Morcinietz)
3.4 Suggestions for improvement
The overall development time of the autonomous model vehicle of the HTWK was seven months, from April to October 2016. The development work was carried out by a team consisting of the students Patrick Bachmann, Kay Szesny, Tino Weidenmüller, Martin Morcinietz and Andrej Lisnitzki under the direction of the student Georg Jenschmischek and Prof. Dr. Sibylle Schwarz. All mandatory requirements and two optional requirements for the vehicle were met. The finished model vehicle can be seen in Figure 12.

Figure 12 HTWK model vehicle
Although all the mandatory requirements for the autonomous model car of the HTWK Leipzig were met, this vehicle remains a first prototype of its kind. The team was able to gain a lot of experience during its development. In retrospect, the following suggestions for improvement can be made for the next HTWK model vehicle:
Fulfillment of the optional requirements: The lighting system, the charging circuit and the visually appealing appearance are among the optional requirements that were not met. The lighting system in particular should be planned for the next autonomous HTWK car from the start, as it is an important part of both the AADC and the Carolo-Cup. In addition, the completely correct behavior of an autonomous vehicle also includes setting the light signals, which is why the model vehicle should offer a possibility for this.

A charging circuit for the batteries is difficult to implement, as charging the batteries used so far requires a balancer, which equalizes the voltage of the individual cells of the battery. Such charging logic would cost a lot of development time and space on the vehicle and should therefore not be listed as a requirement for the next model car.
Observance of the requirements of the Carolo-Cup: Looking ahead, the HTWK Smart-Driving team and its next model vehicle should aim to participate in the Carolo-Cup. This competition sets two requirements that can be important for the development of a future model vehicle: on the one hand, Carolo-Cup cars should be as inexpensive and energy-efficient as possible; on the other hand, at the Carolo-Cup it is important to drive through a given course as quickly as possible. Neither requirement was taken into account in the vehicle created in the course of this work, and both should play a role in future development.
Use of different or more microcontrollers: When developing the code that controls the microcontrollers, it was noticed that the three Arduino Micro microcontrollers did not have enough memory for the tasks to be performed, which resulted in unexpected behavior of the sensors and actuators. With a lot of optimization effort, the code could be reduced so that all microcontrollers worked correctly; however, this code can now hardly be extended or maintained. To prevent this in the future, either a smaller number of more powerful microcontrollers should be used in the next model vehicle, or an additional Arduino Micro should be installed.
Use of an on-board computer with ARM architecture: As already mentioned above, an on-board computer with ARM architecture was dispensed with only because ROS was not yet released for ARM operating systems at that time.
This has changed in the meantime, so that the larger selection of ARM computers can also be used for the HTWK model vehicle. The choice of the on-board computer should be made again for the next autonomous car.

During the practical test of the lane recognition developed in this thesis (see Chapter 4.5), it was also noticed that the processor of the on-board computer is too weak to calculate both sensor processing and AI. The next on-board computer should have a much more powerful processor.
Improved transmission of the remote control signal: At the moment, the remote control signal is passed through a microcontroller when the autonomous control has been deactivated. Since the remote control signal is encoded using pulse width modulation and the sampling rate of the microcontroller is too low, the remote control signals are not correctly reproduced by the microcontroller. As a result, the motor and steering controls jump back and forth between different values when the remote control is used, which leads to uneven movements of the vehicle. To prevent this, the future model vehicle should have a direct connection between the receiver of the remote control and the motor and steering; via an electronic switch, a microcontroller should be able to connect or disconnect this connection as required.
Use of an SD card instead of an SSD hard disk: The SSD installed in the vehicle is only used to store the operating system and the log files. Both tasks could also be performed by an SD card connected to the SD card slot on the mainboard. This saves the space of the hard disk on the upper deck of the vehicle, and there is no need for the additional power connection that the SSD required. It must be checked whether the lower read and write speeds of an SD card are sufficient for the purposes of the autonomous vehicle.
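The sampling problem with the passed-through remote control signal can be illustrated numerically. A typical RC servo pulse is roughly 1000 to 2000 µs wide (an assumption; the thesis gives no figures), and a microcontroller that measures and regenerates it with a coarse timer quantizes the pulse width:

```python
def reproduced_pulse_us(width_us, timer_resolution_us):
    """Pulse width as regenerated by a microcontroller whose timer only
    resolves multiples of timer_resolution_us (illustrative sketch)."""
    return round(width_us / timer_resolution_us) * timer_resolution_us
```

With a coarse resolution, neighboring stick positions collapse onto the same output value, and a small input change makes the output jump by a full timer step, which is consistent with the jerky motor and steering behavior described above.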

Overall, the goal of completing the first autonomous model vehicle of the HTWK Leipzig can be regarded as fulfilled. In the following Chapter 4, the second objective of this thesis is addressed by developing lane recognition with digital image processing on the constructed model car. The HTWK model vehicle is also used in the bachelor thesis [Fr16], in which a steering and speed control for the autonomous car is developed.

4 Digital Image Processing
In the previous chapter it was shown how an autonomous model vehicle was constructed in the course of this work. As mentioned in Section 3.1, the possibility of year-round research by the Smart-Driving team was the main motivation for building the model car. In the second part of this work, an initial piece of research with the new model vehicle is to be carried out using digital image processing. More precisely, the aim is to improve the lane recognition used so far in the AADC competitions. Lane recognition describes the discovery and tracking of lane markings in the camera image of an autonomous vehicle; for this purpose, methods of digital image processing are used. After the lane has been successfully identified, it must be converted into a lane model. With the help of this model, a steering control can then keep the vehicle within the lane. In this work, only the lane recognition in the camera image is considered. How the previous lane recognition works, and why an improvement is necessary at all, is shown in the following subsection.
4.1 Initial situation
In order to meet the requirements of the AADC competitions in the last two years, the HTWK Smart-Driving team had to ensure, among other things, that the model vehicle drives autonomously within the given miniature road landscape. The competition jury deducted points from the team result both for leaving the lane and for cutting lane markings. All in all, lane keeping is the basic building block for clean autonomous driving, since all other driving tasks depend on it: parking spaces cannot be approached, or are not even recognized, due to poor positioning on the road; furthermore, no lane changes can be made when overtaking, and turning at intersections is made more difficult. For these reasons, two years of research have already flowed into the lane recognition of the Smart-Driving team. In the following, the functionality of the previously used lane recognition is presented.

The lane recognition algorithm used so far
The lane recognition algorithm used in the AADC 2014/15 is essentially based on the assumption that lane markings in the IPM image (inverse perspective mapping, see below) of a road always end at the edges of the IPM image. The edge points between which the lane lines run can be tracked. If a lane marking is interrupted, it can be found again using the saved edge point as soon as it reappears in the image. The entire lane detection algorithm consists of three phases, with the approach just mentioned forming the last phase. To prepare for lane detection, a vanishing point and an IPM are first determined.
Phase 1: Vanishing point determination
The vanishing point of a perspective image is the point at which all lines in the image intersect that in reality lie on the same spatial plane. Transferred to road traffic, for example, all lane markings lie on the same spatial plane and therefore intersect in the image at a vanishing point. In reality, on the contrary, road markings run parallel and would never intersect. Figure 13 shows an example of the vanishing point of a street scene.
Figure 13 Vanishing point (VP) of a street scene

The determination of the vanishing point as part of the lane recognition used so far is based on the method presented by [Hu06]. In summary, a Hough algorithm (see the basics chapter) is first applied to the image, which recognizes the lane markings as distinctive lines. As just mentioned, the vanishing point lies at the intersection of the lane markings. Consequently, using the least squares method, a point is calculated that is as close as possible to all Hough lines. This point is carried over to the calculation of the vanishing point in the next video image, since it can be assumed that the vanishing point varies only slightly between two successive video images. When selecting the point that comes as close as possible to all Hough lines, the distance to the previous vanishing point can then be used as an additional evaluation criterion. This algorithm is applied to the video image stream until the vanishing point hardly changes from the vanishing point of the previous image. This point can then be recorded as the camera's vanishing point with respect to the plane of the road, since the camera is permanently mounted while driving. The calculated vanishing point is required in phase 2 to generate the IPM image. Figure 14 shows the same street scene as Figure 13 with Hough lines marked in green and the vanishing point marked in red.
Figure 14 Street scene with Hough lines (green) and vanishing point (VP, red)
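The least-squares step can be sketched as follows: each Hough line is written as a point plus a direction, and the point minimizing the sum of squared perpendicular distances to all lines solves a 2×2 linear system. This is a generic sketch of the idea, not the team's actual code; the weighting by the distance to the previous vanishing point described above would be added on top.

```python
import math

def nearest_point_to_lines(lines):
    """Least-squares point closest to a set of 2-D lines.

    Each line is given as (px, py, dx, dy): a point on the line and a
    direction vector. Minimizing the sum of squared perpendicular
    distances leads to the linear system A x = b with
    A = sum(I - d d^T) and b = sum((I - d d^T) p).
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for px, py, dx, dy in lines:
        n = math.hypot(dx, dy)
        dx, dy = dx / n, dy / n
        # I - d d^T projects onto the line's normal direction
        m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

For lines that all pass exactly through one point, the result is that intersection; for noisy Hough lines it is the point with minimal total squared distance.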

Phase 2: IPM generation
The generation of the IPM image from the video image primarily serves to ensure that distances can be measured in a correctly generated IPM image. More precisely, this means that one pixel in the IPM image corresponds to a previously defined distance in reality: if two IPM image points and their counterparts in reality are considered, the distance between the image points is directly proportional to the distance between these points in reality. As a result, the position of the vehicle in the lane can be read directly from the image. In order to generate such a proportionally accurate IPM image from the video image, each video image point must be mapped onto an IPM image point. This mapping function f_IPM can be calculated using the pinhole camera model, as [BBF98] show. The mapping function f_IPM has the following parameters:
The displacement of the camera from the coordinate origin in the IPM image, d_x, d_y. Since the center of the front bumper was set as the coordinate origin, the camera was offset by d_y = 15 cm to the rear and by d_x = 3 cm to the right.
The height of the camera above the ground, d_z. The camera in the AADC was d_z = 25 cm above the floor.
The focal length of the camera, Br. The camera in the AADC had a focal length of Br = 525.
The opening angles of the camera, horizontal α_h and vertical α_v. With the XTion camera in the AADC, these were α_h = 29° and α_v = 23°.
The image size of the output image, horizontal G_h and vertical G_v. In the AADC, the Smart-Driving team used the image size G_h = 640 by G_v = 480 pixels.

The angle of inclination δ of the camera to the projection surface. This is determined on the basis of the vertical distance d_FP of the vanishing point FP from the center of the image:
d_FP = FP_y − G_v / 2
With the help of trigonometric relationships in the pinhole camera model, the angle of inclination δ can be calculated dynamically from the distance d_FP and the focal length Br:
δ = tan⁻¹(d_FP / Br)
The mapping function f_IPM receives the x and y coordinates of an image point and calculates the position of this image point in the IPM image:
f_IPM,x(x, y) = d_z · cot((δ − α_v) + 2y·α_v/G_v) · cos(−α_h + 2x·α_h/G_h) − d_y
f_IPM,y(x, y) = d_z · cot((δ − α_v) + 2y·α_v/G_v) · sin(−α_h + 2x·α_h/G_h) − d_x
If these calculations are applied to the camera image from Figure 13 and Figure 14, the IPM image in Figure 15 is generated. The red horizontal line is drawn at a distance of 100 pixels from the lower edge of the image, which should correspond to a distance of 100 cm from the vehicle. Since two center-line segments correspond to a distance of one meter, it can be seen that the IPM image is actually true to reality.
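The mapping can be sketched in Python using the stated camera parameters. Note that the exact formulas in the thesis were reconstructed here from the pinhole-model form given by [BBF98], so this is an approximation of the author's implementation, not a verbatim copy; all function names are illustrative:

```python
import math

# Camera parameters as stated in the text (cm, degrees, pixels)
D_X, D_Y, D_Z = 3.0, 15.0, 25.0       # lateral/longitudinal offset, height
BR = 525.0                            # focal length in pixels
ALPHA_H = math.radians(29)            # horizontal opening angle
ALPHA_V = math.radians(23)            # vertical opening angle
G_H, G_V = 640, 480                   # image size in pixels

def tilt_angle(fp_y):
    """Camera tilt delta from the vanishing point's vertical position."""
    d_fp = fp_y - G_V / 2
    return math.atan(d_fp / BR)

def f_ipm(x, y, delta):
    """Map video pixel (x, y) to ground-plane coordinates in cm."""
    # cot(...) written as 1/tan(...); valid below the horizon line
    r = D_Z / math.tan((delta - ALPHA_V) + 2 * y * ALPHA_V / G_V)
    ipm_x = r * math.cos(-ALPHA_H + 2 * x * ALPHA_H / G_H) - D_Y
    ipm_y = r * math.sin(-ALPHA_H + 2 * x * ALPHA_H / G_H) - D_X
    return ipm_x, ipm_y
```

For the image center column (x = G_h/2) the lateral result is simply −d_x, and pixels further up in the image (smaller ground angle) map to larger distances, as expected of an IPM.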

Figure 15 Proportional IPM image. The red line marks a 100-pixel distance from the lower edge of the picture, corresponding to a 100 cm distance from the vehicle.
Phase 3: Lane edge point tracking
After an IPM image has been generated from the video image using the determined vanishing point in the two previous phases, the actual lane detection and tracking can be carried out in the third phase. For this purpose, as already mentioned above, the image points are determined and tracked which belong to the lane marking and lie on the edge area of the IPM image. The edge area of the IPM image is not synonymous with the image edge, since the perspective rectification in phase 2 maps the rectangular video image onto a trapezoidal IPM image. Figure 16 clarifies the terms just mentioned: there the image edge is marked in blue, the IPM edge in green and the lane edge points of the left lane in red.

Figure 16 Proportional IPM image with edge markings. The image edge is marked in blue, the IPM edge in green and the lane edge points of the left lane in red.
In order to find and track the lane edge points, the IPM image is first binarized (see the basics chapter). The Yen algorithm is used to determine the threshold value (see [YCC95]); it produces a robust threshold even for dark or overexposed images. Connected component labeling is then carried out on the binarized IPM image. Connected component labeling describes the process of recognizing coherent white areas of the image and collecting information about these areas, for example the point furthest to the right or at the top, height, width and area. For initialization, the lane recognition algorithm looks for two connected components that lie to the right and left of the center of the image and have the highest possible height-to-width ratio. The algorithm assumes that it is started on a straight section of road.
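Connected component labeling can be sketched with a simple 4-connected flood fill; this is a generic illustration of the technique, not the team's implementation:

```python
from collections import deque

def label_components(img):
    """4-connected component labeling on a binary image (list of lists
    of 0/1). Returns, per white region, the statistics mentioned in the
    text: area, width, height and bounding box."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y0 in range(h):
        for x0 in range(w):
            if not img[y0][x0] or seen[y0][x0]:
                continue
            queue = deque([(x0, y0)])
            seen[y0][x0] = True
            area = 0
            minx = maxx = x0
            miny = maxy = y0
            while queue:
                x, y = queue.popleft()
                area += 1
                minx, maxx = min(minx, x), max(maxx, x)
                miny, maxy = min(miny, y), max(maxy, y)
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and img[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            comps.append({"area": area,
                          "width": maxx - minx + 1,
                          "height": maxy - miny + 1,
                          "bbox": (minx, miny, maxx, maxy)})
    return comps
```

A tall, narrow lane-marking candidate then shows up as a component with a high height-to-width ratio in these statistics.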

Once the algorithm has found suitable connected components, it uses the points of the components to approximate a third-degree function that follows the course of the lane marking. Then the intersections of this function with the IPM boundaries are determined and saved; these points are the so-called lane edge points. In the next video image, binarization and connected component labeling are carried out again. Starting from the last lane edge points, a search is made for connected components which, on the one hand, are not too far away from the last lane edge points and, on the other hand, do not violate any typical lane marking properties. In addition to the already mentioned high height-to-width ratio, the latter include a small area and a limited maximum width. Depending on whether suitable connected components have been found or not, the lane can now adopt one of four states, and the new lane edge points are determined in different ways depending on the state. The four states are:
Full-Line: The Full-Line state is reached when the same component has been found in the vicinity of both previous lane edge points. The lane marking is not interrupted, and the lane edge points are shifted towards the common component.
Broken-Line: A suitable component was only found in the vicinity of one previous lane edge point, or different components were found for the two previous lane edge points. The lane marking appears to be interrupted, and the lane assumes the Broken-Line state. The larger of the two components in terms of area is selected, and the nearby lane edge point is shifted accordingly. The new position of the more distant lane edge point is again determined using a third-degree function approximated from the points of the selected component.
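The third-degree approximation can be sketched as an ordinary least-squares polynomial fit; the thesis does not specify the exact fitting routine, so this pure-Python version via the normal equations is an assumption about one reasonable way to do it:

```python
def fit_cubic(points):
    """Least-squares cubic y = a0 + a1*x + a2*x^2 + a3*x^3 through
    (x, y) points, e.g. the pixels of a lane-marking component."""
    # Build the 4x4 normal-equation system A c = b
    sx = [sum(x ** k for x, _ in points) for k in range(7)]
    b = [sum(y * x ** k for x, y in points) for k in range(4)]
    a = [[sx[i + j] for j in range(4)] for i in range(4)]
    # Gaussian elimination with partial pivoting
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * 4
    for r in range(3, -1, -1):
        s = b[r] - sum(a[r][c] * coeffs[c] for c in range(r + 1, 4))
        coeffs[r] = s / a[r][r]
    return coeffs  # [a0, a1, a2, a3]
```

Evaluating the fitted polynomial at the IPM boundary rows then yields the lane edge points described in the text.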

Vanished-Line: Vanished-line describes the state of a lane marking whose lane edge points lie close together and for which no suitable component was found in their vicinity. In this case the vehicle is in a curve and the inner lane marking disappears from the camera image as expected. The lane edge points remain in their positions, and the lane is flagged so that the steering control on the vehicle knows that the marking has disappeared and should not be used.

Lost-Line: The lost-line state is a variant of the vanished-line state in which the lane edge points are not close enough together to indicate cornering. The behavior is the same as for vanished-line, except that the search space around the lane edge points is enlarged with each new video frame in order to recover the lost lane marking as quickly as possible.

In principle, the right and left lane markings are recognized and tracked separately. When moving the lane edge points, however, care is taken that the points assigned to the right and left markings do not come too close to each other or even cross. Figure 17 shows a test image of the lane detection depicting a right-hand curve. The IPM image has been binarized, which is why only black and white pixels are visible. The lane edge points of the right and left lane markings are marked in red and green. The approximated functions used when determining new lane edge points are drawn in orange for both lane markings.
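The per-frame update described above — approximating a third-degree function through a component's points to obtain lane edge points, and deciding between the four states — might be sketched as follows. The parameterization x = f(y), the function names, and the distance threshold are assumptions, not the thesis implementation:

```python
import numpy as np
from enum import Enum

class LaneState(Enum):
    FULL_LINE = 1
    BROKEN_LINE = 2
    VANISHED_LINE = 3
    LOST_LINE = 4

def lane_edge_points(component_points, ipm_height):
    """Fit a third-degree function x = f(y) through a component's pixels and
    return its intersections with the upper and lower IPM boundary."""
    pts = np.asarray(component_points, dtype=np.float64)
    coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=3)   # x as a cubic in y
    top_x = float(np.polyval(coeffs, 0.0))
    bottom_x = float(np.polyval(coeffs, ipm_height - 1.0))
    return (top_x, 0.0), (bottom_x, ipm_height - 1.0)

def classify_lane_state(match_a, match_b, edge_point_distance,
                        close_threshold=30.0):
    """Derive the lane state from the component matches found near the two
    previous lane edge points (threshold value is illustrative)."""
    if match_a is not None and match_a is match_b:
        return LaneState.FULL_LINE       # same component at both edge points
    if match_a is not None or match_b is not None:
        return LaneState.BROKEN_LINE     # one match only, or two different ones
    if edge_point_distance <= close_threshold:
        return LaneState.VANISHED_LINE   # inner marking left the image in a curve
    return LaneState.LOST_LINE           # marking lost; grow the search space
```

The state value would then select how the lane edge points are moved, exactly as described in the four paragraphs above.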

Figure 17 Test image of the previously used lane recognition. Lane edge points are marked in red and green; the approximated third-degree lane marking function is drawn in orange.

Since lane recognition, as explained in 4.1, is a central component of autonomous driving, the previously used lane recognition contributed to a large extent to the Smart-Driving team's inability to keep up with the top teams of the AADC. During the error analysis after the AADC 2015/16, the following weaknesses of the previously used lane detection algorithm were identified:

Recognition of lane markings: After the connected component labeling step, only those connected components that had certain properties were used for lane recognition. These properties were determined experimentally for various scenes by examining markings that had been recognized incorrectly for features distinguishing them from real lane markings. From this, a list of properties was compiled that a connected component had to satisfy in order to be accepted as a lane marking. Unfortunately, these properties turned out to be too strict in some cases and too lax in others, which led to misdetections. The list of properties could not, however, be extended or shortened without incorrectly classifying connected components that were already correctly accepted or rejected. As a result, the lane recognition made errors during initialization and in the presence of bright obstacles.

Misinterpretation of intersections: In the IPM image, intersections and curves look similar in terms of their lane markings: both appear curved. Because of their right-angled straight lines, however, intersections appear more abruptly in the IPM image than curves do. Due to this similarity, the previously used lane recognition mostly interpreted intersections as curves. This triggered the AADC vehicle's automatic turning maneuver at intersections, which in most cases was a driving error.

Getting stuck in faulty states: Another weakness of the previously used lane recognition was that the lane edge points could escape faulty states only with great difficulty. This means that, due to incorrect detections, lane edge points were shifted to wrong locations, where they did not find their actual lane markings in the vicinity quickly enough to prevent the vehicle from leaving the lane.