The official OpenCV website [7] describes computer vision and how deep learning is emerging in this field. It covers the available models for image detection, image recognition, object detection, and segmentation. Reading about these models gives us insight into how to choose a deep learning model for our cloud-clustered data so that it is efficient for autonomous cars. It also describes tools for video recognition, which can be very useful for our project.
Since our camera captures images continuously and also presents them to us as a video stream, choosing a suitable deep learning model is very important.
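As an illustration, the sketch below shows how OpenCV's dnn module can run a deep learning model frame by frame on such a video stream. The model file name and input size are hypothetical placeholders, not a specific model from [7]:

```python
# A minimal sketch of frame-by-frame inference with OpenCV's dnn module.
# "detector.onnx" and the 300x300 input size are hypothetical placeholders.
import cv2

net = cv2.dnn.readNetFromONNX("detector.onnx")  # hypothetical pre-trained detector
cap = cv2.VideoCapture(0)                       # camera index 0: the onboard camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert the frame to the 4-D blob format the network expects.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(300, 300))
    net.setInput(blob)
    detections = net.forward()  # raw detections; decoding depends on the model
    # ... draw boxes / feed results to the navigation stack ...

cap.release()
```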
Obstacle avoidance is one of the essential features of autonomous driving. The paper presented by Nikolaos Baras, Georgios Nantzios, Dimitris Ziouzios and Minas Dasygenis [8] proposes solid solutions for obstacle avoidance in self-driving vehicles. For indoor navigation it uses a Raspberry Pi and a LIDAR sensor, ensuring that the vehicle can navigate an unknown environment while avoiding obstacles.
A key point is that it does not use Computer Vision (CV) techniques for obstacle detection but relies on a single LIDAR sensor, chosen because it provides 360 degrees of coverage [8]. The authors [8] note that this work can be further enhanced by generating images and mapping them during navigation, which is the key feature we are implementing in our project.
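A minimal sketch of how such single-LIDAR reaction logic might look is given below; the scan format, field of view, and safety threshold are our assumptions, not the paper's implementation:

```python
# Reaction logic over a single 360-degree LIDAR scan, assuming `scan` is a
# list of (angle_deg, distance_m) pairs coming from the sensor driver.
SAFE_DISTANCE_M = 0.5  # assumed stopping threshold


def nearest_obstacle_ahead(scan, fov_deg=60):
    """Return the closest reading inside the forward field of view."""
    ahead = [d for a, d in scan if a <= fov_deg / 2 or a >= 360 - fov_deg / 2]
    return min(ahead, default=float("inf"))


def steer_command(scan):
    if nearest_obstacle_ahead(scan) < SAFE_DISTANCE_M:
        return "turn"      # obstacle within threshold: change heading
    return "forward"
```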
In recent years it has been demonstrated that deep learning can outperform traditional algorithms in image classification as well as in object recognition and face recognition tasks.
The paper presented by Nikolaos Baras, Georgios Nantzios, Dimitris Ziouzios and Minas Dasygenis [9] focuses on deep learning for solving highly complex tasks such as perception. The paper [9] deals with deep learning techniques in the field of Computer Vision (CV) using embedded systems and GPU processing on the NVIDIA Jetson TX2. The authors [9] first ran an experiment with a deep convolutional neural network named FADNet and clearly describe the datasets used. The second part of the research covers real-time tests on a dataset of video frames self-acquired from the Jetson TX2 embedded system camera, which yielded promising results. The authors [9] state that accuracy can be further increased by using a fully connected neural network and more powerful GPUs such as the Jetson Nano, which is why we are using a Jetson Nano in our project.
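For illustration, the sketch below shows the kind of convolutional classifier with a fully connected head, placed on a CUDA-capable GPU such as a Jetson's, that this line of work relies on. The layer sizes are illustrative and do not reproduce FADNet's actual architecture:

```python
# An illustrative (not FADNet's) convolutional classifier in PyTorch, showing
# GPU placement of the kind the paper evaluates on Jetson-class hardware.
import torch
import torch.nn as nn


class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected head -- the component the authors suggest for accuracy.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)             # (N, 32, 56, 56) for 224x224 input
        return self.classifier(x.flatten(1))


device = "cuda" if torch.cuda.is_available() else "cpu"  # Jetson exposes CUDA
model = TinyConvNet().to(device)
```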
The conference paper presented by Pawel Skruch, Marek Dlugosz, Pawel Markiewicz, Wojciech Mitkowski, Janusz Kacprzyk and Krzysztof Oprzedkiewics [10] discusses control systems in autonomous vehicles and communication between microprocessors using different protocols. The paper [10] focuses on the challenges that arise when designing a system with quality assurance under increasing complexity. A methodical approach is therefore required, which is presented in this paper [10]. The approach centers on black-box testing and includes test design, implementation, and execution.
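To illustrate the black-box idea, the sketch below tests a hypothetical cruise controller purely through its inputs and outputs, with no knowledge of its internals. The controller is a stand-in of our own, not the paper's system:

```python
# Black-box test design: only external behavior (input -> output) is checked.
import unittest


def cruise_controller(current_speed, target_speed):
    """Hypothetical unit under test: returns a throttle command in [-1, 1]."""
    return max(-1.0, min(1.0, 0.1 * (target_speed - current_speed)))


class BlackBoxCruiseTests(unittest.TestCase):
    def test_accelerates_when_below_target(self):
        self.assertGreater(cruise_controller(20.0, 30.0), 0.0)

    def test_brakes_when_above_target(self):
        self.assertLess(cruise_controller(40.0, 30.0), 0.0)

    def test_command_is_saturated(self):
        self.assertLessEqual(abs(cruise_controller(0.0, 200.0)), 1.0)


if __name__ == "__main__":
    unittest.main()
```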
Most deep learning models improve the performance of either semantic segmentation or object detection, but not both. The paper presented by Yu-Ho Tseng and Shau-Shiun Jan [11] develops a unified network architecture that combines semantic segmentation and object detection to detect people, cars, and roads simultaneously. The model is trained in a unified manner that combines both approaches on a simulation dataset built in the Unity engine, with image labeling done using the MATLAB Image Labeler tool. In their experiments, a frame is processed in about 30 ms on an NVIDIA GTX 1070.
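The core architectural idea can be sketched as one shared backbone feeding two heads, one for per-pixel segmentation and one for detection. The layer sizes below are illustrative and are not taken from the paper:

```python
# A shared backbone with a segmentation head and a detection head.
import torch.nn as nn


class UnifiedNet(nn.Module):
    def __init__(self, num_classes=3):               # people, cars, roads
        super().__init__()
        self.backbone = nn.Sequential(               # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Per-pixel class scores for semantic segmentation.
        self.seg_head = nn.Conv2d(64, num_classes, 1)
        # Per-location box coordinates (4) + objectness (1) for detection.
        self.det_head = nn.Conv2d(64, 5, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return self.seg_head(feats), self.det_head(feats)
```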
The paper presented by Shinpei Kato, Shota Tokunaga, Yuya Maruyama, Seiya Maeda, Manato Hirabayashi, Yuki Kitsukawa, Abraham Monrroy, Tomohito Ando, Yusuke Fujii and Takuya Azumi [12] presents Autoware on Board, a new profile of Autoware adapted to embedded computing capabilities. It [12] uses NVIDIA's DRIVE PX2 computing platform for the development of autonomous vehicles and evaluates its performance on ARM-based embedded processing cores and on Tegra-based embedded graphics processing units (GPUs). The paper [12] focuses on the ARM-based embedded side, covering localization of points, image detection, and clustering of data. "Autoware is a popular open-source software project that provides a complete set of self-driving modules" [12]. Thus, this paper [12] gives us a theoretical basis for choosing techniques for embedded systems in autonomous driving.
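As a rough illustration of the data-clustering step mentioned above, the sketch below groups a placeholder point cloud with DBSCAN from scikit-learn, used here as a stand-in for Autoware's own clustering module; the point data and parameters are assumptions:

```python
# Cluster a (placeholder) LIDAR point cloud into candidate obstacles.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.random.rand(500, 3) * 20.0  # stand-in for real (x, y, z) points
labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(points)

# Each non-negative label is one candidate obstacle; -1 marks noise points.
clusters = [points[labels == k] for k in set(labels) if k != -1]
```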
The paper [13] presents Edge Intelligence (EI), a convergence of the computer systems research community and the AI community created to meet growing demands. Existing cloud computing techniques are not directly applicable to edge computing because of the diversity of computing resources and the distribution of data sources. This paper [13] introduces OpenEI, an open framework for Edge Intelligence that equips edge devices with intelligent processing and data sharing capabilities. The authors [13] clearly describe four application scenarios for OpenEI: Edge Intelligence, edge computing, deep learning, and cloud-edge collaboration. Of these four, edge computing and deep learning are important parts of our project. A limitation of EI is sharing data and collaborating across different AI algorithms, which needs to be kept in mind while using the OpenEI framework.
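One common pattern behind cloud-edge collaboration can be sketched as follows: infer on the edge device first and offload to a cloud service only when local confidence is low. The edge_infer callable, endpoint URL, and threshold below are all our assumptions, not part of OpenEI:

```python
# Edge-first inference with a cloud fallback for low-confidence cases.
import json
import urllib.request

CLOUD_URL = "https://example.com/infer"  # hypothetical cloud inference service
CONF_THRESHOLD = 0.8                     # assumed confidence cutoff


def classify(image_bytes, edge_infer):
    label, confidence = edge_infer(image_bytes)  # fast, local edge model
    if confidence >= CONF_THRESHOLD:
        return label
    # Otherwise offload to the (slower but more accurate) cloud model.
    req = urllib.request.Request(
        CLOUD_URL, data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["label"]
```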
Safety is one of the most important requirements of autonomous vehicles. The challenge in designing an edge computing system is to deliver the large computing power and security needed to guarantee the safety of autonomous vehicles, as proposed in the paper presented by Shaoshan Liu, Liangkai Liu, Jie Tang, Bo Yu, Yifan Wang and Weisong Shi [14]. The paper [14] describes the complexity of autonomous vehicles and how technologies such as localization, sensing, and perception, as well as smooth interaction with cloud platforms for map generation and data storage, make them extremely complex. To address this, the authors [14] first apply edge computing to process large amounts of real-time data coming from the sensors. They [14] also note that enough computing power must be provided even at high speed while still ensuring the safety of the vehicle. Since we are also using different sensors in our project, this paper gives us important guidance for handling data. Secondly, it [14] points out energy constraints on the edge side, i.e., when vehicles communicate with each other, which requires more research. Lastly, it says [14] that if security is compromised then safety cannot be guaranteed, so data need to be protected from attackers.
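As one hedged illustration of protecting sensor data in transit, the sketch below authenticates each reading with an HMAC so tampering by an attacker can be detected; the pre-shared key and message format are our assumptions, not the paper's design:

```python
# Authenticate sensor payloads with an HMAC before transmission.
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-provisioned-device-key"  # assumed pre-shared key


def sign_reading(reading: dict) -> dict:
    payload = json.dumps(reading, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "hmac": tag}


def verify_reading(message: dict) -> bool:
    expected = hmac.new(SECRET_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["hmac"])
```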