
Deep Learning: Is it the Main Challenge Behind Autonomous Vehicles Deployment?

By Fatima Hussain and Rasheed Hussain

September 2018

There has been growing interest in intelligent transportation systems (ITSs) as a way to improve road safety [1] and traffic management. ITS is realized through social interaction among vehicles, which offers a plethora of applications and services ranging from safety to information and entertainment (collectively referred to as infotainment). These applications on one hand guarantee safe driving and on the other hand add value to the driving experience. Over the last decade, researchers from both academia and industry have made significant efforts to realize ITS through communication paradigms such as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. ITS uses existing communication technologies such as Dedicated Short-Range Communication (DSRC), WiFi, 4G/LTE, Bluetooth, WiMAX, and so on.

The Society of Automotive Engineers (SAE) has defined six levels of driving automation. Level 0 has no automation and level 1 refers to function-specific automation. Level 2 refers to partial automation, for instance an advanced driver assistance system (ADAS) where the vehicle can control the steering wheel and apply the brakes automatically under some circumstances; however, the driver must remain behind the wheel. Levels 3 and 4 describe autonomous or self-driving cars: level 3 still allows human intervention, whereas a level 4 autonomous car does not require any. Level 4 autonomous cars will bring a paradigm shift to the automotive industry; however, taking the human out of the equation opens a Pandora's box of social and ethical implications such as ownership of cars, policies among stakeholders, security, privacy, and business models, to name a few. Apart from these issues, the design of an intelligent brain for such autonomous cars is also a daunting challenge. These vehicles must perform numerous challenging tasks and make decisions in order to achieve flawless, risk-free driving as well as information reporting and communication. A few are listed below [2]:

  • Localization and mapping (where am I?)
  • Scene/environment understanding (the where, who, what, and why of everyone else)
  • Movement planning (how do I get from A to B?)
  • Human-robot interaction (HRI) (how is the physical and mental state of the driver translated?)

One approach is to program or train these vehicles with real-world data (road sizes, traffic signals and rules, etc.) and let them decide their actions in real situations. The question, then, is: will a vehicle trained with data from one part of the world work equally well in another part? Furthermore, despite testing the vehicles with such data, there are always unforeseen circumstances that need to be handled, for instance 'right-of-way', 'the trolley problem', 'an elderly person crossing the road', 'pedestrians crossing the road while the traffic light is red', and many more. In other words, autonomous cars lack human empathy. Such circumstances limit the functionality of autonomous cars.

In this context, Deep Learning (DL) is a special branch of Machine Learning (ML) whose roots go back to Artificial Neural Networks (ANNs), which mimic the human brain. DL uses multi-layer neural networks that are trained layer by layer. We can explain this multi-layer DL concept using the example of image recognition, as shown in Figure 1. The first layer learns primitive features, such as color in an image, by finding frequently occurring combinations of digitized pixels. These identified features are fed to the next layer, which trains itself to recognize more complex features (combinations of already identified ones), such as an edge or corner in an image. This recognition process continues through successive layers until the system can identify the object [3].

Figure 1: Multi-Layered DL in Image Recognition
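The layer-by-layer idea can be sketched in a few lines of Python. This is a toy illustration only, not a production vision model: the 5x5 "image" and the edge filter below are made up for demonstration, and in a real deep network the filter weights are learned from data rather than written by hand.

```python
import numpy as np

# Toy 5x5 grayscale "image" with a vertical edge down the middle
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# Layer 1: a hand-crafted vertical-edge detector (in a trained CNN
# these weights would be learned, not specified by hand)
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)

def convolve2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# Layer 1 output: strong responses where the edge is, zero elsewhere
features_1 = np.maximum(convolve2d(image, edge_filter), 0)  # ReLU activation

# Layer 2: combine the layer-1 feature map into an "edge present?" score
edge_score = features_1.mean()
print(edge_score > 0)
```

Each layer transforms the previous layer's output into a slightly more abstract representation; stacking many such layers is what makes the network "deep".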

In the context of autonomous driving, DL has been used for object detection, image recognition, and perception, performing tasks like locating and identifying lanes and locating objects and pedestrians with the help of sensors and computer vision techniques.

We can say that the intelligence of these vehicles depends on computations occurring deep within recurrent or convolutional neural networks. Clearly understanding and adjusting their level of intelligence is very important and very difficult, especially when human lives are at stake. Deep neural networks can produce great results if fed with large amounts of data, and vehicles can learn a lot from previous experience. However, it is not possible to create a data set that includes every environment; in other words, the requirements for autonomous driving are not constant. These ML systems also lack "common sense", the ability to act beyond the trained data or on ambiguous data, which is a profound human instinct: the ability to transfer learning between different situations.
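The "beyond the trained data" limitation can be illustrated with a toy experiment (the numbers and scenario are hypothetical, not from any real driving system): a model fit only on city-speed data can be badly wrong when asked about highway speeds it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: braking distance grows quadratically with
# speed, but the training set only covers city speeds (0-60 km/h)
speed_train = rng.uniform(0, 60, 200)
distance_train = 0.005 * speed_train**2 + rng.normal(0, 0.5, 200)

# Fit a simple linear model -- it matches the training range well
slope, intercept = np.polyfit(speed_train, distance_train, 1)

def predict(speed):
    return slope * speed + intercept

# In-distribution query: the error is small
true_50 = 0.005 * 50**2            # about 12.5 m
err_in = abs(predict(50) - true_50)

# Out-of-distribution query: a highway speed never seen in training
true_120 = 0.005 * 120**2          # about 72 m
err_out = abs(predict(120) - true_120)

print(err_in, err_out)  # the extrapolation error is far larger
```

A learned model, however accurate inside its training distribution, gives no guarantee outside it; a human driver, by contrast, transfers experience to situations never encountered before.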

To this end, whether DL alone is the solution for autonomous cars remains unclear, given the current status of autonomous cars and their battle for commercialization. Social and economic implications, as well as recent incidents in which autonomous cars injured or killed people, are also impeding the momentum of the commercialization process. Researchers and technologists are working on both new techniques and improvements to existing ML and DL approaches, but autonomous cars still have a long way to go to reach the commercialization stage. Hopes are high that, learning from its mistakes, the autonomous car will at some point pave its way to commercialization, provided that the concerns of all stakeholders are addressed in a satisfactory way.

References

[1] F. Hussain, H. Farahneh, X. Fernando and A. Ferworn; VLC Enabled Foglets Assisted Road Asset Reporting. In IEEE Vehicular Technology Conference (VTC).

[2] S. S. Mousavi, M. Schukat and E. Howley; Deep Reinforcement Learning: An Overview. In Proceedings of the SAI Intelligent Systems Conference.

[3] https://selfdrivingcars.mit.edu/

[4] https://www.technologyreview.com/s/513696/deep-learning/

Dr. Fatima Hussain is currently an Adjunct Professor at Ryerson University, Toronto. Prior to this, she was an Assistant Professor at the University of Guelph, Canada. Dr. Hussain received her PhD and MASc in Electrical & Computer Engineering from Ryerson University. She is engaged in various NSERC-funded industrial projects such as Smart Machine Automation, Smart Warehouse, and Smart Watch. Dr. Hussain has more than 8 years of teaching and research experience in the GTA and overseas. Her research interests include machine learning, Internet of Things networks, and public safety. She has dozens of journal and conference papers and an introductory book, "Internet of Things: Building Blocks and Business Models", to her credit. She serves as an editor and technical lead for the IEEE WIE Newsletter, Toronto section.

Dr. Rasheed Hussain received his BS in Computer Software Engineering from N-W.F.P University of Engineering and Technology, Peshawar, Pakistan in 2007, and his MS and PhD degrees in Computer Engineering from Hanyang University, South Korea, in 2010 and February 2015, respectively. He worked as a Postdoctoral Research Fellow at Hanyang University, South Korea from March 2015 until August 2015. He then worked as a guest researcher at the University of Amsterdam (UvA), Netherlands and as a consultant for Innopolis University, Russia from September 2015 until June 2016. Dr. Hussain is currently an Assistant Professor at Innopolis University, Russia, where he is establishing a new Masters program in Secure System and Network Engineering. He has authored and co-authored more than 45 papers in renowned national and international journals and conferences. He serves as a reviewer for many journals from IEEE, Springer, Elsevier, and IET, including IEEE Sensors Journal, IEEE TVT, IEEE T-ITS, IEEE TIE, IEEE Communications Magazine, Elsevier Ad Hoc Networks, Elsevier JPDC, Elsevier VehCom, Springer WIRE, Springer JNSM, and many more. He has also served as a reviewer and/or TPC member for renowned international conferences including IEEE INFOCOM, IEEE GLOBECOM, IEEE VTC, IEEE VNC, IEEE ICC, IEEE PCCC, and IEEE NoF.

Editor: 

Muhammad Bilal is an assistant professor of computer science in the Department of Computer and Electronic Systems Engineering at Hankuk University of Foreign Studies, Yongin, Rep. of Korea. He received his PhD degree in Information and Communication Network Engineering from the Korea University of Science and Technology, School of Electronics and Telecommunications Research Institute (ETRI), his MS in computer engineering from Chosun University, Gwangju, Rep. of Korea, and his BS degree in computer systems engineering from the University of Engineering and Technology, Peshawar, Pakistan. Prior to joining Hankuk University of Foreign Studies, he was a postdoctoral research fellow at the Smart Quantum Communication Center, Korea University. He has served as a reviewer for various international journals including IEEE Systems Journal, IEEE Access, IEEE Communications Letters, IEEE Transactions on Network and Service Management, Journal of Network and Computer Applications, Personal and Ubiquitous Computing, and International Journal of Communication Systems. He has also served as a program committee member for many international conferences. His primary research interests are design and analysis of network protocols, network architecture, network security, IoT, Named Data Networking, cryptology, and the Future Internet.