Autonomous vehicles and cybersecurity challenges. ENISA recommendations

March 29, 2021

As we anticipated, autonomous vehicles are getting closer to Spanish roads. They pose legal, ethical and cybersecurity challenges. This blog entry focuses on the latter.

Manufacturers must (i) solve complex organizational, technical and technological issues related to autonomous driving; and (ii) ensure the safe implementation of Artificial Intelligence (AI) techniques regarding cybersecurity risks.

As discussed on this blog, risks for internet-connected cars include unauthorized access to users’ personal data, theft of vehicles unlocked without a key, takeover of a vehicle’s systems, and tampering with the AI-based image classifiers built into the vehicle (e.g., causing a stop sign to be misclassified as a speed limit sign, so that the car slows down instead of stopping).
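The image-classifier attack mentioned above is an instance of an adversarial example. As a minimal, purely illustrative sketch (not taken from the ENISA report), the following toy linear classifier is fooled by a small gradient-based perturbation; all weights, labels and numbers are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifier: class 0 = "stop sign", class 1 = "speed limit sign".
# The weights and the input below are made up purely for illustration.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def predict(x):
    return int(np.argmax(W @ x))

def adversarial(x, true_label, eps):
    """FGSM-style step: move x along the sign of the loss gradient.

    For cross-entropy over a linear model, the gradient w.r.t. x is
    W.T @ (softmax(W @ x) - onehot(true_label)).
    """
    p = softmax(W @ x)
    onehot = np.eye(len(p))[true_label]
    grad = W.T @ (p - onehot)
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.0])              # clearly a "stop sign" to this model
x_adv = adversarial(x, true_label=0, eps=1.5)

print(predict(x))      # 0 -> "stop sign"
print(predict(x_adv))  # 1 -> misclassified as "speed limit sign"
```

In a real vision model the same idea operates on pixel values, and the perturbation can be small enough to be invisible to a human observer.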

EU institutions consider this an urgent matter. Therefore, ENISA published a report examining AI cybersecurity challenges for autonomous vehicles and providing recommendations to mitigate them. 

1. Systematic security validation of AI models and data

  • Data governance must be defined and adapted to the data used in autonomous driving, clarifying, e.g., who owns the data, who may access them, and how they may appropriately be used. Since AI models change over time, ENISA recommends performing systematic security and robustness assessments to (i) prevent vulnerabilities after model updates; and (ii) ensure the quality and reliability of autonomous driving systems.
  • ENISA also recommends (i) establishing proactive and reactive monitoring and maintenance processes for AI models; (ii) performing risk assessments that specifically consider the AI components throughout their lifecycle; (iii) adopting resilience mechanisms, including alternative plans and incident response activities; (iv) implementing audit, monitoring and testing processes for vehicle operations and incidents; and (v) introducing additional validation systems for ongoing data verification.
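The systematic assessments after model updates could take the form of a deployment gate. The sketch below is a hypothetical illustration, not a prescribed ENISA mechanism: the metric names and thresholds are assumptions, and in practice the robustness metric would come from dedicated adversarial testing.

```python
from dataclasses import dataclass

@dataclass
class ModelAssessment:
    """Results of a security/robustness assessment of an updated AI model."""
    model_version: str
    accuracy: float          # accuracy on a clean validation set
    robust_accuracy: float   # accuracy under adversarial perturbations

def approve_update(a: ModelAssessment,
                   min_accuracy: float = 0.95,
                   min_robust_accuracy: float = 0.80) -> bool:
    """Gate deployment: the update goes live only if both thresholds hold."""
    return (a.accuracy >= min_accuracy
            and a.robust_accuracy >= min_robust_accuracy)

ok = approve_update(ModelAssessment("v2.1", accuracy=0.97, robust_accuracy=0.85))
blocked = approve_update(ModelAssessment("v2.2", accuracy=0.97, robust_accuracy=0.62))
```

The point of the gate is that a model update with good clean accuracy but degraded robustness is still blocked from deployment.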

2. Supply chain challenges related to AI cybersecurity

  • The supply chain is key for cybersecurity. Therefore, it is essential to shield the supply chain to prevent attacks or security breaches. Security processes in the supply chain should be flexible and dynamic to capture AI-specific features and updates.
  • The most notable recommendations are: (i) establishing an appropriate AI security policy across the supply chain, including third parties; (ii) ensuring governance of that policy throughout the supply chain; (iii) developing an AI security policy that protects stakeholders and identifies and monitors AI-related risks in autonomous driving; and (iv) requiring compliance with automotive industry regulations across the supply chain.

3. Integrating AI cybersecurity with traditional cybersecurity principles through an end-to-end holistic approach

  • Since software systems change over time, software functionality and cybersecurity measures must be updated continuously. ENISA also recommends integrating the AI components of autonomous vehicles with the rest of the vehicle’s systems, so that they, too, are subject to traditional cybersecurity principles. 
  • ENISA’s recommendations include (i) ensuring proper governance of the AI cybersecurity policy; (ii) creating an AI cybersecurity culture across the automotive industry; and (iii) promoting secure design and implementation patterns for AI-based components. ENISA also encourages autonomous driving research projects and solutions to prevent the jamming of vehicle sensors, which will be essential to achieve a cross-cutting security system based on traditional cybersecurity principles.

4. Incident handling, vulnerability discoveries related to AI and lessons learned

  • Given the increase in digital components in current vehicles, there must be a clear distinction between AI-based and non-AI systems, so as to track and identify potential vulnerabilities strictly related to AI decisions.
  • ENISA’s recommendations include: (i) adapting incident response plans to the specificities of AI; (ii) encouraging a learning culture around technological errors, with a case-by-case approach that considers the various incidents and required responses; (iii) holding disaster drills with senior management in the automotive industry so they understand the potential impact of vulnerabilities; (iv) establishing mandatory standards for AI security incident reporting; and (v) developing simulated incidents to raise awareness and knowledge across the industry. 
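The distinction between AI-based and non-AI systems could be reflected directly in how incidents are recorded, so that vulnerabilities tied to AI decisions can be tracked separately. The following sketch is a hypothetical illustration; the field names and example incidents are assumptions, not part of the ENISA report.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    incident_id: str
    description: str
    ai_related: bool               # True if an AI decision contributed
    components: list = field(default_factory=list)

log = [
    Incident("INC-001", "Sensor jamming detected", ai_related=False,
             components=["lidar"]),
    Incident("INC-002", "Stop sign misclassified as speed limit sign",
             ai_related=True, components=["vision-classifier"]),
]

# Vulnerabilities strictly related to AI decisions can then be reported
# and analyzed apart from conventional software faults.
ai_incidents = [i for i in log if i.ai_related]
```

Tagging at intake is what makes the later steps (AI-specific response plans, mandatory AI incident reporting) operational rather than ad hoc.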

5. Limited capacity and knowledge on AI cybersecurity in the automotive industry

  • The lack of knowledge and maturity is one of the automotive industry’s major challenges. Developers must promote cybersecurity protocols, but it is also important that all stakeholders in the automotive industry be aware of potential risks and learn how to prevent them. 
  • ENISA’s recommendations include: (i) integrating AI cybersecurity specificities into corporate and organizational policies; (ii) creating teams of cybersecurity experts from the various fields involved in vehicle production and supervision; (iii) involving mentors to assist in the adoption of AI security practices; and (iv) launching security education and training programs focused on AI cybersecurity systems across the automotive industry.

The latest technological developments and advancements in the automotive industry suggest that autonomous driving is close to becoming part of our daily lives. However, before fully deploying autonomous cars, we must shape the regulatory framework and implement appropriate security measures to ensure reliable, safe, legal and ethical autonomous driving. This blog will follow any further developments closely.

Authors: Ainhoa Rey, Josu Andoni Eguiluz and Octavi Oliu
