How Does EDA Form the Basis for Designing Secure Systems?

Adam Cron, Brandon Wang

Oct 08, 2020 / 5 min read

Introduction to IoT Security Risks

As Internet of Things (IoT) devices rapidly increase in popularity and deployment, security risks are arising at all levels: at the usability level (social engineering, pretexting, phishing), at the primitive level (cryptanalysis), at the software level (client-side scripting, code injection), and now even at the hardware level. During hardware operation, we often see risks from side-channel analysis, cold-boot attacks, and fault injection; during the hardware design and manufacturing cycles, notable risks include IP theft, reverse engineering, cloning, and hardware trojans.

Figure 1: Security infrastructure in an SoC

Security Infrastructure in SoCs

A secure system needs implementation support at all levels as well. At each level, the strength of a security mechanism is evaluated on the assumption that the underlying implementations satisfy a given set of security requirements, which are in turn enforced at the lower levels. The most common and important security requirements are confidentiality and integrity. Confidentiality demands that a system does not unintentionally leak information and is accomplished through the deployment of secret cryptographic keys. Integrity demands that a system perform no more and no less functionality than expected and is checked by verification and testing techniques. Ensuring a system performs no more functionality than expected is a much harder problem, but many solutions are now being developed in this area. Recently, DARPA selected Synopsys as a prime contractor for its Automatic Implementation of Secure Silicon (AISS) program, which is very much focused on hardware silicon security implementation and automation.
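As a rough illustration of the integrity requirement, the sketch below (in Python, purely for exposition; the two-bit adder, function names, and exhaustive check are illustrative assumptions, not anything from the AISS program) compares a small implementation against its specification on every input, which is the sense in which a block is shown to do no more and no less than expected.

    # Minimal sketch of the integrity idea: exhaustively check that a small block
    # implements exactly its specification, no less (missing cases) and no more
    # (extra behavior). The 2-bit adder is an illustrative example only.

    from itertools import product

    def spec_add(a: int, b: int) -> int:
        """Specification: 2-bit addition with a carry-out bit (3-bit result)."""
        return (a + b) & 0b111

    def impl_add(a: int, b: int) -> int:
        """Gate-level-style implementation to be checked against the spec."""
        s0 = (a ^ b) & 1
        c0 = (a & b) & 1
        s1 = ((a >> 1) ^ (b >> 1) ^ c0) & 1
        c1 = (((a >> 1) & (b >> 1)) | (c0 & ((a >> 1) ^ (b >> 1)))) & 1
        return (c1 << 2) | (s1 << 1) | s0

    # Integrity holds only if spec and implementation agree on every input.
    mismatches = [(a, b) for a, b in product(range(4), repeat=2)
                  if spec_add(a, b) != impl_add(a, b)]
    print("integrity check:", "pass" if not mismatches else f"fail {mismatches}")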

The AISS Program and Security Implementation

As a core EDA tool business unit, Synopsys' Design Group is a key part of the AISS program. Even though security implementation is not a new concept in IC design, many designers are unaware of the techniques used for it: restricting logic don't cares, disabling unused finite states, static code encryption in secure cores, dynamic code integrity verification, fault-tolerant design, watermarking, and more. Implementation engineers who are experts in security also have to weigh the trade-offs among security, power, cost, and performance; a difficult task that is even harder for the average designer. DARPA's AISS program aims to provide a practical solution in this space. The effort will not only automate security design and address its challenges; casting the security inside the tools also raises the security level, because some of the security mechanics never have to be exposed to the wider development community.
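As one concrete example of the techniques mentioned above, the Python sketch below models the idea of disabling unused finite states: every illegal state encoding is trapped to a safe reset state so that a glitch or fault cannot drop the machine into undefined behavior. The state names, encodings, and the next_state helper are hypothetical, not taken from any Synopsys tool.

    # Minimal sketch (not Synopsys tooling): why unused FSM states matter.
    # A 3-bit state register encodes 5 legal states; the remaining 3 encodings
    # are "don't cares" that synthesis may map arbitrarily unless pinned down.

    LEGAL_STATES = {0b000: "IDLE", 0b001: "LOAD_KEY", 0b010: "ENCRYPT",
                    0b011: "DONE", 0b100: "ERROR"}
    RESET_STATE = 0b000  # safe state every illegal encoding is forced into

    def next_state(state: int, go: bool) -> int:
        """Return the next state; any unused encoding is trapped to RESET_STATE."""
        if state not in LEGAL_STATES:        # 0b101, 0b110, 0b111: unused encodings
            return RESET_STATE               # hardened: no undefined behavior to exploit
        if state == 0b000 and go:
            return 0b001
        if state == 0b001:
            return 0b010
        if state == 0b010:
            return 0b011
        return state                         # DONE and ERROR hold their state

    # A glitch or fault that lands the register in 0b110 recovers safely:
    print(LEGAL_STATES[next_state(0b110, go=True)])   # -> IDLE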

AISS is a four-year, three-phase program. Phase 1 puts a security system together, phase 2 adds automation to it, and phase 3 optimizes the functionality, which is where EDA can really shine. For example, a designer knows how to run a simple flow, but for security there might be some special functionality that we don't want to expose to the designer. We just want to put that structure in the design and leave all the implementation to the tools and automation without exposing the design intent. Sometimes, the automation required for one domain is at odds with the needs of the security domain. Scan insertion, for example, which gives complete access to every flip-flop in the design, is an easy target for a nefarious actor to get in and control the inputs or observe the outputs to see what is going on inside the design. Adding compression makes this a little harder, with multiplexers on the front end or exclusive-OR gates on the back end. We can also add compression functions that use linear feedback shift registers on the front end and multiple-input signature registers on the back end. Although this further obfuscates the data going in and out, it is still susceptible to attack by a bad actor.
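To make the scan-compression discussion more tangible, here is a small Python sketch of the two structures mentioned above: a linear feedback shift register expanding stimulus on the front end and a multiple-input signature register compacting responses on the back end. The register width, tap polynomial, and the stand-in "circuit response" are illustrative assumptions; real scan-compression hardware is considerably more elaborate.

    # Minimal sketch (illustrative, not a real scan-compression implementation):
    # a 16-bit LFSR drives scan-chain inputs and a 16-bit MISR compacts the
    # outputs, so raw flip-flop contents are never directly visible at the pins.

    def lfsr_step(state: int, taps: int = 0b1011010000000000, width: int = 16) -> int:
        """Fibonacci LFSR: shift left, feed back the parity of the tapped bits."""
        fb = bin(state & taps).count("1") & 1
        return ((state << 1) | fb) & ((1 << width) - 1)

    def misr_step(state: int, data_in: int, taps: int = 0b1011010000000000,
                  width: int = 16) -> int:
        """Multiple-input signature register: LFSR step XORed with the scan output."""
        return lfsr_step(state, taps, width) ^ (data_in & ((1 << width) - 1))

    # Generate a few decompressed stimulus words and compact hypothetical responses.
    stimulus, signature = 0xACE1, 0x0000
    for cycle in range(4):
        stimulus = lfsr_step(stimulus)
        response = stimulus ^ 0xFFFF            # stand-in for the circuit's scan output
        signature = misr_step(signature, response)

    print(f"final signature: {signature:#06x}")  # only the signature leaves the chip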

Challenges and Future Directions in Security Implementation

From a flow standpoint, there are certain things a user expects. For example, when they convert RTL to gates, they expect to be able to run verification and have everything come out correct. In some cases there will be RTL-to-RTL transformations, so when we insert some sort of key or locking function between one representation and the next, we need to make sure that the additional registers or extra combinational cones are still recognizable from one RTL to the other. Likewise, if we insert gates at synthesis time, between the RTL and gate levels, we want to make sure that the structures we put in earlier are not wiped away when synthesis optimization comes through, suddenly breaking the design from a security standpoint. And afterwards, we want full verification to work now that we have added extra keys and modified cones, whether the keys are known or unknown.
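A simple way to picture that flow requirement is a post-synthesis audit confirming the inserted security structures are still present. The Python sketch below assumes a hypothetical naming convention (lock_key_reg*, lock_xor_*) and a flat Verilog-style netlist; it is a stand-in for the more formal, name-independent checks a production flow would need.

    # Minimal sketch (hypothetical netlist format): check that locking logic
    # inserted before synthesis is still present afterwards, i.e. that
    # optimization did not sweep away key registers or lock gates.

    import re

    SECURITY_PREFIXES = ("lock_key_reg", "lock_xor_")   # naming convention assumed here

    def security_instances(netlist_text: str) -> set[str]:
        """Collect instance names in a flat Verilog-style netlist that match the
        security naming convention, e.g. 'XOR2 lock_xor_0 ( ... );'."""
        names = re.findall(r"\b(\w+)\s*\(", netlist_text)
        return {n for n in names if n.startswith(SECURITY_PREFIXES)}

    def check_security_preserved(pre_synth: str, post_synth: str) -> list[str]:
        """Return the security instances that disappeared during synthesis."""
        return sorted(security_instances(pre_synth) - security_instances(post_synth))

    # Usage with tiny inline netlists (a real run would read the .v files from disk):
    pre = "XOR2 lock_xor_0 (.A(a), .B(k0), .Z(n1)); DFF lock_key_reg_0 (.D(d), .Q(k0));"
    post = "DFF lock_key_reg_0 (.D(d), .Q(k0));"
    print(check_security_preserved(pre, post))   # -> ['lock_xor_0'], so flag the run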

Figure 2: Obfuscation and locking added to design logic

AISS will expand the PPA-driven implementation concept that the industry has used throughout the Moore's law era to PPAS, with the "S" indicating the security constraint. While PPA (power, performance, and area) is well defined in watts, hertz, and nanometers, defining security in the quantitative manner needed for algorithmically driven automation is more complicated. In fact, not only does the risk level itself need a quantitative specification, but the cost of security relative to risk also needs to be valued quantitatively to generate a high-level cost function for optimization. And once these parameters are defined, we will need industry standardization of the related security-level definitions and measurements. Unlike PPA, design for security is a complex combinatorial problem: the general case tends to be NP-hard, though special cases with lower complexity exist. It is more than an optimization algorithm, since it depends on control- and data-flow-based security analysis, which makes the work very exciting.

Figure 3: PPA plus Security
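One way to picture PPAS is as a weighted cost function in which a quantified security score sits alongside power, performance, and area. The Python sketch below is a toy objective with made-up weights, targets, and numbers; it is not how AISS defines or measures security, only an illustration of why security must be expressed quantitatively before an optimizer can trade it off against PPA.

    # Minimal sketch of the PPAS idea: fold a quantified security score into the
    # same kind of weighted cost function that already drives PPA optimization.
    # The weights, units, and security metric below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        power_mw: float        # dynamic + leakage power estimate
        freq_mhz: float        # achievable clock frequency
        area_um2: float        # cell area
        security_score: float  # 0.0 (unprotected) .. 1.0 (all required protections met)

    def ppas_cost(c: Candidate, w_power=1.0, w_perf=1.0, w_area=1.0, w_sec=2.0,
                  targets=(50.0, 800.0, 1.0e5, 0.9)) -> float:
        """Lower is better: normalized distance from PPA targets plus a penalty
        for falling short of the target security level."""
        t_pwr, t_frq, t_area, t_sec = targets
        cost = (w_power * c.power_mw / t_pwr
                + w_perf * t_frq / max(c.freq_mhz, 1e-9)
                + w_area * c.area_um2 / t_area)
        cost += w_sec * max(0.0, t_sec - c.security_score)   # security shortfall penalty
        return cost

    # Compare an unprotected design against one with 128-bit logic locking added.
    baseline = Candidate(power_mw=48.0, freq_mhz=820.0, area_um2=9.5e4, security_score=0.2)
    locked   = Candidate(power_mw=51.0, freq_mhz=790.0, area_um2=1.02e5, security_score=0.95)
    print(ppas_cost(baseline), ppas_cost(locked))  # the hardened design scores lower
    # once the security shortfall is priced in, despite its small PPA overhead.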

To give a bit more color: to protect the supply chain, we are applying watermarking. We watermark IP so that, as it gets out into the field, we can see whether the IP is ours. To protect against reverse engineering, we have logic locking and obfuscation features that we can choose to implement. We can draw on those security tools and implementation details and apply the features to RTL or a design based on our security goals. If we are trying to increase supply chain security, then watermarking or locking can be implemented; for reverse engineering, obfuscation may be combined with locking. There are different levels of protection you might want to set for these four kinds of security issues (supply chain, reverse engineering, side-channel attack, trojans), and by dialing these things in we can pick a certain number of these modifications and tune each to a certain level: a 128-bit lock or a 256-bit lock? Will the watermark be 50 steps or 1,000 steps?
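For readers unfamiliar with logic locking, the Python sketch below shows the basic XOR key-gate idea behind the locks mentioned above: selected nets only carry their intended values when the correct key bits are applied, so a wrong key corrupts the truth table. The two-bit example, function names, and key handling are illustrative assumptions rather than the AISS implementation.

    # Minimal sketch (not the AISS implementation) of XOR-based logic locking:
    # key gates are inserted on selected nets so the netlist only reproduces the
    # intended function when the correct key is applied.

    import secrets

    CORRECT_KEY = [secrets.randbits(1) for _ in range(2)]   # 2 of, say, 128 key bits

    def original(a: int, b: int) -> int:
        """The function being protected: a AND b."""
        return a & b

    def locked(a: int, b: int, key: list[int]) -> int:
        """Locked netlist: a locked net is inverted iff its correct key bit is 1,
        then XORed with the supplied key bit, so only the right key restores it."""
        n1 = a ^ CORRECT_KEY[0] ^ key[0]          # key gate on input a
        n2 = n1 & b
        return n2 ^ CORRECT_KEY[1] ^ key[1]       # key gate on the output net

    inputs = [(a, b) for a in (0, 1) for b in (0, 1)]
    assert all(locked(a, b, CORRECT_KEY) == original(a, b) for a, b in inputs)

    wrong = [bit ^ 1 for bit in CORRECT_KEY]      # attacker guesses the key wrong
    print([original(a, b) for a, b in inputs])       # [0, 0, 0, 1]
    print([locked(a, b, wrong) for a, b in inputs])  # corrupted truth table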

Our ultimate goal is to accelerate the timeline from architecture to security-hardened RTL from one year to one month, even one week, and to do so at a substantially reduced cost. The design flow pieces need to interact with each other so that features inserted at earlier steps are not lost during later flow steps. Coordination of data elements and their use, from design through deployment, needs to be maintained so that keys can unlock the device functionalities and watermarks can be validated in the field.
