Wibu-Systems Blog

Rüdiger Kügler

Recent Posts

From Stuxnet to iPhone: The evolution of modern computer viruses

Posted by Rüdiger Kügler on Sep 22, 2015 12:50:13 PM

Whether it is Stuxnet or an iPhone virus, it is people who cause the trouble. But let’s go back to how the story began: until just a few days ago, it was unthinkable for an iPhone to be infected with a virus. The concept of the App Store itself – which only allows the distribution of software authorized by Apple – seemed to make the spread of viruses through its apps impossible. It is the same belief we held about one of Siemens’ controllers years ago: "They can never be subject to viruses." It was just a matter of time before both assumptions were proven wrong.

What happened then? Any software running in a closed system, like an iPhone, must be signed by its software publisher. For this purpose, the developer uses a key pair consisting of a private and a public key. The private key is kept secret and used for the cryptographic signature. The public key is signed by the manufacturer of the closed system, in this case Apple, with its own private key (the root key). The resulting electronic document – which includes the developer’s public key and the signature from Apple – is called a certificate. For validation, the closed system only requires the public root key, which Apple has already included in iOS: "Only developers that I know and trust are allowed to run software in my closed system."
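
To make this chain of trust more concrete, here is a minimal sketch in Python using the third-party cryptography package. It is purely illustrative: the key names, the dictionary standing in for a certificate, and the sample data are my own assumptions, not Apple's actual format, which uses X.509 certificates and many additional checks.

```python
# Minimal sketch of the signing chain described above; illustrative only.
# Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# 1. The platform vendor (here: "Apple") owns a root key pair.
#    The public root key ships inside the closed system (iOS).
root_key = ec.generate_private_key(ec.SECP256R1())
public_root_key = root_key.public_key()

# 2. The developer creates a key pair and keeps the private key secret.
dev_key = ec.generate_private_key(ec.SECP256R1())
dev_public_pem = dev_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

# 3. The vendor signs the developer's public key with the root key.
#    The public key plus this signature together form the "certificate".
certificate = {
    "developer_public_key": dev_public_pem,
    "vendor_signature": root_key.sign(dev_public_pem, ec.ECDSA(hashes.SHA256())),
}

# 4. The developer signs the compiled app with the private developer key.
app_binary = b"...compiled app bytes..."
app_signature = dev_key.sign(app_binary, ec.ECDSA(hashes.SHA256()))

# 5. The closed system verifies the whole chain using only the public root key.
def device_accepts(binary, signature, cert, root_pub):
    try:
        # Is the developer key vouched for by the platform vendor?
        root_pub.verify(cert["vendor_signature"],
                        cert["developer_public_key"], ec.ECDSA(hashes.SHA256()))
        # Was the binary really signed by that developer, and left unchanged?
        dev_pub = serialization.load_pem_public_key(cert["developer_public_key"])
        dev_pub.verify(signature, binary, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(device_accepts(app_binary, app_signature, certificate, public_root_key))            # True
print(device_accepts(app_binary + b"virus", app_signature, certificate, public_root_key)) # False
```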

In a jailbreak, this mechanism is undermined by the user of the device: a modified operating system skips the check. Any software can then run on the device, but the user of a jailbroken phone inadvertently opens the door to viruses as well. The current incident, however, affects legitimate users (those without jailbreaks) too.

And why is the iPhone case so similar to Stuxnet? In both cases, the development environment of the software developer was attacked. This means that the virus had already taken hold of the software after compilation, but before the application was signed. When the developer signed the software, he signed the virus along with it, and it thus passed any verification unnoticed. Compared to this, the Stuxnet attack occurred at an alarmingly lower level. The new incident exploited a human weakness – convenience first and foremost – by offering a tampered pirated copy of Xcode for download. China was affected most, as the use of pirated products there is widespread and usually regarded as a minor offense.

What are the takeaways from this incident?

  • Even free software needs protection against piracy, protection against reverse engineering, and very robust integrity protection.
  • Software must be signed in a trusted environment. For instance, the signing key should be stored safely in a secure hardware element.
  • Even in a closed system, we should not assume that all software will be reviewed in detail and take our security for granted. The review process is only one link in the protection chain.
  • A security solution is only as good as the weakest point in the chain. Even the best approach may be undermined, if it is not done holistically.
  • A protection solution must offer the same level of security across all platforms. This is where a professional solution like CodeMeter comes into play.

Siemens responded quickly and did a good job after all. Let's hope that Apple is equally responsive. If you are ready to implement the lessons learned from this episode, you can count on CodeMeter, our all-in-one protection suite, and on the professional expertise of our team.

Topics: software protection, CodeMeter, Code Integrity

Michelangelo, anti-virus software, Authenticode, AxProtector, and You

Posted by Rüdiger Kügler on Oct 31, 2014 3:52:49 PM

This story started with Michelangelo and the end of it is still not in sight. Michelangelo is not only the name of an Italian Renaissance artist, but also of one of the earliest computer viruses. Michelangelo was named after the maestro, as it was activated on his birthday. Until then it kept silent, without perpetrating any evil deed.

Since then, the number of viruses has skyrocketed, and the damage they cause is tremendous. In parallel, anti-virus vendors have developed their technology just as rapidly. Yet as smart as they are, virus scanners still produce false positives, identifying certain software as a possible virus when it is actually harmless. Why does this happen particularly often with protected software applications, and what can ISVs do about it?

The virus scanners that first appeared on the scene were simple. The anti-virus vendor studied a virus in the laboratory and picked out a significant fingerprint, referred to as a signature, which was stored in a virus signature file. This signature, however, was not a cryptographic signature (I will clarify that concept later). The virus signature file was then delivered together with the virus scanner. The scanner searched the user’s PC – executable files first, then all other data files – for matches against the signatures in the virus signature file. If a match was found, a virus was reported. If a signature was very short, false positives could occur, although this was extremely rare. I clearly remember one such case I encountered personally: the affected customer was a radio presenter who called our hotline. We contacted the vendor of the anti-virus software he was using, who then adjusted the virus signature file. In the late 90s, when the Internet was in its infancy, this process was quite a burden.
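
A first-generation scanner of this kind can be pictured as little more than a byte-pattern search over files. The sketch below is purely illustrative; the signature patterns and names are invented placeholders, not real virus fingerprints.

```python
# Naive, first-generation style virus scanner: search files for known byte patterns.
# The "signatures" below are invented placeholders, not real virus fingerprints.
import os

VIRUS_SIGNATURES = {
    "ExampleVirus.A": bytes.fromhex("deadbeef0102"),
    "ExampleVirus.B": b"\x4d\x5a\x90\x00FAKE",
}

def scan_file(path):
    """Return the names of all signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in VIRUS_SIGNATURES.items() if pattern in data]

def scan_directory(root):
    for dirpath, _dirs, files in os.walk(root):
        for filename in files:
            path = os.path.join(dirpath, filename)
            hits = scan_file(path)
            if hits:
                print(f"{path}: possible virus ({', '.join(hits)})")

# A short pattern can, by pure chance, also appear in a perfectly harmless file,
# which is exactly how the rare early false positives arose.
```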

As new viruses multiplied and became able to modify and hide themselves, this simple technique proved insufficient. Today, a virus scanner looks for behavioral patterns and anomalies in the application, usually with a scoring system. Which pattern details are collected, how they are weighted, and at which threshold a virus alert is triggered are the trade secrets of today’s anti-virus vendors. I can only guess that, for instance, calls that change the write protection of an application’s memory are analyzed as well. In principle, I am sure that applications protected with encryption against reverse engineering receive additional points in the scoring system; this has been confirmed by several anti-virus vendors. The reason is that viruses use exactly these mechanisms to hide themselves. It therefore takes fewer matching patterns to reach the threshold, which explains why most false positives are protected applications.
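
How such a scoring system might look in its simplest form is sketched below. The traits, weights, and threshold are invented for illustration and have nothing to do with any real anti-virus engine.

```python
# Toy behavioural scoring system: traits, weights and threshold are invented
# for illustration and have nothing to do with any real anti-virus engine.
SUSPICIOUS_TRAITS = {
    "encrypted_code_section": 40,           # typical for packers, and for protected software
    "changes_memory_write_protection": 30,
    "writes_to_other_processes": 50,
}
ALERT_THRESHOLD = 80

def score(observed_traits):
    """Add up the weights of all traits observed in the application."""
    return sum(SUSPICIOUS_TRAITS.get(trait, 0) for trait in observed_traits)

def is_flagged(observed_traits):
    return score(observed_traits) >= ALERT_THRESHOLD

# A protected but otherwise harmless application already starts with 70 points,
# so a single additional suspicious trait pushes it over the threshold.
print(is_flagged({"encrypted_code_section", "changes_memory_write_protection"}))   # False
print(is_flagged({"encrypted_code_section", "changes_memory_write_protection",
                  "writes_to_other_processes"}))                                   # True
```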

What can you, as a software developer, do about it? This is where mechanisms like Authenticode come into play. If you have followed our presentations or blog entries comparing Authenticode with Wibu-Systems AxProtector, you know what we always evangelize: "Authenticode protects the user from viruses, but not the software vendor from piracy." Authenticode also protects the software vendor against false positives. That is its goal: it helps to detect and prevent viruses, nothing more, but also nothing less.

Since there are countless software applications from countless vendors, it is virtually impossible for a user to distinguish the good from the evil. With downloads, the software can also be modified by a virus on its way from the vendor to the user, again without the user being able to tell. The user needs a way, first, to verify the source of the software and, second, to confirm that it has not been altered. This is why cryptographic signatures are used; they should not be confused with the signatures in the virus signature file. As a vendor, you can buy a code-signing certificate from an authorized certification authority, which verifies your identity – often by telephone. With the certificate and the private key, which is known only to you, you can sign your software. The root certificates of the certification authorities are already integrated into the operating system, so the operating system can directly verify both who published the application and whether it has been changed. With default settings, the user receives a warning if the software has been modified. These mechanisms are also used by anti-virus vendors: if a piece of software carries a matching trusted signature, it obtains bonus points in the scoring process. The behavioral patterns must then be far more conspicuous before the software is reported as a virus.
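
Continuing the toy scoring idea from above, a trusted signature can be modeled as bonus points subtracted from the behavioral score, so the alert only fires for far more conspicuous behavior. The numbers are, again, purely hypothetical.

```python
# Continues the toy scoring idea from the earlier sketch: a valid, trusted
# signature is credited as bonus points, so behaviour must be far more
# conspicuous before the alert fires. All numbers are purely hypothetical.
ALERT_THRESHOLD = 80
SIGNATURE_BONUS = 60

def is_flagged(behaviour_score, has_trusted_signature):
    if has_trusted_signature:
        behaviour_score -= SIGNATURE_BONUS
    return behaviour_score >= ALERT_THRESHOLD

print(is_flagged(120, has_trusted_signature=False))  # True:  flagged as a possible virus
print(is_flagged(120, has_trusted_signature=True))   # False: same behaviour, but signed
```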

Therefore, if you are a professional software developer, it is highly advisable to sign your software. But what does AxProtector have to do with this? AxProtector performs software encryption and software signing; as such, it protects ISVs from attacks on their software. Authenticode adds a further signature. Should the software be altered, the AxProtector signature would no longer be valid. AxProtector has served this role for several years and works side by side with Authenticode: the Authenticode signature is recognized by AxProtector and used for verification purposes. So you can work with both and enjoy AxProtector’s protection against hackers’ modifications alongside Authenticode’s minimization of false positives and protection against viruses. To attain all this, it is important that you encrypt the software first with AxProtector and then sign it with Authenticode. Even then, there is no 100% guarantee that your software will never be flagged as a virus, but the risk is greatly reduced. Our own test applications, signed with Authenticode, were not reported as viruses by any of the 54 different virus scanners we tested them with.
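
Why the order matters can be seen in a small model: whatever is signed last is what the verifier checks, so the protection step must come first. In the sketch below, protect() is only a placeholder for AxProtector's encryption, and a plain SHA-256 digest stands in for the Authenticode signature; it is not how either tool actually works internally.

```python
# Toy model of the build order described above. protect() stands in for the
# protection/encryption step, and a plain SHA-256 digest stands in for the
# Authenticode signature; both are simplifications for illustration only.
import hashlib

def protect(binary: bytes) -> bytes:
    # Placeholder transformation: the real tool encrypts and wraps the code.
    return b"PROTECTED:" + binary[::-1]

def sign(binary: bytes) -> bytes:
    return hashlib.sha256(binary).digest()

def verify(binary: bytes, signature: bytes) -> bool:
    return hashlib.sha256(binary).digest() == signature

app = b"compiled application bytes"

# Correct order: protect first, then sign the protected binary.
shipped = protect(app)
signature = sign(shipped)
print(verify(shipped, signature))   # True: the signature covers exactly what is shipped

# Wrong order: sign first, then protect. The shipped file no longer matches
# the signature, so the outer signature check would fail.
signature = sign(app)
shipped = protect(app)
print(verify(shipped, signature))   # False
```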

But what if a virus author were to buy a certificate, or steal a certificate or even a private key? In that case, the virus would be able to hide from virus scanners. This is exactly what happened with Stuxnet: a certificate and a private key were stolen, which allowed the infection to spread widely and unnoticed. The countermeasure is blacklists, in which stolen and invalid certificates are listed. With Stuxnet, things got nasty because the theft was not discovered immediately. With the necessary attention to the matter, cases where a virus exploits its host unnoticed for an entire week should hardly occur again. At the same time, you are also responsible for keeping your private key for the Authenticode signature, as well as the encryption and signing keys for your application, securely locked away.
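
A blacklist check itself is trivial; the hard part, as the Stuxnet case shows, is learning about the theft in time. The sketch below uses made-up serial numbers and a hypothetical helper purely for illustration.

```python
# Minimal sketch of a certificate blacklist check; the serial numbers are made up.
STOLEN_OR_REVOKED_SERIALS = {
    0x1A2B3C4D,   # hypothetical serial of a stolen certificate
    0x00FF00FF,
}

def certificate_is_trustworthy(serial_number, chain_is_valid):
    """Trust a certificate only if its chain verifies AND it is not blacklisted."""
    return chain_is_valid and serial_number not in STOLEN_OR_REVOKED_SERIALS

print(certificate_is_trustworthy(0x1A2B3C4D, chain_is_valid=True))   # False: blacklisted
print(certificate_is_trustworthy(0x12345678, chain_is_valid=True))   # True
```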

How to Counter Threats Like the Heartbleed Bug

Posted by Rüdiger Kügler on Apr 11, 2014 2:21:00 PM

Due to the discovery of the Heartbleed Bug in OpenSSL by Google security researcher Neel Mehta, private keys should now be considered vulnerable. This is a mission-critical, worst-case scenario.

How could such an event happen? OpenSSL, like other Open Source software, has been considered secure because the source code is available to the public, so such a critical bug would most likely be found and fixed immediately. But like any software, Open Source software can contain bugs due to errors in implementation. This case also demonstrates that 100% security can never be guaranteed, even in open software.

So, what can be done to better protect private keys? There are two major promising approaches: hardware tokens and the concept of diversity.

Hardware tokens, such as the CodeMeter Dongle, store private keys in a secure smart card chip. For each cryptographic operation, data is sent to the token and signed or decrypted there, using the private key stored inside. The private key itself never leaves the token, so it is not vulnerable to attacks against the computer’s memory.
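
The principle can be illustrated with a small conceptual model: the "token" object below signs data internally and never exposes its private key. Real dongles do this inside a smart card chip and are accessed through standard interfaces such as PKCS#11 rather than a Python class; the model is only meant to show the data flow (decryption is omitted for brevity).

```python
# Conceptual model of a hardware token: the private key lives only inside the
# object, which exposes signing but never the key itself. Real tokens do this
# inside a smart card chip; this class is just an illustration of the data flow.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

class HardwareTokenModel:
    def __init__(self):
        self.__private_key = ec.generate_private_key(ec.SECP256R1())  # never leaves the "token"

    def public_key(self):
        return self.__private_key.public_key()   # only public material is exported

    def sign(self, data: bytes) -> bytes:
        # The host sends data in; only the signature comes back out.
        return self.__private_key.sign(data, ec.ECDSA(hashes.SHA256()))

token = HardwareTokenModel()
message = b"TLS handshake data"
signature = token.sign(message)
token.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))  # raises if invalid
print("signature verified; the private key never left the token model")
```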

In the Industry 4.0 era of connected machines, authentication is essential for secure communication. Most implementations rely on OpenSSL, so without additional security, all controllers and embedded devices must be considered vulnerable. The effort of enrolling new keys after such an incident is greater than the cost of providing one hardware token for each device.

The other approach to protecting private keys is diversity. In this context, diversity means applying two different technologies from two different development teams. It can be used to achieve safety as well as security. In a safety implementation, the two technologies are connected in parallel; in a security implementation, they are connected sequentially. If one technology fails, the other still prevents a security breach.

In theory, if there were a bug in the server, a hacker could gain access to the token, but he would not be able to extract the keys from it. If there were a bug in the token’s firmware, a hacker could not reach it, because he cannot break into the server. For a hack to succeed, two exploits would have to occur at the same time, directed at two separate technologies from two different development teams. If both technologies are maintained and kept up to date, the probability of simultaneous exploits is much lower than with a single-technology solution.
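
As a rough model of the sequential arrangement: access is granted only if both independent layers agree, and if the layers are truly independent, their failure probabilities multiply. The checks below are placeholders for two unrelated technologies.

```python
# Toy model of "diversity" in a security setup: two independent layers are
# chained sequentially, and access is granted only if BOTH checks pass.
# The checks themselves are placeholders for two unrelated technologies.

def layer_one_ok(request) -> bool:
    return request.get("valid_tls_session", False)      # e.g. the web server front end

def layer_two_ok(request) -> bool:
    return request.get("valid_token_signature", False)  # e.g. the hardware token

def access_granted(request) -> bool:
    return layer_one_ok(request) and layer_two_ok(request)

# If layer one has an exploitable bug with probability p1 and layer two with p2,
# and the layers are truly independent, both fail together with probability p1 * p2.
p1, p2 = 0.01, 0.01
print(f"single layer broken: {p1:.2%}, both layers broken: {p1 * p2:.4%}")

print(access_granted({"valid_tls_session": True, "valid_token_signature": False}))  # False
print(access_granted({"valid_tls_session": True, "valid_token_signature": True}))   # True
```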

Wibu-Systems has used the diversity approach successfully for many years in CodeMeter License Central, our secure licensing tool, which relies on the Apache Tomcat and Apache Httpd web servers. The servers are connected sequentially, so if one of them is attacked, the other still protects the data in the database.

The OpenSSL bug demonstrates two things: 1) private keys do not belong on the hard drive or in the memory of a computer, and 2) the security of Open Source software is a myth. You need to evaluate your threat scenarios and develop an appropriate security model that provides optimum protection, even in the case of implementation errors. Ask our experts.

Rüdiger Kügler – After receiving a degree in physics, Rüdiger became involved in marketing software solutions for the financial industry and key account management for e-commerce businesses. As Vice President of Sales at Wibu-Systems since 2003, he has played a critical role in helping customers implement innovative security solutions for desktop applications, cloud services, and embedded systems.