Wednesday, January 2, 2019

Thoughts on GitHub's SoftU2F

I was discussing GitHub's SoftU2F authenticator with a few people recently and thought it would be worth writing out the best counter-points to some of the common objections I heard.

If you read any of these and could never see yourself taking such a position, just skip down to the next section.

"Using a code-based authenticator (SMS, a TOTP-based phone app like Google Authenticator, etc) is just as secure as U2F/SoftU2F."


Let's first go through all the ways in which an SMS-based second factor can be broken:
  1. The user is phished and submits a valid code to the attacker's phishing page.
  2. Something happens that lets an attacker receive SMS messages on behalf of the user
    1. SIM swapping
    2. SMS re-routing through SS7
    3. An attacker abuses a carrier's SMS forwarding functionality 
  3. A compromise of...
    1. ...the company that runs the service's SMS API (ex: Twilio) 
    2. ...any telecom infra that can access the text of SMS messages 
    3. ...the cell phone that the SMS messages are sent to
    4. ...the service itself (ex: can an attacker silently add their own phone number as a new second factor device?)
(Probably not a complete list.)

The list for TOTP is shorter, but all code-based second factors (where the user is asked to type a code into the same webpage as the password) can fail in scenarios where:
  1. The user is phished and submits a valid code to the attacker's phishing page
  2. The secret TOTP seed is compromised...
    1. ...from the backend server that's doing the validation
    2. ...from the end user's cell phone's local storage
  3. A compromise of the service itself (ex: can an attacker silently add their own TOTP app as a new second factor device?)
Note that there are frameworks that can handle the "hard work" to make #1 from both lists possible at scale. This has already been observed to happen in the real world.
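
To make the seed-compromise scenarios above concrete, here is a minimal TOTP sketch (simplified from RFC 6238). The code is a pure function of the shared seed and the current time, so anyone who obtains the seed (from the server's datastore or from the phone's local storage) can mint valid codes indefinitely:

    # Minimal TOTP sketch (RFC 6238 with SHA-1 and 30-second steps).
    # Both the phone app and the validating server compute this from the
    # same shared seed, so leaking the seed from either side lets an
    # attacker generate valid codes forever.
    import hashlib
    import hmac
    import struct
    import time

    def totp(seed, at=None, digits=6, step=30):
        counter = int((time.time() if at is None else at) // step)
        digest = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp(b"shared-secret-seed"))  # the same value the phone displays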

We will dive into exactly what scenarios can lead to U2F being bypassed later on, but that list is a strict subset of both of the lists above.

"A push-based two-factor authentication application (ex: Duo Push, Apple's Trusted Devices, etc) is just as secure as U2F/SoftU2F."


"Push-based 2FA" is another type of multi-factor authentication and is summarized well here:
This style of 2FA improves on authenticator apps in two ways: Acknowledging the prompt is slightly more convenient than typing in a code, and it is somewhat more resistant to phishing. With SMS and authenticator apps, a phishing site can simply ask for your code in addition to your password, and pass that code along to the legitimate site when logging in as you.
Because push-based 2FA generally displays an estimated location based on the IP address from which a login was originated, and most phishing attacks don’t happen to be operated from the same IP address ranges as their victims, you may be able to spot a phishing attack in progress by noticing that the estimated location differs from your actual location.
However, this requires that you pay close attention to a subtle security indicator. And since location is only estimated, it’s tempting to ignore any anomalies. So the additional phishing protection provided by push-based 2FA is limited.

Note that the most crucial issue with code-based 2FA is not addressed or resolved by moving to "push-based 2FA": users are still required to manually parse and validate authentication information (like the estimated login location), and the outcome of the 2FA attempt still hinges on that manual judgment.

Take another read through the article mentioned earlier where attackers were bypassing code-based 2FA at scale. Would it be that hard to run the framework on AWS/GCP/etc and make the man-in-the-middle server mimic the real user's User Agent string and (approximate) geographic location?

If most users AND datacenters tend to be located near large cities, how exact do you really need to be to get a user to write off the subtle difference as just another bug or software approximation?

"Storing the keys of your multi-factor authentication device on the machine where they're used makes the system less secure. SoftU2F is strictly worse than a U2F authenticator that relies on an external device based on separate hardware (ex: Yubikey)."


It's important to be precise about exactly how the system is made "less secure" by using something like SoftU2F.

The Security Considerations section of the GitHub project puts it well:
There is an argument to be made that it is more secure to store keys in hardware since malware running on your computer can access the contents of your Keychain but cannot export the contents of a hardware authenticator. On the other hand, malware can also access your browser's cookies and has full access to all authenticated website sessions, regardless of where U2F keys are stored.
If the goal of 2FA is to prevent an attacker from being able to successfully masquerade as a user to the server, the game is already over if malware or a browser exploit is able to read data off your filesystem.

That said...
In the case of malware installed on your computer, one meaningful difference between hardware and software key storage for U2F is the duration of the compromise. With hardware key storage, you are only compromised while the malware is running on your computer. With software key storage, you could continue to be compromised, even after the malware has been removed.
I would argue this distinction is well beyond most organizations' threat models, but the counter-point to this is that you can also have SoftU2F store the U2F keys in the Secure Enclave Processor (SEP) present in most modern MacBooks. This makes the storage generally equivalent to a "separate hardware device", as it is no longer possible for an attacker to read and export the raw key material.

When storing keys in the SEP, SoftU2F creates the key data and stores it in an object created with a call to SecAccessControlCreateWithFlags. The protection class that gets passed in is kSecAttrAccessibleWhenUnlocked, and the flags are BOTH .privateKeyUsage and .touchIDAny, which correspond to:
@constant kSecAttrAccessibleWhenUnlocked Item data can only be accessed while the device is unlocked. This is recommended for items that only need be accessible while the application is in the foreground. Items with this attribute will migrate to a new device when using encrypted backups.
from SecItem.h, and:

kSecAccessControlPrivateKeyUsage - Create access control for private key operations (i.e. sign operation)
kSecAccessControlTouchIDAny - Constraint: Touch ID (any finger). Touch ID must be available and at least one finger must be enrolled. Item is still accessible by Touch ID even if fingers are added or removed.
from SecAccessControl.h.

Are you saying that there are no conditions where U2F/SoftU2F can be broken?


No. While U2F has been observed to work extremely well in very high risk environments, here are some specific scenarios where U2F or SoftU2F could be bypassed:

1. Users can be phished and the U2F "second factor" signature can be man-in-the-middled with a valid (forged) TLS certificate for the site in question.

While the TLS Channel ID extension is the proposed solution to the full TLS MITM scenario, it is optional and not widely supported (ex: Google's reference U2F implementation does not implement it and has a "TODO: Deal with ChannelID" comment here). An example project with an open GitHub issue to add TLS Channel ID Binding (oxAuth) is here.

That said, if the TLS Channel ID extension were to be used properly alongside U2F, an attacker with MITM capabilities AND a valid TLS certificate for the website would not be able to perform a downgrade attack.

The reasoning laid out in Section 6.1 of this paper explains why:
U2F can complement and leverage TLS Channel ID to prevent all forms of TLS MITM.
According to the specifications, when a client supports TLS Channel ID and performs a handshake with a server that does not support TLS Channel ID, the U2F messages contain a U2F-signed specific entry: cid_pubkey with a string value unused. If TLS Channel ID is signaled by both parties, the string value contains instead the TLS Channel ID as seen by the TLS client. The legitimate server that supports TLS Channel ID is expected to check the cid_pubkey value.
If this value is incorrectly signed, if it is different from the TLS Channel ID that is expected for this client, or if the connection does not use TLS Channel ID and the cid_pubkey value is unused, a TLS MITM is detected.
In practice, though, it's critical to note that it's unlikely ANY sites are really doing this extended TLS Channel ID validation in a way that's not vulnerable to downgrade attacks.
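
For what it's worth, the check the paper describes is simple to express on the server side. Here is a hypothetical sketch (the cid_pubkey field name comes from the U2F clientData format; the surrounding names and types are made up):

    # Hypothetical sketch of the check described in the paper; assumes
    # client_data_json is the signature-verified U2F clientData, and
    # negotiated_cid is the Channel ID public key the server saw on this
    # TLS connection (None if Channel ID was not negotiated).
    import json

    def channel_id_looks_ok(client_data_json, negotiated_cid):
        signed_cid = json.loads(client_data_json).get("cid_pubkey")
        if negotiated_cid is None:
            # No Channel ID on this connection. A strict deployment would
            # refuse here; accepting is exactly the downgrade-attack gap.
            return signed_cid in (None, "unused")
        # The client must have signed the same Channel ID the server saw;
        # otherwise a TLS MITM terminated the connection in the middle.
        return signed_cid == negotiated_cid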

Later on in the same paper:
The second observation from our experiment was that we were able to successfully authenticate to our Google test account using U2F and Chrome or Chromium, while going through [a TLS proxy which downgraded the TLS connection to remove the TLS Channel ID extension]. 
This happened in spite of the browser support of TLS Channel ID and in spite of the U2F-signed support signal. We contacted Google and the FIDO Alliance to warn them about the success of this downgrade attack. They answered that Chrome support of TLS Channel ID is currently experimental and still buggy at times. Thus, even if Google websites could detect such downgrade attacks, they decided neither to enforce the use of TLS Channel ID nor to use it to protect against such TLS MITM.
They also mentioned that TLS Channel ID would prevent legitimate use of corporate TLS proxies and they were not ready to keep U2F users from accessing Google services when such proxies were in use.
In order for a service's U2F implementation to truly block a MITM attacker who has access to a valid TLS certificate, it must also never be possible to authenticate to this service over an internet connection that goes through a corporate TLS MITM proxy.

Note that this is still no worse than SMS/TOTP.


2. Some types of browser bugs could be exploited and combined with a phishing attack to forge a U2F signature. For example, if there's a bug in the way the browser determines the origin of the web page, the browser could (in theory) be tricked into sending the wrong origin to the U2F device, and a fake login page could receive a valid signature.

Note that (a) this is no worse than SMS/TOTP and (b) this is an unavoidable consequence of the fact that the security of U2F relies on the security of the web browser itself.


3. If there's an implementation bug in the U2F device, it's possible a U2F signature could be forged.

Note that this is still no worse than a buggy SMS/TOTP implementation where, for example, TOTP seed values could be improperly generated.


4. If the service's backend or datastore is compromised, U2F could be bypassed by, for example, silently adding a new U2F token to the user's account.

No worse than SMS/TOTP as an attacker could do the same there too.


5. A SoftU2F exploit could be leveraged to gain code execution and compromise the user's machine.

This is absolutely worth considering!

Some factors that make SoftU2F more trustworthy to me are that Trail of Bits performed an audit of the original driver code (here) and that SoftU2F is backed by GitHub's bug bounty (see the README).

Saturday, April 1, 2017

Reporting "HipChat iOS App does not validate TLS certificates"

Overview:

While I was looking into an unrelated application, I noticed that requests coming from HipChat's iOS application were visible to my man-in-the-middle server, and I quickly found that there was no validation of TLS/SSL certificates.
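
The bug class itself is easy to illustrate. In Python terms (the app itself is native code; this is just an analogy, not its actual logic), the difference between a validating and a non-validating TLS client is roughly:

    # Rough Python analogy for the bug class -- not the app's actual code.
    import socket
    import ssl

    def connect(host, validate=True):
        if validate:
            # Verifies the chain against trusted roots AND checks the hostname.
            ctx = ssl.create_default_context()
        else:
            # Accepts any certificate from anyone; effectively what the
            # vulnerable client was doing.
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
        sock = socket.create_connection((host, 443))
        return ctx.wrap_socket(sock, server_hostname=host)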

It took a bit longer to get it patched than I was hoping, but they ended up meeting the timeline outlined in their Security Bugfix Policy.

The fixed version (with nondescript update notes) is 3.16.2:



Correspondence:

On January 8th, 2017, I wrote:
There does not seem to be any validation of the TLS/SSL certificate used by likeabosh.hipchat[.]com, which appears to be the primary API server used by the Hipchat iOS app.

This means that all traffic can be intercepted and manipulated by an attacker in a privileged network position.

I'm attaching a mitmproxy (https://mitmproxy.org/) stream file "mitmproxy-stream.out", which contains the raw data from when I MITM'd traffic coming from my own iPhone. This file can be read into the mitmproxy application and interactively browsed with the command - "mitmproxy -r ./mitmproxy-stream.out".

You can see that session information, channel metadata information, and chat logs are all present in the traffic that was captured.

To reproduce this yourself, you can also use mitmproxy. In my trial, I used the mitmproxy default SSL/TLS certificate file that gets generated on installation.



On January 17th, 2017, I wrote:
Hello,

Could someone confirm that you've received this and can reproduce the issue?

Andy



On January 17th, 2017, I received the reply:
Hi Andy we have received your issue. Sorry for taking so long to reply to you.



On January 19th, 2017, I received the reply:
Hi Andy,

Thank you for sending this notification to Atlassian. This is indeed a vulnerability and an issue has been filed on an internal tracker.

The issue is HCIOS-1033. Unfortunately, this issue is not accessible externally, so you will not be able to monitor its progress. Feel free to check with us for updates.



On January 19th, 2017, I wrote:
Thanks for confirming, [names redacted].



On January 19th, 2017, I received the reply:
Andy,
Here is a coupon for you to use in our store (swag.atlassian[.]com): [redacted].



On February 2nd, 2017, I wrote:
Thanks [name], is there an estimated fix date?

I use the app every day for work, and it's a bit worrying.



On February 2nd, 2017 I received the reply:
Hi Andy

Thanks for checking in. The development teams are working on this and we'll attempt to come up with a fix according to our Security Bugfix Policy (https://www.atlassian.com/trust/policies/security-bugfix-policy). The issue HCIOS-1033 has been filed on an internal tracker. Unfortunately, this issue is not accessible externally, so you will not be able to monitor its progress. Feel free to check with us for updates.



On February 2nd, 2017, I wrote:
Hi [name],
Thanks again for the quick reply & helpful link. That sounds good to me.

Out of curiosity, what severity score (CRITICAL/HIGH/MED) was given to this?

Andy



On February 2nd, 2017, I received the reply:
Andy,
The issue has been given a "High" (CVSSv3 8.9) severity at this moment.



On February 2nd, 2017 I wrote:
Thanks for the additional info.



On February 21st, 2017, I wrote:
Hi [name],
Can you share any information about what progress has been made to patch this? Is there an estimated fix date that you all are targeting?

I received a HipChat update from the iOS app store today but noticed the app is still vulnerable.

Andy



On February 24th, 2017, I wrote:
Bumping this one more time in case it got dropped. Is there a timeline for remediation or any estimate for a fix date?



On February 27th, 2017, I received the reply:
Hi Andy

The fix is in the new version (I believe 3.16.2) which is out in the AppStore. We currently do not see it's possible to proceed with an invalid certificate anymore and thus can't confirm your finding. May I suggest it could be due to the trusted certificate installed on your device for/from mitmproxy? If this still repeats for you would you possibly be able to record a video showing your setup and the flow which you believe is still prone to MITM? That would help us get to the root cause of the issue if it still persists.



On February 28th, 2017, I wrote:
Hi [name],

I am still able to repeat this despite having upgraded to 3.16.2. On my iPhone, I have not added any additional certificates as trusted or jailbroken the device (running iOS 10.2.1).

In the attached video (out.mp4), I have my iPhone connected to the same WLAN access point as my Macbook.

On the Macbook on the left terminal, I have a DNS server that constantly returns the Macbook's IP address. On the Macbook's right terminal, I have a python script that listens on :443 and attempts to make a SSL connection, printing any HTTP-layer data it receives if its SSL certificate is accepted by the client.

When I override the DNS server used by my iPhone to have it use the Macbook as its DNS server, it starts making background requests for various Apple services. These result in SSL exceptions being printed in the right terminal – this is expected behavior for when a client does not accept my Macbook's fake SSL certificate.

When I open the HipChat iOS app, you can see a DNS query is made to likeabosh.hipchat[.]com and when you look at the right terminal, you see HTTP POST requests. This implies my python script's fake SSL certificate is being accepted by the client, and the client is proceeding to communicate, unaware that it's not talking to the "real" likeabosh.hipchat[.]com.

The end of the video is me going into the App Store to show that I have indeed updated to the latest version.

If you are still unable to reproduce this, could you explain any differences in your testing environment?

If you have any questions about my set up, please don't hesitate to ask.

Andy



On March 3rd, 2017 I received the reply:
Andy,

Thank you for your continued effort and the video PoC. Would you please check if:
you wiped the application off your test device after updating to 3.16.2 but before doing MitM
you're able to sign in with your MitM setup on
you're still able to MitM the application as shown in the video PoC after a successful sign-in process

Additionally, what kind of TLS certificate are you using for you Python application? Would you be able to attach it to this issue?

Thanks.



On March 7th, 2017, I wrote:
Hi [name],

I'm traveling this week and I don't think I will be able to test your second question until next week, but I wanted to answer your questions now anyway:

>> "Check if you wiped the application off your test device after updating to 3.16.2 but before doing MitM"
No, I did not remove/reinstall the application after upgrading. I will test this when I get a chance, but it seems unlikely (though technically possible?) that this is the issue.

>> "Check if you're able to sign in with your MitM setup on"
>> ....
>> "Check if you're still able to MitM the application as shown in the video PoC after a successful sign-in process"
I also did not logout/login after the upgrade.
I will try logging out & back in before I test reinstalling the app, but I would note that if TLS cert checking is enforced only at the login stage, that still leaves the vast majority of an average user's session lifetime open to MITM (and being MITM'ed after the login stage is equally damaging if the session token can be captured).
I'll still test this out.

>> "what kind of TLS certificate are you using for you Python application? Would you be able to attach it to this issue?"
I'm attaching these cert/key values, but they were simply generated with openssl without any extra trickery. I don't think they will be especially helpful in understanding what's going on.



On March 7th, 2017, I wrote:
See the earlier comment for more details, but I'm also attaching the "dns.py" and "ssl-server.py" scripts that were used in the video in case you'd like them too.

Both were downloaded from the internet and have had slight modifications made to them.



On March 18th, 2017, I wrote:
Hi [name],
I can confirm the SSL validation works after the user logs out and back in (see attached pictures). "1.png" happens during a MITM'd login attempt, and "2.png" happens if I log in over a valid connection and then later try to MITM the app.

I can also confirm the SSL validation does not come into effect before a log out/in cycle is triggered. Before I logged out and back in, I was able to capture everything in a similar manner as before.

The command I used to do the transparent MITM with recording is:
mitmproxy -w outfile.log -e -b 0.0.0.0 -p 8080 --host

And I had set my iPhone up to point at this server by tweaking the HTTP Proxy settings in the Wifi Connection menu (no DNS server changes are needed with this).

I'd like to ensure everyone I work with trigger the login cycle so they get covered by the update, but if you all think the above information is enough to warrant logging out all users who had previously authenticated with the iPhone app and forcing them to login again, I'll hold off mentioning this to anyone until you have time to do this.

Could you let me know if you plan on doing this?

Thanks,
Andy
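
As a closing note: the "fake TLS server" from the video PoC is easy to approximate. This is not the actual ssl-server.py attached above (that was a modified script from the internet), just a minimal stand-in, assuming a throwaway self-signed cert/key pair generated with openssl:

    # Minimal stand-in for the listener used in the video PoC (not the
    # actual ssl-server.py). Generate a throwaway self-signed cert with:
    #   openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    #       -keyout key.pem -out cert.pem -subj "/CN=example.test"
    # (Listening on 443 typically requires root.)
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")

    with socket.create_server(("0.0.0.0", 443)) as listener:
        while True:
            conn, addr = listener.accept()
            try:
                # A properly validating client aborts the handshake here,
                # since this certificate chains to no trusted CA.
                tls = ctx.wrap_socket(conn, server_side=True)
                print(addr, tls.recv(4096).decode(errors="replace"))
            except ssl.SSLError as exc:
                print(addr, "handshake rejected:", exc)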

Tuesday, September 6, 2016

FLARE On 2015 - Challenge 10

About:

This is the 10th challenge from FireEye's 2015 "FLARE On" challenge (http://flare-on.com/)


Solution:

The level starts out with an exe file that doesn't seem to do anything when executed from the terminal:



Rather than go straight to IDA, this time I decided to take a look with a common malware sandbox first. The most interesting takeaway from the output was that the file (allegedly) dropped a few files to the System32 directory:



The strange thing is, I couldn't find those files on my system after execution....

Breaking out ProcessMonitor quickly showed the problem :)



Ok, so re-running the EXE as an administrator gave a much nicer output:




So now let's take a look at the original EXE and those dropped files (ioctl.exe and challenge.sys) in IDA...

It's immediately clear that the original EXE is an AutoIt-compiled script (or something made to look like an AutoIt-compiled script...), but unfortunately I wasn't able to find a decompiler to use...



Another thing that's quickly clear is that challenge.sys is a kernel driver that starts out simple...




... but ends up being quite complex:



ioctl.exe seems to be a simple piece of code that triggers an event and calls DeviceIoControl with an argument that's passed in via the command line:




As it turns out, ioctl.exe requires this argument or it'll crash (first execution below succeeded, second one failed):




Debugging the execution made it clear that the issue was with the call to _strtoul (below) -- so putting two and two together lets us know that it's using this value as the ioctl code that it sends to the kernel driver:




To confirm all this, I took a look through the ProcessMonitor logs from the original execution, and it looks like after the driver & ioctl.exe files are dropped to disk, ioctl.exe is called with the argument "22E0DC" -- this must be our code!
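
As an aside, Windows IOCTL codes pack four fields into 32 bits (the CTL_CODE macro layout), so we can decode the code to sanity-check what we're sending:

    # Decode a Windows IOCTL code per the CTL_CODE layout:
    # DeviceType(31:16) | Access(15:14) | Function(13:2) | Method(1:0)
    def decode_ioctl(code):
        return {
            "device_type": code >> 16,
            "access": (code >> 14) & 0x3,
            "function": (code >> 2) & 0xFFF,
            "method": code & 0x3,
        }

    print({k: hex(v) for k, v in decode_ioctl(0x22E0DC).items()})
    # device_type 0x22 (FILE_DEVICE_UNKNOWN), access 0x3,
    # function 0x837, method 0x0 (METHOD_BUFFERED)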



So now we're in a pretty good place as far as our understanding of what's going on.

We still don't know exactly what we're looking for, but a good place to go next would be to try to analyze exactly what happens when the driver is sent the IOCTL with the correct code.

To do this, the three options that first come to mind are:
  1. try to debug the challenge.sys driver as it runs live on the system
  2. look through the disassembly of challenge.sys in IDA and try to determine the behavior via static analysis
  3. hack the driver code to the point where we can run it directly in user-space and debug it as if it were a traditional .exe/.dll (turns out this was done here: http://www.ghettoforensics.com/2015/09/solving-2015-flare-on-challenges.html -- pretty cool...)
I started off hoping that static analysis would be enough and went with (2), but quickly decided to switch to the full (1) and live-debug the running driver instead.

I hadn't debugged a kernel driver before, but the StackOverflow answer here was very helpful with getting things set up. (You'll need to use msconfig instead of bootcfg to set the boot params on Windows 7 images, though.)

Once I had everything set up and brought up both virtual machines, I wanted to make sure I could detect the loaded challenge.sys driver, so I ran the following:




Woo hoo! That looks pretty good... We're missing the symbols, but that's expected.



Now, ideally we could find a good place in our kernel module, set a breakpoint there, and live-debug the module's code.

I decided to set a breakpoint at the part of the code where all the jump table entries jump to after they call their included subroutine, hoping that the flag might be in memory somewhere after the driver is called with the correct code:


So now we know our offset within the driver (0x29D468); we just need to know the location our driver's been mapped to in memory.

I found this using Process Hacker, within System's modules:



You can see our final address needs to be the location we see in IDA + the base address from Process Hacker - 0x10000 (the default base address IDA used):
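
In Python terms (with a made-up load address standing in for the one from Process Hacker), the arithmetic is:

    # Rebase the IDA address onto the driver's real load address.
    ida_va = 0x29D468         # address of the target code as shown in IDA
    ida_base = 0x10000        # default image base IDA loaded the driver at
    real_base = 0x8BA0D000    # hypothetical: actual base from Process Hacker
    bp_va = ida_va - ida_base + real_base
    print(hex(bp_va))         # address to set the WinDBG breakpoint at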



So this is good... we can double check that we have the right address by test-disassembling the code at that location in WinDBG, too.

Unfortunately, after hitting the breakpoint and checking out the process's memory, it turns out that the flag is NOT present anywhere.... so it looks like there's still a bit more to do.

Let's trace the code to see what function we call when we go through the jump table this time.

It turns out you hit this rather interesting function:



You can see that it seems to be doing bit checks, which, after reversing the logic, ends up being valid for the string: try this ioctl: 22E068

This looks encouraging again!

So now let's retrace our way through the driver code after it's been triggered by the new ioctl.



It turns out we get sent to another monstrous function. This one seems to do some kind of decoding/decryption, though much of the logic doesn't seem to have any global effect.

Tracing down to the end of the function, we can see a reference to the offset byte_29F210 being passed as an argument to what looks like it could definitely be a decryption function, so that's worth watching.

After reviewing parts of the code above that, it becomes clear that many of the jnz's are being triggered off of registers hard-coded to 0:



This logic causes our global buffer to always be uninitialized... However, what if we patch the code's memory to take the loops instead?

This can be done many ways; for our purposes we can use WinDBG's .writemem/.readmem.

Replacing the mov [ebp + var_62], 0 instructions with mov [ebp + var_62], 1 initializes the buffer, and if we trace & watch the memory while it's decrypted, we get the following email address:

unconditional_conditions@flare-on.com

Tuesday, August 30, 2016

FLARE On 2015 - Challenge 9

About:

This is the 9th challenge from FireEye's 2015 "FLARE On" challenge (http://flare-on.com/)


Solution:

From hearing what other people had written about this one, it sounded like this was where things start getting pretty difficult.

Challenge 9 was somewhat of a repeat of Challenges 1 & 2, but with a number of anti-analysis tricks thrown in.

It starts out by letting you know it's evolved from the first challenge:




Taking a look at the strings, it looks like we may be able to trace back from the spot in the code that uses "You are success" again...



... but after some basic debugging & breakpoint-setting, it becomes clear we don't actually hit that section of the code. Something funny's going on... :-/

(As it turns out, that entire subroutine is a decoy and is not used at all in the validation process...)

Let's break out IDA and take a look:




Almost immediately after we start stepping through the code, we hit some early trickery designed to throw off static analysis by jumping to the middle of an instruction:



This is pretty standard stuff, and since we're going through it with our debugger, it's no problem -- we can just have IDA re-disassemble the instruction, this time starting from the new EIP address.

As we keep going, it becomes clear they're also doing sneaky things by manually crafting/modifying stack frames (as opposed to simple call/return patterns).

After determining things were not going to be simple, I decided to do an execution trace, like so:



This unfortunately ran into issues related to my IDA version being the free version :-(



But it provided enough information to be useful -- it's now clear that they're crafting/assembling tables of some kind based off of static hex values, with lots of anti-analysis crap stuck in between:



Grabbing those tables out of memory and stepping through the program one more time lets us see that the tables are (a) used as XOR key values, (b) used as lookup values, (c) used as ROR values, and (d) used as the final expected values to validate the email address.
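
Without reproducing the actual tables, the inversion has roughly this shape (every table value below is a placeholder, and the exact order of operations in the real binary may differ): start from the expected values and undo each step in reverse -- invert the lookup, rotate left instead of right, and XOR with the same key.

    # Placeholder sketch of the inversion; none of these tables are the
    # real values from the binary. Assumed forward direction per byte:
    #   expected[i] = lookup[ror8(email[i] ^ xor_key[i], rot[i])]
    def rol8(b, n):
        n &= 7
        return ((b << n) | (b >> (8 - n))) & 0xFF

    xor_key = [0x41, 0x42, 0x43]       # placeholder
    rot = [3, 5, 1]                    # placeholder
    lookup = list(range(256))          # placeholder permutation
    expected = [0x1A, 0x2B, 0x3C]      # placeholder

    inv_lookup = [0] * 256
    for i, v in enumerate(lookup):
        inv_lookup[v] = i

    email = "".join(rol8(inv_lookup[e], n) ^ k and chr(rol8(inv_lookup[e], n) ^ k) or ""
                    for e, k, n in zip(expected, xor_key, rot))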

Reversing the logic gives us the final answer:

Is_th1s_3v3n_mai_finul_foarm@flare-on.com

Saturday, August 27, 2016

FLARE On 2015 - Challenge 8

About:

This is the 8th challenge from FireEye's 2015 "FLARE On" challenge (http://flare-on.com/)


Solution:

The 8th challenge was the stego one. Running the initial EXE file doesn't give too much info...





And opening it up in IDA shows there's really not much going on:




Looking through the data embedded in the file, however, it does look like there is some kind of structured data. Possibly Base64:




After using a hex editor to cut out the non-base64 data, I wrote a quick python script to translate the data to un-base64'd form.
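
That script wasn't anything special -- roughly the following (filenames here are made up):

    # Strip anything that isn't base64 and decode the rest.
    # (Filenames are hypothetical.)
    import base64
    import re

    with open("carved.txt", "rb") as f:
        raw = f.read()

    b64 = re.sub(rb"[^A-Za-z0-9+/=]", b"", raw)
    with open("decoded.bin", "wb") as f:
        f.write(base64.b64decode(b64))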

This looks good -- opening the written file in Notepad++ shows it has a PNG header!



Let's see what it looks like:



From here, I tried taking a look at it with StegSolve:



You can see something may be up when you look at the data planes on a per-bit level. Here's the 7th bit in the red plane:



Here are the 0th bits in each of the RGB planes. See the black bar at the top?



It makes sense that there may be some data hidden in there as the 0th bit would affect the picture the least and could easily be used for hiding some additional data.

From here, I pulled it apart with "zsteg", which immediately detected a PE32 executable file and extracted it easily:
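
Roughly what zsteg is doing under the hood for this case (it tries many bit/channel orderings automatically; this shows just one):

    # Extract the least significant bit of each R, G, B sample and pack
    # the bits into bytes. Filename and bit ordering are assumptions.
    from PIL import Image  # pip install Pillow

    img = Image.open("extracted.png").convert("RGB")
    bits = []
    for r, g, b in img.getdata():
        bits.extend((r & 1, g & 1, b & 1))

    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)

    print(bytes(data[:2]))  # an embedded PE file would start with b"MZ"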



Running this EXE file gives you the email for the next level!

FLARE On 2015 - Challenge 7


About:


This is the 7th challenge from FireEye's 2015 "FLARE On" challenge (http://flare-on.com/)


Solution:

Our prompt this time looks like this:



When I was initially working through these challenges during the contest, this was the one I failed out on.

The obfuscation they used here is really nasty, and you can see it below:




Fortunately, this time I'm going to use de4dot to attempt to deobfuscate the code:



Scanning through the cleaned code, ns2's Class3 looks like the best bet for where the application logic resides. 

We can see it starts out with a few bytearrays:



.... and ends with some interesting logic:



So from this, we can guess "bytes" is our "Warning!" message, "bytes2" is our prompt for the correct password, "text" is the value we type in, and "b" is some combination of other values (and is the value our text needs to match in order to succeed...).

Let's take a closer look at smethod_0:



Looks like an xor loop! We know the raw bytes passed in are (31,100,116,97,0,84,69,21,115,97,109,29,79,68,21,104,115,104,21,84,78) and the key is generated from Class3.smethod_2().

So this is good. We know that the data we pass into the prompt has to be equal to (byte_2 xor Class3.smethod_2()) + '_' + Class3.smethod_3().
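
The xor itself is trivial to reproduce in Python; we just don't know the key bytes yet (the placeholder below stands in for whatever Class3.smethod_2() returns):

    # The smethod_0 xor loop, reconstructed. The key is whatever
    # Class3.smethod_2() returns at runtime (placeholder here).
    data = bytes([31, 100, 116, 97, 0, 84, 69, 21, 115, 97, 109,
                  29, 79, 68, 21, 104, 115, 104, 21, 84, 78])
    key = b"?" * len(data)  # placeholder for Class3.smethod_2()

    print(bytes(d ^ k for d, k in zip(data, key)))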

Can we invoke these functions directly from PowerShell? Turns out yes, you can.




So now we have everything we need. After doing the xor computation, string concatenation, and entering the resulting value in the prompt, we get this:



Woo hoo!

FLARE On 2015 - Challenge 6


About:


This is the 6th challenge from FireEye's 2015 "FLARE On" challenge (http://flare-on.com/)


Solution:

This one was the first one I got legitimately stuck on.

Off the bat, it's clear that this one is an Android application that takes a string as input and returns "No" or, presumably, "Yes" when the correct string is entered.

Digging into this one was tricky for a couple reasons, but I was pretty quickly able to extract/decompile the Android code and see that the app was quite simple and seemed to use a native library to perform the main "checking" functionality.





Since the function that triggers the call into the library is named "validateEmail", it seems pretty likely that we just need to find what string will cause an "OK" output, and that'll be our email address.

The full path of the library where our function is implemented is lib/armeabi/libvalidate.so, so let's open this up in IDA. Here we can see the part where it chooses between "No" and "That's it!":



From here, it looks like we just need to work backwards to find out what will make us go to the green code block.

Looking to the left a bit, we can see a particular section of memory referenced that seems to contain a list of prime numbers...



Strange.... Although when we look at how the string we pass in is validated, it looks like an integer value (coming from two bytes of the string) is decomposed into its prime factors, and these factors are compared against static lists of values. For example, here is one that says we should have two of the fifth prime (11), one of the seventh (17), etc.:



If we go through the list of all expected prime numbers, generate the composite numbers from them, and convert these integer values into two-byte strings, we get the following email address:

Should_have_g0ne_to_tashi_$tation@flare-on.com
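
For reference, the reconstruction step has this shape (the exponent table below is illustrative, not the real data from libvalidate.so, and the two-byte ordering is a guess):

    # Multiply out each expected list of prime factors, then turn the
    # resulting integer back into two characters of the email address.
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

    def chunk(exponents):
        value = 1
        for prime, count in zip(PRIMES, exponents):
            value *= prime ** count
        # Byte order is an assumption here.
        return value.to_bytes(2, "big").decode("latin-1")

    # e.g. two 11s and one 17 (as in the screenshot) contribute
    # 11 * 11 * 17 = 2057 to one chunk's integer value.
    print(hex(11 * 11 * 17))  # 0x809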