Fuzzing Embedded Browser APIs - Part 1: Moving the Fuzzer Logic Into the Browser

In my work as a penetration tester I've performed several security tests of embedded devices that contained browser rendering engines or full browsers as an add-on to the main functionality. Although the rendering engine in use is usually quite outdated (often years old at product launch) and unmaintained by the device vendor, the browsers often contain extensions in the form of plug-ins and JavaScript APIs that offer interfaces to the functionality of the embedded device and are frequently of poor quality. Further, the input of API calls is mostly passed on to code written in languages without managed memory, which creates opportunities for memory corruption vulnerabilities such as stack-based buffer overflows and, ultimately, full compromise of the embedded device due to poor hardening at the OS level; you wouldn't believe how many browsers in this area run with administrative permissions. Usual attack vectors are:

  • Normal web pages invoked in the browser if the APIs are exposed to the Internet (this by itself is already a finding because of privacy concerns and the exposure of attack surface).
  • Malicious apps in cases where the browser engine is used as an application runtime environment.
  • Man-in-the-Middle attacks where privileged web pages are replaced with malicious content.

Altogether, such APIs are worth the testing effort, and fuzzing is an effective technique for security testing in this area. I want to share some of the experiences I've made and the techniques I've used in a series of blog posts.

Why the "Classical" Fuzzing Approach is Often Unsuitable for Embedded Devices

In the security literature fuzzing is often described as follows:

  1. Select an input parameter of the fuzzed program
  2. Select a test case
  3. Invoke the program with the test case as value for the selected parameter

The last step means that the fuzzed program is invoked for every test case. This is quite inefficient and is optimized away in modern fuzzing approaches. On embedded devices this time penalty is even bigger due to the restricted resources available on such devices. Furthermore, the tests I performed were often black-box tests, and the only way to invoke a test case automatically was to simulate user interaction, which is a big effort and hard to implement reliably. Therefore I preferred to move the fuzzing logic into the browser as far as possible. The setting looks as follows:

  • JavaScript code running in the browser performs the selection of parameters and test cases and executes the test cases.
  • Before a test case is executed, it is logged to a logging server implemented as a web service. The logging can occur in different modes:
      • Asynchronously: logging requests are initiated, but the test case execution doesn't wait for them to finish. This mode is used for performance in "normal" runs.
      • Synchronously: the test is not performed before the logging request has finished successfully. This mode is used after a crash to identify the exact test case that caused it.
  • On some devices I've tested, the backlog of queued logging requests grew steadily and caused the logging to fall far behind the test execution. This was solved by a "log every n-th test case" option.
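A condensed sketch of this setting is shown below. The device API call (`window.deviceApi.setName`), the logging endpoint URL, and the specific test cases are placeholders for whatever the tested device actually exposes:

```javascript
// In-browser fuzzer sketch: test-case generation, throttled logging,
// and execution all happen in JavaScript inside the target browser.

// A trivial test-case generator (real runs would use far more cases).
function* testCases() {
  for (const len of [16, 256, 4096, 65536]) yield "A".repeat(len);
  yield "%s%s%s%n";            // format string
  yield "\u0000".repeat(64);   // embedded NUL bytes
}

// "Log every n-th test case": keeps the logging backlog bounded.
function shouldLog(index, n) {
  return index % n === 0;
}

function logTestCase(tc) {
  // Placeholder endpoint; the logging server is a simple web service.
  return fetch("http://logger.example:8000/log", {
    method: "POST",
    body: JSON.stringify({ tc }),
  });
}

async function fuzz({ sync = false, logEvery = 1 } = {}) {
  let i = 0;
  for (const tc of testCases()) {
    if (shouldLog(i, logEvery)) {
      const req = logTestCase(tc);
      // Synchronous mode: wait for the log entry, so a subsequent crash
      // can be attributed to the exact test case.
      if (sync) await req;
    }
    try {
      window.deviceApi.setName(tc);  // hypothetical embedded device API
    } catch (e) {
      // JS-level exceptions are uninteresting; we are after native crashes.
    }
    i++;
  }
}
```

After a crash, the run would be repeated with `fuzz({ sync: true })` from the last logged test case onward to pin down the offending input.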

Moving the fuzzing logic into the browser has the drawback that test cases can corrupt data structures in memory without directly causing a crash. This makes the reproduction and identification of the underlying issue harder, but is outweighed by the increased efficiency of the fuzzing process. In particularly glaring cases the classic approach required a few seconds per test case, while moving the fuzzing logic into the browser made it possible to execute a few hundred test cases per second.

In the next part of this series I will show you how to fool watchdogs that are occasionally implemented on embedded devices.