Recently I took part in a challenge organized by ARM Ltd, which involved using a BBC micro:bit as the main processing board in an EMG sensor device. The device sends electrode sensor readings over BLE to an Android phone, where the data are displayed on a real-time graph. This project was done in collaboration with Thomas Poulet and George Kopanas. The overall project setup is shown in the picture below:
You can find more info about the EMG sensors and how they are connected to the micro:bit here.
Bluetooth low energy
The BBC micro:bit comes with a Bluetooth Low Energy (BLE) antenna, which is designed for reduced power consumption, making it perfect for our use case, as we need to continuously transmit EMG sensor data. The micro:bit is configured as a server that waits for GATT requests and sends data, and our Android phone is the client that initiates the connection and receives the data from the EMG sensor. We use Nordic's nRF UART service over BLE to transmit the data.
Transmitting data through RX Characteristic
Connecting the phone with the micro:bit is pretty trivial on Android. There is a code sample from Google and you can find all the required information on their developer site.
First we have to scan for the available devices and then select the one we want to connect to (in our case the micro:bit). After the connection is initialized, we enable indications by setting the ENABLE_INDICATION_VALUE flag and wait for data sent from the micro:bit. Data arrive on the RX characteristic (UUID: 6E400003-B5A3-F393-E0A9-E50E24DCCA9E) and are then plotted on the phone screen using GraphView.
Here are some pictures and a video of how things look when everything is connected:
I recently finished my MSc thesis, titled “Runtime support for approximate computing in heterogeneous systems”. I developed a run-time system in the C programming language that supports approximate computations using OpenCL. You can download my thesis here.
Source code will be available in a few weeks.
Energy efficiency is one of the most important concerns in modern systems, ranging from embedded devices to high performance computers. However, the end of Dennard scaling limits expectations for energy efficiency improvements in future devices, despite manufacturing processors in smaller geometries and lowering the supply voltage. Many recent systems use a wide range of power management techniques, such as DFS and DVFS, in order to balance the demanding needs for higher performance/throughput with the impact of aggressive power consumption and negative thermal effects. However, these techniques have their limitations when it comes to CPU-intensive workloads.
Heterogeneous systems appeared as a promising alternative to multicores and multiprocessors. They offer unprecedented performance and energy efficiency for certain classes of workloads, but at significantly increased development effort: programmers have to spend significant effort reasoning about code mapping and optimization, synchronization, and data transfers among different devices and address spaces. One contributing factor to the energy footprint of current software is that all parts of the program are considered equally important for the quality of the final result, thus all are executed at full accuracy. Some application domains, such as big data, video, and image processing, are amenable to approximations, meaning that some portions of the application can be executed with less accuracy without having a big impact on the output result.
In this MSc thesis we designed and implemented a runtime system which serves as the back-end for the compilation and profiling infrastructure of a task-based meta-programming model on top of OpenCL. We give the programmer the opportunity to provide approximate functions that require less energy, as well as the freedom to express the relative importance of different computations for the quality of the output, thus facilitating the dynamic exploration of energy/quality trade-offs in a disciplined way. We also simplify the development of parallel algorithms on heterogeneous systems, relieving the programmer from tasks such as work scheduling and data manipulation across address spaces. We evaluate our approach using a number of real-world applications from domains such as finance, computer vision, iterative equation solvers, and computer simulation.
Our results indicate that significant energy savings can be achieved by combining execution on heterogeneous systems with approximations, with graceful degradation of output quality. Moreover, hiding the underlying memory hierarchy from the programmer, and performing data dependency analysis and work scheduling transparently, results in faster development without sacrificing application performance.
I just finished my Diploma Thesis and you can find it here.
Heterogeneous systems provide high computing performance, combining low cost
and low power consumption. These systems include various computational resources
with different architectures, such as CPUs, GPUs, DSPs or FPGAs. To increase the performance of a heterogeneous system, it is crucial to have full knowledge not only of these architectures, but also of the programming models used.
One way to achieve this goal is to predict the execution time on the different computational resources, using statistics collected through hardware counters. The purpose of this thesis is to increase the performance of a heterogeneous system by training a statistical model on the collected data to predict the execution time. A further goal is to use this prediction model inside a run-time scheduler that migrates the running application in order to decrease the execution time and increase the overall performance.
We used various statistical models, such as linear regression, neural networks, and random forests, and predicted the execution time on Intel CPUs and NVIDIA GPUs with varying degrees of success.
This two-person project was completed for the Embedded Systems course at the University of Thessaly, Department of Computer Engineering. In this project we implemented the classic Pong game using a Spartan 6 FPGA and two 3-axis accelerometers. The code is in Verilog and you can find it on GitHub (link at the bottom of the page). The project consists of two parts: first, the connection to the monitor through VGA along with the game logic, and second, the connection of the accelerometers through the SPI interface.
VGA Technology and Implementation
The first part of the project was to connect the FPGA with a monitor using the VGA output. VGA is a video standard mainly used for computer monitors introduced by IBM in 1987.
VGA video is a stream of frames. Each frame is made of horizontal and vertical series of pixels, transmitted from top to bottom and from left to right, as if a beam were traveling across the screen (CRT displays actually used a moving electron beam; LCD displays have evolved to use the same signal timings as CRT displays). Information is only displayed while the beam moves forward, not during the time the beam is reset back to the left or top edge of the display.
First we made a VGA controller module that generates the correct signals. The signals that we need to pass to the VGA DAC (Digital to Analog Converter) are:
• Pixel clock
• Vertical Sync
• Horizontal Sync
• 3-bit Red
• 3-bit Green
• 2-bit Blue
The pixel clock defines the time available to display one pixel of information. With different timing values we can achieve several resolutions, such as 800×600. The vertical sync defines the refresh rate of the display, and the horizontal sync indicates the end of a horizontal line. We use two counters, hcount and vcount, which count the pixels along the horizontal and vertical lines. By combining these two counters we can determine the location (x,y) of a pixel on the screen.
Each line of the video begins with an active video region, in which RGB values are output for each pixel in the line. A blanking region follows, in which a horizontal sync pulse is transmitted in the middle of the blanking interval. The interval before the sync pulse is known as the front porch, and the interval after it as the back porch.
There are many VGA timing values that can be used, in order to support several resolutions, as we can see in the table below:
For our project we used a resolution of 800×600@72Hz, so we created a 50 MHz pixel clock from the 100 MHz clock input of the Spartan 6, and the horizontal and vertical counters count up to 1039 and 665 respectively (1040 and 666 values in total). Based on these numbers we calculate the exact time that hsync and vsync are set active high (both signals must be active high at this resolution) and we connect them to the FPGA pins.
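As a sanity check of the numbers above, the standard 800×600@72 Hz intervals can be added up in a few lines of C (the constant and function names here are mine, not part of the Verilog design):

```c
#include <assert.h>

/* 800x600@72Hz timing: visible region + front porch + sync pulse + back porch */
enum { H_VISIBLE = 800, H_FRONT = 56, H_SYNC = 120, H_BACK = 64 };
enum { V_VISIBLE = 600, V_FRONT = 37, V_SYNC = 6,  V_BACK = 23 };

/* Pixel-clock ticks per line and lines per frame:
 * hcount runs 0..h_total()-1 and vcount runs 0..v_total()-1. */
int h_total(void) { return H_VISIBLE + H_FRONT + H_SYNC + H_BACK; } /* 1040 */
int v_total(void) { return V_VISIBLE + V_FRONT + V_SYNC + V_BACK; } /* 666  */

/* Refresh rate implied by a given pixel clock. */
double refresh_hz(double pixel_clock) {
    return pixel_clock / (h_total() * v_total());
}
```

With a 50 MHz pixel clock this gives 50e6 / (1040 × 666) ≈ 72.2 Hz, which matches the target refresh rate.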
Based on the VGA module we draw basic shapes on the screen, such as the paddles and a square dot that represents the ball. The paddle drawing is done in the draw_shape module which, given the (x,y) position of the top-left pixel, creates a 128×16 pixel rectangle. The same happens with the ball, which is 32×32 pixels. We also have a module that creates the game board: four lines for the perimeter of the screen and one vertical line at the half of the board. Each of these modules outputs the pixel locations of its shape. The ball_movement module takes as input the locations of the paddles and the ball and does the necessary calculations for the ball movement. The ball moves at a constant speed of one pixel in the x axis and one pixel in the y axis. If the ball hits the top or bottom board limit or one of the paddles, its trajectory is changed. In this module we also check whether the ball hits the right or left limit, and if so, a signal is generated to indicate that a player has won a point. Whenever a player wins, the score is updated and displayed on the screen. If a player's score reaches 10 points, the game is over and a message indicating which player has lost is shown; then the game resets to its initial state. Finally, this module outputs the pixel locations of the ball and the paddles, which are driven to the output_pixels module that generates the final output the monitor will display.
A snippet of the code that checks if the ball has hit the paddle:
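(The original snippet is an image, so here is a rough sketch in C of what such a collision check looks like. The names ball_x, ball_y, pad_x, pad_y are mine, as is the assumption that the paddle is oriented 16 pixels wide by 128 tall:)

```c
#include <stdbool.h>

enum { BALL_SIZE = 32, PAD_W = 16, PAD_H = 128 }; /* assumed paddle orientation */

/* True when the ball's bounding box overlaps the left paddle's box,
 * i.e. the moment ball_movement flips the horizontal direction. */
bool hits_left_paddle(int ball_x, int ball_y, int pad_x, int pad_y) {
    return ball_x <= pad_x + PAD_W &&     /* ball has reached the paddle face */
           ball_x + BALL_SIZE >= pad_x && /* but is not past its back edge    */
           ball_y + BALL_SIZE >= pad_y && /* vertical overlap: ball bottom    */
           ball_y <= pad_y + PAD_H;       /* below paddle top, and vice versa */
}
```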
The numbers showing the score are output in seven-segment display format and generated in the draw_score module. We also implemented a pause function, activated with switch T5.
Since the Nexys 3 board's reset button completely erases any loaded program, we use switch V8 as the reset signal for our project.
An accelerometer is an electromechanical device that measures acceleration forces. These forces may be static, like the constant force of gravity, or dynamic, caused by moving the accelerometer. There are different types of accelerometers, depending on how they work. Some use the piezoelectric effect: they contain microscopic crystal structures that get stressed by accelerative forces, which causes a voltage to be generated. Others implement capacitive sensing and output a voltage dependent on the distance between two planar surfaces.
In our implementation we used a 3-axis (one axis for each direction) digital accelerometer, the Analog Devices ADXL345, and took advantage of the force of gravity on the y axis, making the paddles move by tilting the accelerometer right or left. We connected the accelerometers through the SPI interface. SPI operates in full duplex mode and uses four signals: slave select (SS), serial clock (SCLK), serial data out (SDO) to the accelerometer, and serial data in (SDI) from the accelerometer. Devices communicate in master/slave mode, where the master initiates the data frame. Our setup contains two shift registers, one in the master and one in the slave, connected as a ring. Data is shifted out with the most significant bit first, while a new least significant bit is shifted into the same register. We initialize the transfer with a 5 Hz clock and we transmit/receive data at a 22.4 kHz rate. The accelerometer is configured for ±2 g operation. To convert the output to g, we find the difference between the measured output and the zero-g offset and divide it by the accelerometer's sensitivity (expressed in counts/g or LSB/g). For our accelerometer at ±2 g with 10-bit digital outputs, the sensitivity is 163 counts/g, so the acceleration is a = (Aout − zero_g) / 163 g. We didn't have to make those calculations for the paddle movement: we just take the accelerometer output and move the paddles accordingly, based on the table below:
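To make the conversion a = (Aout − zero_g) / 163 concrete, here is a small C helper; the function name and the zero-g values in the examples are assumptions for illustration:

```c
/* ADXL345 at +/-2 g with 10-bit output: 163 counts (LSB) per g. */
#define SENSITIVITY 163.0

/* Convert a raw accelerometer reading to g.
 * zero_g is the reading the device reports at 0 g. */
double counts_to_g(int raw, int zero_g) {
    return (raw - zero_g) / SENSITIVITY;
}
```

For example, a reading 163 counts above the zero-g offset corresponds to exactly 1 g.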
A fat binary is a collection of binaries packed into the same executable. Each time the executable is run, the kernel usually chooses the right binary, depending on the architecture, and executes it. For example, we may have code for both the x86 and x86_64 architectures in the same binary, while the OS is x86. We could even have code for a CPU program and a GPU program in the same fat binary. There are some pros and cons, but I'm not going to explain them now; there is a good article on Wikipedia here.
Two or three years ago, a project by the name of FatELF was started by Ryan C. Gordon. He made a nice implementation, but his kernel patch was rejected, so he dropped it.
So when I wanted to implement fat binaries, I had to find a workaround that didn't mess with the kernel.
The following diagram shows my implementation:
Let me try to explain it. First we combine all the binaries into one big file, placing the so-called "elf_header" as the first binary. The combine function also appends a header, called "FAT_HEADER", to the end of the file. It contains information about the binaries that reside in the fat binary, such as each binary's offset and an id.
So what does our elf_header do? It is a binary made by us whose job is to scan the end of the file, searching for the header. If the header exists, it extracts the info and gives us the option to run the binary we want. In my implementation it simply asks the user which binary to execute. This could easily be changed to automatically scan the hardware and run the matching ELF binary, and/or to create threads that execute two or more binaries at the same time.
I just wanted to share my implementation, not a complete solution. As I said, the program asks the user which binary to run, and it does not put the correct id on each binary. So if you want to use it for a more serious job, you can pass the id as an argument, or use a library such as <libelf.h> to automatically scan the header of each ELF binary and extract any info you want. It's not that hard ;)
To run it, first compile the elf_header, and then the main program with the combine function. Then run the generated binary, giving as arguments the output file, the elf_header, and the binaries you want to combine.
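To make the layout concrete, here is a minimal sketch in C of what the trailing FAT_HEADER and its parsing could look like. The magic value, field names, and fixed-size entry table are my assumptions, not the original code:

```c
#include <stdint.h>
#include <string.h>

#define FAT_MAGIC 0xFA7B1AADu /* made-up magic to recognize the trailer */
#define MAX_BINS 8

/* One entry per embedded binary. */
struct fat_entry {
    uint32_t id;     /* e.g. an architecture or device id          */
    uint64_t offset; /* byte offset of the binary in the fat file  */
    uint64_t size;
};

/* Trailer appended at the very end of the fat file by the combine step. */
struct fat_header {
    uint32_t magic;
    uint32_t count;
    struct fat_entry entries[MAX_BINS];
};

/* The elf_header binary would seek to EOF - sizeof(struct fat_header),
 * read the trailer and, if the magic matches, pick an entry to execute. */
int fat_parse(const unsigned char *tail, struct fat_header *out) {
    memcpy(out, tail, sizeof *out);
    return out->magic == FAT_MAGIC;
}
```

In the real project the ids would be assigned by the combine step; here they are just illustrative numbers.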
Malware writers, and closed-source products, often use techniques to make their binaries difficult to read. On the one hand, anti-virus software is unable to match the malware's signature; on the other hand, a reverse engineer's life becomes difficult.
One technique (usually not used alone) is to encrypt some portions of the code and decrypt them at runtime, or better, decrypt each piece of code just before it runs and then encrypt it back.
As GPUs have extremely high computational power, we can use really complex functions for encrypting and decrypting our code. I've made a really simple example of a self-decrypting application, and I'll try to explain it step by step.
First of all, what is our program going to do? It will spawn a shell. The assembly code (we use position-independent assembly so the code stays portable) to do that is:
xor ecx, ecx        ; ecx = 0 (argv = NULL)
mul ecx             ; eax = 0, edx = 0 (envp = NULL)
push ecx            ; NUL terminator for the string below
push 0x68732f2f     ; "//sh"
push 0x6e69622f     ; "/bin"
mov ebx, esp        ; ebx -> "/bin//sh"
mov al, 11          ; execve() syscall number
int 0x80            ; trap into the kernel
You can find code like this freely available on the internet (this one is written by kernel_panik), or you can write your own if you want specific things to be done (or just want to learn). We want our code to be position-independent, not containing absolute addresses.
So now that we have our assembly code, we compile it to an object file:
nasm shell.asm -f elf32 -o shell.o
Our code for the self-decrypting binary is this one, written in C for CUDA:
Now I have to make some explanations. First of all, we have to find the length of the instructions. There are several ways to do this, but there is a project by oblique here: https://github.com/oblique/insn_len that can do it very easily.
Now, some of you may wonder why I am mmap-ing and memcpy-ing. There are protections that prevent us from writing to some portions of memory, such as .text. So we have to load our encrypted code, decrypt it, and copy it to a newly mmap-ed region of memory that can be executed; this is where our mmap flags come in. After that we are ready to execute our code.
UPDATE NOTE: OK, I don't really know why I did it this way, but some of you may wonder: why not just call mprotect? You are right. I updated my code on GitHub and you can check it out.
Okay, I know: it's a simple XOR decryption with a fixed key, not really encryption, but this is just a proof of concept. You can use a more complex stream cipher such as RC4 etc. Also, you do not need to have the key saved in the binary at all; you can brute force until the code "makes sense". With such computational power it is pretty easy.
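The XOR step itself is symmetric and size-preserving, which is exactly why a stream cipher is convenient for in-place patching. Here is a minimal CPU-side sketch (the real version runs the same loop in a CUDA kernel; the names are mine):

```c
#include <stddef.h>

/* XOR "stream cipher" with a repeating key: encrypting and decrypting are
 * the same operation, and the buffer size never changes, so the patched
 * bytes fit exactly where the plaintext instructions were in the ELF. */
void xor_crypt(unsigned char *buf, size_t len,
               const unsigned char *key, size_t keylen) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}
```

Applying xor_crypt twice with the same key returns the original bytes.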
Now we compile our source code with nvcc and link it:
And now we have our executable! But first we have to patch our binary with our encrypted function. The reason we used a stream cipher is that we do not want to change the size of our function and make things more complex. One simple way to patch our ELF binary is to open it with a hex editor (I used Bless) and find the code we want to patch. But how? It's simple:
objdump -d -j .text shell_spawn
and if you search you will see the _shell function:
I want to develop a stronger cipher and find a better way to patch my binary, so this is just the idea. If someone wants to go deeper, I'd like to hear new ideas. Until then, feel free to comment, point out mistakes, etc. :)
The idea came to me while reading an interview with Dries Buytaert, the founder of Drupal. At some point he mentioned that he had built a web crawler and was collecting statistics from various websites.
So… why not ;)
Its main job is to find all the links on a page, store them, and then follow them.
Initially it takes the "feed" urls from a file named urls (located in the same directory). For each new host, we call info() on the urllib2 response and collect some information.
import urllib2, re, sys, urlparse

#******************************** Options ********************************#
print "A simple web crawler by mpekatsoula."
print "-h      : print help."
print "-n i    : i is the number of \"crawls\" you want to do."
print "          Enter -1 or leave blank for inf."
print "-o name : the name of the file you want to store the results."
print "          If blank the file will be named results."

# Standard values
crawls = -1
results_file = "results"

# Check user input
for arg in sys.argv[1:]:
    if arg == "-h":
        sys.exit(0)
    elif arg == "-n":
        crawls = sys.argv[int(sys.argv[1:].index(arg))+2]
        crawls = int(crawls)
    elif arg == "-o":
        results_file = sys.argv[int(sys.argv[1:].index(arg))+2]
        results_file = str(results_file)

# Open the file with the 'feed' urls
feed_urls = open('urls','r')
# Create the file to store the results
results = open(results_file,'a')

# Sets that hold the urls to crawl/urls already crawled/hosts whose info has been gathered
nexttocrawl = set()
crawled_urls = set()
gathered_info = set()

# We need a regular expression that matches the links inside a page.
# More info for regular expressions in python here: http://docs.python.org/dev/howto/regex.html
expressions = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>')

# Add the feed urls from the file to the set
for line in feed_urls:
    nexttocrawl.add(line.rstrip("\n"))

# Simple counter
counter = 0
while crawls == -1 or counter < crawls:
    counter += 1
    # Get next url and print it. If the set is empty, exit.
    try:
        crawling_url = nexttocrawl.pop()
    except KeyError:
        break
    print "[*] Crawling...: " + crawling_url
    # "Break" the url to components
    parsed_url = urlparse.urlparse(crawling_url)
    # Open the url
    try:
        url = urllib2.urlopen(crawling_url)
    except Exception:
        continue
    # Read the page
    url_message = url.read()
    # Find the new urls
    gen_urls = expressions.findall(url_message)
    # Store the crawled url
    crawled_urls.add(crawling_url)
    # Add the new urls to the set
    for link in gen_urls:
        if link.startswith('/'):
            link = 'http://' + parsed_url.netloc + link
        elif not link.startswith('http'):
            link = 'http://' + parsed_url.netloc + '/' + link
        if link not in crawled_urls:
            nexttocrawl.add(link)
    # Gather the host info once per host
    if parsed_url.netloc not in gathered_info:
        gathered_info.add(parsed_url.netloc)
        # Collect the info
        collected_info = str(url.info())
        # Here we store the results ;)
        results.write(crawling_url + "\n" + collected_info + "\n")

# close the files & exit
feed_urls.close()
results.close()