
Runtime support for approximate computing in heterogeneous systems

I recently finished my MSc thesis, titled “Runtime support for approximate computing in heterogeneous systems”. For it, I developed a runtime system in the C programming language that supports approximate computations using OpenCL. You can download my thesis here.

Source code will be available in a few weeks.

Abstract

Energy efficiency is one of the most important concerns in today's systems, ranging from embedded devices to high-performance computers. However, the end of Dennard scaling limits expectations for energy efficiency improvements in future devices, despite manufacturing processors at smaller geometries and lowering supply voltages. Many recent systems use a wide range of power-management techniques, such as DFS and DVFS, to balance the demand for higher performance/throughput against aggressive power consumption and negative thermal effects. However, these techniques have their limitations when it comes to CPU-intensive workloads.

Heterogeneous systems have appeared as a promising alternative to multicores and multiprocessors. They offer unprecedented performance and energy efficiency for certain classes of workloads, but at significantly increased development effort: programmers have to spend considerable effort reasoning about code mapping and optimization, synchronization, and data transfers among different devices and address spaces. One contributing factor to the energy footprint of current software is that all parts of a program are considered equally important for the quality of the final result, so all of them are executed at full accuracy. Some application domains, such as big data, video, and image processing, are amenable to approximation, meaning that some portions of the application can be executed with less accuracy without a big impact on the output.

In this MSc thesis we designed and implemented a runtime system that serves as the back-end for the compilation and profiling infrastructure of a task-based meta-programming model on top of OpenCL. The programmer can provide approximate versions of functions that require less energy, and is free to express the relative importance of different computations for the quality of the output, thus facilitating the dynamic exploration of energy/quality trade-offs in a disciplined way. We also simplify the development of parallel algorithms on heterogeneous systems, relieving the programmer from tasks such as work scheduling and data manipulation across address spaces. We evaluate our approach using a number of real-world applications from domains such as finance, computer vision, iterative equation solvers, and computer simulation.

Our results indicate that significant energy savings can be achieved by combining execution on heterogeneous systems with approximations, with graceful degradation of output quality. Moreover, hiding the underlying memory hierarchy from the programmer, performing data-dependency analysis, and scheduling work transparently result in faster development without sacrificing application performance.


Attempting to make fat binaries on Linux

A fat binary is a collection of binaries packed into a single executable. Each time the executable is run, the kernel usually chooses the right binary, depending on the architecture, and executes it. For example, the same binary may contain code for both the x86 and x86_64 architectures, while the OS is x86. A fat binary may even hold both a CPU and a GPU program. There are some pros and cons, but I'm not going to go into them now. There is a good article on Wikipedia here.

Two or three years ago, a project named FatELF was started by Ryan C. Gordon. He made a nice implementation, but his kernel patch was rejected, so he dropped it.

So when I wanted to implement fat binaries myself, I had to find a workaround and not mess with the kernel.

The following diagram shows my implementation:

Let me try to explain it. First we combine all the binaries into one big file, putting the so-called “elf_header” first. The combine function also appends a header to the end of the file, called “FAT_HEADER”. It contains information about the binaries that reside in the fat binary, such as each binary's offset and an id.

So what does our elf_header do? It is a binary made by us whose job is to scan the end of the file, searching for the header. If the header exists, it extracts the information and gives us the option to run the binary we want. In my implementation it simply lets the user select which binary to execute. This could easily be changed to scan the hardware automatically and run the appropriate ELF binary, or even to create threads that execute two or more binaries at the same time.

You can find my code on github: https://github.com/mpekatsoula/Fat-binaries

I just wanted to share my implementation, not a complete tool. As I said, the program asks the user which binary to run, and it does not put the correct id on each binary. So if you want to use it for something more serious, you can pass the id as an argument, or use a library such as <libelf.h> to scan the header of each ELF binary automatically and extract any info you want. It's not that hard ;)

To run it, first compile the elf_header, then the main program with the combine function. Then run the generated binary, giving as arguments the output file, the elf_header, and the binaries you want to combine.

Example:


gcc elf_header.c -o elf_header

gcc main.c combine.c -o main

./main output elf_header <arg1> ... <argN>

./output