1. ARM big.LITTLE board http://www.arm.com/products/processors/technologies/biglittleprocessing.php The underlying big.LITTLE software automatically moves workloads to the appropriate CPU based on performance needs, in microseconds, so quickly that it is completely seamless to the user.
It works in tandem with Dynamic Voltage and Frequency Scaling (DVFS), clock gating, core power gating, retention modes, and thermal management to deliver a full set of power controls for the SoC. big.LITTLE technology takes advantage of the fact that the usage pattern for smartphones and tablets is dynamic: periods of high-intensity processing alternate with typically longer periods of low-intensity processing. The graph shows the big CPU cores used in bursts, for short durations at peak frequency, while the majority of runtime is handled by the LITTLE cores at moderate operating frequencies. User space software on a big.LITTLE SoC is identical to the software that would run on a standard SMP processor. ARM has developed a kernel space patch set that gives the operating system awareness of the big and LITTLE cores, and the ability to schedule individual threads of execution on the appropriate processor based on dynamic run-time behavior. The software also keeps track of load history for each thread that runs, and uses that history to anticipate the performance needs of a thread the next time it runs. This software is called Global Task Scheduling.
Hardware coherency with CoreLink CCI-400 is an important part of ARM big.LITTLE processing and allows a single operating system to run across two processor clusters simultaneously. With big.LITTLE Global Task Scheduling (GTS), processes and applications can move dynamically between the high-performance ‘big’ and the high-efficiency ‘LITTLE’ cores as demand requires. This technology can allow up to 8 cores to run at the same time.
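The per-thread load tracking behind Global Task Scheduling can be sketched as a toy in Python. Everything here is invented for illustration: the class name, the decay factor, and the BIG_THRESHOLD cutoff are assumptions, not the kernel's actual per-entity load tracking, which is more elaborate.

```python
# Toy sketch of per-thread load history for a big.LITTLE-style scheduler.
# All names and constants are hypothetical; the real GTS patch set uses
# the Linux kernel's per-entity load tracking, not this simple EWMA.

BIG_THRESHOLD = 0.6  # assumed load level above which a thread prefers a big core

class ThreadLoadTracker:
    """Keep an exponentially weighted history of a thread's recent load."""
    def __init__(self, decay=0.5):
        self.decay = decay   # weight given to past history
        self.load = 0.0      # tracked load in [0, 1]

    def record(self, busy_fraction):
        # Blend the newest sample with the decayed history.
        self.load = self.decay * self.load + (1 - self.decay) * busy_fraction

    def preferred_cluster(self):
        # Anticipate the thread's needs from its history.
        return "big" if self.load > BIG_THRESHOLD else "LITTLE"

t = ThreadLoadTracker()
for sample in [0.1, 0.2, 0.9, 1.0, 1.0]:   # a burst of heavy work
    t.record(sample)
print(round(t.load, 6), t.preferred_cluster())   # 0.878125 big
```

The history biases the decision: a thread that was recently busy is steered toward a big core even before its next run, which is exactly the anticipation described above.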
Versatile Express Platform
big.LITTLE uses Global Task Scheduling http://www.linaro.org/blog/hardware-update/big-little-software-update/
Some definitions: Memory Level Parallelism (MLP) is a term in computer architecture referring to the ability to have multiple memory operations pending, in particular cache misses or translation lookaside buffer misses, at the same time. http://en.wikipedia.org/wiki/Memory-level_parallelism
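The definition above can be made concrete with a toy trace: MLP is how many misses are outstanding at once. The request timings below are made up, and the helper is just for illustration.

```python
# Toy illustration of memory-level parallelism (MLP): count the peak
# number of simultaneously outstanding cache misses in a made-up trace.

def max_outstanding(requests):
    """requests: list of (issue_cycle, latency) pairs for cache misses."""
    events = []
    for issue, latency in requests:
        events.append((issue, 1))             # miss becomes outstanding
        events.append((issue + latency, -1))  # miss completes
    outstanding = peak = 0
    for _, delta in sorted(events):
        outstanding += delta
        peak = max(peak, outstanding)
    return peak

# Four independent misses issued back to back, each taking 100 cycles:
overlapped = [(0, 100), (1, 100), (2, 100), (3, 100)]
# The same four misses issued serially, each waiting on the previous one:
serial = [(0, 100), (100, 100), (200, 100), (300, 100)]
print(max_outstanding(overlapped), max_outstanding(serial))   # 4 1
```

The overlapped trace has an MLP of 4, so the four miss latencies largely hide each other; the serial trace has an MLP of 1 and pays each latency in full.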
Instruction-level parallelism (ILP) is a measure of how many of the operations in a computer program can be performed simultaneously. The potential overlap among instructions is called instruction-level parallelism. http://en.wikipedia.org/wiki/Instruction_level_parallelism
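One common way to quantify this is instructions divided by the depth of the longest dependency chain. A minimal sketch, with made-up instruction names and unit-latency instructions assumed:

```python
# Toy ILP estimate: instruction count / depth of longest dependency chain.
# The dependency graphs below are invented examples with unit latency.

def ilp(deps):
    """deps: {instr: set of instrs it depends on}."""
    depth = {}
    def d(i):
        if i not in depth:
            depth[i] = 1 + max((d(p) for p in deps[i]), default=0)
        return depth[i]
    longest = max(d(i) for i in deps)
    return len(deps) / longest

# e = (a + b) * (c + d): the two additions t1, t2 are independent.
independent = {"t1": set(), "t2": set(), "e": {"t1", "t2"}}
# e = ((a + b) + c) + d: every operation waits on the previous one.
chain = {"t1": set(), "t2": {"t1"}, "e": {"t2"}}
print(ilp(independent), ilp(chain))   # 1.5 1.0
```

Both fragments do three operations, but the first exposes ILP of 1.5 because its two additions can execute in parallel, while the dependent chain offers no overlap at all.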
Cycles per instruction (aka clock cycles per instruction, clocks per instruction, or CPI) is one aspect of a processor's performance: the average number of clock cycles per instruction for a program or program fragment. It is the multiplicative inverse of instructions per cycle. http://en.wikipedia.org/wiki/Cycles_per_instruction
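The arithmetic is a one-liner; the cycle and instruction counts below are made-up numbers for a hypothetical program fragment:

```python
# CPI and its reciprocal IPC for a made-up program fragment.
cycles = 12_000_000
instructions = 8_000_000

cpi = cycles / instructions   # average clock cycles per instruction
ipc = instructions / cycles   # instructions per cycle = 1 / CPI

print(cpi, ipc)   # 1.5 and ~0.667
```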
Symmetric multiprocessing (SMP) involves a symmetric multiprocessor system hardware and software architecture where two or more identical processors connect to a single, shared main memory, have full access to all I/O devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. http://en.wikipedia.org/wiki/Symmetric_multiprocessing