
New approach allows computer programs to run up to 20% faster

A new approach to software development, designed by researchers at North Carolina State University, could allow common computer programs to run up to 20% faster and possibly incorporate new security measures.

For the first time, researchers have found a way to run different parts of some programs, such as widely used word processors and Web browsers, at the same time.

This makes the programs operate more efficiently.

The brain of a computer chip is its central processing unit, and modern chips contain multiple processing units, or “cores.” But to utilize these cores, a program has to be broken down into separate “threads,” so that each core can execute a different part of the program simultaneously.

The process of breaking a program down into threads is called parallelization, and it allows computers to run programs much more quickly.
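As a rough illustration of what parallelization looks like in practice (a minimal sketch, not taken from the NC State work), a sum over a large array can be split into two halves, each handled by its own thread so that two cores work at once:

```cpp
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);
    long long left = 0, right = 0;
    auto mid = data.begin() + data.size() / 2;

    // Each thread sums its own half of the array, so two cores can work simultaneously.
    std::thread t0([&] { left  = std::accumulate(data.begin(), mid, 0LL); });
    std::thread t1([&] { right = std::accumulate(mid, data.end(), 0LL); });
    t0.join();
    t1.join();

    std::cout << "sum = " << (left + right) << "\n";
}
```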

However, some programs are difficult to parallelize, including word processors and Web browsers.

The NC State researchers, however, have developed a technique that allows hard-to-parallelize applications to run in parallel, by taking a nontraditional approach to breaking programs into threads.

Every computer program consists of multiple steps. The program performs a computation, then performs a memory-management function, which either prepares memory storage to hold data or frees up memory storage that is currently in use. It repeats these steps over and over again, in a cycle.

For difficult-to-parallelize programs, both of these steps have traditionally been performed on a single core.
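As a sketch of that traditional cycle (an illustration only; the record-processing names here are placeholders, not the researchers' code), the same thread alternates between memory management and computation, so the core handles one while the other waits:

```cpp
#include <cstdlib>

struct Record { double values[128]; };

// Placeholder "computation" step; the real programs in question are
// things like word processors and Web browsers.
void process_record(Record* r) {
    for (int i = 0; i < 128; ++i) r->values[i] = i * 2.0;
}

int main() {
    for (int i = 0; i < 100000; ++i) {
        Record* r = static_cast<Record*>(std::malloc(sizeof(Record)));  // memory management: prepare storage
        if (!r) return 1;
        process_record(r);                                              // computation
        std::free(r);                                                   // memory management: release storage
    }
}
```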

“We’ve removed the memory-management step from the process, running it as a separate thread,” said Dr. Yan Solihin.

Under this approach, the computation thread and the memory-management thread execute simultaneously, allowing the computer program to operate more efficiently.

“By running the memory-management functions on a separate thread, these hard-to-parallelize programs can operate approximately 20 percent faster. This also opens the door to development of new memory-management functions that could identify anomalies in program behaviour, or perform additional security checks. Previously, these functions would have been unduly time-consuming, slowing down the speed of the overall program,” said Solihin.

Using the new technique, when a memory-management function needs to be performed, “the computational thread notifies the memory-management thread, effectively telling it to allocate data storage and to notify the computational thread of where the storage space is located,” said Dr. Devesh Tiwari, lead author of the paper.

“By the same token, when the computational thread no longer needs certain data, it informs the memory-management thread that the relevant storage space can be freed,” he added.
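As a minimal sketch of how such a division of labour might look, assuming a simple request queue between the two threads (the article does not describe the researchers' actual design or interfaces, so the names and structure here are illustrative), the computation thread posts allocation and free requests, and a dedicated memory-management thread services them:

```cpp
#include <condition_variable>
#include <cstdlib>
#include <future>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A request from the computation thread to the memory-management thread (illustrative only).
struct Request {
    enum Kind { Allocate, Free, Shutdown };
    Kind kind = Shutdown;
    std::size_t size = 0;        // for Allocate
    void* ptr = nullptr;         // for Free
    std::promise<void*> reply;   // for Allocate: where the new storage is located
};

std::queue<Request> requests;
std::mutex mtx;
std::condition_variable cv;

void post(Request&& req) {
    std::lock_guard<std::mutex> lock(mtx);
    requests.push(std::move(req));
    cv.notify_one();
}

// The memory-management thread: allocates and frees storage on behalf of the
// computation thread. Extra work, such as the anomaly or security checks
// Solihin mentions, could be added here without stalling the computation.
void memory_manager() {
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !requests.empty(); });
        Request req = std::move(requests.front());
        requests.pop();
        lock.unlock();

        if (req.kind == Request::Shutdown) return;
        if (req.kind == Request::Allocate)
            req.reply.set_value(std::malloc(req.size));  // tell the computation thread where the storage is
        else
            std::free(req.ptr);                          // the storage is no longer needed
    }
}

int main() {
    std::thread mm(memory_manager);

    // Computation thread: request storage and wait to learn where it is located.
    Request alloc;
    alloc.kind = Request::Allocate;
    alloc.size = 1024 * sizeof(double);
    std::future<void*> where = alloc.reply.get_future();
    post(std::move(alloc));
    double* data = static_cast<double*>(where.get());

    // The computation step, while the memory manager waits on its own core.
    double sum = 0.0;
    for (int i = 0; i < 1024; ++i) { data[i] = i; sum += data[i]; }
    std::cout << "sum = " << sum << "\n";

    // Inform the memory-management thread that the storage can be freed, then stop it.
    Request release;
    release.kind = Request::Free;
    release.ptr = data;
    post(std::move(release));

    Request stop;
    stop.kind = Request::Shutdown;
    post(std::move(stop));
    mm.join();
}
```

In a scheme along these lines, the cost of allocating and freeing memory, and of any added checks, is paid on the memory-management thread's core rather than interrupting the computation thread.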
