Links 1 through 10 of 294 by Ken Robson tagged performance

NAPI ("New API") is a modification to the device driver packet processing framework, designed to improve the performance of high-speed networking. NAPI works through two mechanisms:

Interrupt mitigation: High-speed networking can create thousands of interrupts per second, all of which tell the system something it already knew: it has lots of packets to process. NAPI allows drivers to run with (some) interrupts disabled during times of high traffic, with a corresponding decrease in system load.

Packet throttling: When the system is overwhelmed and must drop packets, it's better if those packets are disposed of before much effort goes into processing them. NAPI-compliant drivers can often cause packets to be dropped in the network adaptor itself, before the kernel sees them at all.

NAPI was first incorporated in the 2.5/2.6 development kernels and was also backported to the 2.4.20 kernel.
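
As a concrete illustration of that flow, here is a minimal sketch of the NAPI pattern using current kernel API names (napi_schedule(), napi_complete_done()); "mydev" and its hardware helpers are hypothetical placeholders, not a real driver:

```c
#include <linux/netdevice.h>
#include <linux/interrupt.h>

struct mydev_priv {
    struct napi_struct napi;
    /* ... device registers, RX ring, etc. ... */
};

/* Hypothetical hardware helpers, assumed to exist in the real driver. */
void mydev_disable_rx_irq(struct mydev_priv *priv);
void mydev_enable_rx_irq(struct mydev_priv *priv);
bool mydev_rx_pending(struct mydev_priv *priv);
void mydev_receive_one(struct mydev_priv *priv);

static irqreturn_t mydev_interrupt(int irq, void *dev_id)
{
    struct mydev_priv *priv = dev_id;

    /* Interrupt mitigation: mask further RX interrupts and defer
     * all packet work to the poll loop. */
    mydev_disable_rx_irq(priv);
    napi_schedule(&priv->napi);
    return IRQ_HANDLED;
}

static int mydev_poll(struct napi_struct *napi, int budget)
{
    struct mydev_priv *priv = container_of(napi, struct mydev_priv, napi);
    int work_done = 0;

    /* Pull packets off the RX ring, but never more than 'budget':
     * this is where an overwhelmed system gets throttled. */
    while (work_done < budget && mydev_rx_pending(priv)) {
        mydev_receive_one(priv);
        work_done++;
    }

    if (work_done < budget) {
        /* Ring drained: leave polled mode and re-enable interrupts. */
        napi_complete_done(napi, work_done);
        mydev_enable_rx_irq(priv);
    }
    return work_done;
}
```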


The Internet’s Transmission Control Protocol, or TCP, has proved remarkably adaptable, working well across a wide range of hardware and operating systems, link capacities, and round trip delays.


In my previous post, I showed NFS random read latency at different points in the operating system stack. I made a few references to hits from DRAM, which were visible as a dark solid line at the bottom of the latency heat maps. This is worth exploring in a little more detail, as it is both interesting and another demonstration of Analytics.
Here is both delivered NFS latency and disk latency, for disk + DRAM alone:


One of the driving forces behind the development of Virtual Memory (VM) was to reduce the programming burden associated with fitting programs into limited memory. A fundamental property of VM is that the CPU references a virtual address that is translated, via a combination of software and hardware, to a physical address. This allows information to be paged into memory only on demand (demand paging), improving memory utilisation; allows modules to be placed arbitrarily in memory for linking at run-time; and provides a mechanism for the protection and controlled sharing of data between processes. Use of virtual memory is so pervasive that it has been described as “one of the engineering triumphs of the computer age” [denning96], but this indirection is not without cost.
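
A small userspace sketch (mine, not from the paper) makes demand paging visible on Linux: mmap() hands back virtual address space immediately, but physical pages are only faulted in as they are first touched, which shows up in the process's minor-fault counter. A 4 KiB page size is assumed:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;   /* 64 MiB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    long before = minor_faults();
    /* Nothing is resident yet; each first touch demand-faults a page in. */
    for (size_t off = 0; off < len; off += 4096)
        p[off] = 1;
    printf("minor faults while touching pages: %ld\n",
           minor_faults() - before);

    munmap(p, len);
    return 0;
}
```

On a 4 KiB-page system the count comes out near 16384, one fault per page touched.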


This package provides R interfaces to a handful of common statistical algorithms. These algorithms are implemented in parallel using a mixture of nVidia's CUDA language and CUBLAS library. On a computer equipped with an nVidia GPU, some of these functions may be substantially more efficient than native R routines.
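
For a sense of what such wrappers reach down to, here is a minimal host-side C sketch (not the package's actual source) of a CUBLAS single-precision matrix multiply; CUBLAS works column-major, which conveniently matches R's matrix layout:

```c
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int n = 512;                        /* square n x n matrices */
    size_t bytes = (size_t)n * n * sizeof(float);
    float *hA = malloc(bytes), *hB = malloc(bytes), *hC = malloc(bytes);
    for (int i = 0; i < n * n; i++) { hA[i] = 1.0f; hB[i] = 2.0f; }

    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    /* C = alpha * A * B + beta * C, all column-major like R matrices. */
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("C[0] = %.0f (expect %d)\n", hC[0], 2 * n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

Build against the CUDA toolkit and link with -lcublas; the R-level functions hide exactly this kind of allocate/copy/compute/copy-back dance.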


Huge processing times limit bioinformatics analyses developed in the statistical language R. We therefore developed a user-friendly package, “RGPU”, which uses the video card to speed up bioinformatics analyses with R by a factor larger than 50.


Finding out why your Linux computer performs the way it does has been a hard task. Sure, there is Oprofile, and even ‘perf’ in recent kernels. There is LatencyTOP to find out where latencies happen.


Most of my readers will understand that cache is a fast but small type of memory that stores recently accessed memory locations. This description is reasonably accurate, but the “boring” details of how processor caches work can help a lot when trying to understand program performance.
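
One of those boring details is that memory moves in whole cache lines, not bytes. A minimal C sketch (assuming 64-byte lines and 4-byte ints) shows the classic effect: updating every element and updating every 16th element take roughly the same time, because both loops touch every cache line of the array:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024)   /* 64M ints, far larger than any cache */

static double run(int *arr, int step)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i += step)
        arr[i] *= 3;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int *arr = calloc(N, sizeof(int));
    if (!arr)
        return 1;
    /* 64M updates vs 4M updates: 16 ints x 4 bytes = one 64-byte line,
     * so both loops are dominated by the same memory traffic. */
    printf("stride 1:  %.3fs\n", run(arr, 1));
    printf("stride 16: %.3fs\n", run(arr, 16));
    free(arr);
    return 0;
}
```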


Paper on the use of time series analysis with regularly sampled system stats.


Ftrace is a tracing utility built directly into the Linux kernel. Many distributions already have various configurations of Ftrace enabled in their most recent releases. One of the benefits that Ftrace brings to Linux is the ability to see what is happening inside the kernel. As such, it makes finding problem areas, or simply tracking down that strange bug, more manageable.
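
For flavour, here is a minimal C sketch (assuming debugfs is mounted and root privileges) that drives Ftrace through its control files, equivalent to echoing values into /sys/kernel/debug/tracing by hand: select the function tracer, record for a second, then dump the buffer:

```c
#include <stdio.h>
#include <unistd.h>

#define TRACEFS "/sys/kernel/debug/tracing/"

static int write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs(val, f);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Select the function tracer and turn tracing on. */
    if (write_file(TRACEFS "current_tracer", "function") ||
        write_file(TRACEFS "tracing_on", "1")) {
        perror("tracefs");
        return 1;
    }
    sleep(1);                      /* let the kernel record for a bit */
    write_file(TRACEFS "tracing_on", "0");

    /* Dump the captured trace buffer to stdout. */
    FILE *t = fopen(TRACEFS "trace", "r");
    if (t) {
        char line[512];
        while (fgets(line, sizeof(line), t))
            fputs(line, stdout);
        fclose(t);
    }
    return 0;
}
```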
