On any device that stores data on circular tracks on a disc, the outside tracks will be longer than the inside tracks. If these tracks are used to store data, then the outside tracks can store more data than the inside tracks.
CD-ROM drives do this by spinning at different speeds for different parts of the disc. CD-ROMs don't have discrete tracks/cylinders; they have a single spiral track. On a CD-ROM each sector occupies the same length of track - this is known as Constant Linear Velocity (CLV). This gives the most efficient use of storage space at the cost of performance, as the drive has to change rotational speed to keep the media moving at a constant speed under the read head.
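As a rough illustration of why the drive must change speed, here is a back-of-the-envelope calculation using the nominal 1x CD linear velocity of 1.2 m/s and approximate program-area radii (the radii are my assumptions, not exact spec values):

```python
# Why a CLV drive changes speed: to keep a constant linear velocity under
# the head, rotational speed must drop as the head moves outward.
from math import pi

linear_v = 1.2      # metres/second, nominal 1x CD linear velocity
inner_r = 0.025     # metres, start of the program area (approximate)
outer_r = 0.058     # metres, outer edge of the disc (approximate)

for r in (inner_r, outer_r):
    # revolutions per second = linear velocity / circumference
    rpm = linear_v / (2 * pi * r) * 60
    print(f"radius {r * 100:.1f} cm -> {rpm:.0f} RPM")
```

With these numbers the spindle speed falls from roughly 460 RPM at the inner edge to roughly 200 RPM at the outer edge.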
Hard drives have always used Constant Angular Velocity (CAV), which is where the angle from the start to the end of each sector is the same, and the rotational speed is constant. If every track has the same number of sectors then the number of sectors per track is limited by what fits on the inner-most tracks, and the outer tracks will contain much less data than they could.
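To get a feel for how much capacity plain CAV wastes compared to CLV, here is a small sketch (all of the geometry numbers below are invented for illustration, not real disc parameters):

```python
# Rough comparison of CLV vs CAV capacity for the same disc geometry.
from math import pi

inner_r = 2.5      # cm, innermost track radius (hypothetical)
outer_r = 5.8      # cm, outermost track radius (hypothetical)
tracks = 20000     # number of concentric tracks (hypothetical)
sector_len = 0.1   # cm of track length per sector (hypothetical)

# CAV: every track holds only as many sectors as fit on the innermost track.
sectors_per_track = int(2 * pi * inner_r / sector_len)
cav_sectors = tracks * sectors_per_track

# CLV: sectors are packed at constant linear density along every track,
# so capacity is proportional to the total track length.
total_length = sum(2 * pi * (inner_r + (outer_r - inner_r) * i / (tracks - 1))
                   for i in range(tracks))
clv_sectors = int(total_length / sector_len)

print(clv_sectors / cav_sectors)  # ~1.66 with these numbers
```

With this made-up geometry, constant linear density stores about two thirds more data than plain CAV.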
Since the introduction of hard drive interfaces which separate the logical addresses used by operating systems from the physical storage layout on the device (this means SCSI, IDE, and any other high-level interfaces that might be out there), it has been possible for hard drives to have more sectors on outer tracks than on inner ones. This is done through a scheme called Zoned Constant Angular Velocity (ZCAV). In this scheme the disk is divided into a series of zones, each with a different number of sectors per track and therefore different performance characteristics.
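As a sketch of the performance consequence (the zone layout below is invented for illustration - real drives have many more zones and different sector counts), at a fixed spindle speed the sequential throughput is proportional to the number of sectors per track in the current zone:

```python
# How throughput varies across ZCAV zones at a fixed spindle speed.
rpm = 7200
sector_bytes = 512

# (zone name, sectors per track) - outer zones hold more sectors per track.
zones = [("outer", 600), ("middle", 450), ("inner", 300)]

for name, spt in zones:
    # One full revolution transfers one track's worth of data.
    throughput = spt * sector_bytes * rpm / 60  # bytes per second
    print(f"{name}: {throughput / 1024:.0f} K/s")
```

With these invented numbers the outer zone is twice as fast as the inner zone, purely because twice as much data passes under the head per revolution.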
Apparently the convention is for the outside tracks to contain the sectors with the lowest addresses, so the first partition allocated on a disk is likely to be significantly faster than the last.
The question, of course, is how much of the disk is faster, and how much faster is it? I have worked on projects involving tuning RAID arrays and similar systems, and I get the impression that a common approach is to guess that a certain part of the hard drive is faster and use that part. I believe that it's not uncommon for administrators to just use the first half of each hard drive without bothering to find out what the performance actually is! This is an especially bad idea because there is nothing compelling hard drive manufacturers to make the low-numbered sectors the fast ones - they could do the exact opposite!
So I wrote a program to find out. I have named this program ZCAV because I wasn't feeling imaginative (I was waiting for a friend at Denver airport).
This program reads through a hard drive (or any specified device) a number of times and outputs a text file with two numbers on each line: the offset within the device and the throughput in K/s. This data will display nicely in gnuplot.
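The core of the measurement can be sketched like this - an illustrative Python sketch of the approach, not the actual ZCAV code, and the block and chunk sizes are my assumptions:

```python
# Read a device (or file) in large blocks, time each chunk, and print
# "offset throughput" pairs suitable for plotting with gnuplot.
import sys
import time

BLOCK = 1024 * 1024    # read in 1 MiB blocks (assumed size)
CHUNK_BLOCKS = 100     # report throughput every 100 MiB (assumed size)

def zcav_scan(path, out=sys.stdout):
    with open(path, "rb") as dev:
        offset_mb = 0
        while True:
            start = time.monotonic()
            read = 0
            for _ in range(CHUNK_BLOCKS):
                data = dev.read(BLOCK)
                read += len(data)
                if len(data) < BLOCK:  # end of device/file
                    break
            elapsed = time.monotonic() - start
            if read == 0:
                break
            # offset (MiB) and throughput (K/s), one pair per line
            print(f"{offset_mb} {read / 1024 / elapsed:.0f}", file=out)
            offset_mb += read // (1024 * 1024)
            if read < CHUNK_BLOCKS * BLOCK:
                break
```

The resulting file can then be plotted in gnuplot with a command such as `plot "out.txt" with lines`.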
ZCAV currently computes the average over a specified number of runs, and if there are more than 2 runs then the best third and the worst third of the results are discarded. My theory at the moment is that this removes results from the tails of the distribution (where the results are far from the mean) so that the average is more representative. I will have to consult a statistics text for more information on this. I wrote the code to do this on a hunch and found that the results were much neater that way.
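This averaging scheme amounts to what statisticians call a trimmed mean. A sketch of the idea (the exact rounding used when the run count isn't divisible by three is my assumption, not necessarily what ZCAV does):

```python
# Trimmed mean: with more than 2 runs, drop the best third and the worst
# third of the measurements and average the remaining middle third.
def trimmed_mean(results):
    if len(results) <= 2:
        return sum(results) / len(results)
    ordered = sorted(results)
    third = len(results) // 3  # rounding choice is an assumption
    middle = ordered[third:len(ordered) - third]
    return sum(middle) / len(middle)

print(trimmed_mean([10.0, 11.0, 12.0, 13.0, 90.0]))  # prints 12.0
```

Here the outlier 90.0 (and the low value 10.0) are discarded before averaging, so a single anomalous run doesn't skew the result.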
My results from running ZCAV