
Monday, April 04, 2011

Overclocking: How and Why???

OVER-CLOCKING: HOW AND WHY???


AUTHORS

Ghanshyam Verma, Shivangi Sinha
KITS, Ramtek
Nagpur, India
ghanshyam.verma@ovi.com, shivangi.sinha@live.com



(paper presented at SPANDAN, 2011 - YCCE, Nagpur)



Abstract—This document is a summary of a passion termed 'overclocking'. It contains a brief description of how overclocking of a component can be achieved, the considerations one should keep in mind while overclocking a system, and the limitations, advantages and disadvantages associated with overclocking.
Keywords—overclocking, FSB, CPU multiplier, clock rate, downclocking, stress tests, cooling, functional stability, limitations, advantages, disadvantages.

I. over-clocking: an Introduction

If you think overclocking sounds like an ominous term, you have the right idea. Overclocking is the process of running a computer component at a higher clock rate (more clock cycles per second) than it was designed for or was specified by the manufacturer, usually practiced by enthusiasts seeking an increase in the performance of their computers.
Overclocking is a popular technique for getting a little performance boost from your system, without purchasing any additional hardware. People who overclock their components mainly focus their efforts on processors, video cards, motherboard chipsets, and random-access memory (RAM).
Most times, overclocking will result in a performance boost of 10 percent or less. For example, a computer with an Intel Pentium III processor running at 933MHz could be configured to run at speeds equivalent to a Pentium III 1050MHz processor by increasing the bus speed on the motherboard. Overclocking does not always produce the same results: two identical systems being overclocked will most likely not reach the same speeds, and one will usually overclock better than the other.
It is done through manipulating the CPU multiplier and the motherboard's front side bus (FSB) clock rate until a maximum stable operating frequency is reached.
There are many considerations to be taken into account before overclocking a system; otherwise, in most cases, the result is a very unstable system.
And if we are discussing overclocking, then underclocking or downclocking can by no means be overlooked. It is the practice of modifying a synchronous circuit's timing settings to run at a lower clock rate than it was specified to operate at. It may be called the computer equivalent of driving a car below the speed limit. Usually, underclocking is used to reduce a computer's power consumption and heat emission, and sometimes also to increase the system's stability and compatibility. Underclocking may be implemented by the factory, but many computers and components are end-user underclockable.

II. a brief history


Overclocking is nearly as old as the PC itself. Intriguingly, it was actually PC manufacturers rather than enthusiasts that got the ball rolling. Back in 1983, ever-conservative IBM capped early versions of its eponymous PC at a mere 4.77MHz in the interests of stability.
Soon enough, however, clones of the IBM PC shipped with 8088-compatible processors running at a racy 10MHz. Thus began the battle for the highest clock speed.
The next big step was the arrival of the Intel 486 processor and the introduction of much more user-friendly overclocking methods. It was the later DX2 version of the 486, launched in 1992, that debuted the CPU multiplier, allowing CPUs to run at multiples of the bus frequency and therefore enabling overclocking without adjusting the bus frequency. While the adjustment of bus speeds usually entailed little more than flicking a jumper or DIP switch, changing the multiplier setting often required a little chip modding with a lead pencil or, at worst, perhaps some soldering work. One way or another, impressive overclocks of certain clones of Intel's 486 chip from the likes of Cyrix and AMD were possible. For example, AMD's 5x86 of 1995, a chip based on 350nm silicon, could be clocked up from 133MHz to 150MHz.
Intel, of course, has long been the master of making smaller transistors. In 1995 it introduced the Pentium Pro. Firstly, this was a much more sophisticated CPU than any before thanks to out-of-order instruction execution. But it also boasted tiny (for the era) 350nm transistors. 200MHz versions of the Pentium Pro were known to hit 300MHz, an extremely healthy 50 per cent overclock.
However, the Pentium Pro was a painfully expensive chip. In 1998, Intel released the original Celeron, a budget-orientated processor with a cut-down feature set including no L2 cache. Stock clocked at 266MHz, retail examples of the chip were sometimes capable of as much as 400MHz. Big clocks on a small budget were possible for the first time.

III. over-clocking: how???

It is done by manipulating the CPU multiplier and the motherboard's front side bus (FSB) clock rate until a maximum stable operating frequency is reached, although with the introduction of Intel's X58 chipset and the Core i7 processor, the front side bus has been replaced by QPI (QuickPath Interconnect); the reference frequency is often called the base clock (BCLK). While the idea is simple, variation in the electrical and physical characteristics of computing systems complicates the process. CPU multipliers, bus dividers, voltages, thermal loads, cooling techniques and several other factors, such as individual semiconductor clock and thermal tolerances, can all affect it.
Before we go into details, let's take a look at the common terms we might encounter while discussing overclocking.
· Clock rate - The clock rate is the frequency (measured in hertz, i.e. cycles per second) of the clock in any synchronous circuit, such as a central processing unit (CPU). For example, a crystal oscillator provides a fixed sinusoidal frequency reference, which electronic circuitry translates into a corresponding square-wave pulse train for digital electronics applications. In this context the word 'speed' (which implies physical movement) should not be confused with frequency or its corresponding clock rate; strictly speaking, the term "clock speed" is a misnomer. A single clock cycle (typically shorter than a nanosecond in modern non-embedded microprocessors) toggles between a logical zero and a logical one state. Historically, the logical zero state of a clock cycle has persisted longer than the logical one state due to thermal and electrical specification constraints.
· Clock/CPU multiplier - The clock multiplier (or CPU multiplier, or bus/core ratio) is the ratio of the internal CPU clock rate to the externally supplied clock. A CPU with a 10x multiplier will thus see 10 internal cycles (produced by PLL-based frequency multiplier circuitry) for every external clock cycle. For example, a system with an external clock of 133MHz and a 10x clock multiplier will have an internal CPU clock of 1.33GHz. The external address and data buses of the CPU (often collectively termed the front side bus, or FSB, in PC contexts) also use the external clock as a fundamental timing base; however, they may employ a (small) multiple of this base frequency (typically two or four) in order to transfer data faster.
· Dynamic voltage scaling- It is a power management technique in computer architecture, where the voltage used in a component is increased or decreased, depending upon circumstances. Dynamic voltage scaling to increase voltage is known as overvolting; dynamic voltage scaling to decrease voltage is known as undervolting. Undervolting is done in order to conserve power, particularly in laptops and other mobile devices, where energy comes from a battery and thus is limited. Overvolting is done in order to increase computer performance, or in rare cases, to increase reliability.
· Dynamic frequency scaling- It is another power conservation technique that works on the same principles as dynamic voltage scaling. Both dynamic voltage scaling and dynamic frequency scaling can be used to prevent computer system overheating.
Now, as mentioned at the beginning of this section, overclocking is done by manipulating the CPU multiplier and the motherboard's front side bus (FSB). So, for example, take a Pentium 4 2.4GHz (2400MHz). It is designed to run at 2400MHz, no less, no more, but if you're lucky you can run it at a higher speed, say 2600MHz, which would obviously make it perform like a proper Pentium 4 2.6GHz. The question is, how do we manage to do this, and can it be done to every processor? First we'll take a quick look at how clock speed (the 2400MHz part of our Pentium 4) is derived. All current processors are much the same: they have a multiplier and a front side bus (FSB). In the case of the Pentium 4, the FSB or bus speed is 100, 133 or 200MHz depending on which model of Pentium 4 you have. Our Pentium 4 2.4c processor uses the fastest 200MHz FSB, which Intel quotes as 800MHz due to their marketing of the 'Quad Pumped' bus, but we can ignore that. So, to achieve 2400MHz from a 200MHz bus, we multiply 200 by 12, which gives the final clock speed. For a Pentium 4 2.53GHz (which uses the older 133MHz bus) the multiplier would be 19, since 133MHz × 19 gives roughly 2533MHz. And it works the same way for Athlons too: for instance, an Athlon XP 2500+, which is in reality running at 1.83GHz, uses a 166MHz FSB and has a multiplier of 11. So in theory, if we had other components that could support it, and the processor itself was capable of running at the increased speed, we could change the FSB from 166MHz to 200MHz.
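The arithmetic above can be sketched in a few lines of Python, using the figures quoted in the text (the function name is ours, for illustration only):

```python
# Core clock = FSB x multiplier, with the example chips from the text.
def core_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    """Return the internal CPU clock in MHz."""
    return fsb_mhz * multiplier

print(core_clock_mhz(200, 12))  # Pentium 4 2.4c: 2400 MHz
print(core_clock_mhz(166, 11))  # Athlon XP 2500+: 1826 MHz, i.e. ~1.83 GHz
print(core_clock_mhz(200, 11))  # same Athlon with the FSB raised to 200 MHz: 2200 MHz
```

The last line shows the overclock discussed above: with the multiplier fixed at 11, raising the FSB from 166MHz to 200MHz lifts the core clock from 1826MHz to 2200MHz.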
Most motherboards today have a section in the BIOS which allows you to make changes to the bus speed, processor voltage, memory voltage, memory speed etc. Once in the BIOS, look for 'Frequency / Voltage Control', 'Softmenu', 'Advanced' or similar menus, as these will be the areas that contain all the overclocking controls. Once in these sections of your BIOS, you can start experimenting; try increasing the FSB/bus speed a few MHz at a time, saving and exiting the BIOS and booting into your operating system to check whether the overclocking has worked. A common overclock is taking a Pentium 4 2.4c to 3GHz, that is, going from a 200MHz FSB to 250MHz. Similar overclocks can often be achieved with other Pentium 4s; you just have to experiment, and always expect the worst!
There is also the question of voltages. A typical Pentium 4 runs at 1.525V, but by increasing the voltage to, say, 1.6V, you may achieve a higher clock speed. It is best to try overclocking at the default voltage until you find a limit, and then try increasing the voltage a little bit at a time to see if that helps. It is not advisable to increase the voltage more than 10% over the default value. Increasing voltage will also increase the temperature of your processor, which in turn may limit its overclockability. Overclocking itself will increase the processor temperature, as the processor has to do more work. Make sure you have a decent heatsink/fan, or you could try more exotic cooling solutions such as water or phase change. Due to the nature of electronics, processors are usually able to run faster at lower temperatures, so if your processor is already running at 60°C you will probably find that you can't overclock very much, if at all. Finishing off, you need to remember a few basic things. For instance, memory speed is usually directly related to the FSB, so if you increase the FSB on your Pentium 4 to 220MHz, you also increase the memory frequency to 220MHz. Many motherboards allow you to use a 'memory divider', which can let the memory run slower than the FSB. So, if the memory runs at the same speed as the FSB, the divider is 1:1. Common dividers are 5:4 and 3:2. To work out the memory frequency from the divider, you divide the FSB by the first number, and then multiply by the second number. For example, if we run a 200MHz FSB and use the 5:4 divider we get "(200/5) x 4 = 160MHz", or DDR320. These dividers can be useful at higher speeds to bring the memory speed down to specification. And just because your system can boot at its new higher speed, don't assume that it's 100% stable. You must test the system before continuing any important work.
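The divider rule above (divide the FSB by the first number, multiply by the second) can be sketched as follows; again the function name is ours, for illustration:

```python
# Memory frequency from an FSB and a divider ratio a:b.
def memory_freq_mhz(fsb_mhz: float, divider: tuple) -> float:
    """Apply a memory divider a:b to the FSB frequency."""
    a, b = divider
    return fsb_mhz / a * b

print(memory_freq_mhz(200, (1, 1)))  # 200.0 -> memory in sync with the FSB
print(memory_freq_mhz(200, (5, 4)))  # 160.0 -> DDR320, the example above
print(memory_freq_mhz(250, (3, 2)))  # ~166.7 -> keeps RAM near spec on a 250MHz FSB
```

The last line shows why dividers matter when overclocking: at a 250MHz FSB, a 3:2 divider brings the memory back down near its rated frequency.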

IV. testing an overclocked system

Another important point worth mentioning here is how you would determine the success of an overclocked system. One cannot simply determine it by running a 3D game and concluding that the system responded faster than in the non-overclocked situation. In most cases the result of overclocking is less than a 10% increase over the original clock rate, and it is rather naïve to test overclocked systems simply by running games or demanding applications. A more professional approach is to use one of the many tools available online for this purpose. And if you can spare a few minutes for a manual comparison, the Windows Task Manager will also suffice.
And to be more professional in your approach, try something like Prime95, which you can use to really torture-test your PC. If you need a utility to check what speed and settings your system is running at, look no further than CPU-Z, which reports CPU speed, FSB, memory timings and frequency, CPU voltage and many other things. A widely used benchmark utility is 'SiSoftware Sandra', which includes benchmarks for the CPU, memory, hard disk, CD-ROM, network etc. This is useful to check that your tweaking has actually made some difference!
Some of the benchmarks used to compare overclocking success are: memory bandwidth, clock rates, databases, scientific computing, 3DMark and many more.
Popular stress tests include Prime95, Everest, Superpi, OCCT, IntelBurnTest/Linpack/LinX, SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86.
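The basic idea behind torture tests such as Prime95 can be illustrated with a toy sketch: repeat a heavy computation whose correct answer is known in advance, and treat any deviation as a sign of instability. This illustrates the principle only; it is not how Prime95 itself works:

```python
import math
import time

def stress(seconds: float = 10.0):
    """Repeat a fixed computation and compare against a known-good result."""
    reference = sum(math.sin(i) for i in range(100_000))  # known-good answer
    deadline = time.monotonic() + seconds
    passes = 0
    while time.monotonic() < deadline:
        result = sum(math.sin(i) for i in range(100_000))
        if result != reference:  # any mismatch suggests unstable hardware
            return False, passes
        passes += 1
    return True, passes

stable, n = stress(seconds=1.0)
print("stable" if stable else "UNSTABLE", "after", n, "passes")
```

Real stress tests use much heavier workloads (large FFTs, for example) and run for hours or days, for exactly the fault-coverage reasons discussed in the next section.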

V. a few considerations

There are several considerations when overclocking. First is to ensure that the component is supplied with adequate power to operate at the new clock rate. However, supplying the power with improper settings or applying excessive voltage can permanently damage a component.
A few considerations that one should always be sure of putting on the check list are mentioned in following sub-sections.

A. Cooling

All electronic circuits produce heat generated by the movement of electrical current. As clock frequencies in digital circuits and the applied voltage increase, so does the heat generated by components running at the higher performance levels. The relationship between clock frequency and thermal design power (TDP) is linear. However, there is a limit to the maximum frequency, called a "wall". To overcome this, overclockers raise the chip voltage to increase the overclocking potential. The relationship between chip voltage and TDP is worse than linear: dynamic power scales with the square of the voltage, and as the chip warms, resistance and leakage increase further. This increased heat requires effective cooling to avoid damaging the hardware. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Because most stock cooling systems are designed for the amount of power produced during non-overclocked use, overclockers typically turn to more effective cooling solutions, such as powerful fans, larger heatsinks, heat pipes and water cooling. Size, shape and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity but is expensive. Aluminium is more widely used; it has poorer thermal conductivity but is significantly cheaper than copper. Heat pipes are commonly used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost.
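The frequency and voltage relationships above follow from the standard CMOS dynamic-power model, P ≈ C·V²·f: power grows linearly with frequency but with the square of voltage. A rough sketch, where the capacitance constant is arbitrary and purely illustrative:

```python
def dynamic_power(c: float, v: float, f_mhz: float) -> float:
    """Standard CMOS dynamic-power model: P ~ C * V^2 * f (arbitrary units)."""
    return c * v**2 * f_mhz

stock = dynamic_power(1.0, 1.525, 2400)  # stock Pentium 4: 1.525 V at 2.4 GHz
oc = dynamic_power(1.0, 1.600, 3000)     # overvolted overclock: 1.6 V at 3.0 GHz
print(f"power increase: {oc / stock:.2f}x")  # a 25% clock bump costs ~38% more power
```

This is why a modest voltage bump alongside a frequency increase can demand disproportionately more cooling than the frequency increase alone.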
Some of the most common cooling mechanisms and devices used are: air cooling, liquid submersion cooling, waste heat reduction, conductive and radiative cooling, spot cooling, passive heat sink cooling, active heat sink cooling, Peltier (thermoelectric) cooling, water cooling, heat pipes, phase-change cooling, liquid nitrogen, liquid helium, soft cooling, undervolting, integrated chip cooling technology, use of rounded cables, use of exotic thermal conduction compounds, heat sink lapping and more.

B. Stability and functional correctness

As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.
In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor. Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected. A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data, and may not detect faults in other operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected. To further complicate matters, in process technologies such as silicon on insulator, devices display hysteresis—a circuit's performance is affected by the events of the past—so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another, even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs.
In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest. The hope is that any functional-correctness issues with the overclocked component will show up during these tests, and if no errors are detected during the test, the component is then deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "Prime95 12 hours stable" for a test run using Prime95 for 12 hours.

VI. limitations

The utility of overclocking is limited for a few reasons:
  • Personal computers are mostly used for tasks which are not computationally demanding, or which are performance-limited by bottlenecks outside of the local machine. For example, web browsing does not require a high performance computer, and the limiting factor will almost certainly be the bandwidth of the Internet connection of either the user or the server. Overclocking a processor will also do little to help increase application loading times as the limiting factor is reading data off the hard drive. Other general office tasks such as word processing and sending email are more dependent on the efficiency of the user than on the performance of the hardware. In these situations any performance increases through overclocking are unlikely to be noticeable.
  • It is generally accepted that, even for computationally-heavy tasks, clock rate increases of less than ten percent are difficult to discern. For example, when playing video games, it is difficult to discern an increase from 60 to 66 frames per second (FPS) without the aid of an on-screen frame counter. Overclocking a processor will rarely improve gaming performance noticeably, as the frame rates achieved in most modern games are usually bound by the GPU at resolutions beyond 1024x768. One exception to this rule is when the overclocked component is the bottleneck of the system, in which case the greatest gains are seen.

VII. Advantages

Here comes the most important question: what will we get in return for so much hustle and bustle? Let's have a look at a few advantages that overclocked machines can provide:
  • The user can, in many cases, purchase a lower performance, cheaper component and overclock it to the clock rate of a more expensive component.
  • Higher performance in games, encoding, video editing applications, and system tasks at no additional expense, but at an increased cost for electrical power consumption. Particularly for enthusiasts who regularly upgrade their hardware, overclocking can increase the time before an upgrade is needed.
  • Some systems have "bottlenecks," where small overclocking of a component can help realize the full potential of another component to a greater percentage than the limiting hardware is overclocked.
  • Overclocking can be an engaging hobby in itself and supports many dedicated online communities. The PCMark website is one such site that hosts a leader-board for the most powerful computers to be bench-marked using the program.
  • A new overclocker with proper research and precaution or a guiding hand can gain useful knowledge and hands-on experience about their system and PC systems in general.

VIII. disadvantages

Many of the disadvantages of overclocking can be mitigated or reduced in severity by skilled overclockers. However, novice overclockers may make mistakes while overclocking which can introduce avoidable drawbacks and which are more likely to damage the overclocked components (as well as other components they might affect).
  • The lifespan of a processor may be reduced by higher operating frequencies, increased voltages and heat, although processors rapidly become obsolete in performance due to technological progress.
  • Increased clock rates and/or voltages result in higher power consumption.
  • Even with adequate CPU cooling, the excess heat produced by an overclocked processing unit increases the ambient air temperature of the system case; consequently, other components may be affected. Also, more heat will be expelled from the PC's vents, raising the temperature of the room the PC is in - sometimes to uncomfortable levels.
  • Overclocking has the potential to cause component failure.
  • With sub-zero cooling methods such as phase-change cooling or liquid nitrogen, extra precautions such as foam or spray insulation must be taken to prevent water from condensing on the PCB and other areas. This can cause the board to become "frosted", or covered in frost. While the water is frozen it is usually safe, but once it melts it can cause shorts and other damaging issues.
  • More common than hardware failure is functional incorrectness. Although the hardware is not permanently damaged, this is inconvenient and can lead to instability and data loss. In rare, extreme cases entire filesystem failure may occur, causing the loss of all data.





IX. why overclocking???
Overclocking is a lot of work, its success rate is low, and even when it succeeds the gain is hard to quantify. Hence the question arises: why take so much pain and risk when one can easily upgrade a system using parts available in the market? Because overclocking can be an engaging hobby in itself, and it supports many dedicated online communities. The PCMark website is one such site, hosting a leader-board for the most powerful computers benchmarked using the program. In addition, overclocking provides higher performance in games, encoding, video editing applications and system tasks without paying any additional fee. Overclockers have a dedicated website, www.hwbot.com, where they can submit their scores and get a rank.

X. conclusion

Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation". It may not involve the adrenaline-rushing stunts of real extreme sports, but the geeks' 'XTREME' sport called overclocking is second to none when it comes to passion, skill, excitement and triumph.
And the rest, the how and the why, are just the details of this high-adrenaline sport. In the words of a few overclockers, it is the promise of extra performance for free.
Whatever the advantages and disadvantages of overclocking may be, overclockers surely form a very distinct genre of computer geeks. Even back home in India we have a few prominent overclockers, Harshal Pravin Tank and Madhusudan Banik to name two.

XI. bibliography

While preparing this document we referred to the works of the following authors:
1. Wainner, Scott; Richmond, Robert (2003). The Book of Overclocking. No Starch Press.
2. Rabaey, J. M. Digital Integrated Circuits. Prentice Hall.
3. "How to Overclock" – Chillblast (PDF).
4. Anwer, Javed – Times News Network.
