GPU 670M - my settings and usage

c_man
Level 11
Disclaimer: This is what I do, based on my experience. It might not apply to everyone and/or everything. Also, even if there is nothing dangerous involved, you are responsible for any unpleasant outcome.

This will be quite a long read, apologies (yeah, I know, too much Spartacus is not good for my health).

First, let's see what the biggest enemy of the GPU is.

While some consider this to be very high temperature, that is only partly true. It's not exactly the temperature itself (up to a point), but the difference between minimum and maximum temperature. Each time the GPU cools down, microfractures form in the solder of the ball grid array (lead-free solder is good for us, but not so good for the chips; this can get a lot more technical, but I'm not exactly that kind of person, so I'll stop here) and, in time, the connections at GPU level stop working.

This problem can be fixed in several ways. Some people put their cards in the oven to reflow those connections. While this might work, it is not 100% safe. There is also the option of going to a professional with proper equipment. If he does the job right, you will use the card for a long time; if not, it might last only 6 months.

How to prevent this?

Well, make sure that the difference between low and high is not big and, most importantly, that cooling cycles are rare. Limit them as much as possible. I'm not saying you should not use the GPU at full power when needed, but if you game, then game. Do not exit the game every 5 minutes to do something else and let the GPU cool down. Also, if you do not need the extra power, don't stress the GPU for nothing. I will show you what I do. Of course, until I adopted this practice, some cards died on me very fast.

I will use the 670M as an example, since most of you have this card.

Programs:

- NvidiaInspector - download here;
- HWiNFO64 - download here;
- Furmark - download here;
- Heaven benchmark - download here;
- 3Dmark11 - download here;

NvidiaInspector is the overclocking program I use.

HWiNFO64 will give you a lot of info about your system.

Furmark is a GPU stress-test app. DO NOT USE IT FOR LONG PERIODS OF TIME; it can damage your GPU. I only use it to put some load on the GPU and do some initial testing.

Heaven is a nice benchmark that will help us determine some OC limits.

3Dmark11 will help us compare results.

The 670M has 4 performance stages, called P-states. They become active depending on load.

The first one is P12 - minimum power consumption.



This sets the lowest clocks used. As you can see, it's 51/135MHz.

The second one is an intermediate state, P8 - video playback.



The third one is another intermediate state, P1 - balanced 3D performance.



The last one is P0 - maximum 3D performance.



As you can see, the GPU clock is greyed out; you cannot change it directly. But you can change the Shader clock, so let's see what happens.

The default value for the Shader clock is 1240MHz (the truth is that the numbers we see are not 100% accurate, but since we all see the same ones, they are a valid reference and I will work with them).

I'll change that to 660MHz and hit Apply Clocks and Voltage.

You might be looking at your value and find that it bottoms out at 365MHz. Just click Unlock Min (next to the P-states drop-down menu).



Now, during this Windows session, whenever P0 becomes active, the maximum GPU clock will be 330MHz (the Shader clock runs at twice the GPU clock, so 660MHz Shader means 330MHz GPU).
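If you want to double-check what actually got applied, the nvidia-smi tool that ships with the driver can report the active P-state and clocks. Below is a rough sketch of how I'd read it from a script; it assumes nvidia-smi is on your PATH and that your driver answers these query fields (older mobile GeForce drivers may return N/A for some of them). HWiNFO64 shows you the same thing, so this is purely optional.

```python
# quick_check.py - one-shot readout of the current P-state, clocks and temp.
# Assumes nvidia-smi (installed with the NVIDIA driver) is on PATH and that
# the driver supports these query fields; some mobile drivers report "N/A".
import subprocess

FIELDS = "pstate,clocks.gr,clocks.mem,temperature.gpu"

def read_gpu_state():
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        text=True,
    )
    pstate, core, mem, temp = [v.strip() for v in out.strip().split(",")]
    return pstate, core, mem, temp

if __name__ == "__main__":
    pstate, core, mem, temp = read_gpu_state()
    print(f"P-state: {pstate}  core: {core} MHz  memory: {mem} MHz  temp: {temp} C")
```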

If I want to apply this value in the future without starting the app, I have the option to create a shortcut on the Desktop with Create Clocks Shortcut.



If I want to apply this every time Windows starts, there is an option for that on right-clicking the same button.



Remember that for every P-state you will have to make a different shortcut.
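If you ever want to fire several of those per-P-state shortcuts with a single click (or a single Task Scheduler entry), a small script can launch them one after another. This is just a sketch: the shortcut file names below are invented for the example, so point it at whatever Create Clocks Shortcut actually put on your Desktop.

```python
# apply_profiles.py - launch the per-P-state clock shortcuts one by one.
# The shortcut names are hypothetical examples; use the ones NvidiaInspector
# actually created for you. Windows resolves each .lnk and runs its target.
import os
import time

DESKTOP = os.path.join(os.path.expanduser("~"), "Desktop")

SHORTCUTS = [
    "nvidiaInspector P0 clocks.lnk",   # example name only
    "nvidiaInspector P1 clocks.lnk",   # example name only
]

for name in SHORTCUTS:
    path = os.path.join(DESKTOP, name)
    if os.path.exists(path):
        os.startfile(path)  # Windows-only call; opens the shortcut's target
        time.sleep(2)       # give Inspector a moment before the next one
    else:
        print(f"missing shortcut: {path}")
```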

At this point, since P0 is the maximum performance state, this is the one I need to change to OC the card and get more performance (Captain Obvious here). I'll get to this later.

Underclocking

What if I don't want more performance, but more battery life or less heat?

Well, I have 2 performance states that need changing, as the first 2 are already low. I need to change P0 and P1 and, like I've said, I'll have to make shortcuts for each (remember to hit Apply first, Shortcut second). Let's try it.

I'll set P0 to 135MHz. Remember to Apply.



If I open Furmark and start the Burn-in test, the system decides that I need P0 and:



I only do this for a few seconds to trigger P0. To stop Furmark, hit ESC.

If you switch to P1 you will see that it sits at 365MHz. I don't want a higher value there, so I change it to 135MHz as well.



135MHz was just a random value. If I open a 4K video right now, the system activates the P8 state. This means I can go as low as 74MHz with P0 and P1 without any problems: if the system can play 4K video, it can handle most routine stuff on battery. Combined with Battery Saving mode in Power4Gear (tweaked for low brightness, camera and ODD off) and no keyboard lighting, this should give maximum battery time, or minimum heat, with still decent performance.
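If you want to confirm which P-state the driver really picks while the video plays (and how warm the card gets), something like this rough poller does the job. Same assumption as the earlier sketch: nvidia-smi on PATH and a driver that answers these queries. Stop it with Ctrl+C and it prints a short summary.

```python
# watch_pstate.py - sample P-state, core clock and temperature every 5 seconds.
# Start it, play the 4K clip (or whatever workload), then press Ctrl+C.
import subprocess
import time

QUERY = ["nvidia-smi", "--query-gpu=pstate,clocks.gr,temperature.gpu",
         "--format=csv,noheader,nounits"]

samples = []
try:
    while True:
        pstate, core, temp = [v.strip() for v in
                              subprocess.check_output(QUERY, text=True).split(",")]
        samples.append((pstate, int(core), int(temp)))
        print(f"{pstate}  {core} MHz  {temp} C")
        time.sleep(5)
except KeyboardInterrupt:
    if samples:
        temps = [t for _, _, t in samples]
        states = sorted({p for p, _, _ in samples})
        print(f"states seen: {states}  temp min/max: {min(temps)}/{max(temps)} C")
```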

Don't forget to downclock the Memory as well, though in the P0 state with the current driver it does not go lower than 1500MHz.

When you want the default values back, just click Apply Defaults for every P-state.

Overclocking

Let's see how I OC this.

Now, you should really run Furmark for the first time with stock clocks, to compare temps with other members. Use the Burn-in benchmark at 1920x1080 for 15 minutes. I get about 75°C at a room temperature of 33°C. I've seen temps above 90°C reported on this forum. If you get those, please solve the cooling problem first and only then OC.

If everything is fine, run Furmark again with the Burn-in test, resolution 1920x1080 and 8xMSAA, for 10 minutes. Note the temperature.
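If you'd rather have the peak temperature recorded automatically instead of watching HWiNFO64, here is a small sketch that polls the GPU temperature for the length of the run and saves the maximum. The file name it writes is just an example, so the comparison sketch further down can pick it up.

```python
# log_max_temp.py - record the hottest GPU reading during a stress run.
# Usage: start the Furmark burn-in, then run  python log_max_temp.py stock
# (or "oc" later). Same nvidia-smi assumptions as the earlier sketches.
import subprocess
import sys
import time

def max_temp_over(minutes: float, interval: float = 5.0) -> int:
    """Poll the GPU temperature for `minutes` and return the maximum seen."""
    deadline = time.time() + minutes * 60
    peak = 0
    while time.time() < deadline:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=temperature.gpu",
             "--format=csv,noheader,nounits"], text=True)
        peak = max(peak, int(out.strip()))
        time.sleep(interval)
    return peak

if __name__ == "__main__":
    label = sys.argv[1] if len(sys.argv) > 1 else "stock"
    peak = max_temp_over(10)                    # match the 10 minute Furmark run
    print(f"{label}: peak {peak} C")
    with open(f"peak_{label}.txt", "w") as f:   # example file name
        f.write(str(peak))
```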

I've said that for maximum performance the target is P0 so this is what I need to change.

I increase the value in 20MHz steps starting from 620 (remember, we actually change the Shader clock, so there it will be 40MHz steps). After every increase I start Heaven to check for artifacts; I don't use Heaven for anything else. Artifacts mostly look like fireworks or something similar. When you see them, stop and decrease the value.

Do the same for Memory clock.
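To keep track of the ladder of values to try, here is a trivial sketch that prints the core/shader pairs from 620MHz upward in 20MHz steps (the Shader moves in 40MHz steps because it runs at twice the core clock). The same idea applies to the Memory steps; just change the numbers.

```python
# oc_steps.py - print the overclocking ladder to work through with Heaven.
START_CORE = 620   # MHz, stock P0 core clock on the 670M
STEP = 20          # MHz added to the core per attempt
ATTEMPTS = 10      # how far up the ladder to print

for i in range(ATTEMPTS + 1):
    core = START_CORE + i * STEP
    shader = core * 2   # the value you actually type into the Shader clock box
    print(f"attempt {i}: Shader {shader} MHz -> core {core} MHz, "
          f"run Heaven and watch for artifacts")
```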

Some GPUs can OC more, some less. Don't worry, it's normal. My card runs stable above 755MHz GPU / 1650MHz Memory, but I've set that as the top mark and so far I have used it only with Max Payne 3. With other games I run much lower clocks; for example, I play Inversion at 365/1500MHz.

Adjust power as needed and remember to keep the difference between minimum and maximum temperature as small as possible, whenever you can.

After you have set the best OC values with Heaven, run Furmark a second time with the Burn-in test at 1920x1080 and 8xMSAA. Let it run for 10 minutes and compare the result with stock. If it's within ~5°C more, it's fine. If it's more than 10°C above and you still need those values, do an in-game temperature check.
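And a tiny helper that applies that rule of thumb to the two peak readings, assuming you saved them with the earlier logging sketch (peak_stock.txt and peak_oc.txt are the example file names it wrote).

```python
# compare_peaks.py - compare the stock and overclocked peak temperatures.
def verdict(stock_peak: int, oc_peak: int) -> str:
    delta = oc_peak - stock_peak
    if delta <= 5:
        return f"+{delta} C over stock - fine, keep the clocks"
    if delta <= 10:
        return f"+{delta} C over stock - borderline, keep an eye on it"
    return f"+{delta} C over stock - do an in-game temp check or back off"

if __name__ == "__main__":
    stock = int(open("peak_stock.txt").read())   # written by log_max_temp.py
    oc = int(open("peak_oc.txt").read())
    print(verdict(stock, oc))
```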

In the end, run 3DMark 11 with basic settings and check the score. This is your maximum-performance P0 state for the most demanding games. You can compare scores here to see how close you are to the next best GPU.

With NvidiaInspector you should now have on the Desktop all the shortcuts you need for quick access to any setting without starting the app. Remember that every P-state needs its own shortcut.

I know there are ways to force a P-state or to run several shortcuts at once, but I like the dynamic behaviour and the control that individual shortcuts give.

UltimaRage
Level 9
Common G80 card (generally the 8800GTS and the 8800GTX): 60°C idle, 90°C load. 30°C difference.

660M: 50°C idle, 65°C load. 15°C difference.

Recognize that things have changed, please.

At this point, I know the idle and load temperatures of the 660M; I don't need to know about the G75 for that. Temperature is what wears electronics out, more specifically quick temperature changes. Let's just talk about the facts, so we can actually come to a conclusion that is relevant to today's technology. As you should know, the G80 cores were notorious for the solder wearing out, like the early Xbox 360s were. I haven't heard of people having to use the oven trick for GPUs much anymore... because they simply don't run as hot as they used to.

Newer-model Xbox 360s are far more reliable. By shrinking the transistor size of the CPU and GPU in the machine, they produce less heat.

You are relying too much on anecdotal evidence... and old anecdotal evidence at that.

I would much rather look at the facts about how graphics cards actually degrade.

Properly made electronics are made to last for decades.

The G80 and early G92 cards had naturally high failure rates. Everything isn't equal.

2008 article:

http://www.theinquirer.net/inquirer/news/1004378/why-nvidia-chips-defective

"Modern chips consume electricity in an uneven manner, as different parts of the chip use power at different rates. Sometimes parts of the chip are never used at all for a given workload. If you have a modern GPU and don't game or are smart enough to not run Vista, you will likely never touch the transistors that do all the 3D work. Think about it this way, there are hot spots on the chip as well as cold spots, it is uneven and changing constantly."


That's from the article. When talking about this subject, you must also consider things like that. You are biased because you are a repair tech; you must recognize that as well, correct?

It just isn't good to cause new buyers unnecessary worry about something that isn't as big of an issue as it once was.
G75VW-BBK5
i7
8 GB
660m

c_man
Level 11
Not all 660Ms run at 65°C under full load. I've written about this several times. I've had non-defective laptops with most of the new mid-range Kepler cards (650, 660, 640) that would hit 90°C and over under full system load. It's not the GPU itself, but the poor cooling design. At some point with those there might be a 45°C difference. Let's talk about real-life stuff. I know people just presume Kepler runs extremely cool. The card itself might, but a card does not make a laptop on its own. And right now I know nothing about Kepler's behaviour over time. It might be extremely good, or not. What I know is what happened in the past, and that now I can at least pay attention to this minor thing. It's not that hard to do.

I know what happens and I know how it's fixed. It's as simple as that. I am not biased at all. That is a decent article to get an idea of what's going on. You should also read the comments and related material. Instead of going into all that tech, I keep an eye on one very simple thing that I know of. The article is far too complex and covers too much. While some parts relate to this problem, others do not, and if those cause a failure, it cannot be repaired as simply, or maybe at all.

You don't have to believe me, and I don't have to convince you. I've stated from the start of the OP that this is what I do, and the reasons are also known.

You can't really convince me either, as I have my own first-hand experience from the past (and don't imagine it was years ago), and I can't presume anything about today's tech, since you can't tell how reliable a product is based on limited experience and paper specs. Maybe it has a weakness of some sort, I don't know. Maybe it doesn't. I'm not paranoid enough to note every shutdown or anything like that; I'll just keep an eye on it and, instead of exiting a game 500 times a day, restrain myself to about 10 😄 And hope that this GPU won't die in 2-3 years as others did. If they fixed it, there's nothing to lose. If they didn't, or if it's something new, maybe my effort will be for nothing, but at least I did something.

About high temps: people think that old tech runs like lava and new tech like ice. What do you think the max temp of my 8800 GTX was while gaming?

mrwolf
Level 10
I found out through some more reading and a technician friend of mine that this issue is a lot more prevalent in GPUs that use lead-free solder, as this is what becomes brittle easily and can develop tiny cracks that affect the GPU.
If our GPUs use leaded solder, then apparently this reduces the issue big time and makes the GPU last 10x as long as with lead-free solder.

Just thought I would let you guys know.

Does anyone know what kind of solder our 670M uses? I'm pretty sure it's of the highest quality, but just checking.


c_man
Level 11