
GPU 670M - my settings and usage

c_man
Level 11
Disclaimer: This is what I do, based on my experience. It might not apply to everyone and/or everything. Also, even if there is nothing dangerous involved, you are responsible for any unpleasant outcome.

This will be quite a long read, apologies (yeah, I know, too much Spartacus is not good for my health).

First, let's see what the biggest enemy of the GPU is.

While some consider this to be very high temperature, that is only partly true. It's not exactly the temperature itself (up to a point), but the difference between minimum and maximum temperature. As the GPU heats up and cools down, microfractures develop in the solder balls of the ball grid array (lead-free "eco" solder is good for us, but not so good for the joints; this can get more technical, but I'm not exactly that kind of person so I'll stop here) and, in time, the connections at GPU level stop working.
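The intuition above, that the size of the temperature swing matters more than the peak temperature, can be sketched with a Coffin-Manson-style fatigue relation, a standard model for solder-joint thermal fatigue. The constant and exponent below are illustrative placeholders, not measured values:

```python
# Illustrative Coffin-Manson-style sketch: the number of thermal cycles a
# solder joint survives scales roughly as (delta_T)^-k. The constants c and k
# are made-up placeholders chosen only to show the shape of the relationship.
def cycles_to_failure(delta_t_c: float, c: float = 1e9, k: float = 2.0) -> float:
    """Rough relative estimate of how many thermal cycles a joint survives."""
    if delta_t_c <= 0:
        raise ValueError("temperature swing must be positive")
    return c * delta_t_c ** (-k)

# With k = 2, a 20 degC idle-to-load swing survives about 9x more cycles
# than a 60 degC swing, which is why limiting the swing (and the number of
# cooldown cycles) is the point of this whole guide.
ratio = cycles_to_failure(20) / cycles_to_failure(60)
```

This is only a qualitative sketch: real lifetimes depend on the solder alloy, the joint geometry, and the cycle rate, none of which are in the post.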

This problem can be fixed in several ways. Some put their cards into the oven to remake those connections. While this might work, it is not 100% safe. There is also the option of going to a professional with proper reflow equipment. If he does the job right, you will use the card for a long time. If not, it may only last 6 months.

How to prevent this?

Well, make sure that the difference from low to high is not big and, most importantly, that cooling cycles are rare. Limit them as much as possible. I'm not saying you should not use the GPU at full power when needed, but if you game, then game. Don't exit the game every 5 minutes to do something and let the GPU cool down. Also, if you do not need the extra power, don't stress the GPU for nothing. I will show you what I do. Of course, until I adopted this practice, some cards died on me very fast.

I will use the 670M as an example since most of you have this card.

Programs:

- NvidiaInspector - download here;
- HWiNFO64 - download here;
- Furmark - download here;
- Heaven benchmark - download here;
- 3Dmark11 - download here;

NvidiaInspector is the OC program I use.

HWiNFO64 will give you a lot of info about your system.

Furmark is a GPU stress app. DO NOT USE IT FOR LONG PERIODS OF TIME; it can damage your GPU. I only use it to put some load on the GPU and do some initial testing.

Heaven is a nice benchmark that will help us determine some OC limits.

3Dmark11 will help us compare results.

The 670M has 4 performance stages, or P-states. They become active depending on load.

The first one is P12 - minimum power consumption.



This will set lowest clocks used. As you can see, it's 51/135MHz.

The second one is an intermediate state, P8 - video playback.



The third one is another intermediate state, P1 - balanced 3D performance.



The last one is P0 - maximum 3D performance.
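The four states described above can be summarized in a small table. The clocks below are only the ones the post actually states (P12's 51/135 MHz, P1's 365 MHz core, and P0's stock 620 core / 1500 memory); the P8 clocks are not given, and the load-to-state mapping is a simplified assumption about driver behaviour:

```python
# Summary of the 670M's four P-states as described in the post.
# Clocks in MHz; None means the post does not state the value.
P_STATES = {
    "P12": {"role": "minimum power consumption", "gpu_mhz": 51,   "mem_mhz": 135},
    "P8":  {"role": "video playback",            "gpu_mhz": None, "mem_mhz": None},
    "P1":  {"role": "balanced 3D performance",   "gpu_mhz": 365,  "mem_mhz": None},
    "P0":  {"role": "maximum 3D performance",    "gpu_mhz": 620,  "mem_mhz": 1500},
}

def state_for_load(load: str) -> str:
    """Toy mapping from workload type to the P-state the driver tends to pick."""
    return {"idle": "P12", "video": "P8", "light_3d": "P1", "heavy_3d": "P0"}[load]
```

The driver picks the state on its own based on load, which is why the rest of the guide edits the clocks inside each state rather than forcing a state.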



As you can see, the GPU clock is greyed out. You cannot change it directly, but you can change the Shader clock. So let's see what happens.

The default value for Shader clock is 1240MHz (the truth is that the numbers we see are not 100% accurate, but since we all see them, the reference is valid and I will work with it).

I'll change that to 660MHz and hit Apply Clocks and Voltage.

You might be looking at your value and see that it bottoms out at 365MHz. Just click Unlock Min (next to the P-states drop-down menu).



Now, during this Windows session, when P0 becomes active, the maximum GPU clock will be 330MHz.
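The reason setting the shader clock to 660 MHz lands the core at 330 MHz is that on Fermi-era cards like the 670M the shader domain runs at exactly twice the core clock, so the core clock always follows at half the shader value:

```python
def gpu_clock_from_shader(shader_mhz: float) -> float:
    """Fermi-era GPUs run the shader domain at 2x the core clock,
    so the core clock follows the shader clock at half its value."""
    return shader_mhz / 2

assert gpu_clock_from_shader(660) == 330    # the example in the post
assert gpu_clock_from_shader(1240) == 620   # stock: 1240 shader -> 620 core
```

This is also why the OC section later uses 40 MHz shader steps to get 20 MHz core steps.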

If I want to access this value in the future without starting the app, I have the option to create a shortcut on Desktop with Create Clocks Shortcut.



If I want to apply this every time Windows starts, there is an option on right-clicking the same button.



Remember that for every P-state you will have to make a different shortcut.
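Under the hood, the shortcuts Create Clocks Shortcut generates are just command lines against nvidiaInspector.exe. The flag syntax sketched below (-setGpuClock:[gpu index],[P-state],[MHz] and the matching -setMemoryClock) is from memory and may differ between NvidiaInspector versions, so treat the builder as a hypothetical illustration and check the Target field of a shortcut the tool actually generates:

```python
# Sketch of the command line a "Create Clocks Shortcut" shortcut points at.
# The flag names and their argument order are assumptions; verify against a
# shortcut generated by your version of NvidiaInspector before relying on it.
def shortcut_target(exe, pstate, gpu_mhz, mem_mhz=None):
    """Build a plausible shortcut Target string for one P-state."""
    parts = ['"{}"'.format(exe), "-setGpuClock:0,{},{}".format(pstate, gpu_mhz)]
    if mem_mhz is not None:
        parts.append("-setMemoryClock:0,{},{}".format(pstate, mem_mhz))
    return " ".join(parts)

# One shortcut per P-state, as the post says:
p0 = shortcut_target(r"C:\Tools\nvidiaInspector.exe", 0, 330, 1500)
p1 = shortcut_target(r"C:\Tools\nvidiaInspector.exe", 1, 135)
```

The one-shortcut-per-P-state rule falls out of this naturally: each command line only carries the clocks for a single state.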

At this point, since P0 is the maximum performance state, this is the one I need to change to OC the card and get more performance (Captain Obvious here). I'll get to this later.

Underclocking

What if performance is not what I want, but more battery life or less heat?

Well, there are 2 performance states I need to change, since the first 2 are already low: P0 and P1. Like I've said, I'll have to make shortcuts for each (remember to hit Apply first, Shortcut second). Let's try it.

I'll set P0 to 135Mhz. Remember to Apply.



If I open Furmark and start Burn-in test, the system will consider that I need P0 and:



I only do this for a few seconds to trigger P0. To stop Furmark, hit ESC.

If you switch to P1 you will see that it sits at 365MHz. I don't want a higher value there, so I change it to 135MHz.



135MHz was just a random value. If I open a 4K video right now, the system will activate the P8 state. This means I can go as low as 74MHz on P0 and P1 without any problems. If the system can play 4K video, it can handle most routine tasks on battery. Combined with Battery Saving mode in Power4Gear (tweaked for low brightness, camera and ODD off) and no keyboard lighting, this should give maximum battery time, or minimum heat with still-decent performance.
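The underclocking recipe above boils down to a couple of clock profiles. The 74 and 135 MHz values and the stock 620/365 defaults are taken from the post; the profile names themselves are just an assumed convention for organizing the shortcuts:

```python
# Hypothetical clock profiles built from the values in the post.
# "battery" pulls both 3D states (P0/P1) down to the 74 MHz floor that still
# plays 4K video; "quiet" uses the 135 MHz example; "stock" restores defaults.
PROFILES = {
    "battery": {"P0": 74,  "P1": 74},
    "quiet":   {"P0": 135, "P1": 135},
    "stock":   {"P0": 620, "P1": 365},
}

def clocks_for(profile: str) -> dict:
    """Return the per-P-state core clocks (MHz) for a named profile."""
    return PROFILES[profile]
```

In practice each profile maps to two desktop shortcuts, one for P0 and one for P1, since NvidiaInspector applies clocks per P-state.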

Don't forget to downclock the Memory as well, though in the P0 state, with the current driver, it does not go lower than 1500.

When you want the default values back, just click Apply Defaults for every P-state.

Overclocking

Let's see how I OC this.

Now you should really run Furmark for the first time with stock clocks to compare temps with other members. Use the Burn-in benchmark at 1920x1080 for 15 minutes. I get about 75°C at a room temperature of 33°C. I've seen temps above 90°C on this forum. If you have those, please solve the cooling problem first, then OC.

If everything is fine, run Furmark again with Burn-in test, Resolution 1920x1080 and 8xMSAA for 10 min. Note the temperature.

I've said that for maximum performance the target is P0 so this is what I need to change.

I will use 20MHz steps to increase the value from 620 up (remember we need to change the Shader clock, so there it will be 40MHz steps). After every increase I start Heaven to check for artifacts. I don't use Heaven for anything else. Artifacts mostly look like fireworks or something similar. When you see them, stop and decrease the value.

Do the same for Memory clock.
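The stepping procedure above is a simple incremental search: raise the clock one step at a time and back off as soon as the stress test artifacts. A sketch, where shows_artifacts stands in for a manual Heaven run (it is a placeholder predicate, not a real API):

```python
# Sketch of the incremental OC search described above: raise the core clock
# in fixed steps (20 MHz core = 40 MHz shader on this card) and stop at the
# last clock that did NOT show artifacts in the stress test.
def find_max_stable(start_mhz, step_mhz, limit_mhz, shows_artifacts):
    """Return the highest tested clock below limit_mhz that passed the test."""
    clock = start_mhz
    while clock + step_mhz <= limit_mhz and not shows_artifacts(clock + step_mhz):
        clock += step_mhz
    return clock

# Example: pretend this particular chip starts to artifact above 760 MHz.
best = find_max_stable(620, 20, 900, lambda mhz: mhz > 760)
```

The same loop applies to the memory clock; only the step size and the stability test change.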

Some GPUs can OC more, some less. Don't worry, it's normal. My card runs stable above 755 GPU/1650 Memory, but I've set this as the top mark and so far I have used it only with Max Payne 3. With other games I run much lower clocks; for example, I play Inversion at 365/1500MHz.

Adjust power as needed, and remember to keep the difference between minimum and maximum temperature as small as possible, when you can.

After you have set the best OC values with Heaven, run Furmark a second time with the Burn-in test at 1920x1080 and 8xMSAA. Let it run for 10 minutes and compare results with stock. If it's within ~5°C more, it's fine. If it's more than 10°C above stock and you still need those values, do an in-game temperature check.
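The acceptance rule above can be written down as a tiny decision function. The 5°C and 10°C thresholds are the ones from the post; the middle "borderline" label is an assumed name for the gap between them:

```python
# Sketch of the post's rule of thumb for judging the OC'd Furmark run
# against the stock run: within ~5 degC is fine; more than ~10 degC means
# the new clocks need an in-game temperature check before trusting them.
def judge_oc_temps(stock_c: float, oc_c: float) -> str:
    delta = oc_c - stock_c
    if delta <= 5:
        return "fine"
    if delta <= 10:
        return "borderline"
    return "verify in-game"

assert judge_oc_temps(75, 79) == "fine"           # +4 degC: acceptable
assert judge_oc_temps(75, 87) == "verify in-game" # +12 degC: check in-game
```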

In the end, run 3DMark 11 with basic settings and check the score. This is your maximum-performance P0 state for the most demanding games. You can compare scores here to see how close you are to the next best GPU.

Using NvidiaInspector, you should now have on the Desktop the shortcuts you need for quick access to any setting without starting the app. Remember that for every P-state you need a shortcut.

I know there are ways to force a P-state or to run several shortcuts at once, but I like the dynamic behaviour and the control that individual shortcuts give.

c_man
Level 11
It's up to you. The info is easy to find, I know people in service, I know my laptops. I really don't have to convince anyone; it would take a lot of effort and I gain nothing from it. Look for topics like this if you want: http://www.overclockers.com/forums/showthread.php?t=606658, you will find very different cards. Or use any other keywords I gave so far.

PS. I know how Google works, the keywords are there for a reason.

PS2. I was hoping to get more feedback about the changes done for GPU to use less battery power.

UltimaRage
Level 9
The G80 cores got FAR, FAAAAR hotter than ANY GPU currently out.... Not really conclusive in regards to current GPU tech, which has gotten vastly better since then.

How does my 8800GTS G80 still work fine, 4-5 years later?


My issue is that you are passing this subject off as being conclusive, which it is not. There are many factors. Low quality solder has a lower melting point than higher quality solder. This is a fact.

Again, if this is truly a great way to extend hardware life, then it should be no big deal to find someone from AMD, Intel, or Nvidia saying something about it.

It is frustrating how you ignore that 0 to 50 C when starting up and vice versa when shutting down will likely have far more of an impact because the temperature changes are far more drastic, than say idle to load which is commonly 50 to 65-70 C.

Showing an instance of someone's GPU going out when we don't know the usage scenario doesn't make sense either. We don't know if this person gamed for 8 hours a day, or had gaps between gaming from alt-tabbing.

All I am saying, is don't say it as if it is fact.

There are too many variables as GPU usage is never the same - such as when playing a game with tessellation turned off, or playing a DX9 game.

I am a game developer, and if this was a truly large issue (as we go from idle to load CONSTANTLY), then we would see more evidence for this with such a large sample size.


I have always kept my machines on 24/7, because if this is a problem, then the zero to idle temps would have far more of an effect than idle to load.
G75VW-BBK5
i7
8 GB
660m

UltimaRage wrote:
[quoted in full above]


I don't care about shutting down since that happens once a day. OK, maybe for some, 5 times a day. During the same day you might have 50 cooling cycles. So what is 5 compared to 50? Would it make sense to keep it up 24/7? I guess not, you gain very little.

I know my routine. I know other people's routine. I know input from service.

No manufacturer will shoot itself in the foot over this. If Nvidia released such info, how many would stop buying their products? Or do you think that AMD, out of the goodness of their hearts, would jump in to support them? Not going to happen.

You have one card. I personally know of many cases, and I looked into it because I wanted to prevent this as much as possible. That's about it. I don't have to convince anyone. I mean, it's not my discovery; it's more or less common knowledge once you've had your first dead card.

mrwolf
Level 10
How hot do the G80's go..?

I think c_man is just trying to point out some issues that have actually happened to a lot of people, so I guess it's not fully conclusive, nor is it inconclusive. But at the end of the day it makes sense that rapid cooling, many many times over, can take its toll on the hardware as the years progress.


UltimaRage
Level 9
G80's can go as hot as 90 C for the normal operating temperature during load.

UltimaRage wrote:
G80's can go as hot as 90 C for the normal operating temperature during load.


This is a 2006 release card, right? What solder does it have?

UltimaRage
Level 9
Incredible claims require incredible proof.

Again, it is rather short-sighted to pass this off as conclusive.

As I said above....

"I am a game developer, and if this was a truly large issue (as we go from idle to load CONSTANTLY), then we would see more evidence for this with such a large sample size."

c_man wrote:
This is a 2006 release card, right? What solder do you have?


Whatever solder Nvidia used of course.

Card was released in Dec 2007.

Please explain how the 8800GTS has lasted 5 years, as it is a card that gets far hotter than anything currently out.

You may have had this experience in the past, but to think that the situation wouldn't improve as operating temps go down doesn't make sense.

Things change. The tech world is not static.

c_man
Level 11
Please don't be offended by this. It's a public place; I am here to exchange ideas and experiences, and no one has to be right about anything. We all have our own background. It's not a contest.

Please explain how I, the people I know, and the people my friend from the service shop knows all had this happen with a bunch of cards? I cannot explain one card versus many.

I know what you're thinking: "hey, I have a desktop card that has been working for years, no way this is true. What do I care if a lot of people had it, or if some guys decided to build equipment to fix it. Mine still works, so I'm right."

I am you most of the time. When I hear people complain about something I've never experienced, I say "those guys must have done something wrong". Well, sometimes they didn't, or they just didn't know about it.

Why don't you just look it up on the Internet and talk to those people, if you have never heard of this? Also notice the cards: mostly mobile Nvidia. Yours doesn't exactly fit. Or maybe you got lucky, who knows; it's just one card. Again, there might be a lot more still working, but focus mainly on gaming laptops with Nvidia mobile cards (Alienware, Clevo and so on). Have you ever considered that it might be that very high temperature that kept yours going? There is little lab info on this; I guess no one wants to start a fire. I wish you were right. I wish my laptop and my friends' laptops never get this, or never had it, since these GPUs are so damn expensive, and that's if you are lucky enough to have an MXM card. For some it's even more complicated.

I can't say much about desktops, I've stopped using them years ago.

In the past the solder was different. The eco (lead-free) stream is more of a recent thing, and everyone is free to use whatever they want, as long as it's not harmful. Yes, tech changes, and for the better, but all this Eco-this, Eco-that has a price, just as non-eco did. Nothing is perfect.

I'm just saying there is a problem. Everyone thinks high temperature is the only thing that matters. It's not. Will it hit everyone on the planet? Most likely no. Do you fit the pattern of exiting games every 5 minutes or so? If you do, then please consider the problem. Do I really care if you don't? No. I'm not an alarmist here; I don't care about shutdowns. How many can one have in a day anyway?

It's strange how interested people are in this. Hence my saying that no one will release anything official about it: it's just too negative for this kind of market, and that kind of image damage is hard to repair afterwards.

In the end, I apologize for anything that sounded wrong. I am Latin; we light up fast, as you may know.

UltimaRage
Level 9
The issue is that you are ignoring a lot of what I am saying. I am not trying to argue, in fact I am trying to bring more to the conversation by adding things that you haven't mentioned.

Extreme temperature changes would cause the contraction and expansion to a more extreme degree than less extreme temperature changes. That's just what happens with materials.

Also... my experience isn't just because of having that card for so long.

As I said....


"I am a game developer, and if this was a truly large issue (as we go from idle to load CONSTANTLY), then we would see more evidence for this with such a large sample size."

Unless developers are getting some special hardware (they aren't), I don't see how you can dismiss my experience in that regard as well.

Also, the G75 is new. It runs cooler than any other laptop in the past has, for the most part.

It's just strange that you won't consider how tech has changed and how that changes the outcome of mechanisms like solder fatigue and electromigration. I know the problem is real; that isn't what I am disputing. I also know many developers who only upgrade when they have to, for development purposes, like going from DX10 cards to DX11 cards.

All developers consistently go from idle to load hundreds of times per day, and that is not an exaggeration. With mission critical parts like a GPU, we can't have room for error.

For instance, DX10 cards versus DX11 cards: pretty much all higher-end DX11 cards run cooler than most higher-end DX10 cards, because the transistor process size (in nm) goes down with each GPU generation.

c_man
Level 11
At this point there is nothing we know about the G75. I'll be able to say whether it's better or not after 2-3 years of usage. During this time I'm going to take care of it as best I can, with the info I have. I'm not going to overdo it, but I'll keep some things in mind.

I am not saying that you are wrong and I am right. What I know is that this problem is real. In the service shop there are lots of cards that died because of this. I had it. My friends had it. There are other people on the internet. There are machines designed to fix this. So...