R9 280x DirectCU II Top Artifacts!!

d1versify
Level 7
Hello, I just upgraded from a 7870 to the R9280X-DC2T-3GD5.

I followed these steps for drivers

1. Ran the Catalyst Control Center setup from C:\AMD and did a custom uninstall, removing everything
2. Restarted computer
3. Ran Driver Fusion and deleted everything from AMD
4. Restarted computer
5. Installed latest 13.12 R9 drivers from AMD site
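For anyone wanting to double-check that step 3 really cleaned everything out, a tiny script can verify that the usual AMD leftover folders are gone before reinstalling. This is just an illustrative sketch; the paths below are common defaults and may differ on your system:

```python
import os

# Typical locations AMD driver installs leave behind (illustrative defaults;
# adjust for your own system).
LEFTOVER_PATHS = [
    r"C:\AMD",
    r"C:\Program Files\AMD",
    r"C:\Program Files (x86)\AMD",
]

def find_leftovers(paths):
    """Return the subset of paths that still exist after the uninstall."""
    return [p for p in paths if os.path.exists(p)]

if __name__ == "__main__":
    leftovers = find_leftovers(LEFTOVER_PATHS)
    if leftovers:
        print("Leftover driver folders found:")
        for path in leftovers:
            print("  " + path)
    else:
        print("No leftover AMD folders detected.")
```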

I get artifacts (shapes, lines, squares) flashing all over the screen in Chrome, in Dota 2, and in Planetside 2. I haven't tested more games, but I guess it will be the same.
Does the card need a replacement? My system: Corsair CX600, AMD FX-8350, Corsair Vengeance 2x4 GB 1600 MHz.
I read somewhere about a GPU BIOS update. I ran the ASUS GPU Tweak software that came on the CD and tried to update something, but got a "ROM programming fail" message. It then seemed to install the update anyway and forced me to restart the computer.
But I still get artifacts.
What do you suggest?

What are your idle and load temps? Please also mention the GPU clock.

RicRic
Level 7
Yes, and the VRM temperatures, please. As we all know, the VRM chips are the main root cause of the huge failure rate of the ASUS R9 series (and even the 79XX series). Not necessarily due to heat, but also due to bad-quality chips, or chips driven beyond the manufacturer's limits.

PS: For those who don't know, you can monitor the VRM temps with GPU-Z on its Sensors tab.

Is there anyone here familiar with a hardware voltmod?

I have two 280X DC2Ts; one has artifact problems at frequencies of 1400 MHz and above with 1.6 V on the VRAM.

I've tried almost everything. Really.

1. Tried to speed up the fans.
2. Tried to flash the latest BIOSes.
2b. Tried flashing different BIOSes from The Stilt.
2c. Tried to set the loosest timings I could find for all the performance modes, and tried writing in even larger values.
3. Tried to apply an additional cooler to the VRAM chips at the bottom of the card, which sit outside the fans' airflow.
3b. Tried to apply 15x15x3 heatsinks with 3M thermal adhesive to all memory chips except the one below the top heat pipe.
3c. Tried to apply a 40x10x0.8 metal plate (layers soldered together to 1.8 mm total thickness) to that last chip with 3M adhesive (this required removing the upper reinforcement plate).
3ca. Tried to apply an additional 15x15x6 heatsink to the other end of that plate with 3M adhesive. That heatsink gets very hot! Unfortunately, my multimeter doesn't have a thermal probe to check.
3cb. Tried to cool it actively down to reasonable temperatures.
4. Tried to raise the memory voltage to 1.75 V via the ROG Connect ports; every further step just makes the artifacts appear sooner.
4b. Tried to raise the VDDCI voltage. Why not try, when I'd already gone this far?
5. Tried to make the bad memory regions unavailable to regular applications by keeping them in use with a custom tool (which has to be started on each boot). The tool fills all of video memory with textures and renders each of them to check the results. It then releases everything except the textures with wrong CRCs; to keep those from falling out of video memory, it re-renders them in a loop after the initial test, one render every 10 seconds, which should not impact gaming performance.
5b. Found that the problems appear while writing to memory, so I redesigned the tool: it now works with a single texture but a lot of framebuffers into which it renders the sample. This works much better; I even added an option to repeat the test without releasing previously captured bad blocks. But I ran into a problem I can't resolve: when a problem area falls within the "first" ~100 MB of VRAM used for the Windows desktop and application window buffers, my application cannot allocate those blocks, yet games somehow can, so rare artifacts still appear in games. Switching to fullscreen mode might help, but it would require designing a new console output, and the application already takes 5 minutes per scan. I'm just worn out and can't imagine performing 1-2 scans on every boot (plus five minutes of Assassin's Creed Unity between them to warm up the remaining problem areas, since my tool can't match its warming capability). So I decided to drop this idea.
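The quarantine logic in step 5 can be sketched roughly like this. Note this is only a CPU-side analogue, assuming byte buffers and CRC32 in place of textures and GPU render passes; the `read_back` callback and block layout are illustrative, not part of the actual tool:

```python
import zlib

BLOCK_SIZE = 1 << 16  # 64 KiB per test block (the real tool works on VRAM textures)

def make_pattern(block_size, seed):
    # Deterministic test pattern; the real tool uploads textures instead.
    return bytes((seed * 31 + i) & 0xFF for i in range(block_size))

def scan_blocks(num_blocks, read_back):
    """Write a pattern into each block, read it back, and quarantine the
    blocks whose CRC no longer matches by holding a reference to them,
    so no other application can allocate that memory.

    read_back(index, data) models the write + read-back through the
    (possibly faulty) memory; on a healthy block it returns data unchanged.
    """
    quarantined = {}
    for index in range(num_blocks):
        pattern = make_pattern(BLOCK_SIZE, index)
        result = read_back(index, pattern)
        if zlib.crc32(result) != zlib.crc32(pattern):
            # Keep the bad block referenced (the real tool would also
            # re-render it every ~10 seconds so it stays resident in VRAM).
            quarantined[index] = result
    return quarantined
```

With a healthy card the scan returns an empty dict; any block whose read-back is corrupted stays captured in `quarantined` for the life of the process, which is why the tool must run on every boot.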

Of course, I've tried all of these methods in combination with one another. None of it helped achieve perfect results; I wasn't even able to get stability at 1500 MHz.

So far it really looks like the memory just needs to be undervolted, or cooled with a different cooler that allows mounting large heatsinks on the VRAM chips to keep them below 50-60°C.

Please help with a hardware downvolt to 1.5 V.

P.S. I always trusted ASUS before; now I will never buy from them again. It's a ridiculous showing from their engineers, their quality control, and their RMA process, which, judging by some posts here, replaces bad cards with other bad cards.

RicRic
Level 7
It's defective VRAM straight from the ASUS factories; the quality control on the parts received from ASUS's suppliers was very poor. There is no point in trying anything, because you cannot fix a defective VRAM chip with either a software or a hardware solution; not even the vendor of the chips can do that.
And the proof is the many tens of thousands of customers worldwide who reported that when they sent their defective cards back, they received the same card with the same issues, or replacement cards with the same issues.

dotachin
Level 7
RicRic wrote:
There is no point in trying anything because you cannot fix a defective VRAM chip nor by software or hardware solution.
It's already possible to work around it just by lowering the memory clock to 1300-1400 MHz. And since increasing the voltage only makes the artifacts appear sooner, and 1.5 V is the stock voltage for 1500 MHz, it's at least worth trying to reduce the voltage somehow.

It's also known that there are almost no artifact reports from owners of the 1.5 V cards like the DirectCU or the DirectCU II Top V2, even though such cards often, if not always, use the same memory chips.

Ishan
Level 7
Hey Guys,

After a year, I would like to report back on my ASUS R9 280X DC2T graphics card.

As you can see, I had some serious artifact problems when I bought this card:
https://rog.asus.com/forum/showthread.php?43057-R9-280x-DirectCU-II-Top-Artifacts!!&p=413919&viewful...

So I had to RMA the card. The replacement arrived in a few weeks and I picked it up from the ASUS customer care centre. I tested it vigorously for 1-2 hours under every possible GPU benchmark. But, as some of you may know, the real problem with this card isn't detected that way, only by actual gaming for long hours.

Since I'm no longer in college and am running my own business, I rarely get time for gaming like I used to. After a few months I tested the card for 3-4 hours, only to see the same artifacts again. A system restart fixed them, but they would return after 1-2 hours of gaming, so I kept gaming with that half-arsed workaround. Ultimately I concluded that the problem was clearly heat related, combined with the overvolted setting for the Hynix chips in the VBIOS.

So now that I've gotten back into gaming for almost 4-6 hours daily, I figured I needed a proper solution instead of getting frustrated and gaming under a sort of fear that my card would artifact again and ruin the experience.

So i decided i would either:
1) Do some thorough research and buy/fit some chip heatsinks to fix the heat issue,
or
2) Buy a GTX 970 instead and chalk this card up as a bad purchase.

But fortunately I found a great fix: VBIOS flashing, which I had heard of many times but never bothered to try myself. I just used ATI Winflash and flashed the regular DC2 VBIOS onto the card, which runs the Hynix memory chips at 1.5 V, as they should be per the manufacturer's specification:
https://rog.asus.com/forum/showthread.php?43057-R9-280x-DirectCU-II-Top-Artifacts!!&p=413922&viewful...

So ever since I flashed the card, I've been gaming without any issues. Obviously the card isn't running at (GPU = 1070 MHz, mem clock = 6400 MHz @ 1.6 V) like before, but a 2-6 FPS loss isn't going to kill me when I get zero issues now (now running GPU = 1000 MHz, mem clock = 6000 MHz @ 1.5 V). I have been playing and recording heavy games with Bandicam for 4-6 hours on a daily basis, and I never shut down my system. So flashing did the job for me. 😄

I advise everyone to try this as a last-resort method.

Ishan wrote:
I advise everyone to try this last resort method.
Can you please give a direct link to the BIOS you tried?

So the problem is/was basically a BIOS conflict and not an overheating issue?

Ishan
Level 7
@dotachin
Of course mate. 🙂

I used the regular DCII vBIOS from the TechPowerUp vBIOS collection:
http://www.techpowerup.com/vgabios/index.php?architecture=&manufacturer=Asus&model=R9+280X&interface...


My GPU memory at full usage while playing Shadow of Mordor on ultra settings:


My recorded videos from playing Shadow of Mordor, meaning I was stressing the card to its limits since I was gaming and recording at the same time:


PS: As I said before, I used ATI Winflash for it, unlocked it to flash a different BIOS, and then flashed the regular DCII version.

Ishan wrote:
I used the regular DCII vBIOS shown in techpowerup vBIOS collection:
http://www.techpowerup.com/vgabios/index.php?architecture=&manufacturer=Asus&model=R9+280X&interface...
Thank you. I've tried that BIOS before, and have now repeated the test (it required changing the Subsystem ID to flash; did you do the same?). It didn't change the actual memory voltage for me. Are you sure it did for you? AFAIK you can check the real memory voltage with a multimeter at the MVDD point, or in MSI Afterburner, but only in its monitors: the vmem slider has no connection to vmem on this card.

This time I performed complete tests of that BIOS, not just a single look at the voltage. I started with my usual profiles: 1100 MHz @ 1.15 V core with 1600 and 1700 MHz on memory. Same artifacts.

A short test with the BIOS's stock profile (1000 MHz @ 1.144 V core with 1500 MHz memory) showed no artifacts, but I had already found that 900+ MHz on the core is required to load the memory enough to produce artifacts at 1700 MHz, so stable 1000/1500 doesn't surprise me. The thing is, the core clock matters more for typical performance on this card: 1000/1500 will usually be slower than 1100/1300.

I also found that the odd 1.144 V core voltage in that BIOS does not fix the artifacting for me (tested 1100 @ 1.144 V / 1500 MHz).

So that BIOS may not be a final solution, since its effect comes from a lower memory workload achieved by underclocking the core. In theory, a future game could push the memory workload high enough to make the artifacts reappear, because the underlying problem is not solved.

By the way, let me share the best testing formula I've found for my card. It's Assassin's Creed Unity at maximum settings with 2xMSAA, used as follows:
1. Cold start, then run forward for 1-2 minutes to warm up the VRAM by loading new data into it, not just reading the data already loaded on the initial level load.
2. Restart the game to reload absolutely all the data and do the same forward run; artifacts will appear within 1-2 minutes, if not instantly.
This game uses all available video memory, and it looks like it touches a lot of the loaded data in every frame. I also found that the artifacting came from errors while loading (writing) data into memory, not while reading it: once artifacts appeared, I couldn't remove them just by lowering the memory clock to 1500, 1000, or even 600 MHz.
Shadow of Mordor didn't expose the artifacts nearly as well as Assassin's Creed Unity, 3DMark, or Ryse.