I see a lot of posts like this about the RTX 30 cards, and oftentimes folks don't realize that GPU Boost 3.0 on these cards can cause a lot of confusion. I want to share a few things that may help you, or maybe help others who come across this thread.
The way GPU Boost 3.0 works, and how aggressive it is, makes it hard to compare cards unless you are comparing apples to apples. You may be able to hit the same clocks you see in reviews, but you need to test under the same rendering load they tested under to see it. For example, I have a Strix RTX 3090. In Call of Duty: Black Ops Cold War, with my +60 core overclock I see 2055 MHz a lot. If I close CoD and run the 3DMark Port Royal benchmark, my average clock in that benchmark will be around 1950 MHz on the same +60 core overclock. If I load up another game I may see the card sit at 1995 MHz most of the time. Under some graphics loads I have even seen 1905 MHz, still on the same +60 core overclock.

What +60 means is that wherever your card lands on the clock/voltage curve under a given render load, the card adds 60 MHz to that clock. GPU Boost 3.0 is already so aggressive that overclocking Ampere (and my 2080 Turing behaved very similarly) can be very hard to nail down if you aren't fully aware of what is happening. I can run a +120 core overclock in a lot of games, then load up a different game and crash in two minutes. How high the GPU clocks is entirely dependent on the current rendering load: not just the game you are playing, but at what resolution and at what settings. The biggest factor, though, is the game itself. If you see someone hit 2055 MHz in Call of Duty on your exact card, then put the same overclock parameters into Afterburner and run Assassin's Creed Odyssey, you are likely not going to see that same 2055 MHz clock. The thing is, neither would they see those clocks in a different game. Some rendering loads are lighter and the core will naturally clock higher; other loads are more demanding and can cause a 100 MHz swing down to 1950 MHz without you touching anything.

To be honest, the way these cards function now is so advanced, and the clock switching so constant and aggressive, that it has taken a lot of the perceived headroom out of GPU overclocking. On my 3090 Strix I can jack the power limit to 123%, which lets the card pull nearly 500 W by itself. It generates massive heat at that full power limit, and I doubt I gain even 1-2% additional performance over the 107% power limit I normally run. That extra 1-2% costs me about 8 degrees Celsius in temperature and much louder fan noise... it's not worth it. These cards run right at their top end out of the box most of the time, especially the factory OC cards.
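If you want to see this load-dependence for yourself, one way is to log the graphics clock and board power while each game or benchmark is running and compare the averages. Here is a minimal sketch, assuming the third-party pynvml Python bindings for NVML and GPU index 0; the 60-second sample window and 0.5-second interval are just my own choices for illustration:

```python
# Minimal sketch: poll the graphics clock and board power while a game or
# benchmark is running, then print the averages for comparison.
# Assumes the third-party "pynvml" bindings (pip install nvidia-ml-py)
# and that the card of interest is GPU index 0 -- adjust for your system.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

clocks, watts = [], []
DURATION_S = 60      # sample for one minute of gameplay (arbitrary choice)
INTERVAL_S = 0.5

end = time.time() + DURATION_S
while time.time() < end:
    # Graphics clock is reported in MHz, power draw in milliwatts
    clocks.append(pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS))
    watts.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
    time.sleep(INTERVAL_S)

pynvml.nvmlShutdown()

print(f"avg clock: {sum(clocks) / len(clocks):.0f} MHz "
      f"(min {min(clocks)} / max {max(clocks)})")
print(f"avg board power: {sum(watts) / len(watts):.0f} W")
```

Run the same window once in CoD and once in Port Royal and you should see the kind of 50-100 MHz spread described above, even though the clock offset never changed.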
Back to your direct point, however: how are you verifying your clocks? Run different games and loads. If you do, you should see some very different clock results. If you still feel you're not where you should be, then look at board power draw: how much power is the board pulling under various loads compared to the reviews you are looking at? If these cards have the power and thermal room, they will clock up. If the card is not clocking any higher, look at why. HWiNFO or GPU-Z can tell you whether the card is hitting its voltage or power limit and show you what is holding you back. It is very important, though, that you compare apples to apples here. You cannot take someone showing a 2100 MHz clock in one game and compare it to your card sitting at 1995 MHz in a different game, because that is completely normal behaviour on these cards. You have to compare the exact same game, so the rendering load is the same across both cards and the cards respond the same. Only then can you truly see whether one card is a better overclocker.
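The limit information that GPU-Z and HWiNFO surface (the "PerfCap" reason) is also exposed through NVML's clock throttle reasons, so you can query it directly. A rough sketch under the same pynvml assumption as above; the plain-English labels and which reason bits I check are my own choices, not an exhaustive list:

```python
# Rough sketch: report why the GPU is currently capping its clocks,
# similar in spirit to the PerfCap reason shown in GPU-Z / HWiNFO.
# Assumes the third-party "pynvml" bindings and GPU index 0, as above.
import pynvml

# Map a few NVML throttle-reason bits to plain-English labels (my own wording).
REASONS = {
    pynvml.nvmlClocksThrottleReasonGpuIdle:           "idle",
    pynvml.nvmlClocksThrottleReasonSwPowerCap:        "power limit",
    pynvml.nvmlClocksThrottleReasonSwThermalSlowdown: "thermal limit (software)",
    pynvml.nvmlClocksThrottleReasonHwThermalSlowdown: "thermal limit (hardware)",
    pynvml.nvmlClocksThrottleReasonHwSlowdown:        "hardware slowdown",
}

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Bitmask of everything currently holding the clocks back
bits = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)
active = [label for bit, label in REASONS.items() if bits & bit]

print("limiting factors:", ", ".join(active) if active else "none (boosting freely)")

pynvml.nvmlShutdown()
```

Run it while the game is loaded: if it reports "power limit", that lines up with the behaviour where raising the power slider lets the card clock a little higher, and if it reports a thermal limit, more fan or a cooler case will do more for you than any offset.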