Disclaimer: I don't work in the industry, so all of the following is second-hand or speculation.
Often, power and voltage differences come down to binning. Nvidia takes higher-quality silicon and sells it as the Ti version (the higher bin). The non-Ti version gets a different binning spec for expected clocks and required voltages (i.e. it might require more voltage and run hotter to reach the same clocks). Individual samples might work just fine, but as a whole Nvidia only guarantees them to work within a certain spec. Now, individual manufacturers can take it upon themselves to do additional engineering, validation, and binning to ensure the silicon works at higher speeds. This is where you see things like a "Strix" version of the same silicon with higher performance. That extra work costs money, so those products cost more.
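To make the binning idea concrete, here's a toy sketch in Python (the voltage cutoffs, clock target, and bin names are all invented for illustration; this is not Nvidia's actual process). The idea is simply that each die needs some voltage to hold a target clock, and the cutoff it lands under decides which SKU it becomes:

```python
# Illustrative only: a toy model of speed binning. The voltages,
# clock target, and thresholds are made up, not a real spec.
from dataclasses import dataclass
import random

@dataclass
class Die:
    # Voltage (in volts) this particular die needs to hold a 2.6 GHz boost clock.
    v_for_target_clock: float

def bin_die(die: Die) -> str:
    """Assign a die to a SKU based on the voltage it needs at the target clock."""
    if die.v_for_target_clock <= 1.00:    # efficient silicon -> higher bin ("Ti")
        return "Ti"
    elif die.v_for_target_clock <= 1.10:  # needs more voltage, runs hotter -> base SKU
        return "non-Ti"
    else:                                 # can't meet spec at a safe voltage
        return "salvage"

# Simulate process variation across a wafer.
random.seed(0)
wafer = [Die(random.gauss(1.05, 0.05)) for _ in range(1000)]
counts: dict[str, int] = {}
for die in wafer:
    counts[bin_die(die)] = counts.get(bin_die(die), 0) + 1
print(counts)  # roughly {'non-Ti': ..., 'Ti': ..., 'salvage': ...}
```

A board partner doing its own extra binning is effectively re-running this sort on the dies it bought, with tighter cutoffs of its own.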
On the flip side, manufacturers can buy silicon from Nvidia, run it at the rated clocks and power, and keep RMA rates low without the cost of extra validation.
Or sometimes it's just product differentiation to encourage people to consider a more expensive product. Margins are often razor thin on the low end, so artificial segmentation allows companies to stay in business. An annoying but necessary evil.
At any rate, manufacturers will almost never raise power and thermal limits after a product has been purchased. There is only a very minor reputational benefit, versus an unknown risk of increased failures and RMAs on products that were never tested to run that way.
BTW, even if you KNOW the silicon can run at the higher wattage with no ill effects, it can still cause the fans to run at higher RPMs for longer, which increases the failure rate of the fans by some non-zero amount. On a higher-margin product the manufacturer might decide the extra margin covers those additional RMA costs; on a lower-margin product it doesn't. So the power target is about more than just silicon quality.
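To see why the same fan wear lands differently depending on margin, here's a rough back-of-envelope sketch (every number is invented for illustration; the point is the comparison, not the values):

```python
# Back-of-envelope math for the fan-wear argument above. All figures are
# assumptions made up for this example, not real industry numbers.
RMA_COST = 40.00            # assumed cost to process one fan-failure RMA (shipping + labor + parts)
EXTRA_FAILURE_RATE = 0.005  # assumed extra fan failures from higher RPMs for longer (0.5%)

expected_extra_rma_cost = RMA_COST * EXTRA_FAILURE_RATE  # $0.20 per unit sold

for sku, margin in [("budget card", 5.00), ("premium card", 60.00)]:
    share_of_margin = expected_extra_rma_cost / margin
    print(f"{sku}: ${expected_extra_rma_cost:.2f} extra RMA cost "
          f"= {share_of_margin:.1%} of a ${margin:.2f} margin")
# budget card: $0.20 extra RMA cost = 4.0% of a $5.00 margin
# premium card: $0.20 extra RMA cost = 0.3% of a $60.00 margin
```

Twenty cents sounds like nothing, but on a razor-thin budget SKU it eats a meaningful slice of the margin, while on a premium SKU it's noise. That asymmetry is enough to explain why only the expensive cards get the generous power limits.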