Will there be Rampage V Formula, Gene and Impact?

Orduu
Level 7
Will there be a Rampage V Formula later, or will there only be the Extreme? Also, will there be a Rampage V Gene and Impact? If yes, when? I'm also interested in whether there will be an X99 Deluxe variant with bundled NFC2, WLC, and Thunderbolt 2, like what happened with the Z97 Deluxe.

FreakinApplePie
Level 7
Well, I hope so 😛

Menthol
Level 14
I wouldn't expect an Impact board; the CPU socket is huge.

Chino wrote:
Just wait and see. 😉


Whoa! Such enthusiasm 😮

Chino wrote:
Just wait and see. 😉


For what, the GENE or the IMPACT? 😮

mark0409mr01
Level 7
Hoping there will be a Rampage V GENE; I love the idea of multiple M.2 x4 PCIe slots for storage! I'm a big fan of the mATX form factor.

Other manufacturers already have X99 mATX solutions available, so I can't imagine ASUS will be too far behind; they must be cooking up something special!

I would love a µATX board as well, to move on from my EVGA X58 SLI Micro. My ideal motherboard would be a no-frills µATX that offers the full functionality of the processor and chipset while maximizing power efficiency and minimizing latency. Its stripped-down nature would make it almost a hybrid between a workstation board and a consumer performance board (kind of like a µATX version of Gigabyte's GA-X79S-UP5-WIFI).

Processor & Memory

- Full support for single socket Xeon E5 v3 processors
- 4-channel point-to-point DDR4 memory
- ECC & non-ECC UDIMM/RDIMM memory support
- Support for NV-memory when it becomes available (i.e., the new pins on the memory interface are enabled)
- Support for at least DDR4-3200 (the DIMMs, not the motherboard, should be the limiting factor in memory performance)
- Support for Trusted Platform Module 1.2+

I/O

- No switches, bridges, or adapters aside from the chipset (C610), ethernet controller (Intel I350), and wireless card (M.2 card)
- Ethernet controller would get 4 of the PCI-e 2.0 lanes from the chipset and support remote wake-up
- Wireless card would get the other 4 PCI-e 2.0 lanes from the chipset in an M.2 slot (see the lane-budget sketch after this list)
- Wireless card should be upgradable to a tri-band IEEE 802.11ad (2.4 GHz + 5 GHz + 60 GHz) module (if that isn't implemented outright; tri-band modules are available now) with Bluetooth 4.x (plus ZigBee and Z-Wave with Thread, AllJoyn, and MQTT support, if possible) and soft access point capability
- 2 x 16x PCI-e 3.0 slots in slot positions 1 and 3 (second slot gets downgraded to 8x slot with 28 lane processor)
- 2 x M.2 slots for SSDs with 4 PCI-e 3.0 lanes each from the processor (Port 1a & 1b on IOU2; 2nd M.2 slot not enabled with 28 lane processor)
- 2 x USB 2.0 through internal header and remaining 6 available from the chipset on the rear panel (preferably USB Type-C ports, with some Type-C to Type-A adapters included; these boards should be expected to operate for at least 3 years, and many devices with USB Type-C ports will certainly appear in that time)
- 2 x USB 3.0 through internal header and remaining 4 available from the chipset on the rear panel (same thing with USB Type-C ports...)
- All SAS/SATA ports available from the chipset exposed (just like the ASRock X99 µATX boards, except the two ports above the screw mounting hole should point up, both for flexibility and so they don't get in the way of any expansion card)
- PS/2 Keyboard/Mouse interface through chipset
- Audio through chipset
- Thunderbolt header for expansion card
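
Purely as a sanity check on the chipset allocations above, here's a minimal lane-budget sketch; it assumes, as the list implies, that the C610 PCH exposes 8 PCIe 2.0 lanes for discrete controllers:

```python
# Minimal sketch of the PCH lane budget implied by the wishlist above.
# Assumes the C610 exposes 8 PCIe 2.0 lanes for discrete controllers.
PCH_PCIE2_LANES = 8

consumers = {
    "Intel I350 ethernet controller": 4,
    "M.2 wireless card": 4,
}

used = sum(consumers.values())
assert used <= PCH_PCIE2_LANES, "wishlist over-commits the chipset"
print(f"PCH PCIe 2.0 lanes used: {used}/{PCH_PCIE2_LANES}")
for device, lanes in consumers.items():
    print(f"  {device}: x{lanes}")
```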

Thermal

- Keep in mind that µATX boards are compact and go into compact cases where air does not flow as freely, especially at the top where the VRMs are. Keep the heat sinks very low-profile, with a heat pipe over the VRMs leading to another heat sink alongside the memory. Besides, the overly large heat sinks on consumer motherboards are rather gaudy/tacky. A further point: the CPU power connector should be located as close to the memory as possible, since cables impede airflow, too.
- I'd be in favor of using a large number of power phases for greater efficiency (and power handling) instead of dissipating heat generated through heat sinks from less efficient components.

The small form factor, lack of extraneous components, and relatively simple design should reduce the validation effort required, improve reliability, and lower overall cost. This should more than offset the cost of the C610, I350, wireless card, and higher number of power phases.

You won't need the networking coming off processor PCI-E lanes; that's usually handled by the chipset, which is connected to the processor over a DMI 2.0 4x link at 20 Gb/s. That's plenty to handle GigE and WiFi.
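
A rough back-of-envelope check of that budget (the WiFi figure is an assumption on my part, e.g. a 3-stream 802.11ac link at its theoretical peak):

```python
# DMI 2.0 x4: 4 lanes at 5 GT/s with 8b/10b encoding -> 16 Gb/s usable
# per direction (20 Gb/s raw, the figure quoted above).
dmi_usable_gbps = 4 * 5 * (8 / 10)

gige_gbps = 1.0   # one gigabit ethernet port, fully saturated
wifi_gbps = 1.3   # assumed: 3-stream 802.11ac theoretical peak

headroom = dmi_usable_gbps - (gige_gbps + wifi_gbps)
print(f"DMI budget {dmi_usable_gbps:.1f} Gb/s, NICs need {gige_gbps + wifi_gbps:.1f} Gb/s")
print(f"headroom left for SATA/USB traffic: {headroom:.1f} Gb/s")
```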

2 M.2 slots would be nice, but they'd have to be vertical. We probably won't get that, though.

I'd like to see 4 16x-length PCI-E slots, in some combination of (2×16x, 8x), (16x, 3×8x), (16x, 2×8x, 2×M.2), or (2×16x, 2×M.2), and have *all* of them running from the CPU, not a crappy pseudo 2.0 8x from the chipset.
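
A quick arithmetic check that each of those layouts fits in the 40 PCIe 3.0 lanes of the 40-lane Haswell-E parts (counting each M.2 as x4):

```python
# Each proposed slot layout, checked against the 40 PCIe 3.0 lanes of the
# 40-lane Haswell-E parts (M.2 slots counted as x4 each).
CPU_LANES = 40
layouts = {
    "(2x16, 8)":        [16, 16, 8],
    "(16, 3x8)":        [16, 8, 8, 8],
    "(16, 2x8, 2xM.2)": [16, 8, 8, 4, 4],
    "(2x16, 2xM.2)":    [16, 16, 4, 4],
}
for name, lanes in layouts.items():
    total = sum(lanes)
    status = "fits" if total <= CPU_LANES else "over budget"
    print(f"{name}: {total}/{CPU_LANES} lanes, {status}")
```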

If there's room, add the crappy 2.0 8x somewhere as an additional mPCIe.

If wishes are being granted: make RAM extenders so I can use 8 DIMMs somehow, or allow for 32GB DIMMs on Haswell-E.

KILL the ps/2 port!

2 onboard USB 3.0 headers would kick ass.

And as always, I'd recommend a 10GbE, as schizat needs rolling on that front.

Krista Wolffe wrote:
You won't need the networking coming from processor PCI-E lanes; that's usually handled by the chipset, which is connected to the processor on a DMI 2.0 4x link at 20Gb/s. Plenty to handle GigE and WiFi.


Yes, the PCH has provisions for an ethernet controller, but if you look at the components I spec'd, the I350 is an enterprise-grade ethernet controller with dual GigE ports that can team. I'm also interested in remote wake-up...another enterprise feature that is useful to the consumer (**Ahem** ASUS HomeCloud).

Krista Wolffe wrote:
2 M2 slots would be nice, but they'd have to be vertical. Probably won't get that, though.


I was thinking they could lie flat in the vacant PCI-e 2 & 4 slots. There is also the possibility of putting them on the back side, similar to the Z97I-Plus.

Krista Wolffe wrote:
I'd like to see 4 16x PCI-E slots, in some combination of (2-16x,8x), (16x,3- 8x), (16x, 2- 8x, 2- M.2), (2- 16x, 2- M.2), and have *all* running from the CPU, not a crappy pseudo 2.0 8x from the chipset.


The PCI-e connection from the processor can handle 10 groupings of 4 PCI-e 3.0 lanes, each with its own PCI-e controller. These are further grouped into IOU0 and IOU1, each with 2 groups of 2 groups of 4 lanes (16 lanes apiece), and IOU2 with 2 groups of 4 lanes (8 lanes). I'm not sure how dynamic the configurations are...whether you can have a trace go to multiple locations...but I doubt it. It seems like there would need to be a PCI-e switch in the middle to do the allocations. Any time you add a switch, it means more trace routing, power consumption, heat generation, link latency, testing and validation, and cost. But if it is easy to dynamically allocate connections from the processor itself, then yeah, put in some extra PCI-e slots and have slot 1 share bandwidth with slot 2 and slot 3 with slot 4.
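
A small sketch of that grouping as I read it (only ports 1a/1b on IOU2 are named in the posts above; the IOU0/IOU1 port names here are my assumptions):

```python
# Root-port grouping as described above: ten x4 PCIe 3.0 controllers in
# three IOUs. Port names on IOU0/IOU1 are assumed; 1a/1b come from the post.
ious = {
    "IOU0": ["2a", "2b", "2c", "2d"],  # 16 lanes
    "IOU1": ["3a", "3b", "3c", "3d"],  # 16 lanes
    "IOU2": ["1a", "1b"],              # 8 lanes
}
for iou, ports in ious.items():
    print(f"{iou}: {4 * len(ports)} lanes across ports {ports}")
print(f"total: {sum(4 * len(p) for p in ious.values())} lanes")
```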

Krista Wolffe wrote:
If there's room, add 5he crappy 2.0 8x somewhere as an additional mPCIE.


mPCIe is being phased out; M.2 devices are coming onto the market to replace it.

Krista Wolffe wrote:
If wishes are being granted: make ram extenders so I can use 8 dimms somehow, or allow for 32gb dimms on a Haswell-e.


I still need to understand the memory topology for DDR4. So far, all I've seen is that it is point-to-point and not multi-drop capable, but that is obviously not quite the case, especially with ASUS talking about using T-Topology in their new boards. So, if there is room, of which there isn't much on µATX boards, then I suppose 8 DIMM slots would be nice.

Krista Wolffe wrote:
KILL the ps/2 port!


I partially agree. The only thing holding it back is N-key rollover. Once that can be completely mitigated, then yes, get rid of it.

Krista Wolffe wrote:
2 on board usb3 headers would kick ass.


You'd just be subtracting from the USB 3.0 connectors on the back panel, unless you start adding 3rd-party controllers (like Marvell) to expand the number of ports using the PCI-e 2.0 lanes off the PCH.

Krista Wolffe wrote:
And as always, I'd recommend a 10GbE, as schizat needs rolling on that front.


There are a couple of issues with 10GbE. First is cost. While cards for computers are somewhat within the realm of attainability for consumers and SMBs, switches are not: the cheapest 8-port switch is just under $1000, and if you want any smarts in it, you're looking at several thousand dollars for your networking. Second is power. While it is power-efficient for an enterprise to move to higher-bandwidth switches, they are replacing many GbE switches; the value proposition for a consumer, from a power perspective, is less attractive. Consumers don't have a multi-level mesh network supporting all sorts of devices. 10 GbE controllers/PHYs are well over 10x more power hungry and become the dominant consumers of power in a network with only one or a few switches/routers.
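
To put rough numbers on the power argument (the per-port wattages below are illustrative assumptions, not measurements; only the 8-port and ">10x" figures come from the paragraph above):

```python
# Illustrative power comparison for an 8-port switch. The per-port wattages
# are assumptions chosen to match the ">10x more power hungry" claim above.
PORTS = 8
gbe_w_per_port = 0.7      # assumed: typical GbE PHY
tengbe_w_per_port = 8.0   # assumed: >10x the GbE figure

print(f"GbE switch PHYs:   ~{PORTS * gbe_w_per_port:.0f} W")
print(f"10GbE switch PHYs: ~{PORTS * tengbe_w_per_port:.0f} W")
```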

I think a viable consumer alternative is Thunderbolt. It takes native system interfaces and binds them into a stream that can be daisy-chained across multiple devices (Alpine Ridge with Thunderbolt 3 is changing things up a little, adding HDMI 2.0 and USB 3.0 to the mix of I/O carried over Thunderbolt, on top of PCI-e and DisplayPort). While a lot of people complain about the cost of Thunderbolt, it is a whole lot more appealing than 10 GbE in terms of cost, bandwidth, and power. Intel does need to step it up, though: get costs down a little, get adoption up, and get back to their original intent with Light Peak, sooner rather than later.

I think once Intel starts stacking non-volatile memory on their processors, connected with through-silicon vias at L3/L4-cache-like latencies, it will free up a lot of area on the package to pack in many more PCI-e lanes. Ideally this would mean around 120-128 lanes for enthusiast/professional chips and 64 lanes for consumer chips. It will probably be PCI-e 4.0 by then, at which point signal integrity across any interface becomes an issue (they minimized protocol overhead in 3.0, so the only lever left is bumping up the signalling rate), so some of the daughterboard tricks ASUS is using won't work.