Krista Wolffe wrote:
You won't need the networking coming from processor PCI-E lanes; that's usually handled by the chipset, which is connected to the processor on a DMI 2.0 4x link at 20Gb/s. Plenty to handle GigE and WiFi.
Yes, the PCH has provisions for an ethernet controller, but if you look at the components I spec'd, the I350 is an enterprise-grade ethernet controller with dual GigE ports that can team. I'm also interested in remote wake-up...another enterprise feature that is useful to the consumer (**Ahem** ASUS HomeCloud).
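Just to put rough numbers on the headroom behind that DMI claim (a quick sketch; the encoding overhead and the 802.11ac 2x2 WiFi rate are my assumptions, not measured figures):

```python
# Rough bandwidth headroom check for running networking off the PCH.
# All figures below are illustrative assumptions, not measured numbers.

dmi2_raw_gbps = 4 * 5.0                    # DMI 2.0: 4 lanes at 5 GT/s = 20 Gb/s raw
dmi2_usable_gbps = dmi2_raw_gbps * 0.8     # 8b/10b encoding leaves ~16 Gb/s usable

teamed_i350_gbps = 2 * 1.0                 # dual GigE ports, teamed
wifi_gbps = 0.867                          # assumed 802.11ac 2x2 client PHY rate

load = teamed_i350_gbps + wifi_gbps
print(f"DMI 2.0 usable: ~{dmi2_usable_gbps:.0f} Gb/s, networking load: ~{load:.1f} Gb/s")
# Even teamed GigE plus WiFi is well under a quarter of the DMI link.
```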
Krista Wolffe wrote:
2 M2 slots would be nice, but they'd have to be vertical. Probably won't get that, though.
I was thinking they could lie flat in the vacant PCI-e slot 2 and 4 positions. There is also the possibility of putting them on the back side of the board, similar to the Z97I-Plus.
Krista Wolffe wrote:
I'd like to see 4 16x PCI-E slots, in some combination of (2-16x, 8x), (16x, 3-8x), (16x, 2-8x, 2-M.2), (2-16x, 2-M.2), and have *all* running from the CPU, not a crappy pseudo 2.0 8x from the chipset.
The PCI-e connection from the processor can handle 10 groupings of 4x PCI-e 3.0 lanes, each with its own PCI-e controller. These are further grouped into IOU0 and IOU1, each made up of two x8 groups of two x4 groups (16 lanes apiece), and IOU2 with two x4 groups (8 lanes).

I'm not sure how dynamic the configurations are...whether you can have a trace go to multiple locations...but I'm doubting it. It seems like there would need to be a PCI-e switch in the middle to do the allocations, and anytime you add a switch, it means more trace routing, power consumption, heat generation, link latency, testing and validation, and cost. But if it is easy to dynamically allocate connections from the processor itself, then yeah, put in some extra PCI-e slots and have slot 1 share bandwidth with slot 2 and slot 3 with slot 4.
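For my own sanity, here's a quick sketch of how I'm reading the lane groupings; the slot layouts at the bottom are hypothetical examples of what fits in 40 lanes, not a claim about which bifurcations the CPU actually supports:

```python
# How I'm reading the lane groupings on the 40-lane Haswell-E part.
iou_lanes = {
    "IOU0": 4 * 4,   # two x8 groups, each built from two x4 groups -> 16 lanes
    "IOU1": 4 * 4,   # same arrangement -> 16 lanes
    "IOU2": 2 * 4,   # two x4 groups -> 8 lanes
}
total = sum(iou_lanes.values())
print(f"total CPU lanes: {total}")   # 40

# Hypothetical slot layouts that would fit in 40 lanes without a PCI-e switch:
layouts = [
    (16, 16, 8),           # two full x16 slots plus one x8
    (16, 8, 8, 8),         # one x16 and three x8
    (16, 8, 8, 4, 4),      # x16, two x8, and two x4 M.2 sockets
]
for layout in layouts:
    assert sum(layout) <= total, layout
```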
Krista Wolffe wrote:
If there's room, add the crappy 2.0 8x somewhere as an additional mPCIE.
mPCIE is being phased out. M.2 devices are coming onto the market to replace it.
Krista Wolffe wrote:
If wishes are being granted: make ram extenders so I can use 8 dimms somehow, or allow for 32gb dimms on a Haswell-e.
I still need to understand the memory topology for DDR4. So far, all I've seen is that it is point-to-point and not multi-drop capable, but that is obviously not quite the case, especially with ASUS talking about using T-Topology in their new boards. So, if there is room, which there isn't much of on µATX boards, then I suppose 8 DIMM slots would be nice.
Krista Wolffe wrote:
KILL the ps/2 port!
I partially agree. The only thing holding it back is N-key rollover. Once that can be completely mitigated over USB, then yes, get rid of it.
Krista Wolffe wrote:
2 on board usb3 headers would kick ass.
You'd just be subtracting from the USB 3.0 connectors on the back panel, unless you start driving third-party adapters (like Marvell) to expand the number of ports using the PCI-e 2.0 lanes off of the PCH.
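Quick port budget, assuming the PCH exposes six native USB 3.0 ports and each internal 19-pin header eats two of them (both counts are my assumptions):

```python
# Illustrative port budget: native PCH USB 3.0 ports are a fixed pool,
# and every internal 19-pin header consumes two of them.
native_usb3_ports = 6     # assumed count for the X99 PCH
ports_per_header = 2

for headers in (1, 2):
    back_panel = native_usb3_ports - headers * ports_per_header
    print(f"{headers} internal header(s) -> {back_panel} back-panel ports left")
# Anything beyond that means hanging a third-party controller off PCH PCI-e 2.0 lanes.
```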
Krista Wolffe wrote:
And as always, I'd recommend a 10GbE, as schizat needs rolling on that front.
There are a couple of issues with 10GbE. First is cost. While cards for computers are somewhat within the realm of attainability for consumers and SMBs, switches are not. The cheapest 8-port switch is just under $1000, and if you want any smarts in it, you're looking at several thousand dollars for your networking.

Second is power. Moving to higher-bandwidth switches is power efficient for an enterprise because it replaces many GbE switches, but the value proposition for a consumer, from a power perspective, is less attractive. Consumers don't have a multi-level mesh of switches feeding all sorts of devices. 10GbE controllers/PHYs are well over 10x more power hungry than GigE, and in a setup with only one or a few switches/routers they become the dominant consumer of networking power.
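Back-of-the-envelope power math, with per-port PHY wattages I'm assuming for that generation of silicon (purely to illustrate the scale of the gap):

```python
# Rough per-port power comparison; both wattages are assumptions picked to
# illustrate the scale of the gap, not datasheet values.
gbe_phy_watts = 0.5        # assumed typical GigE PHY
tengbe_phy_watts = 6.0     # assumed typical 10GBASE-T PHY of this generation

ratio = tengbe_phy_watts / gbe_phy_watts
extra_for_home_switch = (tengbe_phy_watts - gbe_phy_watts) * 5   # 5-port home switch
print(f"per-port ratio: ~{ratio:.0f}x, extra for a 5-port switch: ~{extra_for_home_switch:.0f} W")
# An enterprise consolidating racks of GbE gear can absorb that; a home with
# one switch just pays the penalty.
```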
I think a viable consumer alternative is Thunderbolt. It takes native system interfaces and binds them into a stream that can be daisy-chained to multiple devices (Alpine Ridge with Thunderbolt 3 is changing it up a little, adding HDMI 2.0 and USB 3.0 to the mix of I/O carried over Thunderbolt on top of PCI-e and DisplayPort). While a lot of people complain about the cost of Thunderbolt, it is a whole lot more appealing than 10GbE in terms of cost, bandwidth, and power. Intel does need to step it up and get costs down a little, get adoption up, and get back to their original intent with Light Peak, sooner rather than later.
I think once Intel starts stacking non-volatile memory on their processors, connected with through-silicon vias at L3/L4-cache-like latencies, it will free up a lot of area on the package to pack in a lot more PCI-e lanes. Ideally this would mean around 120-128 lanes for enthusiast/professional chips and 64 lanes for consumer chips. By then it will probably be PCI-e 4.0, at which point signal integrity across any interface becomes an issue (they minimized protocol overhead in 3.0, so the only lever left is bumping up the signalling rate), so some of the daughter-board tricks ASUS is using won't work.