Also, if you are intending to stream real-time stuff over a large (and presumably fairly busy) network, QoS becomes important, particularly if you have a mix of network speeds.
It's all gigabit now, except for a few game consoles. I'm alone on the network, and the busy server-to-server traffic is on a separate network just for them (they are all in the same room, so that's easy). QoS is not a problem. I choke my disks before I choke the network.
I had assumed that, given what you said about your network (reliability etc.), you would be using commercial-grade managed switches, not the unmanaged SoHo variety.
I did use them, but honestly I found them more trouble than they were worth, even though I hardly had any use for their features. So when I switched to gigabit, I went for simple unmanaged switches instead, putting all the logic into the computers and turning the network into "stupid cables" that don't do anything "smart". A nice side effect was that it was also much cheaper.
However, these forums are searched by many users (both new and old), and a complete discussion is, IMHO, always warranted for the benefit of those who come after.
Of course.
For a start, managed switches would allow you to trunk up your current small number of available cables to provide a backbone with good bandwidth around your physical network. Secondly, they would allow you to determine what traffic gets prioritised and routed where.
As stated above, I have no performance issues on my internal network, just on my internet connections, and I'm pretty much screwed there until they run an optic fibre or two past my house. I'm nowhere near choking the network, even with several servers moving files over it. This is partly because it is all gigabit, and partly because I've used slower 5400 rpm disks, both for cooling reasons (heat really is a problem; in the summer, my server room temperature tends to rocket) and because I find them more reliable. I need lots of reliable storage, not speed, and the hardware I have delivers that.
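Just to put rough numbers on the "disks choke first" point, here's a quick back-of-the-envelope sketch in Python. The throughput figures are assumptions (typical ballpark values, not measurements from my boxes), but they show why a single streaming transfer from a 5400 rpm disk doesn't get near saturating a gigabit link.

# Back-of-the-envelope only; the figures below are assumed typical values,
# not measurements from this particular setup.
GIGABIT_RAW_MB_S = 1000 / 8              # ~125 MB/s on the wire
PROTOCOL_OVERHEAD = 0.10                 # assume ~10% lost to Ethernet/IP/TCP framing
usable_link = GIGABIT_RAW_MB_S * (1 - PROTOCOL_OVERHEAD)   # ~112 MB/s

DISK_5400_SUSTAINED_MB_S = 70            # assumed sustained rate for a 5400 rpm drive

print(f"usable gigabit link : {usable_link:.0f} MB/s")
print(f"one 5400 rpm disk   : {DISK_5400_SUSTAINED_MB_S} MB/s")
print(f"disks needed to fill the link: {usable_link / DISK_5400_SUSTAINED_MB_S:.1f}")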
The key component in my network infrastructure and separation of subnets is my D-Link gigabit managed switch. I also use them for work. I've not had a single problem in two years.
D-Link is not a favourite of mine; they tend to overheat and start sending garbage, so I've switched to cheap NetGear at home (never had one fail yet, and even if it did, I get two for the price of one brand-name switch) and HP at work.
Once again, our needs are different. Regardless of how it may seem, my network environment is far from haphazard; it has grown and adapted out of need, including two total rebuilds during the last ten years (I've taken the chance to do it when I've moved).
I'm not quite following you as to why you cannot separate the DMZ type servers from regular old desktops/media directors.
It would be a single point of failure, and it would make it difficult to load balance my internet connections. It would also require a major rethink of the network, which is always a potential risk.
I also have a feeling that it might mess up remote administration of the servers, and I do not want 15 monitors and keyboards in the server room...
If this is the case, it's still easily doable without a SPOF (for the servers).
True, but there would be two problems:
* If the SPOF goes down, I will not be able to reach the servers, so it will not help me much that they keep running.
* I don't like the idea of piping all traffic between file servers and clients through a single machine.
Oh, one other thing I didn't see mentioned: leave the default IP range for lmce as 192.168.80.x. If you try messing around and giving it a different IP range, you will be causing yourself headaches.
Well, those are headaches I'll have to take on anyway in that case, given that my network is currently configured for 192.168.0.*, including a bunch of servers which are not on DHCP (Smoothwalls, development servers, DB servers, the inside interface on the web servers and the mail server, and probably some other servers as well). I like having servers on a fixed IP, as it gives me a chance to reach them even if name services should die. Reconfiguring those will be a pain in the ass anyway.
My plan is to use a class B netmask, with, for instance, 192.168.1.* for DHCP and 192.168.0.* for static IPs, but letting them all coexist on the same network. That's a little cleaner than today, where I, for historical reasons, use 0-16 and 240-255 for static addresses and 17-239 for DHCP.
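To make that plan concrete, here's a small sketch using Python's standard ipaddress module (the host addresses are just made-up examples): with a class B netmask, the static 192.168.0.* block and the DHCP 192.168.1.* block sit on the same flat network, so nothing has to route between a fixed-IP server and a DHCP client.

import ipaddress

# One flat LAN with a class B netmask (255.255.0.0).
lan = ipaddress.ip_network("192.168.0.0/16")

# Proposed split: static addresses in 192.168.0.*, the DHCP pool in 192.168.1.*
static_block = ipaddress.ip_network("192.168.0.0/24")
dhcp_block = ipaddress.ip_network("192.168.1.0/24")

# Both /24 blocks are inside the same /16, so no router is needed between them.
assert static_block.subnet_of(lan) and dhcp_block.subnet_of(lan)

server = ipaddress.ip_address("192.168.0.5")    # example static server (made up)
client = ipaddress.ip_address("192.168.1.42")   # example DHCP client (made up)
print(server in lan, client in lan)             # True True
print(server in dhcp_block)                     # False: statics stay out of the pool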