DaveK67
I'm in the process of setting up a Windows Storage Server-based backend for a high-volume data generator. It has pretty specific requirements, and I'm not getting the performance I need from the general config we have now. I'm not an IT guy, so I've been working with a large storage vendor - but I'm still not getting the performance I want.
I have a machine that generates "up to 100GB" data outputs every x minutes (outputting at less than 100MB/s). Once that raw data is generated, I immediately need to be able to access those raw files from two high-performance workstations as fast as possible (I'd like 1GB/s read and 500MB/s write for each). After that initial processing, the resulting files can be moved to spinning near-line (NL) storage for mid-term archival.
My storage vendor has set up a 2-node Windows Server 2016 cluster with iSCSI to the vendor's NAS solution on the backend. The NAS has 10TB of SSD cache and 200TB of NL storage. We've configured three 50TB volumes on the cluster and tested. We're only seeing 800-900MB/s read and 400-500MB/s write to the cluster from one workstation... and that throughput is shared (the second workstation shares that limited throughput), which puts us well below where I'd like to be. I've also noticed that the cluster isn't load balancing (all the vol1 traffic hits one node), so it's just providing failover. The client network is dual 10Gb Ethernet... we've tried teaming but are actually getting better performance with a single 10Gb port per node - one node to each switch.
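For what it's worth, here's the back-of-the-envelope math I've been using to convince myself the single 10Gb link per node is the wall we're hitting. The ~90% wire-efficiency figure is my own rough allowance for TCP/IP and SMB/iSCSI overhead, not a vendor number:

```python
# Sanity check: can one 10Gb Ethernet link per node feed 1 GB/s reads
# to each of two workstations at the same time?

LINK_GBPS = 10                      # one 10GbE port per node (what we run today)
WIRE_EFFICIENCY = 0.90              # assumed protocol overhead allowance

# Usable throughput of one link in GB/s (10 Gb/s = 1.25 GB/s raw)
usable_gbytes_per_s = LINK_GBPS / 8 * WIRE_EFFICIENCY

read_target_per_workstation = 1.0   # GB/s, the stated requirement
workstations = 2
total_read_demand = read_target_per_workstation * workstations

print(f"One 10GbE link, usable: {usable_gbytes_per_s:.3f} GB/s")
print(f"Total read demand:      {total_read_demand:.3f} GB/s")
print("Single link sufficient? ", usable_gbytes_per_s >= total_read_demand)
```

That works out to roughly 1.1 GB/s usable per link versus 2 GB/s of demand, which lines up with the 800-900MB/s we measure from one workstation - so even perfect load balancing across both nodes' links would only just meet the target.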
I asked the vendor if we could add NVMe storage (512GB each would be more than sufficient) to the two Windows Server 2016 nodes to provide a local, very high-performance cache tier (in front of the iSCSI NAS) that would handle 100% of the high-throughput processing, so the NAS would only have to deal with archiving loads. They didn't seem to think that was possible... but I'm not sure why. Is there a configuration that would let me have a fast cache on the Windows Server nodes with iSCSI as the capacity tier? It looks like that's easy to do with Storage Spaces on DAS... can't I do the same thing with iSCSI (or can I, and they just don't know how to configure it)?