Problems with S2D (Storage Spaces Direct)


Juanjo-Golden

Hello,

I would like to ask a question. Whenever I deploy S2D in a two-node Failover Cluster with a file share witness and storage in mirror mode, I run into the same problem. After I finish the configuration and everything reports "OK", I test the failure of a node, and one of the nodes always has a problem:

If node HV2 "fails", node HV1 recovers correctly: it starts the VMs that were running on HV2 and takes ownership of the CSV and the volume. But if it happens the other way around and HV1 fails, HV2 is unable to bring the CSV and the disk online, so the failover fails and the cluster cannot recover from the loss of that node.
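A minimal sketch of the checks that could be run on the surviving node right after the failed failover, assuming it is a Windows Server node with the FailoverClusters PowerShell module installed (the output folder and time window are just example values):

```python
# Sketch: gather more detail from the surviving node (e.g. HV2) right after
# the failed failover. Assumption: run locally on that node, Windows Server
# with the FailoverClusters PowerShell module; paths are examples only.
import subprocess

def ps(command: str) -> str:
    """Run a single PowerShell command and return its text output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    )
    return result.stdout or result.stderr

# State of every cluster resource; the Physical Disk / CSV resource that
# refused to come online should show up here as Failed or Offline.
print(ps("Get-ClusterResource | Format-Table Name, ResourceType, State, OwnerNode"))
print(ps("Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode"))

# Dump the last 30 minutes of the cluster log to look for the error that
# prevented the disk from coming online (C:\Temp is just an example folder).
print(ps(r"Get-ClusterLog -UseLocalTime -TimeSpan 30 -Destination C:\Temp"))
```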

It seems strange to me, because I have deployed this twice and the same thing happened both times, only the first time it was the "reverse": node HV2 was the one that recovered correctly, while node HV1 failed to bring the CSV and the volume online when HV2 failed.

I am running out of ideas for solving this failure. It should always be able to recover, since both servers have identical specifications, the disks are configured as JBOD, and the nodes use 10 Gbps NICs for their communication.
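For comparison, a read-only sketch of the health checks that could be run on both nodes before repeating the test, assuming the FailoverClusters and Storage PowerShell modules are available; it only reads state and changes nothing:

```python
# Sketch: read-only health checks to compare both nodes before/after the test.
# Assumption: Windows Server node with the FailoverClusters and Storage modules.
import subprocess

CHECKS = [
    # Node membership and quorum -- the file share witness should appear here
    "Get-ClusterNode | Format-Table Name, State",
    "Get-ClusterQuorum | Format-List",
    # Health of the S2D pool and of the mirrored virtual disk
    "Get-StoragePool -IsPrimordial $false | "
    "Format-Table FriendlyName, HealthStatus, OperationalStatus",
    "Get-VirtualDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus",
    # A repair/resync job still running can keep the disk from coming online
    "Get-StorageJob | Format-Table Name, JobState, PercentComplete",
]

for cmd in CHECKS:
    print(f"=== {cmd} ===")
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True,
    )
    print(out.stdout or out.stderr)
```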

I do not know whether it is related, but the time between the two "node failure" tests was about 10-12 minutes (between simulating the failure of node 1 and of node 2). The first time I also tried leaving a full day between the tests, and it failed in the same way...
