TrentMcClaskey
Hitting a snag when enabling replication for a cluster that spans two data centers. The data centers are linked by a 40 Gb connection, and all cluster nodes are on the same subnet (nodes 1 and 2 share storage; nodes 3 and 4 share storage).
This setup is used by two different SQL roles, and each instance whose data is replicated has its own disks for replication logs and data. We run prod roles on the nodes in data center 1 and test roles on the nodes in the other data center.
We are setting up replication to allow the prod roles to run on nodes 3 and 4 in case of a DC failure/storage failure.
The wizard in Failover Cluster Manager goes through all the initial screens fine and lets us select all the disks. After the SR groups are created, the configuration fails with:
Failed to create replication.
ERROR CODE : 0x80131500;
NATIVE ERROR CODE : 7.
Replication groups Replication 1 and Replication 2 do not match. Possible reasons:
- log size in those two groups are different.
- data partition sizes are different in those two groups.
- data partition physical sector size are different in those two groups.
- log partition logical sector size are different in those two groups.
- write consistency setting are different in those two groups.
The event logs (System and StorageReplica) are not showing anything useful.
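Since the error is about the two replication groups not matching, one sanity check (a sketch; run it on one node from each site, e.g. nodes 1 and 3, and diff the output) is to compare the sector geometry and the exact byte sizes of the partitions, because the rounded GB values shown in Windows can hide a byte-level mismatch:

```powershell
# Sketch: dump sector geometry and exact partition sizes on this node.
# Compare the output from a DC1 node against a DC2 node; the rounded GB
# figures in Disk Management can mask a bytes-level size difference.
Get-Disk | Select-Object Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize
Get-Partition | Select-Object DiskNumber, PartitionNumber, @{n='SizeBytes';e={$_.Size}}
```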
All partitions/volumes in Windows appear to match:
Source data volumes (grouped in the same CSV disk in clustering), attached to nodes 1 and 2 (DC1)
(columns: label, file system, type, health, operational status, size remaining, total size)
Databases2 CSVFS Fixed Healthy OK 129.85 GB 605.16 GB
DBBackup2 CSVFS Fixed Healthy OK 97.83 GB 195.78 GB
TransLog2 CSVFS Fixed Healthy OK 62.54 GB 185.2 GB
Source log volume, attached to nodes 1 and 2 (DC1)
SrcRepLog2 NTFS Fixed Healthy OK 184.1 GB 184.27 GB
Destination log volume, attached to nodes 3 and 4 (DC2)
DestRepLog2 NTFS Fixed Healthy OK 184.1 GB 184.27 GB
Destination data volumes (shown as available storage in clustering), attached to nodes 3 and 4 (DC2)
New Volume NTFS Fixed Healthy OK 185.03 GB 185.2 GB
New Volume NTFS Fixed Healthy OK 195.6 GB 195.78 GB
New Volume NTFS Fixed Healthy OK 604.93 GB 605.16 GB
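One way to narrow this down is to validate the topology from PowerShell before touching the wizard again. The sketch below uses the Storage Replica `Test-SRTopology` cmdlet; the node names, CSV path, and destination drive letters are assumptions based on this post and would need to match the actual environment:

```powershell
# Sketch: validate one source/destination pair before creating the partnership.
# Node names, the CSV path, and drive letters below are placeholders (assumed).
Test-SRTopology -SourceComputerName Node1 `
    -SourceVolumeName C:\ClusterStorage\Databases2 `
    -SourceLogVolumeName S: `
    -DestinationComputerName Node3 `
    -DestinationVolumeName D: `
    -DestinationLogVolumeName E: `
    -DurationInMinutes 10 -ResultPath C:\Temp
```

If the log sizes turn out to be the culprit, creating the partnership from PowerShell with `New-SRPartnership` and an explicit `-LogSizeInBytes` value on both sides may sidestep whatever defaults the wizard picks.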
I was hoping someone else has encountered this and can share what they did to get around it.