As a result of Hurricane Matthew, our business shut down all servers for two days.

One of those servers was an ESXi host with an attached HP StorageWorks MSA60.

When we logged into the vSphere client, we noticed that none of our guest VMs are available (they’re all listed as “inaccessible”). When we look at the hardware status in vSphere, the array controller and all attached drives appear as “Normal”, but the drives all show up as “unconfigured disk”.

We rebooted the host and tried going into the RAID config utility to see what things look like from there, but we received the following message:

“An invalid drive movement was reported during POST. Changes to the array configuration after an invalid drive movement can result in loss of old configuration information and contents of the original logical drives.”

Of course, we are extremely confused by this because nothing was “moved”; nothing changed. We simply powered up the MSA and the host, and have been having this problem ever since.

We have two main questions/concerns:

  1. Since we did nothing more than power the devices off and back on, what could’ve caused this to happen? I of course have the option to rebuild the array and start over, but I’m leery about the risk of this happening again (especially since I have no clue what caused it).

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

  1. Since we did nothing more than power the devices off and back on, what could’ve caused this to happen? I of course have the option to rebuild the array and start over, but I’m leery about the chance of this happening again (especially since I have no idea what caused it).

A variety of things. Do you schedule reboots on all of your gear? If not, you should, for just this reason. On the one host we have, XS decided the array wasn’t ready in time and didn’t mount the main storage volume on boot. Always nice to know these things in advance, right?

  2. Is there a snowball’s chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?

Maybe, but I’ve never seen that specific error. We’re talking very limited experience here. Depending on which RAID controller the MSA is attached to, you might be able to read the array information from the drives on Linux using the md utilities, but at that point it’s faster just to restore from backups.
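
For what it’s worth, here is a minimal sketch of that check, with the caveat that it only works if the drives actually carry Linux md (mdadm) metadata rather than the Smart Array controller’s own proprietary format. The device names /dev/sdb through /dev/sde are hypothetical; adjust them for your system. Both steps are read-only and shouldn’t touch the existing configuration:

    # Look for md superblocks on each suspected member disk (read-only query)
    mdadm --examine /dev/sd[b-e]

    # If superblocks turn up, attempt to assemble the array read-only
    mdadm --assemble --scan --readonly

If --examine reports no md superblock, the metadata is controller-specific and this route won’t work; you’d be back to the HPE tools or restoring from backups.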

I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information was all intact.

Does your normal reboot routine for the host include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; likely that’s where the issue is.

I’d call HPE support. The MSA is a flaky unit, and HPE support is very good.

We unfortunately don’t have a “normal” reboot routine for any of our servers :-/.

I’m not really sure what the proper order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If this is correct, we’ve already tried doing that since we first discovered this problem today, and the problem persists :(.

We don’t have a support contract on this server or the attached MSA, and they’re likely out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I’m not sure how much we would have to spend to get HP to “help” us :-S.
