Knowledge Base

These articles provide basic information and helpful recommendations concerning data access and recovery.


How to: Identify the order of drives in an XFS-based NAS

Generally, NAS storages, like Buffalo Terastation, Iomega StorCenter and Synology, rely on software RAID configurations built on the data partitions (the largest partitions) of each drive. These NAS devices use the XFS file system distributed across the data partitions.

To successfully assemble the RAID configuration for further data recovery, you need to know the correct order of the disks that constitute the RAID system the NAS relies on.

The article below explains how to identify the order of drives in a four-disk XFS-based NAS, such as Buffalo Terastation, Iomega StorCenter, Synology and similar NAS models.


Ways and means

Before you start recovering data from your XFS-based NAS and, if necessary, reconstructing the embedded RAID, you should know the RAID parameters and the order of the RAID drives.

The best way to identify the drive order is content analysis of the RAID drives using known data fragments at the start of the data partitions. CI Hex Viewer offers the most effective means and techniques for such content analysis. At the same time, some powerful data recovery applications offer an easier way to identify RAID parameters – automatic RAID detection.

NAS storages don't provide direct logical access to their file systems, and XFS-based NAS devices are no exception; thus, you should begin by disassembling the storage and connecting its drives to a PC for recovery. Please read HOW TO: Connect IDE/SATA drive to a PC for recovery for the instructions.

XFS-based Network Attached Storages usually apply Multiple Devices (MD) software RAID configurations. Such configurations are created with the well-known mdadm utility and can describe linear (JBOD), multipath, RAID0 (stripe), RAID1 (mirror), RAID5 and RAID6 layouts. The utility creates pseudo-partitions with metadata sufficient to build the RAID automatically.
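
As a quick check that a partition really carries MD metadata, you can look for the MD superblock magic yourself. Below is a minimal Python sketch assuming Linux, root privileges and MD metadata version 1.2, whose superblock starts 4 KiB into the member partition (older firmware may use metadata version 0.90, which is stored near the end of the partition instead); the device name is only an example.

    import struct

    MD_MAGIC = 0xa92b4efc      # magic number of the MD (mdadm) superblock
    MD_1_2_OFFSET = 4096       # metadata v1.2 sits 4 KiB from the partition start

    def has_md_1_2_metadata(device_path):
        """Return True if the partition carries an MD v1.2 superblock at 4 KiB."""
        with open(device_path, "rb") as dev:
            dev.seek(MD_1_2_OFFSET)
            raw = dev.read(4)
        return len(raw) == 4 and struct.unpack("<I", raw)[0] == MD_MAGIC

    if __name__ == "__main__":
        # the device name is just an illustration of a NAS data partition
        print(has_md_1_2_metadata("/dev/sdb6"))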

SysDev Laboratories recommends its UFS Explorer software as a set of powerful utilities that support automatic detection, reconstruction and data recovery from software RAID configurations. UFS Explorer RAID Recovery was specially developed to work with complex RAID systems. UFS Explorer Professional Recovery offers a professional approach to data recovery and features embedded tools for RAID recovery. Other UFS Explorer products work with RAID systems via plug-in modules. For more detailed information, please go to http://www.ufsexplorer.com/products.php.

We advise UFS Explorer RAID Recovery for your NAS as software specially created to work with RAID.

To build RAID automatically with UFS Explorer RAID Recovery you should:

  • Run the software;
  • Make sure that all the NAS drives (or disk image files) are opened;
  • Select ANY data partition of the software RAID to add it to a virtual RAID;
  • Once the partition is added and MD metadata is detected, the software will ask whether you want to try to assemble RAID automatically;
  • Press 'Yes' to build the RAID automatically: the software will load the disk partitions in the correct order and with the correct RAID parameters;
  • Press 'Build' to add this RAID to UFS Explorer for further operations.

Note: If the RAID parameters of the NAS were reset to a different RAID level, drive order or stripe size, the previous RAID configuration requires manual detection. Press 'No' in the software dialog to refuse automatic RAID assembly and specify the RAID parameters manually.

Disk content analysis

The best way to detect the RAID parameters and precisely identify the order of the RAID drives is to conduct an in-depth analysis of the disk contents. The CI Hex Viewer software provides effective means for thorough low-level data analysis. This software is distributed free of charge.

To prepare for content analysis you should carry out the following actions:

  1. Connect the drives to a PC for recovery;

Linux users: do not mount file systems from NAS drives!
Mac users: avoid any diagnosis, repair and similar operations on disks using disk utilities!

  2. Boot the PC, install and run the CI Hex Viewer software;

Windows XP and below: run the software as an Administrator;
Windows Vista/7/8/10 with UAC: run the software as an Administrator using the context menu;
macOS: sign in as the system Administrator when the program starts;
Linux: from the command line run 'sudo cihexview' or 'su root -c cihexview'.

  3. Click 'Open Disk Storage' (Ctrl+Shift+O) and open the data partition of each NAS drive.

Each NAS drive has the same partition structure: 1–3 small “system” partitions (with a total size of several gigabytes) and a large data partition (usually over 95% of the total drive capacity). For further information about the partition layout, please visit the following web-page.
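
If the drives are attached to a Linux machine, you can also list the partitions with their sizes to spot the large data partition without opening each one. The following is a minimal Python sketch relying on the sizes reported in sysfs (in 512-byte sectors); the device name "sdb" is only an example.

    import os

    def list_partitions(disk="sdb"):
        base = "/sys/class/block/" + disk
        for entry in sorted(os.listdir(base)):
            if not entry.startswith(disk):
                continue                        # keep sdb1, sdb2, ...; skip attribute files
            with open(os.path.join(base, entry, "size")) as f:
                sectors = int(f.read())         # sysfs reports sizes in 512-byte sectors
            print("%s: %.1f GiB" % (entry, sectors * 512 / (1024 ** 3)))

    if __name__ == "__main__":
        list_partitions("sdb")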

RAID configuration and advanced detection of the drive order

To start disk content analysis, open the hexadecimal view of each data partition of all the NAS drives in CI Hex Viewer.
Below is an example of content analysis for a default RAID5 configuration with a 64 KB stripe size and the XFS file system.

XFS start

Fig. 1. XFS file system start (superblock).

The starting block (or superblock) of the XFS file system contains an “XFSB” string at the start, values of file system parameters and many zeros. A valid superblock never contains non-zero data in the range of bytes 0x100–0x200. This property makes it easy to check superblock validity.
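
The same validity check can be expressed in a few lines of code. Below is a minimal Python sketch of the rule described above: the block must begin with “XFSB” and contain only zero bytes in the 0x100–0x200 range. It reads the first 512 bytes of a data partition directly (Linux, root privileges); the device name is only an example, and the same bytes can equally be inspected in CI Hex Viewer.

    def looks_like_xfs_superblock(block):
        """Check the 'XFSB' magic and the all-zero 0x100-0x200 range."""
        return block[0:4] == b"XFSB" and all(b == 0 for b in block[0x100:0x200])

    if __name__ == "__main__":
        with open("/dev/sdb6", "rb") as dev:    # a data partition; the name is an example
            first_block = dev.read(0x200)
        print(looks_like_xfs_superblock(first_block))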

I-nodes block

Fig. 2. XFS I-nodes block.

In this XFS file system, the I-nodes block lies at an offset of 64 KB. In RAID0 and RAID5 layouts with the default 64 KB stripe size, the I-nodes block is located at zero offset of the data partition of Drive 2.

I-nodes can be identified by the “IN” string (the “49 4E” byte sequence) at the start of each 256-byte (0x100) block. Each I-node describes a file system object.

The upper hexadecimal digit of the third byte defines the object type: 4X indicates a directory and 8X – a file.

In Figure 2 the first I-node indicates a directory and the second one – a file.
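
For reference, the same I-node check can be scripted. The sketch below is a Python illustration of the rules above, assuming 256-byte I-nodes and a 64 KB stripe as in this example (the device name is hypothetical); it scans a block for records starting with “IN” and classifies each one by the upper digit of its third byte.

    def classify_inodes(block):
        """Yield (offset, kind) for every 256-byte record that looks like an I-node."""
        for off in range(0, len(block), 0x100):
            rec = block[off:off + 0x100]
            if rec[0:2] != b"IN":
                continue
            nibble = rec[2] >> 4                 # upper digit of the third byte
            kind = {4: "directory", 8: "file"}.get(nibble, "other")
            yield off, kind

    if __name__ == "__main__":
        with open("/dev/sdc6", "rb") as dev:     # e.g. the data partition of Drive 2
            data = dev.read(64 * 1024)           # one 64 KB stripe
        for off, kind in classify_inodes(data):
            print("0x%04X: %s" % (off, kind))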

Parity block

Fig. 3. RAID5 parity block.

The parity block contains a bitwise (XOR) mixture of data from the data blocks of the other drives. It may look like “trash” with visible fragments of data from those blocks.

Even if the parity block contains a valid “XFSB” string, unlike the superblock, it contains non-zero data in the 0x100–0x200 byte range, which makes it different from the superblock. Please also note that the parity block usually contains many more non-zero bytes.

Now, using this known content and assuming that the starting block is the first block of the data partition of the given drive, you can define the RAID configuration:

RAID5:

  • Only one first block will contain the superblock (Fig.1);
  • If the stripe size is 64 KB (usual for Terastation), one of the first blocks will contain I-nodes; the first I-node indicates a directory (the root directory). If the root directory contains only a few files, their names are given in the I-node body (as in Fig. 2);
  • The starting block of the third drive will contain the data or I-nodes table;
  • The starting block of the fourth drive will contain parity (Fig. 3);
  • Applying the XOR operation to bytes from the starting blocks of each disk at the same byte position gives a zero result.

One can define the RAID5 configuration as a RAID with only one superblock among the starting blocks and with parity. The XOR operation over the bytes of the starting blocks at the same byte position gives a zero result.

The drive order is as follows: the drive with the superblock is the first one; the drive with the root directory – the second; the drive with parity – the fourth; the remaining drive – the third. The parity check procedure includes the following steps:

  1. Choose a partition offset with non-zero data;
  2. Run a calculator (e.g. Windows standard calculator);
  3. Choose the 'Scientific' or 'Programming' view and switch from the 'Dec' to the 'Hex' mode;
  4. Type in the hexadecimal digit from the first drive and press the 'Xor' button;
  5. Type in the hexadecimal digit from the next drive at exactly the same offset and press 'Xor' again;
  6. Repeat the procedure up to the last drive. Before you enter the digit from the last drive, the calculator must show the same number as the one at the specified position of the last disk; after the final 'Xor' the result will be zero.

A non-zero value for any of the offsets indicates either a calculation error or absence of parity.
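
The same parity check can be done programmatically instead of with a calculator. The sketch below (Python, Linux, root privileges; the device names and the offset are examples only) XORs the bytes read at one and the same offset from the data partitions of all four drives; on a RAID5 set the result is zero at every position.

    from functools import reduce

    def xor_at_offset(partitions, offset, length=16):
        """XOR byte-by-byte the data read at 'offset' from every partition."""
        chunks = []
        for path in partitions:
            with open(path, "rb") as dev:
                dev.seek(offset)
                chunks.append(dev.read(length))
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

    if __name__ == "__main__":
        parts = ["/dev/sdb6", "/dev/sdc6", "/dev/sdd6", "/dev/sde6"]
        # pick an offset where at least one drive holds non-zero data (see step 1)
        result = xor_at_offset(parts, offset=0x2000)
        print("parity OK" if set(result) == {0} else "no parity at this offset")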

RAID0: 

  • Only one first block contains the superblock (Fig.1);
  • If the stripe size is 64 KB (usual for Terastation), one of the first blocks will contain I-nodes; the first I-node must indicate a directory (root directory). If the root directory contains files, their names are given in the I-node body (as in Fig.2);
  • Other first blocks do not contain other superblocks or parity;
  • Other drives may contain more I-nodes in the first block.

One can define the RAID0 configuration as RAID with only one superblock in the starting block and without parity.

The drive order is the following: the drive with the superblock is the first one; the drive with the root directory is the second one. The 3rd and the 4th drives may not be identifiable at once, but you can try both variants and find out which of them is the right one.

RAID10/0+1: 

  • The first blocks of two drives contain a valid superblock (Fig.1);
  • The other two drives contain data in the starting block and, in the case of a 64 KB stripe size, I-nodes.

One can define the RAID10/0+1 configuration as RAID with two superblocks in the starting blocks.

The drive order is as follows: the drive with the superblock is the first one, and the drive without a superblock (with data or I-nodes) is the second one. This configuration has two such pairs, and either of them can be used for data recovery.

RAID1 and multi-part storage:

  • The first block of each drive contains a valid superblock (Fig. 1).

One can define RAID1 and multi-part storage as RAID with superblocks in all the starting blocks.

The drive order is the following: any drive from a RAID1 gives all the data. In the case of a multi-part storage, each drive has a separate valid file system.
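
Pulling the above observations together, the sketch below counts the superblocks found in the starting blocks and checks for parity to guess the layout. It is only an illustration in Python: it reuses looks_like_xfs_superblock() and xor_at_offset() from the earlier sketches, the heuristics simply mirror the rules of this section, and the answer should be treated as a hint rather than a substitute for careful content analysis.

    def guess_layout(partitions):
        """Guess the RAID layout from the starting blocks of the data partitions."""
        blocks = []
        for path in partitions:
            with open(path, "rb") as dev:
                blocks.append(dev.read(0x200))
        superblocks = sum(looks_like_xfs_superblock(b) for b in blocks)
        # choose an offset with non-zero data for the parity probe, as advised above
        parity = set(xor_at_offset(partitions, offset=0x20)) == {0}

        if superblocks == len(partitions):
            return "RAID1 or multi-part storage"
        if superblocks == 2:
            return "RAID10 / 0+1"
        if superblocks == 1 and parity:
            return "RAID5"
        if superblocks == 1:
            return "RAID0 (no parity found at the probed offset)"
        return "unclear - analyse the content manually"

    if __name__ == "__main__":
        parts = ["/dev/sdb6", "/dev/sdc6", "/dev/sdd6", "/dev/sde6"]   # examples
        print(guess_layout(parts))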

If content analysis gives a contradictory result and you are still unsure about the drive order, try all combinations and choose the one that matches.

Note: UFS Explorer software doesn't modify the data on the storage. You can try different RAID combinations until you get the appropriate one.

Final notes

In case of physical damage it is strongly recommended to bring your NAS to a specialized data recovery laboratory in order to avoid data loss.

If you feel unsure about recovering data from your NAS by yourself or are not confident about the RAID configuration of your NAS, do not hesitate to turn to the professional services provided by SysDev Laboratories.

For data recovery professionals SysDev Laboratories offers expert NAS storage analysis on a commercial basis.

Last update: 05.07.2018